From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Lance Yang, David Hildenbrand,
 Dev Jain, Zi Yan, "Liam R. Howlett", Harry Yoo, Alistair Popple, Baolin Wang,
 Barry Song, Byungchul Park, Gregory Price, "Huang, Ying", Jann Horn,
 Joshua Hahn, Lorenzo Stoakes, Mariano Pache, Mathew Brost, Peter Xu,
 Rakie Kim, Rik van Riel, Ryan Roberts, Usama Arif, Vlastimil Babka, Yu Zhao,
 Andrew Morton
Subject: [PATCH 6.12 258/277] mm/rmap: fix soft-dirty and uffd-wp bit loss when remapping zero-filled mTHP subpage to shared zeropage
Date: Fri, 17 Oct 2025 16:54:25 +0200
Message-ID: <20251017145156.577742769@linuxfoundation.org>
In-Reply-To: <20251017145147.138822285@linuxfoundation.org>
References: <20251017145147.138822285@linuxfoundation.org>
User-Agent: quilt/0.69
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Lance Yang

commit 9658d698a8a83540bf6a6c80d13c9a61590ee985 upstream.

When splitting an mTHP and replacing a zero-filled subpage with the shared
zeropage, try_to_map_unused_to_zeropage() currently drops several important
PTE bits.

For userspace tools like CRIU, which rely on the soft-dirty mechanism for
incremental snapshots, losing the soft-dirty bit means modified pages are
missed, leading to inconsistent memory state after restore.
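For context, a minimal userspace sketch (not part of this patch; assumes a
kernel built with CONFIG_MEM_SOFT_DIRTY) of how a CRIU-style tool consumes
the soft-dirty mechanism: bit 55 of each 64-bit /proc/<pid>/pagemap entry
reports the soft-dirty state, and writing "4" to /proc/<pid>/clear_refs
clears it. If the bit is dropped when a subpage is remapped to the shared
zeropage, a check like this reads 0 for a page that was in fact modified:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static int soft_dirty(void *addr)
{
	uint64_t entry;
	off_t off = ((uintptr_t)addr / sysconf(_SC_PAGESIZE)) * sizeof(entry);
	int fd = open("/proc/self/pagemap", O_RDONLY);

	if (fd < 0)
		return -1;
	if (pread(fd, &entry, sizeof(entry), off) != sizeof(entry)) {
		close(fd);
		return -1;
	}
	close(fd);
	return (entry >> 55) & 1;	/* bit 55: pte is soft-dirty */
}

int main(void)
{
	char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int fd = open("/proc/self/clear_refs", O_WRONLY);

	page[0] = 1;				/* populate the page */
	write(fd, "4", 1);			/* "4": clear soft-dirty bits */
	close(fd);
	printf("after clear: %d\n", soft_dirty(page));	/* expect 0 */

	page[0] = 2;				/* modify the page again */
	printf("after write: %d\n", soft_dirty(page));	/* expect 1 */
	return 0;
}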
As pointed out by David, the more critical uffd-wp bit is also dropped.
This breaks the userfaultfd write-protection mechanism, causing writes to
be silently missed by monitoring applications, which can lead to data
corruption.

Preserve both the soft-dirty and uffd-wp bits from the old PTE when
creating the new zeropage mapping to ensure they are correctly tracked.

Link: https://lkml.kernel.org/r/20250930081040.80926-1-lance.yang@linux.dev
Fixes: b1f202060afe ("mm: remap unused subpages to shared zeropage when splitting isolated thp")
Signed-off-by: Lance Yang
Suggested-by: David Hildenbrand
Suggested-by: Dev Jain
Acked-by: David Hildenbrand
Reviewed-by: Dev Jain
Acked-by: Zi Yan
Reviewed-by: Liam R. Howlett
Reviewed-by: Harry Yoo
Cc: Alistair Popple
Cc: Baolin Wang
Cc: Barry Song
Cc: Byungchul Park
Cc: Gregory Price
Cc: "Huang, Ying"
Cc: Jann Horn
Cc: Joshua Hahn
Cc: Lorenzo Stoakes
Cc: Mariano Pache
Cc: Mathew Brost
Cc: Peter Xu
Cc: Rakie Kim
Cc: Rik van Riel
Cc: Ryan Roberts
Cc: Usama Arif
Cc: Vlastimil Babka
Cc: Yu Zhao
Cc:
Signed-off-by: Andrew Morton
Signed-off-by: Greg Kroah-Hartman
---
 mm/migrate.c |   15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -198,8 +198,7 @@ bool isolate_folio_to_list(struct folio
 }
 
 static bool try_to_map_unused_to_zeropage(struct page_vma_mapped_walk *pvmw,
-					  struct folio *folio,
-					  unsigned long idx)
+		struct folio *folio, pte_t old_pte, unsigned long idx)
 {
 	struct page *page = folio_page(folio, idx);
 	pte_t newpte;
@@ -208,7 +207,7 @@ static bool try_to_map_unused_to_zeropag
 		return false;
 	VM_BUG_ON_PAGE(!PageAnon(page), page);
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(pte_present(*pvmw->pte), page);
+	VM_BUG_ON_PAGE(pte_present(old_pte), page);
 
 	if (folio_test_mlocked(folio) || (pvmw->vma->vm_flags & VM_LOCKED) ||
 	    mm_forbids_zeropage(pvmw->vma->vm_mm))
@@ -224,6 +223,12 @@ static bool try_to_map_unused_to_zeropag
 
 	newpte = pte_mkspecial(pfn_pte(my_zero_pfn(pvmw->address),
 					pvmw->vma->vm_page_prot));
+
+	if (pte_swp_soft_dirty(old_pte))
+		newpte = pte_mksoft_dirty(newpte);
+	if (pte_swp_uffd_wp(old_pte))
+		newpte = pte_mkuffd_wp(newpte);
+
 	set_pte_at(pvmw->vma->vm_mm, pvmw->address, pvmw->pte, newpte);
 
 	dec_mm_counter(pvmw->vma->vm_mm, mm_counter(folio));
@@ -266,13 +271,13 @@ static bool remove_migration_pte(struct
 			continue;
 		}
 #endif
+		old_pte = ptep_get(pvmw.pte);
 		if (rmap_walk_arg->map_unused_to_zeropage &&
-		    try_to_map_unused_to_zeropage(&pvmw, folio, idx))
+		    try_to_map_unused_to_zeropage(&pvmw, folio, old_pte, idx))
 			continue;
 
 		folio_get(folio);
 		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
-		old_pte = ptep_get(pvmw.pte);
 		entry = pte_to_swp_entry(old_pte);
 		if (!is_migration_entry_young(entry))
 			pte = pte_mkold(pte);
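To illustrate the userfaultfd write-protection mechanism the commit message
refers to, here is a minimal userspace sketch (not derived from this patch;
assumes a kernel with uffd-wp support for anonymous memory and permission to
create userfaultfds) showing how a monitor registers and write-protects a
range. If the per-PTE uffd-wp bit is lost when a subpage is remapped to the
shared zeropage, writes to that page no longer raise WP faults and the
monitor silently misses them:

#include <fcntl.h>
#include <linux/userfaultfd.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	long len = 2 * 1024 * 1024;	/* one PMD-sized, THP/mTHP-eligible range */
	int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
	struct uffdio_api api = { .api = UFFD_API };
	char *area;

	if (uffd < 0 || ioctl(uffd, UFFDIO_API, &api))
		return 1;

	area = mmap(NULL, len, PROT_READ | PROT_WRITE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED)
		return 1;
	memset(area, 1, len);		/* populate the pages before protecting */

	struct uffdio_register reg = {
		.range = { .start = (unsigned long)area, .len = len },
		.mode  = UFFDIO_REGISTER_MODE_WP,
	};
	struct uffdio_writeprotect wp = {
		.range = { .start = (unsigned long)area, .len = len },
		.mode  = UFFDIO_WRITEPROTECT_MODE_WP,
	};

	if (ioctl(uffd, UFFDIO_REGISTER, &reg) ||
	    ioctl(uffd, UFFDIO_WRITEPROTECT, &wp))
		return 1;

	/* From here on, writes to 'area' should fault and be reported to the
	 * monitor; a cleared uffd-wp bit lets them proceed unnoticed. */
	printf("range write-protected via uffd-wp\n");
	return 0;
}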