From: "Kiryl Shutsemau (Meta)" <kas@kernel.org>
To: akpm@linux-foundation.org, rppt@kernel.org, peterx@redhat.com,
	david@kernel.org
Cc: ljs@kernel.org, surenb@google.com, vbabka@kernel.org,
	Liam.Howlett@oracle.com, ziy@nvidia.com, corbet@lwn.net,
	skhan@linuxfoundation.org, seanjc@google.com, pbonzini@redhat.com,
	jthoughton@google.com, aarcange@redhat.com, sj@kernel.org,
	usama.arif@linux.dev, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kselftest@vger.kernel.org, kvm@vger.kernel.org,
	kernel-team@meta.com, "Kiryl Shutsemau (Meta)" <kas@kernel.org>
Subject: [PATCH 05/14] mm: add MM_CP_UFFD_RWP change_protection() flag
Date: Mon, 27 Apr 2026 12:45:53 +0100
Message-ID: <20260427114607.4068647-6-kas@kernel.org>
X-Mailer: git-send-email 2.51.2
In-Reply-To: <20260427114607.4068647-1-kas@kernel.org>
References: <20260427114607.4068647-1-kas@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Preparatory patch. Add the change_protection() flags that userfaultfd
RWP will use.

An RWP-protected PTE is PAGE_NONE with the uffd PTE bit set. The
PROT_NONE half makes the CPU fault on any access; the uffd bit
distinguishes an RWP fault from a plain mprotect(PROT_NONE) or NUMA
hinting fault.

MM_CP_UFFD_WP and MM_CP_UFFD_RWP share the same PTE bit, so the two
cannot be used together on the same range.

Two new change_protection() flags:

  MM_CP_UFFD_RWP          install PAGE_NONE and set the uffd bit
  MM_CP_UFFD_RWP_RESOLVE  restore vma->vm_page_prot, clear the uffd bit

Both are wired through change_pte_range(), change_huge_pmd(), and
hugetlb_change_protection() so anon, shmem, THP, and hugetlb all share
the same semantics.
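For illustration, the intended caller shape mirrors how uffd_wp_range()
drives MM_CP_UFFD_WP today. The helper below is a sketch only, not part
of this patch; the name uffd_rwp_range() is hypothetical and the real
wiring lands later in the series:

	/* Sketch only: hypothetical RWP counterpart to uffd_wp_range(). */
	static long uffd_rwp_range(struct vm_area_struct *vma,
				   unsigned long start, unsigned long len,
				   bool enable_rwp)
	{
		/* RWP and RWP_RESOLVE are mutually exclusive per call. */
		unsigned long cp_flags = enable_rwp ? MM_CP_UFFD_RWP :
						      MM_CP_UFFD_RWP_RESOLVE;
		struct mmu_gather tlb;
		long ret;

		tlb_gather_mmu(&tlb, vma->vm_mm);
		ret = change_protection(&tlb, vma, start, start + len,
					cp_flags);
		tlb_finish_mmu(&tlb);

		return ret;
	}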
Signed-off-by: Kiryl Shutsemau <kas@kernel.org>
Assisted-by: Claude:claude-opus-4-6
---
 include/linux/mm.h            |  5 +++++
 include/linux/userfaultfd_k.h |  1 -
 mm/huge_memory.c              | 20 ++++++++++++------
 mm/hugetlb.c                  | 25 ++++++++++++++++------
 mm/mprotect.c                 | 40 +++++++++++++++++++++++++++++------
 5 files changed, 71 insertions(+), 20 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3f53d1e978c0..2b65416bb760 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3291,6 +3291,11 @@ int get_cmdline(struct task_struct *task, char *buffer, int buflen);
 #define  MM_CP_UFFD_WP_RESOLVE             (1UL << 3) /* Resolve wp */
 #define  MM_CP_UFFD_WP_ALL                 (MM_CP_UFFD_WP | \
					    MM_CP_UFFD_WP_RESOLVE)
+/* Whether this change is for uffd RWP */
+#define  MM_CP_UFFD_RWP                    (1UL << 4) /* do rwp */
+#define  MM_CP_UFFD_RWP_RESOLVE            (1UL << 5) /* Resolve rwp */
+#define  MM_CP_UFFD_RWP_ALL                (MM_CP_UFFD_RWP | \
+					    MM_CP_UFFD_RWP_RESOLVE)
 
 bool can_change_pte_writable(struct vm_area_struct *vma, unsigned long addr,
			     pte_t pte);
diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
index fcf308dba311..3725e61a7041 100644
--- a/include/linux/userfaultfd_k.h
+++ b/include/linux/userfaultfd_k.h
@@ -397,7 +397,6 @@ static inline bool userfaultfd_huge_pmd_wp(struct vm_area_struct *vma,
 	return false;
 }
 
-
 static inline bool userfaultfd_armed(struct vm_area_struct *vma)
 {
 	return false;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d88fcccd386d..2537dca63c6c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2665,6 +2665,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pmd_t oldpmd, entry;
 	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
+	bool uffd_rwp = cp_flags & MM_CP_UFFD_RWP;
+	bool uffd_rwp_resolve = cp_flags & MM_CP_UFFD_RWP_RESOLVE;
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 	int ret = 1;
@@ -2679,11 +2681,18 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		return 0;
 
 	if (thp_migration_supported() && pmd_is_valid_softleaf(*pmd)) {
-		change_non_present_huge_pmd(mm, addr, pmd, uffd_wp,
-					    uffd_wp_resolve);
+		change_non_present_huge_pmd(mm, addr, pmd,
+					    uffd_wp || uffd_rwp,
+					    uffd_wp_resolve || uffd_rwp_resolve);
 		goto unlock;
 	}
 
+	/* Already in the desired state */
+	if (prot_numa && pmd_protnone(*pmd))
+		goto unlock;
+	if (uffd_rwp && pmd_protnone(*pmd) && pmd_uffd(*pmd))
+		goto unlock;
+
 	if (prot_numa) {
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
@@ -2694,9 +2703,6 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (is_huge_zero_pmd(*pmd))
 			goto unlock;
 
-		if (pmd_protnone(*pmd))
-			goto unlock;
-
 		if (!folio_can_map_prot_numa(pmd_folio(*pmd), vma,
					     vma_is_single_threaded_private(vma)))
 			goto unlock;
@@ -2725,9 +2731,9 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	oldpmd = pmdp_invalidate_ad(vma, addr, pmd);
 
 	entry = pmd_modify(oldpmd, newprot);
-	if (uffd_wp)
+	if (uffd_wp || uffd_rwp)
 		entry = pmd_mkuffd(entry);
-	else if (uffd_wp_resolve)
+	else if (uffd_wp_resolve || uffd_rwp_resolve)
 		/*
		 * Leave the write bit to be handled by PF interrupt
		 * handler, then things like COW could be properly
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 61cda9992043..63f6b19418b9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6409,6 +6409,8 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 	unsigned long last_addr_mask;
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
+	bool uffd_rwp = cp_flags & MM_CP_UFFD_RWP;
+	bool uffd_rwp_resolve = cp_flags & MM_CP_UFFD_RWP_RESOLVE;
 	struct mmu_gather tlb;
 
 	/*
@@ -6434,6 +6436,11 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 		ptep = hugetlb_walk(vma, address, psize);
 		if (!ptep) {
+			/*
+			 * uffd_wp installs a pte marker on the unpopulated
+			 * entry; RWP does not install markers so the
+			 * allocation is unnecessary for it.
+			 */
 			if (!uffd_wp) {
 				address |= last_addr_mask;
 				continue;
 			}
@@ -6455,7 +6462,8 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			 * shouldn't happen at all. Warn about it if it
 			 * happened due to some reason.
 			 */
-			WARN_ON_ONCE(uffd_wp || uffd_wp_resolve);
+			WARN_ON_ONCE(uffd_wp || uffd_wp_resolve ||
+				     uffd_rwp || uffd_rwp_resolve);
 			pages++;
 			spin_unlock(ptl);
 			address |= last_addr_mask;
@@ -6489,9 +6497,9 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 				pages++;
 			}
 
-			if (uffd_wp)
+			if (uffd_wp || uffd_rwp)
 				newpte = pte_swp_mkuffd(newpte);
-			else if (uffd_wp_resolve)
+			else if (uffd_wp_resolve || uffd_rwp_resolve)
 				newpte = pte_swp_clear_uffd(newpte);
 			if (!pte_same(pte, newpte))
 				set_huge_pte_at(mm, address, ptep, newpte, psize);
@@ -6502,19 +6510,24 @@ long hugetlb_change_protection(struct vm_area_struct *vma,
 			 * pte_marker_uffd_wp()==true implies !poison
 			 * because they're mutual exclusive.
 			 */
-			if (pte_is_uffd_wp_marker(pte) && uffd_wp_resolve)
+			if (pte_is_uffd_wp_marker(pte) &&
+			    (uffd_wp_resolve || uffd_rwp_resolve))
 				/* Safe to modify directly (non-present->none). */
 				huge_pte_clear(mm, address, ptep, psize);
 		} else {
 			pte_t old_pte;
 			unsigned int shift = huge_page_shift(hstate_vma(vma));
 
+			/* Already protnone with uffd bit set? Nothing to do. */
+			if (uffd_rwp && pte_protnone(pte) && huge_pte_uffd(pte))
+				goto next;
+
 			old_pte = huge_ptep_modify_prot_start(vma, address, ptep);
 			pte = huge_pte_modify(old_pte, newprot);
 			pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
-			if (uffd_wp)
+			if (uffd_wp || uffd_rwp)
 				pte = huge_pte_mkuffd(pte);
-			else if (uffd_wp_resolve)
+			else if (uffd_wp_resolve || uffd_rwp_resolve)
 				pte = huge_pte_clear_uffd(pte);
 			huge_ptep_modify_prot_commit(vma, address, ptep, old_pte, pte);
 			pages++;
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 8340c8b228c6..23e71f68cf7a 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -216,6 +216,8 @@ static long change_softleaf_pte(struct vm_area_struct *vma,
 {
 	const bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	const bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
+	const bool uffd_rwp = cp_flags & MM_CP_UFFD_RWP;
+	const bool uffd_rwp_resolve = cp_flags & MM_CP_UFFD_RWP_RESOLVE;
 	softleaf_t entry = softleaf_from_pte(oldpte);
 	pte_t newpte;
 
@@ -256,7 +258,7 @@ static long change_softleaf_pte(struct vm_area_struct *vma,
 		 * to unprotect it, drop it; the next page
 		 * fault will trigger without uffd trapping.
 		 */
-		if (uffd_wp_resolve) {
+		if (uffd_wp_resolve || uffd_rwp_resolve) {
 			pte_clear(vma->vm_mm, addr, pte);
 			return 1;
 		}
@@ -265,9 +267,9 @@ static long change_softleaf_pte(struct vm_area_struct *vma,
 		newpte = oldpte;
 	}
 
-	if (uffd_wp)
+	if (uffd_wp || uffd_rwp)
 		newpte = pte_swp_mkuffd(newpte);
-	else if (uffd_wp_resolve)
+	else if (uffd_wp_resolve || uffd_rwp_resolve)
 		newpte = pte_swp_clear_uffd(newpte);
 
 	if (!pte_same(oldpte, newpte)) {
@@ -284,14 +286,16 @@ static __always_inline void change_present_ptes(struct mmu_gather *tlb,
 {
 	const bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
 	const bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
+	const bool uffd_rwp = cp_flags & MM_CP_UFFD_RWP;
+	const bool uffd_rwp_resolve = cp_flags & MM_CP_UFFD_RWP_RESOLVE;
 	pte_t ptent, oldpte;
 
 	oldpte = modify_prot_start_ptes(vma, addr, ptep, nr_ptes);
 	ptent = pte_modify(oldpte, newprot);
 
-	if (uffd_wp)
+	if (uffd_wp || uffd_rwp)
 		ptent = pte_mkuffd(ptent);
-	else if (uffd_wp_resolve)
+	else if (uffd_wp_resolve || uffd_rwp_resolve)
 		ptent = pte_clear_uffd(ptent);
 
 	/*
@@ -325,6 +329,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 	long pages = 0;
 	bool is_private_single_threaded;
 	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
+	bool uffd_rwp = cp_flags & MM_CP_UFFD_RWP;
 	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
 	int nr_ptes;
 
@@ -350,6 +355,14 @@ static long change_pte_range(struct mmu_gather *tlb,
 			/* Already in the desired state. */
 			if (prot_numa && pte_protnone(oldpte))
 				continue;
+			/*
+			 * RWP-protected PTEs carry _PAGE_UFFD as a marker on
+			 * top of PROT_NONE. Skip only entries already in that
+			 * exact state; plain PROT_NONE from mprotect() still needs
+			 * to be promoted so future faults can be distinguished.
+			 */
+			if (uffd_rwp && pte_protnone(oldpte) && pte_uffd(oldpte))
+				continue;
 
 			page = vm_normal_page(vma, addr, oldpte);
 			if (page)
@@ -358,6 +371,8 @@ static long change_pte_range(struct mmu_gather *tlb,
 			/*
 			 * Avoid trapping faults against the zero or KSM
 			 * pages. See similar comment in change_huge_pmd.
+			 * Skip this filter for uffd RWP which
+			 * must set protnone regardless of NUMA placement.
 			 */
 			if (prot_numa &&
 			    !folio_can_map_prot_numa(folio, vma,
@@ -667,7 +682,16 @@ long change_protection(struct mmu_gather *tlb,
 	pgprot_t newprot = vma->vm_page_prot;
 	long pages;
 
-	BUG_ON((cp_flags & MM_CP_UFFD_WP_ALL) == MM_CP_UFFD_WP_ALL);
+	/*
+	 * MM_CP_UFFD_{WP,RWP} and _RESOLVE are mutually exclusive within one
+	 * change, and WP and RWP cannot mix. Miswired callers get a warn and
+	 * a no-op; userspace cannot reach this state.
+	 */
+	if (WARN_ON_ONCE((cp_flags & MM_CP_UFFD_WP_ALL) == MM_CP_UFFD_WP_ALL ||
+			 (cp_flags & MM_CP_UFFD_RWP_ALL) == MM_CP_UFFD_RWP_ALL ||
+			 ((cp_flags & MM_CP_UFFD_WP_ALL) &&
+			  (cp_flags & MM_CP_UFFD_RWP_ALL))))
+		return 0;
 
 #ifdef CONFIG_NUMA_BALANCING
 	/*
@@ -681,6 +705,10 @@ long change_protection(struct mmu_gather *tlb,
 	WARN_ON_ONCE(cp_flags & MM_CP_PROT_NUMA);
 #endif
 
+	if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_PROTNONE) &&
+	    (cp_flags & MM_CP_UFFD_RWP))
+		newprot = PAGE_NONE;
+
 	if (is_vm_hugetlb_page(vma))
 		pages = hugetlb_change_protection(vma, start, end, newprot,
						  cp_flags);
-- 
2.51.2