From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, axelrasmussen@google.com,
yuanchu@google.com, david@kernel.org, hughd@google.com,
chrisl@kernel.org, kasong@tencent.com
Cc: weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com,
vbabka@kernel.org, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, riel@surriel.com, harry.yoo@oracle.com,
jannh@google.com, pfalcato@suse.de,
baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
nphamcs@gmail.com, bhe@redhat.com, baohua@kernel.org,
youngjun.park@lge.com, ziy@nvidia.com, kas@kernel.org,
willy@infradead.org, yuzhao@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
anshuman.khandual@arm.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH 4/9] mm/memory: Batch set uffd-wp markers during zapping
Date: Tue, 10 Mar 2026 13:00:08 +0530
Message-ID: <20260310073013.4069309-5-dev.jain@arm.com>
In-Reply-To: <20260310073013.4069309-1-dev.jain@arm.com>

In preparation for the next patch, enable batch installation of uffd-wp
marker ptes. The code paths that pass nr > 1 to
zap_install_uffd_wp_if_needed() obtain that nr from either
folio_pte_batch() or swap_pte_batch(), which guarantees that all ptes in
the batch lie in the same VMA (and therefore agree on whether the VMA is
anonymous and whether it is uffd-wp armed), and that the ptes are either
all uffd-wp marked or all unmarked.
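The uniformity guarantee described above can be sketched with a toy model
(plain structs standing in for ptes; all names here are illustrative, not
kernel API):

```c
#include <stddef.h>
#include <stdbool.h>

/* Toy entry: a present flag plus a uffd-wp flag, standing in for a pte. */
struct toy_pte {
	bool present;
	bool uffd_wp;
};

/*
 * Count how many consecutive entries starting at idx share the first
 * entry's uffd-wp state -- a toy model of the uniformity that
 * folio_pte_batch()/swap_pte_batch() guarantee for the nr they return:
 * every entry in the returned batch agrees on the uffd-wp marking.
 */
static size_t toy_pte_batch(const struct toy_pte *tab, size_t idx,
			    size_t max, size_t limit)
{
	size_t nr = 1;

	while (nr < limit && idx + nr < max &&
	       tab[idx + nr].uffd_wp == tab[idx].uffd_wp)
		nr++;
	return nr;
}
```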
Note that we must use set_pte_at() in a loop instead of set_ptes(),
since the latter cannot handle a present->non-present conversion for
nr_pages > 1.
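The per-entry store pattern this forces can be sketched with a toy model
(ordinary arrays standing in for the page table and set_pte_at(); all
names here are illustrative, not kernel API):

```c
#include <stddef.h>

/* Stands in for make_pte_marker(PTE_MARKER_UFFD_WP). */
#define TOY_MARKER 0xdeadbeefu

/* Toy stand-in for set_pte_at(): writes exactly one entry. */
static void toy_set_pte_at(unsigned int *table, size_t idx, unsigned int val)
{
	table[idx] = val;
}

/*
 * Install the marker over nr consecutive entries, one store per entry.
 * Mirrors the loop in install_uffd_wp_ptes_if_needed(): a batched store
 * helper cannot be used because it does not handle the
 * present->non-present transition for nr > 1.
 */
static void toy_install_markers(unsigned int *table, size_t start, size_t nr)
{
	for (size_t i = 0; i < nr; i++)
		toy_set_pte_at(table, start + i, TOY_MARKER);
}
```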
Convert the documentation of install_uffd_wp_ptes_if_needed() to
kerneldoc format.
No functional change is intended.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
include/linux/mm_inline.h | 37 +++++++++++++++++++++++--------------
mm/memory.c | 20 +-------------------
mm/rmap.c | 2 +-
3 files changed, 25 insertions(+), 34 deletions(-)
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index ad50688d89dba..d69b9abbdf2a7 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -560,21 +560,30 @@ static inline pte_marker copy_pte_marker(
return dstm;
}
-/*
- * If this pte is wr-protected by uffd-wp in any form, arm the special pte to
- * replace a none pte. NOTE! This should only be called when *pte is already
+/**
+ * install_uffd_wp_ptes_if_needed - install uffd-wp markers on PTEs that map
+ * consecutive pages of the same large folio.
+ * @vma: The VMA the pages are mapped into.
+ * @addr: Address the first page of this batch is mapped at.
+ * @pte: Page table pointer for the first entry of this batch.
+ * @pteval: Old value of the entry pointed to by @pte.
+ * @nr: Number of entries to install markers for (batch size).
+ *
+ * If the ptes were wr-protected by uffd-wp in any form, arm special ptes to
+ * replace none ptes. NOTE! This should only be called when *pte is already
* cleared so we will never accidentally replace something valuable. Meanwhile
* none pte also means we are not demoting the pte so tlb flushed is not needed.
* E.g., when pte cleared the caller should have taken care of the tlb flush.
*
- * Must be called with pgtable lock held so that no thread will see the none
- * pte, and if they see it, they'll fault and serialize at the pgtable lock.
+ * Context: The caller holds the page table lock. The PTEs map consecutive
+ * pages that belong to the same folio. The PTEs are all in the same PMD
+ * and the same VMA.
*
- * Returns true if an uffd-wp pte was installed, false otherwise.
+ * Return: true if uffd-wp ptes were installed, false otherwise.
*/
static inline bool
-pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
- pte_t *pte, pte_t pteval)
+install_uffd_wp_ptes_if_needed(struct vm_area_struct *vma, unsigned long addr,
+ pte_t *pte, pte_t pteval, unsigned int nr)
{
bool arm_uffd_pte = false;
@@ -604,13 +613,13 @@ pte_install_uffd_wp_if_needed(struct vm_area_struct *vma, unsigned long addr,
if (unlikely(pte_swp_uffd_wp_any(pteval)))
arm_uffd_pte = true;
- if (unlikely(arm_uffd_pte)) {
- set_pte_at(vma->vm_mm, addr, pte,
- make_pte_marker(PTE_MARKER_UFFD_WP));
- return true;
- }
+ if (likely(!arm_uffd_pte))
+ return false;
- return false;
+ for (int i = 0; i < nr; ++i, ++pte, addr += PAGE_SIZE)
+ set_pte_at(vma->vm_mm, addr, pte, make_pte_marker(PTE_MARKER_UFFD_WP));
+
+ return true;
}
static inline bool vma_has_recency(const struct vm_area_struct *vma)
diff --git a/mm/memory.c b/mm/memory.c
index 38062f8e11656..768646c0b3b6a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1594,29 +1594,11 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
unsigned long addr, pte_t *pte, int nr,
struct zap_details *details, pte_t pteval)
{
- bool was_installed = false;
-
- if (!uffd_supports_wp_marker())
- return false;
-
- /* Zap on anonymous always means dropping everything */
- if (vma_is_anonymous(vma))
- return false;
-
if (zap_drop_markers(details))
return false;
- for (;;) {
- /* the PFN in the PTE is irrelevant. */
- if (pte_install_uffd_wp_if_needed(vma, addr, pte, pteval))
- was_installed = true;
- if (--nr == 0)
- break;
- pte++;
- addr += PAGE_SIZE;
- }
+ return install_uffd_wp_ptes_if_needed(vma, addr, pte, pteval, nr);
- return was_installed;
}
static __always_inline void zap_present_folio_ptes(struct mmu_gather *tlb,
diff --git a/mm/rmap.c b/mm/rmap.c
index a61978141ee3f..a7570cd037344 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2235,7 +2235,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* we may want to replace a none pte with a marker pte if
* it's file-backed, so we don't lose the tracking info.
*/
- pte_install_uffd_wp_if_needed(vma, address, pvmw.pte, pteval);
+ install_uffd_wp_ptes_if_needed(vma, address, pvmw.pte, pteval, 1);
/* Update high watermark before we lower rss */
update_hiwater_rss(mm);
--
2.34.1