From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, axelrasmussen@google.com,
yuanchu@google.com, david@kernel.org, hughd@google.com,
chrisl@kernel.org, kasong@tencent.com
Cc: weixugc@google.com, ljs@kernel.org, Liam.Howlett@oracle.com,
vbabka@kernel.org, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, riel@surriel.com, harry.yoo@oracle.com,
jannh@google.com, pfalcato@suse.de,
baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
nphamcs@gmail.com, bhe@redhat.com, baohua@kernel.org,
youngjun.park@lge.com, ziy@nvidia.com, kas@kernel.org,
willy@infradead.org, yuzhao@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, ryan.roberts@arm.com,
anshuman.khandual@arm.com, Dev Jain <dev.jain@arm.com>
Subject: [PATCH 8/9] mm/rmap: introduce folio_try_share_anon_rmap_ptes
Date: Tue, 10 Mar 2026 13:00:12 +0530
Message-ID: <20260310073013.4069309-9-dev.jain@arm.com>
In-Reply-To: <20260310073013.4069309-1-dev.jain@arm.com>
In the quest to enable batched unmapping of anonymous folios, we need to
handle the sharing of exclusive pages in batches. Hence, a batched version
of folio_try_share_anon_rmap_pte is required.
Currently, the sole purpose of nr_pages in __folio_try_share_anon_rmap is
to do some rmap sanity checks. Add helpers to set and clear the
PageAnonExclusive bit on a batch of nr_pages. Note that
__folio_try_share_anon_rmap can receive nr_pages == HPAGE_PMD_NR from the
PMD path, but currently we only clear the bit on the head page. Retain this
behaviour by setting nr_pages = 1 when the caller is
folio_try_share_anon_rmap_pmd.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
include/linux/page-flags.h | 11 +++++++++++
include/linux/rmap.h | 28 ++++++++++++++++++++++++++--
mm/rmap.c | 2 +-
3 files changed, 38 insertions(+), 3 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 0e03d816e8b9d..1d74ed9a28c41 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -1178,6 +1178,17 @@ static __always_inline void __ClearPageAnonExclusive(struct page *page)
__clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags.f);
}
+static __always_inline void ClearPagesAnonExclusive(struct page *page,
+ unsigned int nr)
+{
+ for (;;) {
+ ClearPageAnonExclusive(page);
+ if (--nr == 0)
+ break;
+ ++page;
+ }
+}
+
#ifdef CONFIG_MMU
#define __PG_MLOCKED (1UL << PG_mlocked)
#else
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 1b7720c66ac87..7a67776dca3fe 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -712,9 +712,13 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
VM_WARN_ON_FOLIO(!PageAnonExclusive(page), folio);
__folio_rmap_sanity_checks(folio, page, nr_pages, level);
+ /* We only clear anon-exclusive from head page of PMD folio */
+ if (level == PGTABLE_LEVEL_PMD)
+ nr_pages = 1;
+
/* device private folios cannot get pinned via GUP. */
if (unlikely(folio_is_device_private(folio))) {
- ClearPageAnonExclusive(page);
+ ClearPagesAnonExclusive(page, nr_pages);
return 0;
}
@@ -766,7 +770,7 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
if (unlikely(folio_maybe_dma_pinned(folio)))
return -EBUSY;
- ClearPageAnonExclusive(page);
+ ClearPagesAnonExclusive(page, nr_pages);
/*
* This is conceptually a smp_wmb() paired with the smp_rmb() in
@@ -804,6 +808,26 @@ static inline int folio_try_share_anon_rmap_pte(struct folio *folio,
return __folio_try_share_anon_rmap(folio, page, 1, PGTABLE_LEVEL_PTE);
}
+/**
+ * folio_try_share_anon_rmap_ptes - try marking exclusive anonymous pages
+ * mapped by PTEs possibly shared to prepare
+ * for KSM or temporary unmapping
+ * @folio: The folio to share a mapping of
+ * @page: The first mapped exclusive page of the batch
+ * @nr_pages: The number of pages to share (batch size)
+ *
+ * See folio_try_share_anon_rmap_pte for full description.
+ *
+ * Context: The caller needs to hold the page table lock and has to have the
+ * page table entries cleared/invalidated. Those PTEs used to map consecutive
+ * pages of the folio passed here. The PTEs are all in the same PMD and VMA.
+ */
+static inline int folio_try_share_anon_rmap_ptes(struct folio *folio,
+ struct page *page, unsigned int nr)
+{
+ return __folio_try_share_anon_rmap(folio, page, nr, PGTABLE_LEVEL_PTE);
+}
+
/**
* folio_try_share_anon_rmap_pmd - try marking an exclusive anonymous page
* range mapped by a PMD possibly shared to
diff --git a/mm/rmap.c b/mm/rmap.c
index 42f6b00cced01..bba5b571946d8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2300,7 +2300,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
/* See folio_try_share_anon_rmap(): clear PTE first. */
if (anon_exclusive &&
- folio_try_share_anon_rmap_pte(folio, subpage)) {
+ folio_try_share_anon_rmap_ptes(folio, subpage, 1)) {
folio_put_swap(folio, subpage, 1);
set_pte_at(mm, address, pvmw.pte, pteval);
goto walk_abort;
--
2.34.1
Thread overview: 47+ messages
2026-03-10 7:30 [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping Dev Jain
2026-03-10 7:30 ` [PATCH 1/9] mm/rmap: make nr_pages signed in try_to_unmap_one Dev Jain
2026-03-10 7:56 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:06 ` David Hildenbrand (Arm)
2026-03-10 8:23 ` Dev Jain
2026-03-10 12:40 ` Matthew Wilcox
2026-03-11 4:54 ` Dev Jain
2026-03-10 7:30 ` [PATCH 2/9] mm/rmap: initialize nr_pages to 1 at loop start " Dev Jain
2026-03-10 8:10 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:31 ` Dev Jain
2026-03-10 8:39 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:43 ` Dev Jain
2026-03-10 7:30 ` [PATCH 3/9] mm/rmap: refactor lazyfree unmap commit path to commit_ttu_lazyfree_folio() Dev Jain
2026-03-10 8:19 ` Lorenzo Stoakes (Oracle)
2026-03-10 8:42 ` Dev Jain
2026-03-19 15:53 ` Lorenzo Stoakes (Oracle)
2026-03-10 7:30 ` [PATCH 4/9] mm/memory: Batch set uffd-wp markers during zapping Dev Jain
2026-03-10 7:30 ` [PATCH 5/9] mm/rmap: batch unmap folios belonging to uffd-wp VMAs Dev Jain
2026-03-10 8:34 ` Lorenzo Stoakes (Oracle)
2026-03-10 23:32 ` Barry Song
2026-03-11 4:14 ` Barry Song
2026-03-11 4:52 ` Dev Jain
2026-03-11 4:56 ` Dev Jain
2026-03-10 7:30 ` [PATCH 6/9] mm/swapfile: Make folio_dup_swap batchable Dev Jain
2026-03-10 8:27 ` Kairui Song
2026-03-10 8:46 ` Dev Jain
2026-03-10 8:49 ` Lorenzo Stoakes (Oracle)
2026-03-11 5:42 ` Dev Jain
2026-03-19 15:26 ` Lorenzo Stoakes (Oracle)
2026-03-19 16:47 ` Matthew Wilcox
2026-03-18 0:20 ` kernel test robot
2026-03-10 7:30 ` [PATCH 7/9] mm/swapfile: Make folio_put_swap batchable Dev Jain
2026-03-10 8:29 ` Kairui Song
2026-03-10 8:50 ` Dev Jain
2026-03-10 8:55 ` Lorenzo Stoakes (Oracle)
2026-03-18 1:04 ` kernel test robot
2026-03-10 7:30 ` Dev Jain [this message]
2026-03-10 9:38 ` [PATCH 8/9] mm/rmap: introduce folio_try_share_anon_rmap_ptes Lorenzo Stoakes (Oracle)
2026-03-11 8:09 ` Dev Jain
2026-03-12 8:19 ` Wei Yang
2026-03-19 15:47 ` Lorenzo Stoakes (Oracle)
2026-04-08 7:14 ` Dev Jain
2026-03-10 7:30 ` [PATCH 9/9] mm/rmap: enable batch unmapping of anonymous folios Dev Jain
2026-03-10 8:02 ` [PATCH 0/9] mm/rmap: Optimize anonymous large folio unmapping Lorenzo Stoakes (Oracle)
2026-03-10 9:28 ` Dev Jain
2026-03-10 12:59 ` Lance Yang
2026-03-11 8:11 ` Dev Jain