From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, david@kernel.org, hughd@google.com,
chrisl@kernel.org
Cc: ljs@kernel.org, Liam.Howlett@oracle.com, vbabka@kernel.org,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
kasong@tencent.com, qi.zheng@linux.dev, shakeel.butt@linux.dev,
baohua@kernel.org, axelrasmussen@google.com, yuanchu@google.com,
weixugc@google.com, riel@surriel.com, harry@kernel.org,
jannh@google.com, pfalcato@suse.de,
baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ryan.roberts@arm.com, anshuman.khandual@arm.com,
Dev Jain <dev.jain@arm.com>
Subject: [PATCH v2 8/9] mm/rmap: Add batched version of folio_try_share_anon_rmap_pte
Date: Fri, 10 Apr 2026 16:02:03 +0530 [thread overview]
Message-ID: <20260410103204.120409-9-dev.jain@arm.com> (raw)
In-Reply-To: <20260410103204.120409-1-dev.jain@arm.com>
To enable batched unmapping of anonymous folios, we need to handle the
sharing of exclusive pages. Hence, a batched version of
folio_try_share_anon_rmap_pte is required.
Currently, the sole purpose of nr_pages in __folio_try_share_anon_rmap is
to perform rmap sanity checks. Add a helper to clear the
PageAnonExclusive bit on a batch of nr_pages pages. Note that
__folio_try_share_anon_rmap can receive nr_pages == HPAGE_PMD_NR from the
PMD path, but currently we only clear the bit on the head page. Retain
this behaviour by setting nr_pages = 1 when the caller is
folio_try_share_anon_rmap_pmd.
While at it, convert nr_pages to unsigned long to future-proof against
overflow in case P4D-huge mappings etc. get supported down the road.
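For illustration only, a batched unmap path is expected to consume the
new helper roughly as sketched below. This is not part of this patch:
the surrounding context (subpage/nr_pages discovery, the walk_abort
label) is an assumption about how patch 9 uses the API, and error
handling is elided.

	/*
	 * Hypothetical caller sketch: the PTE batch has already been
	 * cleared/invalidated under the page table lock.
	 */
	if (anon_exclusive &&
	    folio_try_share_anon_rmap_ptes(folio, subpage, nr_pages)) {
		/* A concurrent GUP pin was detected: restore the PTEs. */
		set_ptes(mm, address, pvmw.pte, pteval, nr_pages);
		goto walk_abort;
	}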
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
include/linux/mm.h | 11 +++++++++++
include/linux/rmap.h | 27 ++++++++++++++++++++-------
2 files changed, 31 insertions(+), 7 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 633bbf9a184a6..2d20954da652a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -243,6 +243,17 @@ static inline unsigned long folio_page_idx(const struct folio *folio,
return page - &folio->page;
}
+static __always_inline void folio_clear_pages_anon_exclusive(struct page *page,
+ unsigned long nr_pages)
+{
+ for (;;) {
+ ClearPageAnonExclusive(page);
+ if (--nr_pages == 0)
+ break;
+ ++page;
+ }
+}
+
static inline struct folio *lru_to_folio(struct list_head *head)
{
return list_entry((head)->prev, struct folio, lru);
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 8dc0871e5f001..f3b3ee3955afc 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -706,15 +706,19 @@ static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
}
static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
- struct page *page, int nr_pages, enum pgtable_level level)
+ struct page *page, unsigned long nr_pages, enum pgtable_level level)
{
VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
VM_WARN_ON_FOLIO(!PageAnonExclusive(page), folio);
__folio_rmap_sanity_checks(folio, page, nr_pages, level);
+	/* We only clear anon-exclusive on the head page of a PMD-mapped folio */
+ if (level == PGTABLE_LEVEL_PMD)
+ nr_pages = 1;
+
/* device private folios cannot get pinned via GUP. */
if (unlikely(folio_is_device_private(folio))) {
- ClearPageAnonExclusive(page);
+ folio_clear_pages_anon_exclusive(page, nr_pages);
return 0;
}
@@ -766,7 +770,7 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
if (unlikely(folio_maybe_dma_pinned(folio)))
return -EBUSY;
- ClearPageAnonExclusive(page);
+ folio_clear_pages_anon_exclusive(page, nr_pages);
/*
* This is conceptually a smp_wmb() paired with the smp_rmb() in
@@ -778,11 +782,12 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
}
/**
- * folio_try_share_anon_rmap_pte - try marking an exclusive anonymous page
- * mapped by a PTE possibly shared to prepare
+ * folio_try_share_anon_rmap_ptes - try marking exclusive anonymous pages
+ * mapped by PTEs possibly shared to prepare
* for KSM or temporary unmapping
* @folio: The folio to share a mapping of
- * @page: The mapped exclusive page
+ * @page: The first mapped exclusive page of the batch in the folio
+ * @nr_pages: The number of pages to share in the folio (batch size)
*
* The caller needs to hold the page table lock and has to have the page table
* entries cleared/invalidated.
@@ -797,11 +802,17 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
*
* Returns 0 if marking the mapped page possibly shared succeeded. Returns
* -EBUSY otherwise.
*/
+static inline int folio_try_share_anon_rmap_ptes(struct folio *folio,
+ struct page *page, unsigned long nr_pages)
+{
+ return __folio_try_share_anon_rmap(folio, page, nr_pages, PGTABLE_LEVEL_PTE);
+}
+
static inline int folio_try_share_anon_rmap_pte(struct folio *folio,
struct page *page)
{
- return __folio_try_share_anon_rmap(folio, page, 1, PGTABLE_LEVEL_PTE);
+ return folio_try_share_anon_rmap_ptes(folio, page, 1);
}
/**
--
2.34.1
Thread overview: 11+ messages
2026-04-10 10:31 [PATCH v2 0/9] Optimize anonymous large folio unmapping Dev Jain
2026-04-10 10:31 ` [PATCH v2 1/9] mm/rmap: initialize nr_pages to 1 at loop start in try_to_unmap_one Dev Jain
2026-04-10 10:31 ` [PATCH v2 2/9] mm/rmap: refactor hugetlb pte clearing " Dev Jain
2026-04-10 10:31 ` [PATCH v2 3/9] mm/rmap: refactor some code around lazyfree folio unmapping Dev Jain
2026-04-10 10:31 ` [PATCH v2 4/9] mm/memory: Batch set uffd-wp markers during zapping Dev Jain
2026-04-10 10:32 ` [PATCH v2 5/9] mm/rmap: batch unmap folios belonging to uffd-wp VMAs Dev Jain
2026-04-10 10:32 ` [PATCH v2 6/9] mm/swapfile: Add batched version of folio_dup_swap Dev Jain
2026-04-10 10:32 ` [PATCH v2 7/9] mm/swapfile: Add batched version of folio_put_swap Dev Jain
2026-04-10 10:32 ` Dev Jain [this message]
2026-04-10 10:32 ` [PATCH v2 9/9] mm/rmap: enable batch unmapping of anonymous folios Dev Jain
2026-04-10 13:53 ` [PATCH v2 0/9] Optimize anonymous large folio unmapping Lorenzo Stoakes