From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Dev Jain <dev.jain@arm.com>,
akpm@linux-foundation.org, ljs@kernel.org, hughd@google.com,
chrisl@kernel.org, kasong@tencent.com
Cc: riel@surriel.com, liam@infradead.org, vbabka@kernel.org,
harry@kernel.org, jannh@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, qi.zheng@linux.dev,
shakeel.butt@linux.dev, baohua@kernel.org,
axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com,
rppt@kernel.org, surenb@google.com, mhocko@suse.com,
baolin.wang@linux.alibaba.com, shikemeng@huaweicloud.com,
nphamcs@gmail.com, bhe@redhat.com, youngjun.park@lge.com,
pfalcato@suse.de, ryan.roberts@arm.com,
anshuman.khandual@arm.com
Subject: Re: [PATCH v3 8/9] mm/rmap: Add batched version of folio_try_share_anon_rmap_pte
Date: Mon, 11 May 2026 10:13:14 +0200 [thread overview]
Message-ID: <95b8224d-3ed4-4fe3-9954-d5ba0aa373f8@kernel.org> (raw)
In-Reply-To: <20260506094504.2588857-9-dev.jain@arm.com>
On 5/6/26 11:45, Dev Jain wrote:
> To enable batched unmapping of anonymous folios, we need to handle the
> sharing of exclusive pages. Hence, a batched version of
> folio_try_share_anon_rmap_pte is required.
>
> Currently, the sole purpose of nr_pages in __folio_try_share_anon_rmap is
> to do some rmap sanity checks. Add helpers to clear the PageAnonExclusive
> bit on a batch of nr_pages. Note that __folio_try_share_anon_rmap can
> receive nr_pages == HPAGE_PMD_NR from the PMD path, but currently we only
> clear the bit on the head page. Retain this behaviour by setting
> nr_pages = 1 in case the caller is folio_try_share_anon_rmap_pmd.
>
> While at it, convert nr_pages to unsigned long to future-proof against
> overflow in case P4D-huge mappings etc. get supported down the road.
> I haven't made this change in every function receiving nr_pages in
> try_to_unmap_one; perhaps that can be done incrementally.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
>   include/linux/mm.h   | 11 +++++++++++
>   include/linux/rmap.h | 27 ++++++++++++++++++++-------
>   2 files changed, 31 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 31e27ff6a35fa..0b77329cf57a4 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -243,6 +243,17 @@ static inline unsigned long folio_page_idx(const struct folio *folio,
>  	return page - &folio->page;
>  }
>  
> +static __always_inline void folio_clear_pages_anon_exclusive(struct page *page,
> +		unsigned long nr_pages)
> +{
> +	for (;;) {
> +		ClearPageAnonExclusive(page);
> +		if (--nr_pages == 0)
> +			break;
> +		++page;
> +	}
> +}

A helper with a folio_ prefix that doesn't actually consume a folio is odd.
I'd prefer we don't add this. Is there a chance to simply change
__folio_try_share_anon_rmap instead, so we get the single loop inline?
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 8dc0871e5f00..5a1c874b2112 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -708,16 +708,13 @@ static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
 static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
 		struct page *page, int nr_pages, enum pgtable_level level)
 {
+	/* device private folios cannot get pinned via GUP. */
+	const bool pinnable = !folio_is_device_private(folio);
+
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 	VM_WARN_ON_FOLIO(!PageAnonExclusive(page), folio);
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
 
-	/* device private folios cannot get pinned via GUP. */
-	if (unlikely(folio_is_device_private(folio))) {
-		ClearPageAnonExclusive(page);
-		return 0;
-	}
-
 	/*
 	 * We have to make sure that when we clear PageAnonExclusive, that
 	 * the page is not pinned and that concurrent GUP-fast won't succeed in
@@ -760,19 +757,21 @@ static __always_inline int __folio_try_share_anon_rmap(struct folio *folio,
 	 * so we use explicit ones here.
 	 */
 
-	/* Paired with the memory barrier in try_grab_folio(). */
-	if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
-		smp_mb();
+	if (pinnable) {
+		/* Paired with the memory barrier in try_grab_folio(). */
+		if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
+			smp_mb();
 
-	if (unlikely(folio_maybe_dma_pinned(folio)))
-		return -EBUSY;
+		if (unlikely(folio_maybe_dma_pinned(folio)))
+			return -EBUSY;
+	}
 
 	ClearPageAnonExclusive(page);
 
 	/*
 	 * This is conceptually a smp_wmb() paired with the smp_rmb() in
 	 * gup_must_unshare().
	 */
-	if (IS_ENABLED(CONFIG_HAVE_GUP_FAST))
+	if (pinnable && IS_ENABLED(CONFIG_HAVE_GUP_FAST))
 		smp_mb__after_atomic();
 	return 0;
--
Cheers,
David