From: Usama Arif <usama.arif@linux.dev>
To: "David Hildenbrand (Arm)" <david@kernel.org>
Cc: Usama Arif <usama.arif@linux.dev>, Nico Pache <npache@redhat.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
yuzhao@google.com, usamaarif642@gmail.com, lance.yang@linux.dev,
baohua@kernel.org, dev.jain@arm.com, ryan.roberts@arm.com,
liam@infradead.org, baolin.wang@linux.alibaba.com,
ziy@nvidia.com, ljs@kernel.org, akpm@linux-foundation.org
Subject: Re: [RFC] mm: restrict zero-page remapping to underused THP splits
Date: Sun, 10 May 2026 04:39:59 -0700
Message-ID: <20260510114001.600681-1-usama.arif@linux.dev>
In-Reply-To: <04ea0e68-de56-49c4-8c9f-1734139d5e7f@kernel.org>
On Fri, 8 May 2026 23:32:09 +0200 "David Hildenbrand (Arm)" <david@kernel.org> wrote:
> On 5/8/26 19:05, Nico Pache wrote:
> > Since commit b1f202060afe ("mm: remap unused subpages to shared zeropage
> > when splitting isolated thp"), splitting an anonymous THP remaps all
> > zero-filled subpages to the shared zeropage via TTU_USE_SHARED_ZEROPAGE.
> > This flag is set unconditionally for every anonymous folio split,
> > including splits triggered by KSM.
>
> And even when the underused scanner is effectively disabled on a system. Hm.
>
> I don't quite like that we scan for zeropages when nobody even requested us to
> split because of zeropages.
>
> I can see why we would want to scan for zeropages in a setup where the underused
> scanner is active, even when the split was triggered by someone/something else
> (below).
>
> [...]
>
> > /**
> > @@ -4340,7 +4341,13 @@ int folio_split(struct folio *folio, unsigned int new_order,
> > struct page *split_at, struct list_head *list)
> > {
> > return __folio_split(folio, new_order, split_at, &folio->page, list,
> > - SPLIT_TYPE_NON_UNIFORM);
> > + SPLIT_TYPE_NON_UNIFORM, false);
> > +}
> > +
> > +int folio_split_underused(struct folio *folio)
> > +{
> > + return __folio_split(folio, 0, &folio->page, &folio->page,
> > + NULL, SPLIT_TYPE_NON_UNIFORM, true);
> > }
> >
> > /**
> > @@ -4559,7 +4566,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
> > }
> > if (!folio_trylock(folio))
> > goto requeue;
> > - if (!split_folio(folio)) {
> > + if (!folio_split_underused(folio)) {
> > did_split = true;
> > if (underused)
> > count_vm_event(THP_UNDERUSED_SPLIT_PAGE);
>
> In general, this looks clean.
>
> But imagine the following: someone splits the THP for another reason: for
> example, because migration is unable to allocate a 2M THP, or because we have to
> split on swapout etc.
>
> Not freeing the zero-filled pages means that these pages cannot be reclaimed
> anymore easily. We split a possibly underused THP but didn't free the memory.
>
> The only way to free the memory would be to wait for another collapse, and then
> have the new THP be detected as underused.
>
> Hm.
>
> (1) As you say, the alternative is to let KSM say that it wants to handle the
> zero-filled pages itself. I'm not the biggest fan of that approach. We still
> have two mechanisms interacting to some degree.
>
> (2) Another approach is to just let KSM handle this in VMAs that are marked as
> mergeable while KSM is active. That is, we check for VM_MERGEABLE and ksm_run ==
> KSM_RUN_MERGE in try_to_map_unused_to_zeropage() to just let KSM do its thing.
>
> That really just stops both mechanisms from interacting.
>
> (3) Yet another approach I could think of (in general) is to disable the
> zeropage handling on a system where the underused splitting is entirely disabled.
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e9d499da0ac7..5eca99271957 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -82,6 +82,14 @@ unsigned long huge_anon_orders_madvise __read_mostly;
> unsigned long huge_anon_orders_inherit __read_mostly;
> static bool anon_orders_configured __initdata;
>
> +static bool thp_underused_split_active(void)
> +{
> + if (!split_underused_thp)
> + return false;
> +
> + return khugepaged_max_ptes_none != HPAGE_PMD_NR - 1;
> +}
> +
> static inline bool file_thp_enabled(struct vm_area_struct *vma)
> {
> struct inode *inode;
> @@ -4188,7 +4196,8 @@ static int __folio_split(struct folio *folio, unsigned int
> new_order,
> if (nr_shmem_dropped)
> shmem_uncharge(mapping->host, nr_shmem_dropped);
>
> - if (!ret && is_anon && !folio_is_device_private(folio))
> + if (!ret && is_anon && !folio_is_device_private(folio) &&
> + thp_underused_split_active())
> ttu_flags = TTU_USE_SHARED_ZEROPAGE;
>
> remap_page(folio, 1 << old_order, ttu_flags);
> @@ -4497,7 +4506,7 @@ static bool thp_underused(struct folio *folio)
> int num_zero_pages = 0, num_filled_pages = 0;
> int i;
>
> - if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
> + if (!thp_underused_split_active())
> return false;
>
> if (folio_contain_hwpoisoned_page(folio))
>
>
>
> I tend to like (2), and maybe (3) on top. Opinions?
>
Hello!

I think (3) definitely makes sense.

I have not had a deep look at KSM until just now, so everything below might
be naive.. :)
What I see is that KSM scans a THP as 512 individual 4K subpages and splits the
THP whenever it actually wants to merge a single 4K chunk. That seems like a
lot of work for a single 4K page?
One thing that came to my mind is to have a separate tree for THPs and only
merge THPs that have the same content, but the probability of encountering
2M pages with the same content seems extremely low, so this is probably a bad idea.
An alternative: does it even make sense for KSM to process and split THPs the
way it works now? IMO this is a lot of work for a single 4K merge. The
shrinker is designed to release memory when it's needed, i.e. at reclaim, at
which point IMO free memory is more important than performance. But KSM runs
all the time.. so constantly splitting THPs every time a single 4K page can be
merged just hurts performance all the time. If someone cares about memory,
they should be running the shrinker. Is a better alternative that KSM skips
THPs, the THP shrinker splits THPs into 4K subpages when memory is needed, and
only then KSM gets those 4K subpages?

The above sounds like reworking KSM, but I just wanted to put it out there.
(2) + (3) sounds like a good solution, but I wonder if the above alternative
of KSM just skipping THPs might be better?
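Also, just to check my reading of (2): the gate would be something like the
kernel-style pseudocode below. VM_MERGEABLE and KSM_RUN_MERGE are real, but the
helper name, the visibility of ksm_run, and the exact call site are assumptions
on my part, untested:

```
/* Sketch only: skip zeropage remapping when KSM is responsible for the
 * VMA, so the two mechanisms stop interacting. */
static bool ksm_handles_vma(struct vm_area_struct *vma)
{
	return (vma->vm_flags & VM_MERGEABLE) && ksm_run == KSM_RUN_MERGE;
}

/* ...then bail out early in try_to_map_unused_to_zeropage(): */
	if (ksm_handles_vma(pvmw->vma))
		return false;
```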
Thread overview: 6+ messages
2026-05-08 17:05 [RFC] mm: restrict zero-page remapping to underused THP splits Nico Pache
2026-05-08 21:32 ` David Hildenbrand (Arm)
2026-05-09 8:25 ` Lance Yang
2026-05-10 11:39 ` Usama Arif [this message]
2026-05-11 6:36 ` David Hildenbrand (Arm)
2026-05-09 3:21 ` Lance Yang