From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Luka Bai <lukafocus@icloud.com>
Cc: linux-mm@kvack.org, Jonathan Corbet <corbet@lwn.net>,
Shuah Khan <skhan@linuxfoundation.org>,
Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <ljs@kernel.org>, Zi Yan <ziy@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <liam@infradead.org>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
Vlastimil Babka <vbabka@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, Jann Horn <jannh@google.com>,
Arnd Bergmann <arnd@arndb.de>, Kairui Song <kasong@tencent.com>,
linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
linux-doc@vger.kernel.org, Luka Bai <lukabai@tencent.com>
Subject: Re: [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry
Date: Fri, 1 May 2026 20:30:39 +0200 [thread overview]
Message-ID: <785b6164-aa71-4fc4-a4f3-f4977b7db30e@kernel.org> (raw)
In-Reply-To: <afTR48WxGnIpxK8a@LUKABAI-MC1>
>>
>> Note that there was a recent related discussion for executable, which was rejected:
>>
>> https://lore.kernel.org/r/20251226100337.4171191-1-zhangqilong3@huawei.com
>>
>
> Yes, I know this history, and I know that it will cost some memory or latency.
> That's why I was wondering whether I could add a switch to make it
> configurable :).
A switch for something like that is just not a good fit.
For example, for a short-lived child (e.g., fork+exec) it usually makes no sense
to COW a larger chunk of address space when you know that it will exit
immediately either way and free up the memory.
>>>
>>> In addition to the problem above, this logic can also create some
>>> deficiencies for THP itself. Currently THP is just a "best-effort" choice
>>> with no "certainty". THP is easily split into multiple small pages
>>> on common calling paths like reclaim and COW. A transparent split
>>> can cause throughput fluctuation for some workloads. For these workloads,
>>> we may want to give THP some "certainty" just like hugetlbfs,
>>
>> There are no such guarantees, though. And we wouldn't want to commit to any such
>> guarantees today. For example, simple page migration can split the folio.
>> Allocation failures will fall back to small pages, etc.
>>
>> If you need guarantees, use hugetlb for now.
>>
>
> The reason why I want to use THP over hugetlb is that I need reclamation for my
> workload :). There are many processes in my workload that need 2M-aligned
> folios for better performance, and we want to reclaim them automatically
> when the processes don't need the folios.
Can you share some details how exactly that is supposed to work?
> But hugetlbfs cannot do passive reclamation,
> from what I know (except active madvise by the processes themselves). And using
Right, you can only return hugetlb folios by doing MADV_DONTNEED or munmap().
> THP can easily split the hugepages. So that’s why I would like to add certainty for THP,
Repeat after me: there are no guarantees. There is no certainty :)
> and use THP as the backing for these processes, because THP is very well integrated with
> the swap system and other filesystems. And from what I checked,
> it seems the most common cases for splitting a THP are COW and swapping, so I am trying
> to handle these two scenarios (but coincidentally, PMD swapping was merged in
> https://lore.kernel.org/all/D3F08F85-76E0-4C5A-ABA1-537C68E038B8@nvidia.com/
> a few days ago, which is a great implementation :) ).
Right, but that really only changes how we map large folios, not how we allocate
them. There are no guarantees.
>
>>> The effect
>>> we want is: after some customized setup, as long as the system has a usable
>>> folio and the virtual memory alignment permits (or we set it up to), we can
>>> make sure we always use THP for it; the system will never split it unless
>>> the user wants to do so.
>>>
>>> This patchset is about both of the two things above. First, we add PMD-level
>>> THP COW support by revising the code in do_huge_pmd_wp_page; we added a
>>> switch for it because different workloads may need different resources,
>>
>> The switch is bad, and we won't accept any toggle like that. A system-wide
>> setting does not make sense for such behavior.
>>
>
> Oh, the reason why I added a global switch is also the scenario I mentioned
> above: I want those processes to always use PMD-sized folios as backing to
> ensure performance.
"Always" is wishful thinking in many scenarios I'm afraid.
> COW is truly not as common as swap-out/swap-in; it just can happen
> sometimes, which I guess may be due to image duplication. Setting the system
> globally is more convenient for my situation :). I can go without this global switch
> if that's more reasonable.
>
>> A per-VMA flag? Maybe, but I expect pushback, as it is way too specific.
>> So we'd have to find some concept that abstracts these semantics. But I expect
>> pushback there as well.
>>
>> We messed up enough with toggles in THP space, unfortunately.
>>
>> Also, anything that only works for PMD-sized THPs is a warning sign in 2026 :)
>>
>
> And for PMD-sized THPs: actually, I'm also considering adding more COW support
> for mTHP if upstream considers it useful, and for PUD-sized THPs
> as well. But I guess I have to handle stage 1 first: PMD-level COW :).
> Also, since PMD-sized folios are commonly used in my workload, I'm wondering about
> digging into PMD-sized KSM in the future, where I think PMD-sized COW may
> be even more useful :).
I'm afraid I have to stop you right there: there has to be a pretty convincing
story to add any of that. In particular KSM with large folios (/me shivering).
But I already don't buy the COW story. Just configure khugepaged in a better way
or use MADV_COLLAPSE and you don't really need to modify the kernel at all in
99.99% of the cases. (khugepaged needs a lot of tuning work; it's currently not
the smartest implementation.)
Maybe we could give khugepaged better direction about what to try scanning next
(e.g., where we just COW'ed a THP). Not sure; there are plenty of things to explore.
>
>> You don't really raise any concrete use cases or performance numbers for these
>> use cases. Some details about applications that use fork() and rely on such
>> behavior would be helpful.
>>
>
> Sorry about that; the concrete workload itself hasn't been finished yet.
Okay, that's what I thought after seeing no workloads and no performance numbers :)
If you don't know the workload, how can you claim that the additional latency
and/or memory consumption is not a problem?
Also: there is no guarantee that you will actually succeed in allocating a PMD
THP during a COW fault.
> Now it just
> can happen sometimes in my multi-2M-sized-process workload test. But the user
> of our 2M-sized folio scheme is not necessarily myself; it can also be
> userspace developers. I cannot guarantee that fork will not be used in the
> performance tests of their workloads, since that is a normal POSIX call. Maybe a little
> overthinking? :)
If your application cares about performance, you either shouldn't be using
fork(), or you should be using it very, very wisely (e.g., interaction with
multi-threading, MADV_DONTFORK, avoid touching memory in parent until child
completed).
> I just think swap and COW are the two main scenarios that may transparently split PMD-sized
> folios, so maybe we can solve them and make THP both reclaimable and stable.
There is page migration, MADV_DONTNEED, munmap/mremap/madvise/mprotect in sub-2M
blocks, memory failure handling and probably a lot more. THP allocation might
fail. THP swapout+swapin might fail to allocate THPs.
Tackling COW handling when you don't even know that it's a real problem seems
premature.
> Maybe
> that can make THP more widely used in real deployed environments, since the resources
> become more controllable for the users :). That's why I was thinking maybe
> implementing it with setup switches is a reasonable solution?
No magical toggles.
>
>> Note that an application that does fork() could use MADV_COLLAPSE after fork()
>> to make sure that it immediately gets THPs back.
>>
>> There is also the option to just use MADV_DONTFORK to not even share ranges with
>> a child process in the first place, avoiding page copies entirely.
>>
>
> MADV_DONTFORK and MADV_COLLAPSE are nice and great options :), but the former one seems
> to be a little wasteful :).
It's actually the right thing to do (tm) if you care about fork() performance
and know that your child will not actually need certain memory areas.
For example, in QEMU we use it to exclude all guest memory from fork(), heavily
improving fork() performance. [there are not a lot of fork() use cases left in
QEMU today, fortunately]
--
Cheers,
David
Thread overview: 13+ messages
2026-05-01 5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
2026-05-01 5:55 ` [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup Luka Bai
2026-05-01 5:55 ` [PATCH 2/5] mm: add pmd level THP COW parameter in sysfs Luka Bai
2026-05-01 5:55 ` [PATCH 3/5] mm: add pmd level THP COW judgement helpers Luka Bai
2026-05-01 5:55 ` [PATCH 4/5] mm: enable map_anon_folio_pmd_nopf to handle unshare Luka Bai
2026-05-01 5:55 ` [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry Luka Bai
2026-05-01 7:11 ` David Hildenbrand (Arm)
2026-05-01 15:01 ` Luka Bai
2026-05-01 7:07 ` [PATCH 0/5] mm: Support selecting doing direct " David Hildenbrand (Arm)
2026-05-01 16:16 ` Luka Bai
2026-05-01 18:30 ` David Hildenbrand (Arm) [this message]
2026-05-02 5:06 ` Luka Bai
2026-05-03 7:03 ` [syzbot ci] " syzbot ci