public inbox for linux-kernel@vger.kernel.org
From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Luka Bai <lukafocus@icloud.com>, linux-mm@kvack.org
Cc: Jonathan Corbet <corbet@lwn.net>,
	Shuah Khan <skhan@linuxfoundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Lorenzo Stoakes <ljs@kernel.org>, Zi Yan <ziy@nvidia.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	"Liam R. Howlett" <liam@infradead.org>,
	Nico Pache <npache@redhat.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
	Vlastimil Babka <vbabka@kernel.org>,
	Mike Rapoport <rppt@kernel.org>,
	Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>, Jann Horn <jannh@google.com>,
	Arnd Bergmann <arnd@arndb.de>, Kairui Song <kasong@tencent.com>,
	linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-doc@vger.kernel.org, Luka Bai <lukabai@tencent.com>
Subject: Re: [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry
Date: Fri, 1 May 2026 09:07:49 +0200	[thread overview]
Message-ID: <b5379cd3-f7bf-47f9-8a60-c7300b4415a2@kernel.org> (raw)
In-Reply-To: <20260501-thp_cow-v1-0-005377483738@tencent.com>

On 5/1/26 07:55, Luka Bai wrote:

Hi,

> Copy-on-write support for anonymous PMD-level THP is simple right
> now: first we check whether the folio can be used exclusively by the
> faulting process; if it can (when the folio's refcount is 1 after
> trying to free the swapcache, or the AnonExclusive page flag is set),
> we reuse it directly with little further handling. If it cannot, we
> split the PMD into 512 4K PTEs and do copy-on-write only for the
> specific 4K page that was faulted on.
> 
> This logic is truly memory-efficient, since for most workloads we
> don't want to allocate 2M of new memory simply on a small write.
> However, it also means the process's original 2M page is suddenly
> split on a write, which can generate some performance thrashing. For
> example, if process A and process B share an anonymous 2M PMD and
> process B does a write, its page-table mapping is changed from 1 PMD
> entry into 512 4K PTE entries at once, so the TLB benefit suddenly
> just "vanishes" for process B, which may sometimes cause an
> observable performance degradation. After that, we can only wait for
> khugepaged to collapse this area and merge the PMD back, which does
> not happen easily.

You probably know that, historically, we did exactly what you describe in this
patch set. It was rather bad regarding memory waste and COW latency, so we
switched to the current model.

Note that there was a recent related discussion for executables, which was rejected:

https://lore.kernel.org/r/20251226100337.4171191-1-zhangqilong3@huawei.com

> 
> In addition to the problem above, this logic can also create some
> deficiency for THP itself. Currently THP is just a "best-effort"
> choice with no "certainty": THP is easily split into multiple small
> pages on common code paths such as reclaim and COW. A transparent
> split can cause throughput fluctuation for some workloads. For these
> workloads, we may want to give THP some "certainty", just like
> hugetlbfs,

There are no such guarantees, though, and we wouldn't want to commit to any
such guarantees today. For example, simple page migration can split the folio,
and allocation failures will fall back to small pages, etc.

If you need guarantees, use hugetlb for now.

> The effect
> we want is: after some customized setup, as long as the system has a
> usable folio and the virtual-memory alignment permits (or we set it
> up so), we can make sure we always use THP for it, and the system
> will never split it unless the user wants to do so.
> 
> This patchset is about both of the things above. First, we add
> PMD-level THP COW support by revising the code in
> do_huge_pmd_wp_page; we added a switch for it because different
> workloads may need different resources,

The switch is bad, and we won't accept any toggle like that. A system-wide
setting does not make sense for such behavior.

A per-VMA flag? Maybe, but it is way too specific, so we'd have to find some
concept that abstracts these semantics. Even then, I expect pushback.

We messed up enough with toggles in THP space, unfortunately.

Also, anything that only works for PMD-sized THPs is a warning sign in 2026 :)

You don't really raise any concrete use cases or performance numbers. Some
details about applications that use fork() and rely on such behavior would be
helpful.

Note that an application that does fork() could use MADV_COLLAPSE after fork()
to make sure that it immediately gets THPs back.

There is also the option to just use MADV_DONTFORK to not even share ranges with
a child process in the first place, avoiding page copies entirely.

-- 
Cheers,

David


Thread overview: 13+ messages
2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
2026-05-01  5:55 ` [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup Luka Bai
2026-05-01  5:55 ` [PATCH 2/5] mm: add pmd level THP COW parameter in sysfs Luka Bai
2026-05-01  5:55 ` [PATCH 3/5] mm: add pmd level THP COW judgement helpers Luka Bai
2026-05-01  5:55 ` [PATCH 4/5] mm: enable map_anon_folio_pmd_nopf to handle unshare Luka Bai
2026-05-01  5:55 ` [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry Luka Bai
2026-05-01  7:11   ` David Hildenbrand (Arm)
2026-05-01 15:01     ` Luka Bai
2026-05-01  7:07 ` David Hildenbrand (Arm) [this message]
2026-05-01 16:16   ` [PATCH 0/5] mm: Support selecting doing direct " Luka Bai
2026-05-01 18:30     ` David Hildenbrand (Arm)
2026-05-02  5:06       ` Luka Bai
2026-05-03  7:03 ` [syzbot ci] " syzbot ci
