public inbox for linux-kernel@vger.kernel.org
* [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry
@ 2026-05-01  5:55 Luka Bai
  2026-05-01  5:55 ` [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup Luka Bai
                   ` (6 more replies)
  0 siblings, 7 replies; 13+ messages in thread
From: Luka Bai @ 2026-05-01  5:55 UTC (permalink / raw)
  To: linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

Copy-on-write support for anonymous pmd-level THP is currently simple:
first we check whether the folio can be used exclusively by the
faulting process. If it can (the folio's refcount is 1 after trying to
free the swapcache, or the AnonExclusive page flag is set), we reuse
it directly with little further handling. If it cannot, we split the
pmd into 512 4K ptes and do copy-on-write only for the specific 4K
page that we faulted on.

This logic is very memory efficient, since for most workloads we don't
want to allocate 2M of new memory for a small write. However, it also
means the process's original 2M mapping is suddenly split on a write,
which can cause performance thrashing. For example, if process A and
process B share an anonymous 2M pmd and process B performs a write,
its page table mapping is changed from 1 pmd entry into 512 4K pte
entries at once, so the TLB benefit suddenly "vanishes" for process B,
which can sometimes cause an observable performance degradation. After
that, we can only wait for khugepaged to collapse this area and merge
the pmd back, which does not happen easily.

In addition to the problem above, this logic also creates a deficiency
for THP itself. Currently THP is just a "best-effort" choice with no
"certainty": a THP is easily split into multiple small pages on common
paths like reclaim or COW, and such transparent splitting can cause
throughput fluctuation for some workloads. For these workloads, we may
want to give THP some "certainty", just like hugetlbfs. The effect we
want is: after some customized setup, as long as the system has a
usable folio and the virtual memory alignment permits (or we set it up
to), we always use THP for the region, and the system never splits it
unless the user explicitly asks for it.

This patchset addresses both points above. First, we add pmd-level THP
COW support by revising the code in do_huge_pmd_wp_page. We add a
switch for it because different workloads need different resources:
for some, memory saving matters more than the 2M TLB gain. The switch
is very similar to "enabled" and "shmem_enabled" in the
transparent_hugepage sysfs path; THP COW is only enabled when THP
itself is enabled globally or by madvise. We also add basic THP setup
helpers and a branch in the madvise path, and add the THP COW choice
to it for more fine-grained setup. For now the helpers only support
copy-on-write, but in the future we may add more types of THP
configuration, such as swapping.
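Taken together, the intended workflow pairs the global sysfs knob with
the per-vma madvise. A sketch of the admin side, in the style of
transhuge.rst (the thp_cow file and MADV_THP_COW only exist with this
series applied):

```shell
# Hypothetical workflow; only valid with this series applied.
# THP itself must be enabled first, globally or per-vma:
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled

# Opt every existing THP into pmd-level COW ...
echo always > /sys/kernel/mm/transparent_hugepage/thp_cow

# ... or only vmas that called madvise(addr, len, MADV_THP_COW):
echo madvise > /sys/kernel/mm/transparent_hugepage/thp_cow
cat /sys/kernel/mm/transparent_hugepage/thp_cow
# with this series applied: always [madvise] never
```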

Patch Details:
========
* Patch 1 adds the basic THP setup helpers and branch in madvise path.
  Then we add THP COW parameter into it.
* Patch 2 adds the THP COW sysfs interface; the logic is very similar
  to THP's "enabled" and "shmem_enabled".
* Patch 3 adds the helpers that will be used in the actual COW path
  to decide whether we choose to do pmd level THP COW.
* Patch 4 reconstructs map_anon_folio_pmd_nopf and map_anon_folio_pmd_pf
  to make them capable of mapping the newly copied folio when the
  fault flags include FAULT_FLAG_UNSHARE.
* Patch 5 adds the actual support for pmd level THP COW, and uses all
  the switches and helpers in the above 4 patches to do the strategy
  control.

Thanks for reading. Comments and suggestions are very welcome!

Signed-off-by: Luka Bai <lukabai@tencent.com>
---
Luka Bai (5):
      mm: add basic madvise helpers and branch for THP setup
      mm: add pmd level THP COW parameter in sysfs
      mm: add pmd level THP COW judgement helpers
      mm: enable map_anon_folio_pmd_nopf to handle unshare
      mm: support choosing to do THP COW for anonymous pmd entry.

 .../testing/sysfs-kernel-mm-transparent-hugepage   |   1 +
 Documentation/admin-guide/mm/transhuge.rst         |  27 +++
 include/linux/huge_mm.h                            |  45 ++++-
 include/linux/mm.h                                 |  19 ++
 include/uapi/asm-generic/mman-common.h             |   9 +
 mm/huge_memory.c                                   | 198 ++++++++++++++++++---
 mm/khugepaged.c                                    |   8 +-
 mm/madvise.c                                       |  25 +++
 8 files changed, 308 insertions(+), 24 deletions(-)
---
base-commit: 41cd9e3d23b8fd9e6c3c0311e9cb0304442c6141
change-id: 20260501-thp_cow-94873ed30793

Best regards,
--  
Luka Bai <lukabai@tencent.com>


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup
  2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
@ 2026-05-01  5:55 ` Luka Bai
  2026-05-01  5:55 ` [PATCH 2/5] mm: add pmd level THP COW parameter in sysfs Luka Bai
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Luka Bai @ 2026-05-01  5:55 UTC (permalink / raw)
  To: linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

From: Luka Bai <lukabai@tencent.com>

Transparent huge pages now work properly with most of the mm
framework and are well integrated with the folio concept, which can
be reclaimed or allocated at a large order. However, their behavior
is not very predictable: a THP is easily split on many paths, such as
partial unmap, swap-out, or fork + COW (for child processes).

In some cases we may want a more deterministic outcome, since some
workloads expect a relatively "stable" THP, while others would rather
save memory than gain performance.

This patch adds some basic helpers and a branch in the madvise path so
that we can add madvise choices for THP, controlling at the vma level
what we do for the different kinds of operations (like COW or swap)
that may split a THP.

We pass the type of configuration via the madvise behavior parameter,
parse it, and save the result in vma->vm_flags for later use.

Currently the only operation in the list is COW. It decides whether we
want to keep using hugepages for a child process when it writes to a
spot in a shared anonymous pmd, so that we can make sure the THP is
not split after the write. This patch only adds the basic setup
helpers; the real usage is added in later patches.

Signed-off-by: Luka Bai <lukabai@tencent.com>
---
 include/linux/huge_mm.h                |  6 ++++++
 include/linux/mm.h                     | 19 +++++++++++++++++++
 include/uapi/asm-generic/mman-common.h |  9 +++++++++
 mm/madvise.c                           | 25 +++++++++++++++++++++++++
 4 files changed, 59 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 48496f09909b..a0ce8c0b81f5 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -6,6 +6,7 @@
 
 #include <linux/fs.h> /* only for vma_is_dax() */
 #include <linux/kobject.h>
+#include <uapi/asm-generic/mman-common.h>
 
 vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
@@ -363,6 +364,11 @@ static inline bool thp_disabled_by_hw(void)
 	return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED);
 }
 
+static inline bool madv_thp_cow(int behavior)
+{
+	return behavior & MADV_THP_COW;
+}
+
 unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 		unsigned long len, unsigned long pgoff, unsigned long flags);
 unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1d76da6e0791..8a800819cfa2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -391,6 +391,10 @@ enum {
 #else
 	DECLARE_VMA_BIT_ALIAS(STACK, GROWSDOWN),
 #endif
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	DECLARE_VMA_BIT(THP_SETUP_1, 43),
+	DECLARE_VMA_BIT_ALIAS(THP_COW, THP_SETUP_1),
+#endif
 };
 #undef DECLARE_VMA_BIT
 #undef DECLARE_VMA_BIT_ALIAS
@@ -510,6 +514,9 @@ enum {
 #define VM_DROPPABLE		VM_NONE
 #define VMA_DROPPABLE		EMPTY_VMA_FLAGS
 #endif
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define VM_THP_COW	INIT_VM_FLAG(THP_COW)
+#endif
 
 /* Bits set in the VMA until the stack is in its final location */
 #define VM_STACK_INCOMPLETE_SETUP (VM_RAND_READ | VM_SEQ_READ | VM_STACK_EARLY)
@@ -4128,6 +4135,18 @@ extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);
 extern int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior);
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline bool madv_thp_behavior(int behavior)
+{
+	return behavior >= MADV_THP_SETUP_BASE && behavior < MADV_THP_SETUP_END;
+}
+#else
+static inline bool madv_thp_behavior(int behavior)
+{
+	return false;
+}
+#endif
+
 #ifdef CONFIG_MMU
 extern int __mm_populate(unsigned long addr, unsigned long len,
 			 int ignore_errors);
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index ef1c27fa3c57..1617ed374503 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -82,6 +82,15 @@
 #define MADV_GUARD_INSTALL 102		/* fatal signal on access to range */
 #define MADV_GUARD_REMOVE 103		/* unguard range */
 
+/* for THP setup */
+#define MADV_THP_SETUP_BASE 256
+enum {
+	MADV_THP_COW_BIT,
+	MADV_THP_SETUP_MAX_BIT,
+};
+#define MADV_THP_COW        (MADV_THP_SETUP_BASE + (1 << MADV_THP_COW_BIT))
+#define MADV_THP_SETUP_END	(MADV_THP_SETUP_BASE + (1 << MADV_THP_SETUP_MAX_BIT))
+
 /* compatibility flags */
 #define MAP_FILE	0
 
diff --git a/mm/madvise.c b/mm/madvise.c
index 69708e953cf5..5dbfc89682d7 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1331,6 +1331,25 @@ static bool can_madvise_modify(struct madvise_behavior *madv_behavior)
 }
 #endif
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static vm_flags_t madvise_thp_setup(struct madvise_behavior *madv_behavior)
+{
+	int thp_behavior = madv_behavior->behavior - MADV_THP_SETUP_BASE;
+	struct vm_area_struct *vma = madv_behavior->vma;
+	vm_flags_t new_flags = vma->vm_flags;
+
+	if (madv_thp_cow(thp_behavior))
+		new_flags |= VM_THP_COW;
+
+	return new_flags;
+}
+#else
+static vm_flags_t madvise_thp_setup(struct madvise_behavior *madv_behavior)
+{
+	return madv_behavior->vma->vm_flags;
+}
+#endif
+
 /*
  * Apply an madvise behavior to a region of a vma.  madvise_update_vma
  * will handle splitting a vm area into separate areas, each area with its own
@@ -1427,6 +1446,10 @@ static int madvise_vma_behavior(struct madvise_behavior *madv_behavior)
 		break;
 	}
 
+	/* Handle THP behaviors */
+	if (madv_thp_behavior(behavior))
+		new_flags = madvise_thp_setup(madv_behavior);
+
 	/* This is a write operation.*/
 	VM_WARN_ON_ONCE(madv_behavior->lock_mode != MADVISE_MMAP_WRITE_LOCK);
 
@@ -1555,6 +1578,8 @@ madvise_behavior_valid(int behavior)
 		return true;
 
 	default:
+		if (madv_thp_behavior(behavior))
+			return true;
 		return false;
 	}
 }

-- 
2.52.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 2/5] mm: add pmd level THP COW parameter in sysfs
  2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
  2026-05-01  5:55 ` [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup Luka Bai
@ 2026-05-01  5:55 ` Luka Bai
  2026-05-01  5:55 ` [PATCH 3/5] mm: add pmd level THP COW judgement helpers Luka Bai
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Luka Bai @ 2026-05-01  5:55 UTC (permalink / raw)
  To: linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

From: Luka Bai <lukabai@tencent.com>

We would like to use logic similar to that of huge anonymous pages and
huge shmem pages for THP COW, categorizing the strategies into three
types: always, never, madvise. If set to always, we always do THP COW
for all existing THPs. If set to never, we never do THP COW. If set to
madvise, we follow the setup introduced in the last commit to decide
whether we do THP COW for each individual vma.

We add TRANSPARENT_HUGEPAGE_COW_FLAG and
TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG, which are very similar to
TRANSPARENT_HUGEPAGE_FLAG and TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
which decide whether an anonymous huge page fault is allowed. We also
add the sysfs attribute thp_cow_attr as the interface for choosing
among the three strategies mentioned above.
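The store/show logic is the usual THP tri-state pattern: two bits, at
most one set. A userspace model of it (plain booleans stand in for
test_bit/set_bit on transparent_hugepage_flags; illustrative only):

```c
#include <stdbool.h>
#include <string.h>

/* Userspace stand-ins for the two flag bits. */
static bool cow_always, cow_madvise;

/* Mirrors thp_cow_store(): "always" and "madvise" are exclusive,
 * the other bit is always cleared first. */
static int thp_cow_store(const char *buf)
{
	if (!strcmp(buf, "always")) {
		cow_madvise = false;
		cow_always = true;
	} else if (!strcmp(buf, "madvise")) {
		cow_always = false;
		cow_madvise = true;
	} else if (!strcmp(buf, "never")) {
		cow_always = false;
		cow_madvise = false;
	} else {
		return -1; /* -EINVAL in the kernel */
	}
	return 0;
}

/* Mirrors the output selection in thp_cow_show(). */
static const char *thp_cow_show(void)
{
	if (cow_always)
		return "[always] madvise never";
	if (cow_madvise)
		return "always [madvise] never";
	return "always madvise [never]";
}
```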

Signed-off-by: Luka Bai <lukabai@tencent.com>
---
 .../testing/sysfs-kernel-mm-transparent-hugepage   |  1 +
 Documentation/admin-guide/mm/transhuge.rst         | 27 +++++++++++++++
 include/linux/huge_mm.h                            |  2 ++
 mm/huge_memory.c                                   | 39 ++++++++++++++++++++++
 4 files changed, 69 insertions(+)

diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-transparent-hugepage b/Documentation/ABI/testing/sysfs-kernel-mm-transparent-hugepage
index 7bfbb9cc2c11..43a1af13efe0 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-transparent-hugepage
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-transparent-hugepage
@@ -11,6 +11,7 @@ Description:
 			- khugepaged
 			- shmem_enabled
 			- use_zero_page
+			- thp_cow
 			- subdirectories of the form hugepages-<size>kB, where <size>
 			  is the page size of the hugepages supported by the kernel/CPU
 			  combination.
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index 0ef13c451ac8..0926651bad0d 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -226,6 +226,33 @@ to "always" or "madvise"), and it'll be automatically shutdown when
 all THP sizes are disabled (when both the per-size anon control and the
 top-level control are "never")
 
+Some workloads may want to do copy-on-write at the pmd size to keep the
+TLB benefit when writing to a shared anonymous pmd-sized entry.
+They can do so by setting the thp_cow control. The control only takes
+effect when the global THP controls are set to "always" or "madvise" for
+the specific memory region:
+
+::
+
+	echo always >/sys/kernel/mm/transparent_hugepage/thp_cow
+	echo madvise >/sys/kernel/mm/transparent_hugepage/thp_cow
+	echo never >/sys/kernel/mm/transparent_hugepage/thp_cow
+
+always
+	means that the writing process will always do copy on write on
+	the pmd size. If there is no pmd-sized folio available, it will
+	fall back to the pte size.
+
+madvise
+	will do things like ``always`` but only for regions that have
+	used madvise(MADV_THP_COW).
+
+never
+	will not do copy on write on the pmd size no matter what setup
+	is done using madvise. When a process writes on a shared anonymous
+	pmd sized entry, it will just allocate a pte sized page and do copy
+	on write on the pte size.
+
 process THP controls
 --------------------
 
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index a0ce8c0b81f5..2a62f0f92f68 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -57,6 +57,8 @@ enum transparent_hugepage_flag {
 	TRANSPARENT_HUGEPAGE_DEFRAG_REQ_MADV_FLAG,
 	TRANSPARENT_HUGEPAGE_DEFRAG_KHUGEPAGED_FLAG,
 	TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG,
+	TRANSPARENT_HUGEPAGE_COW_FLAG,
+	TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG,
 };
 
 struct kobject;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1f0d0b780943..babca060feca 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -531,6 +531,44 @@ static ssize_t split_underused_thp_store(struct kobject *kobj,
 static struct kobj_attribute split_underused_thp_attr = __ATTR(
 	shrink_underused, 0644, split_underused_thp_show, split_underused_thp_store);
 
+static ssize_t thp_cow_show(struct kobject *kobj,
+			    struct kobj_attribute *attr, char *buf)
+{
+	const char *output;
+
+	if (test_bit(TRANSPARENT_HUGEPAGE_COW_FLAG, &transparent_hugepage_flags))
+		output = "[always] madvise never";
+	else if (test_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG,
+			  &transparent_hugepage_flags))
+		output = "always [madvise] never";
+	else
+		output = "always madvise [never]";
+
+	return sysfs_emit(buf, "%s\n", output);
+}
+
+static ssize_t thp_cow_store(struct kobject *kobj,
+			     struct kobj_attribute *attr,
+			     const char *buf, size_t count)
+{
+	ssize_t ret = count;
+
+	if (sysfs_streq(buf, "always")) {
+		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG, &transparent_hugepage_flags);
+		set_bit(TRANSPARENT_HUGEPAGE_COW_FLAG, &transparent_hugepage_flags);
+	} else if (sysfs_streq(buf, "madvise")) {
+		clear_bit(TRANSPARENT_HUGEPAGE_COW_FLAG, &transparent_hugepage_flags);
+		set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG, &transparent_hugepage_flags);
+	} else if (sysfs_streq(buf, "never")) {
+		clear_bit(TRANSPARENT_HUGEPAGE_COW_FLAG, &transparent_hugepage_flags);
+		clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG, &transparent_hugepage_flags);
+	} else
+		ret = -EINVAL;
+
+	return ret;
+}
+static struct kobj_attribute thp_cow_attr = __ATTR_RW(thp_cow);
+
 static struct attribute *hugepage_attr[] = {
 	&enabled_attr.attr,
 	&defrag_attr.attr,
@@ -540,6 +578,7 @@ static struct attribute *hugepage_attr[] = {
 	&shmem_enabled_attr.attr,
 #endif
 	&split_underused_thp_attr.attr,
+	&thp_cow_attr.attr,
 	NULL,
 };
 

-- 
2.52.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 3/5] mm: add pmd level THP COW judgement helpers
  2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
  2026-05-01  5:55 ` [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup Luka Bai
  2026-05-01  5:55 ` [PATCH 2/5] mm: add pmd level THP COW parameter in sysfs Luka Bai
@ 2026-05-01  5:55 ` Luka Bai
  2026-05-01  5:55 ` [PATCH 4/5] mm: enable map_anon_folio_pmd_nopf to handle unshare Luka Bai
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Luka Bai @ 2026-05-01  5:55 UTC (permalink / raw)
  To: linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

From: Luka Bai <lukabai@tencent.com>

We add hugepage_cow_always and hugepage_cow_madvise as two convenience
helpers to decide whether we want to do THP COW under each specific
circumstance.

We also add a helper, hugepage_cow_enabled, to make it easier to query
the setup. THP COW is only enabled when hugepage support is globally
enabled or enabled via madvise.
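The resulting decision table can be modeled in userspace; this mirrors
hugepage_cow_enabled() with the flag reads replaced by booleans
(illustrative only, not the kernel code):

```c
#include <stdbool.h>

/* Userspace model; in the kernel these come from
 * transparent_hugepage_flags and vma->vm_flags. */
struct state {
	bool global_always;  /* hugepage_global_always() */
	bool global_enabled; /* hugepage_global_enabled(): always or madvise */
	bool vm_hugepage;    /* VM_HUGEPAGE set on the vma */
	bool cow_always;     /* TRANSPARENT_HUGEPAGE_COW_FLAG */
	bool cow_madvise;    /* TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG */
	bool vm_thp_cow;     /* VM_THP_COW set on the vma */
};

/* Mirrors hugepage_cow_enabled() in include/linux/huge_mm.h. */
static bool hugepage_cow_enabled(const struct state *s)
{
	/* anonymous THP needs to be enabled first */
	if (!s->global_always && (!s->global_enabled || !s->vm_hugepage))
		return false;

	/* "always" enables all THP COW */
	if (s->cow_always)
		return true;

	/* "madvise" enables THP COW only when vm_flags says so */
	return s->cow_madvise && s->vm_thp_cow;
}
```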

Signed-off-by: Luka Bai <lukabai@tencent.com>
---
 include/linux/huge_mm.h | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2a62f0f92f68..3e5c6da3905b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -203,6 +203,38 @@ static inline bool hugepage_global_always(void)
 			(1<<TRANSPARENT_HUGEPAGE_FLAG);
 }
 
+static inline bool hugepage_cow_always(void)
+{
+	return transparent_hugepage_flags &
+			(1<<TRANSPARENT_HUGEPAGE_COW_FLAG);
+}
+
+static inline bool hugepage_cow_madvise(void)
+{
+	return transparent_hugepage_flags &
+			(1<<TRANSPARENT_HUGEPAGE_REQ_MADV_COW_FLAG);
+}
+
+static inline bool hugepage_cow_enabled(struct vm_area_struct *vma)
+{
+	vm_flags_t vm_flags = vma->vm_flags;
+
+	/* anonymous THP need to be enabled first */
+	if (!hugepage_global_always() &&
+		(!hugepage_global_enabled() || !(vm_flags & VM_HUGEPAGE)))
+		return false;
+
+	/* always enables all the THP COW */
+	if (hugepage_cow_always())
+		return true;
+
+	/* madvise enables THP cow only when vm_flags says so */
+	if (hugepage_cow_madvise() && (vm_flags & VM_THP_COW))
+		return true;
+
+	return false;
+}
+
 static inline int highest_order(unsigned long orders)
 {
 	return fls_long(orders) - 1;

-- 
2.52.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 4/5] mm: enable map_anon_folio_pmd_nopf to handle unshare
  2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
                   ` (2 preceding siblings ...)
  2026-05-01  5:55 ` [PATCH 3/5] mm: add pmd level THP COW judgement helpers Luka Bai
@ 2026-05-01  5:55 ` Luka Bai
  2026-05-01  5:55 ` [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry Luka Bai
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 13+ messages in thread
From: Luka Bai @ 2026-05-01  5:55 UTC (permalink / raw)
  To: linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

From: Luka Bai <lukabai@tencent.com>

The function map_anon_folio_pmd_nopf maps new anonymous pages. As in
do_huge_pmd_anonymous_page, it handles all the mappings and statistics
correctly in one call. However, it doesn't support FAULT_FLAG_UNSHARE.

FAULT_FLAG_UNSHARE is set when we only want to break non-exclusive
sharing apart. It follows the copy-on-write process, since it does the
same checks (whether we need to copy memory or can just reuse the
existing page), but it is not triggered by a write to a R/O pte/pmd
that is actually allowed to become writable; it exists purely for
"unsharing". Hence, when duplicating, we need to carry the same
permission and marker flags over into the new page table entry without
making it writable. Currently, map_anon_folio_pmd_nopf always tries to
make the new pmd writable, which is not what unsharing wants.

We add unsharing support to map_anon_folio_pmd_nopf by passing the
vm_fault struct as a parameter to obtain the unsharing hint. If we are
in the unsharing procedure, we copy only the soft_dirty and uffd_wp
flags into the new pmd instead of trying to make it writable.
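The difference in entry construction can be shown with a tiny bitflag
model (userspace stand-ins for the arch-specific pmd helpers;
illustrative only):

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace stand-ins for pmd bits; real helpers are arch-specific. */
#define PMD_WRITE      (1u << 0)
#define PMD_DIRTY      (1u << 1)
#define PMD_SOFT_DIRTY (1u << 2)
#define PMD_UFFD_WP    (1u << 3)

/* Mirrors the entry setup in map_anon_folio_pmd_nopf(). */
static unsigned int mk_new_pmd(unsigned int orig_pmd, bool cow, bool unshare)
{
	unsigned int entry = 0; /* folio_mk_pmd(): fresh, read-only */

	if (cow && unshare) {
		/* unshare: inherit marker bits, stay read-only */
		assert(!(orig_pmd & PMD_WRITE));
		if (orig_pmd & PMD_SOFT_DIRTY)
			entry |= PMD_SOFT_DIRTY;
		if (orig_pmd & PMD_UFFD_WP)
			entry |= PMD_UFFD_WP;
	} else {
		/* write fault: maybe_pmd_mkwrite(pmd_mkdirty(entry)) */
		entry |= PMD_DIRTY | PMD_WRITE;
	}
	return entry;
}
```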

Signed-off-by: Luka Bai <lukabai@tencent.com>
---
 include/linux/huge_mm.h |  5 ++---
 mm/huge_memory.c        | 34 +++++++++++++++++++++++-----------
 mm/khugepaged.c         |  8 +++++++-
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 3e5c6da3905b..61f0e614ca52 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -610,9 +610,8 @@ void split_huge_pmd_locked(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmd, bool freeze);
 bool unmap_huge_pmd_locked(struct vm_area_struct *vma, unsigned long addr,
 			   pmd_t *pmdp, struct folio *folio);
-void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
-		struct vm_area_struct *vma, unsigned long haddr);
-
+void map_anon_folio_pmd_nopf(struct folio *folio, struct vm_fault *vmf,
+			   bool cow);
 #else /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline bool folio_test_pmd_mappable(struct folio *folio)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index babca060feca..1e661b411b2e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1423,13 +1423,26 @@ static struct folio *vma_alloc_anon_folio_pmd(struct vm_area_struct *vma,
 	return folio;
 }
 
-void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
-		struct vm_area_struct *vma, unsigned long haddr)
+void map_anon_folio_pmd_nopf(struct folio *folio, struct vm_fault *vmf,
+		bool cow)
 {
 	pmd_t entry;
+	struct vm_area_struct *vma = vmf->vma;
+	pmd_t *pmd = vmf->pmd;
+	pmd_t orig_pmd = vmf->orig_pmd;
+	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
 
 	entry = folio_mk_pmd(folio, vma->vm_page_prot);
-	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+	if (unlikely(cow && unshare)) {
+		VM_WARN_ON(pmd_write(orig_pmd));
+		if (pmd_soft_dirty(orig_pmd))
+			entry = pmd_mksoft_dirty(entry);
+		if (pmd_uffd_wp(orig_pmd))
+			entry = pmd_mkuffd_wp(entry);
+	} else {
+		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+	}
 	folio_add_new_anon_rmap(folio, vma, haddr, RMAP_EXCLUSIVE);
 	folio_add_lru_vma(folio, vma);
 	set_pmd_at(vma->vm_mm, haddr, pmd, entry);
@@ -1437,19 +1450,18 @@ void map_anon_folio_pmd_nopf(struct folio *folio, pmd_t *pmd,
 	deferred_split_folio(folio, false);
 }
 
-static void map_anon_folio_pmd_pf(struct folio *folio, pmd_t *pmd,
-		struct vm_area_struct *vma, unsigned long haddr)
+static void map_anon_folio_pmd_pf(struct folio *folio, struct vm_fault *vmf,
+		bool cow)
 {
-	map_anon_folio_pmd_nopf(folio, pmd, vma, haddr);
-	add_mm_counter(vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
+	map_anon_folio_pmd_nopf(folio, vmf, cow);
+	add_mm_counter(vmf->vma->vm_mm, MM_ANONPAGES, HPAGE_PMD_NR);
 	count_vm_event(THP_FAULT_ALLOC);
 	count_mthp_stat(HPAGE_PMD_ORDER, MTHP_STAT_ANON_FAULT_ALLOC);
-	count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC);
+	count_memcg_event_mm(vmf->vma->vm_mm, THP_FAULT_ALLOC);
 }
 
 static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 {
-	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	struct vm_area_struct *vma = vmf->vma;
 	struct folio *folio;
 	pgtable_t pgtable;
@@ -1483,7 +1495,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 			return ret;
 		}
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
-		map_anon_folio_pmd_pf(folio, vmf->pmd, vma, haddr);
+		map_anon_folio_pmd_pf(folio, vmf, false);
 		mm_inc_nr_ptes(vma->vm_mm);
 		spin_unlock(vmf->ptl);
 	}
@@ -2174,7 +2186,7 @@ static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
 	if (ret)
 		goto release;
 	(void)pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
-	map_anon_folio_pmd_pf(folio, vmf->pmd, vma, haddr);
+	map_anon_folio_pmd_pf(folio, vmf, true);
 	goto unlock;
 release:
 	folio_put(folio);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 7d48d4fbd5f3..18d309b69d30 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1402,7 +1402,13 @@ static enum scan_result collapse_huge_page(struct mm_struct *mm, unsigned long s
 	if (is_pmd_order(order)) { /* PMD collapse */
 		pgtable = pmd_pgtable(_pmd);
 		pgtable_trans_huge_deposit(mm, pmd, pgtable);
-		map_anon_folio_pmd_nopf(folio, pmd, vma, pmd_addr);
+		struct vm_fault vmf = {
+			.vma = vma,
+			.flags = 0,
+			.address = pmd_addr,
+			.orig_pmd = pmdp_get(pmd),
+		};
+		map_anon_folio_pmd_nopf(folio, &vmf, false);
 	} else { /* mTHP collapse */
 		map_anon_folio_pte_nopf(folio, pte, vma, start_addr, /*uffd_wp=*/ false);
 		smp_wmb(); /* make PTEs visible before PMD. See pmd_install() */

-- 
2.52.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry.
  2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
                   ` (3 preceding siblings ...)
  2026-05-01  5:55 ` [PATCH 4/5] mm: enable map_anon_folio_pmd_nopf to handle unshare Luka Bai
@ 2026-05-01  5:55 ` Luka Bai
  2026-05-01  7:11   ` David Hildenbrand (Arm)
  2026-05-01  7:07 ` [PATCH 0/5] mm: Support selecting doing direct " David Hildenbrand (Arm)
  2026-05-03  7:03 ` [syzbot ci] " syzbot ci
  6 siblings, 1 reply; 13+ messages in thread
From: Luka Bai @ 2026-05-01  5:55 UTC (permalink / raw)
  To: linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, David Hildenbrand,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

From: Luka Bai <lukabai@tencent.com>

For pmd-mapped anonymous folios, we currently do not do pmd-level COW,
because we don't want to copy and unshare the full PMD range on the
first write fault.

That holds for most workloads; however, it also means the pmd entry is
split into 512 4K ptes in the child process after we write to part of
the folio.

For example, if process A and B share a pmd-sized folio and B writes
to a small region, B's pmd mapping is split into 511 4K ptes that
still point to the original pmd-sized folio, plus 1 4K pte pointing
to the new 4K page.

This is quite good for memory utilization, but it also makes the TLB
gain from the pmd entry suddenly "vanish" after a simple write, which
causes an observable performance decrease in some workloads. It also
adds some "uncertainty" to THP, since the split happens transparently
in the COW scenario, which can sometimes cause trouble for users that
need stable hugepages.

This patch adds support for pmd-sized COW of anonymous pages, behind a
switch. The reason we add a switch is that in some scenarios the
performance matters more, while for other workloads the memory waste
may be more unbearable. So we can use the THP setup to control this
configuration, either at the vma level or the global level.

The patch is relatively simple: we add the function
wp_huge_pmd_page_copy to do the hugepage copy-on-write part, doing the
allocation, accounting, and cache flushing just like the 4K path. We
use the newly reconstructed map_anon_folio_pmd_pf to do the mapping,
since it now properly supports FAULT_FLAG_UNSHARE.

We remove the refcount check in do_huge_pmd_wp_page; since we now
support copying the pmd folio, we check the refcount in the following
folio_ref_count call to determine whether the folio can be used
exclusively. If not, when THP COW is enabled we can always do
copy-on-write for this folio, just like in do_wp_page.

Signed-off-by: Luka Bai <lukabai@tencent.com>
---
 mm/huge_memory.c | 125 +++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 116 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1e661b411b2e..a05a4456e5a2 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -40,6 +40,7 @@
 #include <linux/pgalloc.h>
 #include <linux/pgalloc_tag.h>
 #include <linux/pagewalk.h>
+#include <linux/delayacct.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -2196,6 +2197,94 @@ static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
 	return ret;
 }
 
+static vm_fault_t wp_huge_pmd_page_copy(struct vm_fault *vmf, struct folio *old_folio)
+{
+	struct vm_area_struct *vma = vmf->vma;
+	struct mm_struct *mm = vma->vm_mm;
+	struct folio *new_folio = NULL;
+	struct page *new_page, *old_page;
+	unsigned long pmd_address = vmf->address & HPAGE_PMD_MASK;
+	struct mmu_notifier_range range;
+	vm_fault_t ret = 0;
+	int i;
+
+	delayacct_wpcopy_start();
+
+	old_page = folio_page(old_folio, 0);
+	ret = vmf_anon_prepare(vmf);
+	if (unlikely(ret)) {
+		if (ret != VM_FAULT_RETRY)
+			ret = VM_FAULT_FALLBACK;
+		goto out;
+	}
+
+	new_folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
+	if (unlikely(!new_folio)) {
+		ret = VM_FAULT_FALLBACK;
+		goto out;
+	}
+
+	if (copy_user_large_folio(new_folio, old_folio,
+		pmd_address, vma)) {
+		ret = VM_FAULT_HWPOISON;
+		goto out;
+	}
+
+	new_page = folio_page(new_folio, 0);
+	for (i = 0; i < HPAGE_PMD_NR; i++)
+		kmsan_copy_page_meta(new_page + i, old_page + i);
+
+	__folio_mark_uptodate(new_folio);
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm,
+				pmd_address, pmd_address + HPAGE_PMD_SIZE);
+	mmu_notifier_invalidate_range_start(&range);
+
+	spin_lock(vmf->ptl);
+	if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd))) {
+		update_mmu_cache_pmd(vma, pmd_address, vmf->pmd);
+		ret = 0;
+		goto out_unlock;
+	}
+
+	flush_cache_range(vma, pmd_address, pmd_address + HPAGE_PMD_SIZE);
+	/*
+	 * Clear the pmd entry and flush it first, before updating the
+	 * pmd with the new entry, to keep TLBs on different CPUs in
+	 * sync.
+	 */
+	(void)pmdp_huge_clear_flush(vma, pmd_address, vmf->pmd);
+	/*
+	 * We just temporarily decrement the mm_counter here, and it will be added back in
+	 * map_anon_folio_pmd_pf below.
+	 */
+	add_mm_counter(mm, MM_ANONPAGES, -HPAGE_PMD_NR);
+	map_anon_folio_pmd_pf(new_folio, vmf, true);
+	folio_remove_rmap_pmd(old_folio, old_page, vma);
+
+	spin_unlock(vmf->ptl);
+
+	mmu_notifier_invalidate_range_end(&range);
+	/* This put is for the folio_get() in the caller */
+	folio_put(old_folio);
+	free_swap_cache(old_folio);
+
+	/* This put is for decrementing refcount after we switch page table mapping */
+	folio_put(old_folio);
+
+	delayacct_wpcopy_end();
+	return 0;
+out_unlock:
+	spin_unlock(vmf->ptl);
+	mmu_notifier_invalidate_range_end(&range);
+out:
+	folio_put(old_folio);
+	if (new_folio)
+		folio_put(new_folio);
+
+	delayacct_wpcopy_end();
+	return ret;
+}
+
 vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
 	const bool unshare = vmf->flags & FAULT_FLAG_UNSHARE;
@@ -2204,12 +2293,13 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	pmd_t orig_pmd = vmf->orig_pmd;
+	vm_fault_t ret;
 
 	vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
 	VM_BUG_ON_VMA(!vma->anon_vma, vma);
 
 	if (is_huge_zero_pmd(orig_pmd)) {
-		vm_fault_t ret = do_huge_zero_wp_pmd(vmf);
+		ret = do_huge_zero_wp_pmd(vmf);
 
 		if (!(ret & VM_FAULT_FALLBACK))
 			return ret;
@@ -2253,14 +2343,6 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		goto reuse;
 	}
 
-	/*
-	 * See do_wp_page(): we can only reuse the folio exclusively if
-	 * there are no additional references. Note that we always drain
-	 * the LRU cache immediately after adding a THP.
-	 */
-	if (folio_ref_count(folio) >
-			1 + folio_test_swapcache(folio) * folio_nr_pages(folio))
-		goto unlock_fallback;
 	if (folio_test_swapcache(folio))
 		folio_free_swap(folio);
 	if (folio_ref_count(folio) == 1) {
@@ -2282,6 +2364,31 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 		return 0;
 	}
 
+	/*
+	 * Only do hugepage copy on write if the parameter setup supports it.
+	 */
+	if (!hugepage_cow_enabled(vma))
+		goto unlock_fallback;
+
+	/*
+	 * For vma without a vm_ops(anonymous vma), there should not be VM_SHARED or
+	 * VM_MAYSHARE types.
+	 */
+	VM_WARN_ON_ONCE_VMA(vma->vm_flags & (VM_SHARED | VM_MAYSHARE), vma);
+
+	folio_unlock(folio);
+	/*
+	 * Copy on write branch here.
+	 * We are about to unlock the ptl here, so we need to get folio before that
+	 * in case the folio gets freed in the meantime.
+	 */
+	folio_get(folio);
+	spin_unlock(vmf->ptl);
+	ret = wp_huge_pmd_page_copy(vmf, folio);
+	if (ret & VM_FAULT_FALLBACK)
+		goto fallback;
+	return ret;
+
 unlock_fallback:
 	folio_unlock(folio);
 	spin_unlock(vmf->ptl);

-- 
2.52.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry
  2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
                   ` (4 preceding siblings ...)
  2026-05-01  5:55 ` [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry Luka Bai
@ 2026-05-01  7:07 ` David Hildenbrand (Arm)
  2026-05-01 16:16   ` Luka Bai
  2026-05-03  7:03 ` [syzbot ci] " syzbot ci
  6 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-01  7:07 UTC (permalink / raw)
  To: Luka Bai, linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, Lorenzo Stoakes,
	Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts,
	Dev Jain, Barry Song, Lance Yang, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Jann Horn, Arnd Bergmann,
	Kairui Song, linux-kernel, linux-arch, linux-doc, Luka Bai

On 5/1/26 07:55, Luka Bai wrote:

Hi,

> Copy on write support for anonymous pmd level THP is simple right now:
> firstly we'll check whether the folio can be exclusively used by the
> faulting process, if we can (when the ref of the folio is only 1 after
> trying to free swapcache or the page flag AnonExclusive is setup) we'll
> directly use it with few further handling. If we cannot, then we'll
> split the pmd into 512 4K ptes, and do copy on write only for the
> specific 4K page that we faulted on.
> 
> This logic is truly memory efficient since for most workloads we don't
> want to allocate 2M new memory simply on a small write. However, it also
> makes the original 2M page for the process suddenly splitted on a
> write which will generate some performance thrashing. For example, if
> process A and process B share an anonymous 2M pmd, if process B chooses
> to do a writing, then its page table mapping will be changed from 1
> pmd entry into 512 4K pte entries at once, so the tlb benifit will
> suddenly just "vanish" for process B, which sometimes may cause a
> observable performance degeneration. After that, we can only wait for
> khugepaged to do the collapse for this area and merge the pmd back, which
> is not easy to happen.

You probably know that, historically, we did exactly what you describe in this
patch set. It was rather bad regarding memory waste and COW latency, so we
switched to the current model.

Note that there was a recent related discussion for executables, which was rejected:

https://lore.kernel.org/r/20251226100337.4171191-1-zhangqilong3@huawei.com

> 
> In addition to the problem above, this logic can also generate some
> deficiency for THP itself. Currently THP is just a "best-effort" choice
> with no "certainty". THP is easily splitted into multiple small pages
> on common calling path like reclaiming, COW. A transparent splitting
> can cause throughput fluctuation for some workloads. For these workloads,
> we may want to give THP some "certainty" just like hugetlbfs,

There are no such guarantees, though. And we wouldn't want to commit to any such
guarantees today. For example, simple page migration can split the folio.
Allocation failures will fall back to small pages, etc.

If you need guarantees, use hugetlb for now.

> The effect
> we want is: after some customized setup, if only the system has usable
> folio, and the virtual memory alignment permits (or we setup to), we can
> make sure we always use THP for it, the system will never split it except
> the user wants to do so.
> 
> This patchset is about both two things above, firstly we add pmd level
> THP COW support by revising the code in do_huge_pmd_wp_page, we added
> switch for it because different workloads may need different resources,

The switch is bad, and we won't accept any toggle like that. A system-wide
setting does not make sense for such behavior.

A per-VMA flag? Maybe, but I expect pushback as well, as it is way too specific.
So we'd have to find some concept that abstracts these semantics.

We messed up enough with toggles in THP space, unfortunately.

Also, anything that only works for PMD-sized THPs is a warning sign in 2026 :)

You don't really raise any concrete use cases or performance numbers for these
use cases. Some details about applications that use fork() and rely on such
behavior would be helpful.

Note that an application that does fork() could use MADV_COLLAPSE after fork()
to make sure that it immediately gets THPs back.

There is also the option to just use MADV_DONTFORK to not even share ranges with
a child process in the first place, avoiding page copies entirely.
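For completeness, a minimal userspace sketch of both options (buffer size,
helper names and error handling are purely illustrative; MADV_COLLAPSE needs
Linux 6.1+, so it is treated strictly as best-effort):

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25	/* since Linux 6.1; define for older headers */
#endif

#define PMD_LEN (2UL << 20)	/* assume a 2M PMD size (x86-64 default) */

/* Map a PMD-sized anonymous buffer; on recent kernels with THP enabled,
 * a mapping this large is usually 2M-aligned automatically. */
static void *alloc_pmd_buf(void)
{
	void *p = mmap(NULL, PMD_LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	return p == MAP_FAILED ? NULL : p;
}

/* MADV_DONTFORK: exclude the range from fork() entirely, so children
 * never share it and no COW copies are ever needed. */
static int exclude_from_fork(void *p)
{
	return madvise(p, PMD_LEN, MADV_DONTFORK);
}

/* MADV_COLLAPSE: best-effort request to collapse the range back into a
 * THP, e.g. right after fork() in whichever process keeps writing to it.
 * It can fail (EINVAL, EAGAIN, ...) on older kernels or under memory
 * pressure, so treat failure as a hint, not an error. */
static int try_collapse(void *p)
{
	return madvise(p, PMD_LEN, MADV_COLLAPSE);
}
```

A caller that cares about latency would exclude_from_fork() the hot buffers
before forking, and a child that keeps the memory would try_collapse() after
fork, without relying on the collapse succeeding.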

-- 
Cheers,

David

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry.
  2026-05-01  5:55 ` [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry Luka Bai
@ 2026-05-01  7:11   ` David Hildenbrand (Arm)
  2026-05-01 15:01     ` Luka Bai
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-01  7:11 UTC (permalink / raw)
  To: Luka Bai, linux-mm
  Cc: Jonathan Corbet, Shuah Khan, Andrew Morton, Lorenzo Stoakes,
	Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts,
	Dev Jain, Barry Song, Lance Yang, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Jann Horn, Arnd Bergmann,
	Kairui Song, linux-kernel, linux-arch, linux-doc, Luka Bai


> -	/*
> -	 * See do_wp_page(): we can only reuse the folio exclusively if
> -	 * there are no additional references. Note that we always drain
> -	 * the LRU cache immediately after adding a THP.
> -	 */
> -	if (folio_ref_count(folio) >
> -			1 + folio_test_swapcache(folio) * folio_nr_pages(folio))
> -		goto unlock_fallback;
>  	if (folio_test_swapcache(folio))

I don't see why you would want to remove this check, really. Instead of
"fallback", you might want to try copying the PMD.

-- 
Cheers,

David

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry.
  2026-05-01  7:11   ` David Hildenbrand (Arm)
@ 2026-05-01 15:01     ` Luka Bai
  0 siblings, 0 replies; 13+ messages in thread
From: Luka Bai @ 2026-05-01 15:01 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: linux-mm, Jonathan Corbet, Shuah Khan, Andrew Morton,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

在 Fri, May 01, 2026 at 09:11:42AM +0200,David Hildenbrand (Arm) 写道:
> 
> > -	/*
> > -	 * See do_wp_page(): we can only reuse the folio exclusively if
> > -	 * there are no additional references. Note that we always drain
> > -	 * the LRU cache immediately after adding a THP.
> > -	 */
> > -	if (folio_ref_count(folio) >
> > -			1 + folio_test_swapcache(folio) * folio_nr_pages(folio))
> > -		goto unlock_fallback;
> >  	if (folio_test_swapcache(folio))
> 
> I don't see why you would want to remove this check, really. Instead of
> "fallback", you might want to try copying the PMD.
> 
> -- 
> Cheers,
> 
> David

Sorry I didn't notice this thread earlier. That is a great suggestion! I can keep
this branch and do the folio copying instead of falling back; that actually helps
avoid the extra swapcache check when the refcount is definitely not suitable for
exclusive use. Thanks! :)

Best,
Luka

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry
  2026-05-01  7:07 ` [PATCH 0/5] mm: Support selecting doing direct " David Hildenbrand (Arm)
@ 2026-05-01 16:16   ` Luka Bai
  2026-05-01 18:30     ` David Hildenbrand (Arm)
  0 siblings, 1 reply; 13+ messages in thread
From: Luka Bai @ 2026-05-01 16:16 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: linux-mm, Jonathan Corbet, Shuah Khan, Andrew Morton,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

在 Fri, May 01, 2026 at 09:07:49AM +0200,David Hildenbrand (Arm) 写道:

Hi David,

Thanks for your review and opinion :) I really appreciate it!
I'm sorry that I'm not that familiar with sending mail to lore, so my
earlier reply in
https://lore.kernel.org/all/8F6F0691-91A1-4645-A218-2219DE6047AD@icloud.com/ 
may be in the wrong format. If you haven't read it, just ignore it and read
this mail instead; I also revised the reply a little here. Thanks. :)

> On 5/1/26 07:55, Luka Bai wrote:
> 
> Hi,
> 
> > Copy on write support for anonymous pmd level THP is simple right now:
> > firstly we'll check whether the folio can be exclusively used by the
> > faulting process, if we can (when the ref of the folio is only 1 after
> > trying to free swapcache or the page flag AnonExclusive is setup) we'll
> > directly use it with few further handling. If we cannot, then we'll
> > split the pmd into 512 4K ptes, and do copy on write only for the
> > specific 4K page that we faulted on.
> > 
> > This logic is truly memory efficient since for most workloads we don't
> > want to allocate 2M new memory simply on a small write. However, it also
> > makes the original 2M page for the process suddenly splitted on a
> > write which will generate some performance thrashing. For example, if
> > process A and process B share an anonymous 2M pmd, if process B chooses
> > to do a writing, then its page table mapping will be changed from 1
> > pmd entry into 512 4K pte entries at once, so the tlb benifit will
> > suddenly just "vanish" for process B, which sometimes may cause a
> > observable performance degeneration. After that, we can only wait for
> > khugepaged to do the collapse for this area and merge the pmd back, which
> > is not easy to happen.
> 
> You probably know that, historically, we did exactly what you describe in this
> patch set. It was rather bad regarding memory waste and COW latency, so we
> switched to the current model.
> 
> Note that there was a recent related discussion for executable, which was rejected:
> 
> https://lore.kernel.org/r/20251226100337.4171191-1-zhangqilong3@huawei.com
>

Yes, I know this history, and I know that it will cost some memory or latency.
That's why I was wondering whether I could add a switch to make it
configurable :). But I didn't know about the discussion in
https://lore.kernel.org/r/20251226100337.4171191-1-zhangqilong3@huawei.com;
I'll check it out, thanks for informing me :).
 
> > 
> > In addition to the problem above, this logic can also generate some
> > deficiency for THP itself. Currently THP is just a "best-effort" choice
> > with no "certainty". THP is easily splitted into multiple small pages
> > on common calling path like reclaiming, COW. A transparent splitting
> > can cause throughput fluctuation for some workloads. For these workloads,
> > we may want to give THP some "certainty" just like hugetlbfs,
> 
> There are no such guarantees, though. And We wouldn't want to commit to any such
> guarantees today. For example, simple page migration can split the folio.
> Allocation failures will fallback to small pages etc.
> 
> If you need guarantees, use hugetlb for now.
> 

The reason I want to use THP over hugetlb is that I need reclamation for my
workload :). There are many processes in my workload that need 2M-aligned
folios for better performance, and we want to reclaim them automatically when
a process no longer needs them. But hugetlbfs cannot do passive reclamation
as far as I know (short of the processes doing an active madvise themselves),
while with THP the hugepages can easily get split. That's why I would like to
add certainty to THP and use it as the backend for these processes: THP is
very well integrated with the swap system and other filesystems. From what I
checked, the most common cases for splitting a THP seem to be COW and
swapping, so I am trying to handle those two scenarios (coincidentally, PMD
swapping was committed in
https://lore.kernel.org/all/D3F08F85-76E0-4C5A-ABA1-537C68E038B8@nvidia.com/
a few days ago, which is a great implementation :) ).

> > The effect
> > we want is: after some customized setup, if only the system has usable
> > folio, and the virtual memory alignment permits (or we setup to), we can
> > make sure we always use THP for it, the system will never split it except
> > the user wants to do so.
> > 
> > This patchset is about both two things above, firstly we add pmd level
> > THP COW support by revising the code in do_huge_pmd_wp_page, we added
> > switch for it because different workloads may need different resources,
> 
> The switch is bad, and we won't accept any toggle like that. A system-wide
> setting does not make sense for such behavior.
>

Oh, the reason I added a global switch is also the scenario I mentioned
above: I want those processes to always use PMD-sized folios as the backend
to ensure performance. COW is truly not as common as swap-out/swap-in; it
just happens sometimes, which I guess may be due to image duplication.
Setting it globally is more convenient for my situation :). I can drop this
global switch if that's more reasonable.

> A per-VMA flag? Maybe, but I expect pushback as well, as it is way too specific.
> So we'd have to find some concept that abstracts these semantics. But I expect
> pushback as well.
> 
> We messed up enough with toggles in THP space, unfortunately.
> 
> Also, anything that only works for PMD-sized THPs is a warning sign in 2026 :)
>

Actually, I'm also considering adding COW support for mTHP if upstream
considers it useful, and for pud-sized THPs as well. But I guess I have to
handle stage 1 first: PMD-level COW :). Also, since PMD-sized folios are
commonly used in my workload, I'm considering digging into pmd-sized KSM in
the future, where I think pmd-sized COW may be even more useful :).

> You don't really raise any concrete use cases or performance numbers for these
> use cases. Some details about applications that use fork() and rely on such
> behavior would be helpful.
> 

Sorry about that; the concrete workload itself hasn't been finished yet.
Right now it just happens sometimes in my multi-2M-sized-processes workload
test. But the user of our 2M-sized-folio scheme is not necessarily myself;
it can also be userspace developers. I cannot guarantee that fork will not
be used in the performance tests of their workloads, since it is a normal
POSIX call. Maybe I'm overthinking a little? :)
I just think swap and COW are the two main scenarios that may transparently
split pmd-sized folios, so maybe we can solve them and make THP both
reclaimable and stable. Maybe that could make THP more widely used in real
deployments, since the resources become more controllable for users :).
That's why I was thinking implementing it with setup switches might be a
reasonable solution?

> Note that an application that does fork() could use MADV_COLLAPSE after fork()
> to make sure that it immediately gets THPs back.
> 
> There is also the option to just use MADV_DONTFORK to not even share ranges with
> a child process in the first place, avoiding page copies entirely.
> 

MADV_DONTFORK and MADV_COLLAPSE are both nice options :), but the former
seems a little wasteful :). The latter can largely solve the fork situation,
but we would have to identify all the regions that may be accessed in a
performance test and collapse them. It's also not easy to handle cases like
pmd swap-in for pmd pages mapped by more than two processes. :)

> -- 
> Cheers,
> 
> David

Looking forward to your further opinion, thanks!

Best,
Luka


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry
  2026-05-01 16:16   ` Luka Bai
@ 2026-05-01 18:30     ` David Hildenbrand (Arm)
  2026-05-02  5:06       ` Luka Bai
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand (Arm) @ 2026-05-01 18:30 UTC (permalink / raw)
  To: Luka Bai
  Cc: linux-mm, Jonathan Corbet, Shuah Khan, Andrew Morton,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

>>
>> Note that there was a recent related discussion for executable, which was rejected:
>>
>> https://lore.kernel.org/r/20251226100337.4171191-1-zhangqilong3@huawei.com
>>
> 
> Yes, I know this history, and I know that it will cost some memory or latency,
> That’s why I was wondering maybe I can add a switch to it to make it
> configurable :).

Switches for something like that are just not a good fit.

For example, for a short-lived child (e.g., fork+exec), it usually makes no sense
to COW a larger chunk of address space when you know that it will exit
immediately either way and free up the memory.

>>>
>>> In addition to the problem above, this logic can also generate some
>>> deficiency for THP itself. Currently THP is just a "best-effort" choice
>>> with no "certainty". THP is easily splitted into multiple small pages
>>> on common calling path like reclaiming, COW. A transparent splitting
>>> can cause throughput fluctuation for some workloads. For these workloads,
>>> we may want to give THP some "certainty" just like hugetlbfs,
>>
>> There are no such guarantees, though. And We wouldn't want to commit to any such
>> guarantees today. For example, simple page migration can split the folio.
>> Allocation failures will fallback to small pages etc.
>>
>> If you need guarantees, use hugetlb for now.
>>
> 
> The reason why I want to use THP over hugetlb is that I need reclamation for my
> workload :). There are many processes in my workload that need 2M
> aligned folios for better performance, and we want to reclaim them back automatically
> when the process doesn’t need the folios. 

Can you share some details how exactly that is supposed to work?

> But hugetlbfs cannot do passive reclamation
> from what I know (except doing active madvise by the processes themselves). And using

Right, you can only return hugetlb folios by doing MADV_DONTNEED or munmap().

> THP can easily split the hugepages. So that’s why I would like to add certainty for THP,

Repeat after me: there are no guarantees. There is no certainty :)

> and use THP for these processes as backend, because THP is very well integrated with
> the swap system and other filesystems. And from what I checked,
> it seems the most common case for splitting a THP is COW and swapping so I am trying
> to handle these two scenarios (But coincidentally, PMD swapping is committed in
> https://lore.kernel.org/all/D3F08F85-76E0-4C5A-ABA1-537C68E038B8@nvidia.com/
> a few days before, which is a great implementation :) ).

Right, but that really only changes how we map large folios, not how we allocate
them. There are no guarantees.

> 
>>> The effect
>>> we want is: after some customized setup, if only the system has usable
>>> folio, and the virtual memory alignment permits (or we setup to), we can
>>> make sure we always use THP for it, the system will never split it except
>>> the user wants to do so.
>>>
>>> This patchset is about both two things above, firstly we add pmd level
>>> THP COW support by revising the code in do_huge_pmd_wp_page, we added
>>> switch for it because different workloads may need different resources,
>>
>> The switch is bad, and we won't accept any toggle like that. A system-wide
>> setting does not make sense for such behavior.
>>
> 
> Oh, the reason why I added a switch globally is also because the scenario I mentioned
> above, I want those processes to always use PMD sized folios as backend to make sure
> performance. 

"Always" is wishful thinking in many scenarios I'm afraid.

> COW is truly not that common like swap out/swap in, it just can happen
> sometimes, which I guess the reason may be about image duplication. Setting the system
> globally is more convenient for my situation :). I can go without this global switch
> if it's more reasonable.
> 
>> A per-VMA flag? Maybe, but I expect pushback as well, as it is way too specific.
>> So we'd have to find some concept that abstracts these semantics. But I expect
>> pushback as well.
>>
>> We messed up enough with toggles in THP space, unfortunately.
>>
>> Also, anything that only works for PMD-sized THPs is a warning sign in 2026 :)
>>
> 
> And for PMD-sized THPs, actually, I’m also considering adding more support for
> COW to mTHP if the upstream consider it useful. And also for pud sized THPs
> also. But I guess I have to firstly handle stage 1: PMD level COW now :).
> And also, since PMD sized folio is commonly used in my workload, I'm also wondering
> digging into pmd sized KSM in the future, in which I think pmd sized COW may
> be more useful then :).

I'm afraid I have to stop you right there: there has to be a pretty convincing
story to add any of that. In particular KSM with large folios (/me shivering).

But I already don't buy the COW story. Just configure khugepaged in a better way
or use MADV_COLLAPSE and you don't really need to modify the kernel at all in
99.99% of cases. (khugepaged needs a lot of tuning work, it's currently not
the smartest implementation)

Maybe, we might give khugepaged better direction of what to try scanning next
(e.g., where we just COW'ed a THP). Not sure, there are plenty of things to explore.
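To make the "configure khugepaged" route concrete, its knobs live under
sysfs; the values below are illustrative only, not recommendations, and
assume a kernel with CONFIG_TRANSPARENT_HUGEPAGE (root required to write):

```shell
# Scan more pages per pass and sleep less between passes, so regions
# that were split (e.g., by a COW fault) are collapsed back sooner:
echo 4096 > /sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
echo 1000 > /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs

# Allow collapsing a region even if some of its 512 PTEs are not
# yet populated (higher values trade memory for collapse eagerness):
echo 256 > /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none
```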

> 
>> You don't really raise any concrete use cases or performance numbers for these
>> use cases. Some details about applications that use fork() and rely on such
>> behavior would be helpful.
>>
> 
> Sorry for that, the concrete workload itself hasn't been finished yet. 

Okay, that's what I thought after seeing no workloads and no performance numbers :)

If you don't know the workload, how can you claim that the additional latency
and/or memory consumption is not a problem?

Also: there is no guarantee that you will actually succeed in allocating a PMD
THP during a COW fault.


> Now it just
> can happen sometimes in my multi-2M-sized-processes workload test. But the user
> of our 2M sized folio schema is actually not necessarily myself but also can be the
> userspace developers. I cannot guarantee that fork will not be used in the
> performance test of their workload since that is a normal posix call. Maybe a little
> overthinking? :)

If your application cares about performance, you either shouldn't be using
fork(), or you should be using it very, very wisely (e.g., interaction with
multi-threading, MADV_DONTFORK, avoid touching memory in parent until child
completed).

> I just think swap and COW are two main scenarios that may transparently split pmd sized
> folios, so maybe we can solve it and make THP both reclaimable and stable.

There is page migration, MADV_DONTNEED, munmap/mremap/madvise/mprotect in sub-2M
blocks, memory failure handling and probably a lot more. THP allocation might
fail. THP swapout+swapin might fail to allocate THPs.

Tackling COW handling when you don't even know that it's a real problem seems
premature.

> Maybe
> that can make THP more widely used in real deployed environment since the resource
> can become more controllable for the users :). That's why I was thinking maybe
> implementing it with setup switches is a reasonable solution?

No magical toggles.

> 
>> Note that an application that does fork() could use MADV_COLLAPSE after fork()
>> to make sure that it immediately gets THPs back.
>>
>> There is also the option to just use MADV_DONTFORK to not even share ranges with
>> a child process in the first place, avoiding page copies entirely.
>>
> 
> MADV_DONTFORK and MADV_COLLAPSE are nice and great options :), but the former one seems
> to be a little wasteful :). 

It's actually the right thing to do (tm) if you care about fork() performance
and know that your child will not actually need certain memory areas.

For example, in QEMU we use it to exclude all guest memory from fork(), heavily
improving fork() performance. [there are not a lot of fork() use cases left in
QEMU today, fortunately]

-- 
Cheers,

David

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry
  2026-05-01 18:30     ` David Hildenbrand (Arm)
@ 2026-05-02  5:06       ` Luka Bai
  0 siblings, 0 replies; 13+ messages in thread
From: Luka Bai @ 2026-05-02  5:06 UTC (permalink / raw)
  To: David Hildenbrand (Arm)
  Cc: linux-mm, Jonathan Corbet, Shuah Khan, Andrew Morton,
	Lorenzo Stoakes, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Lance Yang, Vlastimil Babka,
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Jann Horn,
	Arnd Bergmann, Kairui Song, linux-kernel, linux-arch, linux-doc,
	Luka Bai

在 Fri, May 01, 2026 at 08:30:39PM +0200,David Hildenbrand (Arm) 写道:

Hi David,

Thanks for replying again :). I've read your advice, and I agree with your
opinion: THP COW is premature for upstream now; we can reconsider other
approaches. :)

> >>
> >> Note that there was a recent related discussion for executable, which was rejected:
> >>
> >> https://lore.kernel.org/r/20251226100337.4171191-1-zhangqilong3@huawei.com
> >>
> > 
> > Yes, I know this history, and I know that it will cost some memory or latency,
> > That’s why I was wondering maybe I can add a switch to it to make it
> > configurable :).
> 
> Switches for something like that are just not a good fit.
> 
> For example, for a short-lived child (e.g., fork+exec) it usually makes no sense
> to cow a larger chunk of address space, when you know that it will exit
> immediately either way and free up the memory.
> 

Yeah, for most workloads fork is soon followed by an exec call, which makes
COW for THP not so useful for them.

> >>>
> >>> In addition to the problem above, this logic can also generate some
> >>> deficiency for THP itself. Currently THP is just a "best-effort" choice
> >>> with no "certainty". THP is easily splitted into multiple small pages
> >>> on common calling path like reclaiming, COW. A transparent splitting
> >>> can cause throughput fluctuation for some workloads. For these workloads,
> >>> we may want to give THP some "certainty" just like hugetlbfs,
> >>
> >> There are no such guarantees, though. And We wouldn't want to commit to any such
> >> guarantees today. For example, simple page migration can split the folio.
> >> Allocation failures will fallback to small pages etc.
> >>
> >> If you need guarantees, use hugetlb for now.
> >>
> > 
> > The reason why I want to use THP over hugetlb is that I need reclamation for my
> > workload :). There are many processes in my workload that need 2M
> > aligned folios for better performance, and we want to reclaim them back automatically
> > when the process doesn’t need the folios. 
> 
> Can you share some details how exactly that is supposed to work?
> 

Sorry about that; it's basically just a bunch of processes in the environment
with 2M-sized folios as their backend, where we want the OS to reclaim them
when possible. We want to balance performance and memory savings, and avoid
splitting too often, since splitting can cause fragmentation and affect
future 2M folio allocations. :)

> > But hugetlbfs cannot do passive reclamation
> > from what I know (except doing active madvise by the processes themselves). And using
> 
> Right, you can only return hugetlb folios by doing MADV_DONTNEED or munmap().
> 
> > THP can easily split the hugepages. So that’s why I would like to add certainty for THP,
> 
> Repeat after me: there are no guarantees. There is no certainty :)
> 
> > and use THP for these processes as backend, because THP is very well integrated with
> > the swap system and other filesystems. And from what I checked,
> > it seems the most common case for splitting a THP is COW and swapping so I am trying
> > to handle these two scenarios (But coincidentally, PMD swapping is committed in
> > https://lore.kernel.org/all/D3F08F85-76E0-4C5A-ABA1-537C68E038B8@nvidia.com/
> > a few days before, which is a great implementation :) ).
> 
> Right, but that really only changes how we map large folios, not how we allocate
> them. There are no guarantees.
> 
> > 
> >>> The effect
> >>> we want is: after some customized setup, if only the system has usable
> >>> folio, and the virtual memory alignment permits (or we setup to), we can
> >>> make sure we always use THP for it, the system will never split it except
> >>> the user wants to do so.
> >>>
> >>> This patchset is about both two things above, firstly we add pmd level
> >>> THP COW support by revising the code in do_huge_pmd_wp_page, we added
> >>> switch for it because different workloads may need different resources,
> >>
> >> The switch is bad, and we won't accept any toggle like that. A system-wide
> >> setting does not make sense for such behavior.
> >>
> > 
> > Oh, the reason why I added a switch globally is also because the scenario I mentioned
> > above, I want those processes to always use PMD sized folios as backend to make sure
> > performance. 
> 
> "Always" is wishful thinking in many scenarios I'm afraid.

Yeah, I agree, so we are also thinking about other things we could do to increase the
probability of allocating pmd-sized folios. :)

> 
> > COW is truly not that common like swap out/swap in, it just can happen
> > sometimes, which I guess the reason may be about image duplication. Setting the system
> > globally is more convenient for my situation :). I can go without this global switch
> > if it's more reasonable.
> > 
> >> A per-VMA flag? Maybe, but I expect pushback as well, as it is way too specific.
> >> So we'd have to find some concept that abstracts these semantics. But I expect
> >> pushback as well.
> >>
> >> We messed up enough with toggles in THP space, unfortunately.
> >>
> >> Also, anything that only works for PMD-sized THPs is a warning sign in 2026 :)
> >>
> > 
> > And for PMD-sized THPs, actually, I’m also considering adding more support for
> > COW to mTHP if the upstream consider it useful. And also for pud sized THPs
> > also. But I guess I have to firstly handle stage 1: PMD level COW now :).
> > And also, since PMD sized folio is commonly used in my workload, I'm also wondering
> > digging into pmd sized KSM in the future, in which I think pmd sized COW may
> > be more useful then :).
> 
> I'm afraid I have to stop you right there: there has to be a pretty convincing
> story to add any of that. In particular KSM with large folios (/me shivering).
> 
> But I already don't buy the COW story. Just configure khugepaged in a better way
> or use MADV_COLLAPSE and you don't really need to modify the kernel at all in
> 99.99% of the case. (khugepaged needs a lot of tuning work, it's currently not
> the smartest implementation)
> 
> Maybe, we might give khugepaged better direction of what to try scanning next
> (e.g., where we just COW'ed a THP). Not sure, there are plenty of things to explore.
> 

I see. Actually, everything here, including my KSM idea and THP COW, is about saving
memory while keeping the 2M TLB benefit at the same time. Giving khugepaged better
direction is also what we are considering as the next step :).

> > 
> >> You don't really raise any concrete use cases or performance numbers for these
> >> use cases. Some details about applications that use fork() and rely on such
> >> behavior would be helpful.
> >>
> > 
> > Sorry for that, the concrete workload itself hasn't been finished yet. 
> 
> Okay, that's what I thought after seeing no workloads and no performance numbers :)
> 
> If you don't know the workload, how can you claim that the additional latency
> and/or memory consumption is not a problem?
> 

Oh, our point here is that we don't want these 2M folios to be split, since splitting
can cause fragmentation and gradually makes it harder to allocate 2M folios later. It
may cost some additional latency and memory consumption, that's true, but the workload
we are building wants to preserve the 2M folios in the system first. And although the
workload hasn't been finished, we analyzed what it will do, and we expect that most of
the time a COWed 2M folio will be written over a range much larger than 4K, so faulting
at 4K granularity would cause many more page faults :). As we see it, one 2M-sized page
fault should beat multiple 4K-sized page faults, since the copies can be merged together
and there is much less per-fault overhead beyond the copy itself. We do see a performance
improvement in an end-to-end copying test that includes many possible COWs on 2M pages:
roughly 2x. That is smaller than we expected, though, so we'll dig into the details. :)

> Also: there is no guarantee that you will actually succeed in allocating a PMD
> THP during a COW fault.
> 

Yes, so we are also considering some solutions to increase the success rate of
allocating PMD-sized THPs, such as reservation; the idea comes from Yu Zhao's TAO in
https://lore.kernel.org/all/20240229183436.4110845-1-yuzhao@google.com/. But we may
implement the reservation with a different approach, such as a migratetype, since that
may make it easier to change the reserved size dynamically. We'd also like to discuss
this with upstream if anyone has time :).

> 
> > Now it just
> > can happen sometimes in my multi-2M-sized-processes workload test. But the user
> > of our 2M sized folio schema is actually not necessarily myself but also can be the
> > userspace developers. I cannot guarantee that fork will not be used in the
> > performance test of their workload since that is a normal posix call. Maybe a little
> > overthinking? :)
> 
> If your application cares about performance, you either shouldn't be using
> fork(), or you should be using it very, very wisely (e.g., interaction with
> multi-threading, MADV_DONTFORK, avoid touching memory in parent until child
> completed).
> 

Agreed, we'll try that, thank you. :)

> > I just think swap and COW are two main scenarios that may transparently split pmd sized
> > folios, so maybe we can solve it and make THP both reclaimable and stable.
> 
> There is page migration, MADV_DONTNEED, munmap/mremap/madvise/mprotect in sub-2M
> blocks, memory failure handling and probably a lot more. THP allocation might
> fail. THP swapout+swapin might fail to allocate THPs.
> 

Yes, in my earlier thinking I assumed munmap/mremap/madvise/mprotect are actively
called, so it should be easier to restrict them, e.g. by directly failing them when the
user calls them on sub-2M blocks. Page migration is indeed another scenario we need to
handle, but it's not that likely to happen if we configure for it: with our
CONFIG_ARCH_ENABLE_THP_MIGRATION setup, things like NUMA balancing, live migration, and
compaction can all be disabled. Maybe there is something I missed here? :)

> Tackling COW handling when you don't even know that it's a real problem seems
> premature.
> 

Yeah, I guess THP COW can be replaced by other approaches. We just thought that if we
first add THP COW, then the COW that can happen in fork, pmd swap-in, and maybe other
places could all benefit from it. But maybe it's still not the only way to solve the
problem we're facing right now.

> > Maybe
> > that can make THP more widely used in real deployed environment since the resource
> > can become more controllable for the users :). That's why I was thinking maybe
> > implementing it with setup switches is a reasonable solution?
> 
> No magical toggles.
> 
> > 
> >> Note that an application that does fork() could use MADV_COLLAPSE after fork()
> >> to make sure that it immediately gets THPs back.
> >>
> >> There is also the option to just use MADV_DONTFORK to not even share ranges with
> >> a child process in the first place, avoiding page copies entirely.
> >>
> > 
> > MADV_DONTFORK and MADV_COLLAPSE are nice and great options :), but the former one seems
> > to be a little wasteful :). 
> 
> It's actually the right thing to do (tm) if you care about fork() performance
> and know that your child will not actually need certain memory areas.
> 
> For example, in QEMU we use it to exclude all guest memory from fork(), heavily
> improving fork() performance. [there are not a lot of fork() use cases left in
> QEMU today, fortunately]
> 

Yeah, these two are nice in many situations; we'll consider using MADV_DONTFORK,
MADV_COLLAPSE, and other tools for our workload, thanks!

> -- 
> Cheers,
> 
> David

Best regards,
Luka

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [syzbot ci] Re: mm: Support selecting doing direct COW for anonymous pmd entry
  2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
                   ` (5 preceding siblings ...)
  2026-05-01  7:07 ` [PATCH 0/5] mm: Support selecting doing direct " David Hildenbrand (Arm)
@ 2026-05-03  7:03 ` syzbot ci
  6 siblings, 0 replies; 13+ messages in thread
From: syzbot ci @ 2026-05-03  7:03 UTC (permalink / raw)
  To: akpm, arnd, baohua, baolin.wang, corbet, david, dev.jain, jannh,
	kasong, lance.yang, liam, linux-arch, linux-doc, linux-kernel,
	linux-mm, ljs, lukabai, lukafocus, mhocko, npache, rppt,
	ryan.roberts, skhan, surenb, vbabka, ziy
  Cc: syzbot, syzkaller-bugs

syzbot ci has tested the following series

[v1] mm: Support selecting doing direct COW for anonymous pmd entry
https://lore.kernel.org/all/20260501-thp_cow-v1-0-005377483738@tencent.com
* [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup
* [PATCH 2/5] mm: add pmd level THP COW parameter in sysfs
* [PATCH 3/5] mm: add pmd level THP COW judgement helpers
* [PATCH 4/5] mm: enable map_anon_folio_pmd_nopf to handle unshare
* [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry.

and found the following issue:
general protection fault in __page_table_check_pmds_set

Full report is available here:
https://ci.syzbot.org/series/37e78e03-c08b-4de1-9b07-a21c64f4f462

***

general protection fault in __page_table_check_pmds_set

tree:      mm-new
URL:       https://kernel.googlesource.com/pub/scm/linux/kernel/git/akpm/mm.git
base:      41cd9e3d23b8fd9e6c3c0311e9cb0304442c6141
arch:      amd64
compiler:  Debian clang version 21.1.8 (++20251221033036+2078da43e25a-1~exp1~20251221153213.50), Debian LLD 21.1.8
config:    https://ci.syzbot.org/builds/fcdd679f-29e8-43db-8792-d4fd97c62d91/config
syz repro: https://ci.syzbot.org/findings/87820d58-5d91-4c2e-b80b-5a75006e230d/syz_repro

Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] SMP KASAN PTI
KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007]
CPU: 0 UID: 0 PID: 5807 Comm: syz.1.18 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
RIP: 0010:__page_table_check_pmds_set+0x1d4/0x340 mm/page_table_check.c:240
Code: 00 00 4c 89 6c 24 08 4c 89 3c 24 4a 8d 2c fd f8 ff ff ff 31 db 49 bf 00 00 00 00 00 fc ff df 49 8d 3c 1e 48 89 f8 48 c1 e8 03 <42> 80 3c 38 00 74 05 e8 00 29 f4 ff 4d 8b 24 1e 45 89 e5 41 81 e5
RSP: 0018:ffffc90003c46ee0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff8881102a1d80 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 0000000000000000 R08: 0000000000000001 R09: 1ffff110242081d8
R10: dffffc0000000000 R11: ffffed10242081d9 R12: dffffc0000000000
R13: 0000000025c008e7 R14: 0000000000000000 R15: dffffc0000000000
FS:  00007fbef78216c0(0000) GS:ffff88818dc91000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b32f63fff CR3: 000000002350a000 CR4: 00000000000006f0
Call Trace:
 <TASK>
 page_table_check_pmds_set include/linux/page_table_check.h:92 [inline]
 set_pmd_at arch/x86/include/asm/pgtable.h:1209 [inline]
 map_anon_folio_pmd_nopf+0x452/0x480 mm/huge_memory.c:1449
 collapse_huge_page mm/khugepaged.c:1411 [inline]
 mthp_collapse mm/khugepaged.c:1530 [inline]
 collapse_scan_pmd mm/khugepaged.c:1773 [inline]
 collapse_single_pmd+0x4691/0x5540 mm/khugepaged.c:2786
 madvise_collapse+0x300/0x7a0 mm/khugepaged.c:3218
 madvise_vma_behavior+0x11b0/0x4210 mm/madvise.c:1383
 madvise_walk_vmas+0x573/0xae0 mm/madvise.c:1738
 madvise_do_behavior+0x386/0x540 mm/madvise.c:1954
 do_madvise+0x1fa/0x2e0 mm/madvise.c:2047
 __do_sys_madvise mm/madvise.c:2056 [inline]
 __se_sys_madvise mm/madvise.c:2054 [inline]
 __x64_sys_madvise+0xa6/0xc0 mm/madvise.c:2054
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x15f/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fbef699cdd9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fbef7821028 EFLAGS: 00000246 ORIG_RAX: 000000000000001c
RAX: ffffffffffffffda RBX: 00007fbef6c15fa0 RCX: 00007fbef699cdd9
RDX: 0000000000000019 RSI: 0000000000400000 RDI: 0000200000000000
RBP: 00007fbef6a32d69 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fbef6c16038 R14: 00007fbef6c15fa0 R15: 00007ffdddd640f8
 </TASK>
Modules linked in:
---[ end trace 0000000000000000 ]---
RIP: 0010:__page_table_check_pmds_set+0x1d4/0x340 mm/page_table_check.c:240
Code: 00 00 4c 89 6c 24 08 4c 89 3c 24 4a 8d 2c fd f8 ff ff ff 31 db 49 bf 00 00 00 00 00 fc ff df 49 8d 3c 1e 48 89 f8 48 c1 e8 03 <42> 80 3c 38 00 74 05 e8 00 29 f4 ff 4d 8b 24 1e 45 89 e5 41 81 e5
RSP: 0018:ffffc90003c46ee0 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
RDX: ffff8881102a1d80 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 0000000000000000 R08: 0000000000000001 R09: 1ffff110242081d8
R10: dffffc0000000000 R11: ffffed10242081d9 R12: dffffc0000000000
R13: 0000000025c008e7 R14: 0000000000000000 R15: dffffc0000000000
FS:  00007fbef78216c0(0000) GS:ffff88818dc91000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b32f63fff CR3: 000000002350a000 CR4: 00000000000006f0
----------------
Code disassembly (best guess):
   0:	00 00                	add    %al,(%rax)
   2:	4c 89 6c 24 08       	mov    %r13,0x8(%rsp)
   7:	4c 89 3c 24          	mov    %r15,(%rsp)
   b:	4a 8d 2c fd f8 ff ff 	lea    -0x8(,%r15,8),%rbp
  12:	ff
  13:	31 db                	xor    %ebx,%ebx
  15:	49 bf 00 00 00 00 00 	movabs $0xdffffc0000000000,%r15
  1c:	fc ff df
  1f:	49 8d 3c 1e          	lea    (%r14,%rbx,1),%rdi
  23:	48 89 f8             	mov    %rdi,%rax
  26:	48 c1 e8 03          	shr    $0x3,%rax
* 2a:	42 80 3c 38 00       	cmpb   $0x0,(%rax,%r15,1) <-- trapping instruction
  2f:	74 05                	je     0x36
  31:	e8 00 29 f4 ff       	call   0xfff42936
  36:	4d 8b 24 1e          	mov    (%r14,%rbx,1),%r12
  3a:	45 89 e5             	mov    %r12d,%r13d
  3d:	41                   	rex.B
  3e:	81                   	.byte 0x81
  3f:	e5                   	.byte 0xe5


***

If these findings have caused you to resend the series or submit a
separate fix, please add the following tag to your commit message:
  Tested-by: syzbot@syzkaller.appspotmail.com

---
This report is generated by a bot. It may contain errors.
syzbot ci engineers can be reached at syzkaller@googlegroups.com.

To test a patch for this bug, please reply with `#syz test`
(should be on a separate line).

The patch should be attached to the email.
Note: arguments like custom git repos and branches are not supported.

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2026-05-03  7:03 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-05-01  5:55 [PATCH 0/5] mm: Support selecting doing direct COW for anonymous pmd entry Luka Bai
2026-05-01  5:55 ` [PATCH 1/5] mm: add basic madvise helpers and branch for THP setup Luka Bai
2026-05-01  5:55 ` [PATCH 2/5] mm: add pmd level THP COW parameter in sysfs Luka Bai
2026-05-01  5:55 ` [PATCH 3/5] mm: add pmd level THP COW judgement helpers Luka Bai
2026-05-01  5:55 ` [PATCH 4/5] mm: enable map_anon_folio_pmd_nopf to handle unshare Luka Bai
2026-05-01  5:55 ` [PATCH 5/5] mm: support choosing to do THP COW for anonymous pmd entry Luka Bai
2026-05-01  7:11   ` David Hildenbrand (Arm)
2026-05-01 15:01     ` Luka Bai
2026-05-01  7:07 ` [PATCH 0/5] mm: Support selecting doing direct " David Hildenbrand (Arm)
2026-05-01 16:16   ` Luka Bai
2026-05-01 18:30     ` David Hildenbrand (Arm)
2026-05-02  5:06       ` Luka Bai
2026-05-03  7:03 ` [syzbot ci] " syzbot ci
