* [RFC PATCH 00/11] add shmem mTHP collapse support
@ 2025-08-20  9:07 Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file() Baolin Wang
                   ` (10 more replies)
  0 siblings, 11 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

Based on mm/mm-new from today.

This is a follow-up patchset for mTHP collapse, adding support for shmem
(or file page) mTHP collapse. It is based on Nico's patchset [1].

The strategy for shmem/file mTHP collapse follows that of anonymous mTHP
collapse, which is, quoting Nico:

"while scanning PMD ranges for potential collapse candidates, keep
track of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER PTEs.

After the scan is complete, we will perform binary recursion on the bitmap
to determine which mTHP size would be most efficient to collapse to. The
'max_ptes_none' will be scaled by the attempted collapse order to determine
how full a THP must be to be eligible.
"

Moreover, to facilitate scanning of shmem/file folios, the
'cc->mthp_bitmap_temp' bitmap is extended to record whether each index
within the PMD range corresponds to a present page; this temporary bitmap
is then used to determine whether each chunk should be marked as present
for mTHP collapse.
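
To illustrate the chunk-marking step, here is a minimal userspace sketch
(not kernel code) of the idea. The constants assume x86-64 with 4K base
pages and KHUGEPAGED_MIN_MTHP_ORDER == 2; the names are ad hoc and only
mirror the logic described above:

/* chunk_mark_demo.c: userspace illustration only, not kernel code */
#include <stdbool.h>
#include <stdio.h>

#define HPAGE_PMD_ORDER		9	/* x86-64 with 4K base pages */
#define HPAGE_PMD_NR		(1 << HPAGE_PMD_ORDER)
#define MIN_MTHP_ORDER		2	/* assumed KHUGEPAGED_MIN_MTHP_ORDER */
#define MIN_MTHP_NR		(1 << MIN_MTHP_ORDER)
#define MAX_PTES_NONE		(HPAGE_PMD_NR - 1)	/* default max_ptes_none */

int main(void)
{
	bool present[HPAGE_PMD_NR] = { false };	/* per-index "temp" bitmap */
	bool chunk[HPAGE_PMD_NR / MIN_MTHP_NR] = { false };
	/* scale max_ptes_none down to one MIN_MTHP_ORDER chunk */
	int max_scaled_none = MAX_PTES_NONE >> (HPAGE_PMD_ORDER - MIN_MTHP_ORDER);
	int i, j;

	/* pretend the first 16 pages (64K) of the PMD range are present */
	for (i = 0; i < 16; i++)
		present[i] = true;

	/* a chunk is marked when it has few enough holes (none entries) */
	for (i = 0; i < HPAGE_PMD_NR; i += MIN_MTHP_NR) {
		int nr_present = 0;

		for (j = 0; j < MIN_MTHP_NR; j++)
			nr_present += present[i + j];
		if (nr_present > MIN_MTHP_NR - max_scaled_none)
			chunk[i / MIN_MTHP_NR] = true;
	}

	for (i = 0; i < HPAGE_PMD_NR / MIN_MTHP_NR; i++)
		if (chunk[i])
			printf("chunk %d is a collapse candidate\n", i);
	return 0;
}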

Currently, collapse_pte_mapped_thp() does not build the mapping for mTHP,
because we still expect to establish the mTHP mapping via refault under the
control of fault_around. So collapse_pte_mapped_thp() remains responsible
only for building the mapping for PMD-sized THP, which is reasonable and
keeps things simple.

In addition, I have added mTHP collapse selftests, and all khugepaged test
cases now pass.

[1] https://lore.kernel.org/all/20250819134205.622806-1-npache@redhat.com/

Baolin Wang (11):
  mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file()
  mm: khugepaged: generalize collapse_file for mTHP support
  mm: khugepaged: add an order check for THP statistics
  mm: khugepaged: add shmem/file mTHP collapse support
  mm: shmem: kick khugepaged for enabling non-PMD-sized shmem mTHPs
  mm: khugepaged: allow khugepaged to check all shmem/file large orders
  mm: khugepaged: skip large folios that don't need to be collapsed
  selftests: mm: extend the check_huge() to support mTHP check
  selftests: mm: move gather_after_split_folio_orders() into vm_util.c
    file
  selftests: mm: implement the mTHP hugepage check helper
  selftests: mm: add mTHP collapse test cases

 include/linux/shmem_fs.h                      |   4 +-
 mm/khugepaged.c                               | 177 +++++++++++++----
 mm/shmem.c                                    |  10 +-
 tools/testing/selftests/mm/khugepaged.c       | 162 ++++++++++++----
 tools/testing/selftests/mm/run_vmtests.sh     |   4 +
 .../selftests/mm/split_huge_page_test.c       | 135 +------------
 tools/testing/selftests/mm/uffd-common.c      |   4 +-
 tools/testing/selftests/mm/vm_util.c          | 179 +++++++++++++++++-
 tools/testing/selftests/mm/vm_util.h          |   6 +-
 9 files changed, 455 insertions(+), 226 deletions(-)

-- 
2.43.5



* [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file()
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20 10:13   ` Dev Jain
  2025-08-20  9:07 ` [RFC PATCH 02/11] mm: khugepaged: generalize collapse_file for mTHP support Baolin Wang
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

Similar to anonymous folio collapse, we should also check
'khugepaged_max_ptes_none' when trying to collapse shmem/file folios.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/khugepaged.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5a3386043f39..5d4493b77f3c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2125,6 +2125,13 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 					}
 				}
 				nr_none++;
+
+				if (cc->is_khugepaged && nr_none > khugepaged_max_ptes_none) {
+					result = SCAN_EXCEED_NONE_PTE;
+					count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+					goto xa_locked;
+				}
+
 				index++;
 				continue;
 			}
-- 
2.43.5



* [RFC PATCH 02/11] mm: khugepaged: generalize collapse_file for mTHP support
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file() Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 03/11] mm: khugepaged: add an order check for THP statistics Baolin Wang
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

Generalize collapse_file() to take a folio order parameter, to support
future file/shmem mTHP collapse.

No functional changes in this patch.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/khugepaged.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5d4493b77f3c..e64ed86d28ca 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2064,21 +2064,23 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
  */
 static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			 struct file *file, pgoff_t start,
-			 struct collapse_control *cc)
+			 struct collapse_control *cc,
+			 int order)
 {
 	struct address_space *mapping = file->f_mapping;
 	struct page *dst;
 	struct folio *folio, *tmp, *new_folio;
-	pgoff_t index = 0, end = start + HPAGE_PMD_NR;
+	int nr_pages = 1 << order;
+	pgoff_t index = 0, end = start + nr_pages;
 	LIST_HEAD(pagelist);
-	XA_STATE_ORDER(xas, &mapping->i_pages, start, HPAGE_PMD_ORDER);
+	XA_STATE_ORDER(xas, &mapping->i_pages, start, order);
 	int nr_none = 0, result = SCAN_SUCCEED;
 	bool is_shmem = shmem_file(file);
 
 	VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
-	VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
+	VM_BUG_ON(start & (nr_pages - 1));
 
-	result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
+	result = alloc_charge_folio(&new_folio, mm, cc, order);
 	if (result != SCAN_SUCCEED)
 		goto out;
 
@@ -2426,14 +2428,14 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	 * unwritten page.
 	 */
 	folio_mark_uptodate(new_folio);
-	folio_ref_add(new_folio, HPAGE_PMD_NR - 1);
+	folio_ref_add(new_folio, nr_pages - 1);
 
 	if (is_shmem)
 		folio_mark_dirty(new_folio);
 	folio_add_lru(new_folio);
 
 	/* Join all the small entries into a single multi-index entry. */
-	xas_set_order(&xas, start, HPAGE_PMD_ORDER);
+	xas_set_order(&xas, start, order);
 	xas_store(&xas, new_folio);
 	WARN_ON_ONCE(xas_error(&xas));
 	xas_unlock_irq(&xas);
@@ -2496,7 +2498,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	folio_put(new_folio);
 out:
 	VM_BUG_ON(!list_empty(&pagelist));
-	trace_mm_khugepaged_collapse_file(mm, new_folio, index, addr, is_shmem, file, HPAGE_PMD_NR, result);
+	trace_mm_khugepaged_collapse_file(mm, new_folio, index, addr, is_shmem, file, nr_pages, result);
 	return result;
 }
 
@@ -2599,7 +2601,7 @@ static int collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 			result = SCAN_EXCEED_NONE_PTE;
 			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
 		} else {
-			result = collapse_file(mm, addr, file, start, cc);
+			result = collapse_file(mm, addr, file, start, cc, HPAGE_PMD_ORDER);
 		}
 	}
 
-- 
2.43.5



* [RFC PATCH 03/11] mm: khugepaged: add an order check for THP statistics
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file() Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 02/11] mm: khugepaged: generalize collapse_file for mTHP support Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 04/11] mm: khugepaged: add shmem/file mTHP collapse support Baolin Wang
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

In order to support file/shmem mTHP collapse in the following patches, add
a PMD-sized THP order check so that the PMD-sized THP counters are only
updated for PMD-sized collapse.

No functional changes.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/khugepaged.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index e64ed86d28ca..195c26699118 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2411,10 +2411,12 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		xas_lock_irq(&xas);
 	}
 
-	if (is_shmem)
-		__lruvec_stat_mod_folio(new_folio, NR_SHMEM_THPS, HPAGE_PMD_NR);
-	else
-		__lruvec_stat_mod_folio(new_folio, NR_FILE_THPS, HPAGE_PMD_NR);
+	if (order == HPAGE_PMD_ORDER) {
+		if (is_shmem)
+			__lruvec_stat_mod_folio(new_folio, NR_SHMEM_THPS, HPAGE_PMD_NR);
+		else
+			__lruvec_stat_mod_folio(new_folio, NR_FILE_THPS, HPAGE_PMD_NR);
+	}
 
 	if (nr_none) {
 		__lruvec_stat_mod_folio(new_folio, NR_FILE_PAGES, nr_none);
-- 
2.43.5



* [RFC PATCH 04/11] mm: khugepaged: add shmem/file mTHP collapse support
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (2 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 03/11] mm: khugepaged: add an order check for THP statistics Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 05/11] mm: shmem: kick khugepaged for enabling non-PMD-sized shmem mTHPs Baolin Wang
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

Khugepaged already supports anonymous mTHP collapse. Similarly, let
khugepaged also support shmem/file mTHP collapse. The strategy for
shmem/file mTHP collapse follows that of anonymous mTHP collapse, which
is, quoting Nico:

"while scanning PMD ranges for potential collapse candidates, keep
track of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER PTEs.

After the scan is complete, we will perform binary recursion on the bitmap
to determine which mTHP size would be most efficient to collapse to. The
'max_ptes_none' will be scaled by the attempted collapse order to determine
how full a THP must be to be eligible.
"

Moreover, to facilitate scanning of shmem/file folios, the
'cc->mthp_bitmap_temp' bitmap is extended to record whether each index
within the PMD range corresponds to a present page; this temporary bitmap
is then used to determine whether each chunk should be marked as present
for mTHP collapse.
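
As an example of the 'max_ptes_none' scaling mentioned above: on x86-64
with 4K base pages (HPAGE_PMD_ORDER = 9) and the default
khugepaged_max_ptes_none of 511, an attempted order-4 (64K) collapse is
allowed at most 511 >> (9 - 4) = 15 none PTEs.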

Currently, collapse_pte_mapped_thp() does not build the mapping for mTHP,
because we still expect to establish the mTHP mapping via refault under the
control of fault_around. So collapse_pte_mapped_thp() remains responsible
only for building the mapping for PMD-sized THP, which is reasonable and
keeps things simple.

Note that we do not need to remove pte page tables for shmem/file mTHP
collapse.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/khugepaged.c | 133 ++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 107 insertions(+), 26 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 195c26699118..53ca7bb72fbc 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -113,7 +113,7 @@ struct collapse_control {
 	 * 1bit = order KHUGEPAGED_MIN_MTHP_ORDER mTHP
 	 */
 	DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
-	DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
+	DECLARE_BITMAP(mthp_bitmap_temp, HPAGE_PMD_NR);
 	struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
 };
 
@@ -147,6 +147,10 @@ static struct khugepaged_scan khugepaged_scan = {
 	.mm_head = LIST_HEAD_INIT(khugepaged_scan.mm_head),
 };
 
+static int collapse_file(struct mm_struct *mm, unsigned long addr,
+			struct file *file, pgoff_t start,
+			struct collapse_control *cc, int order);
+
 #ifdef CONFIG_SYSFS
 static ssize_t scan_sleep_millisecs_show(struct kobject *kobj,
 					 struct kobj_attribute *attr,
@@ -1366,7 +1370,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
 
 /* Recursive function to consume the bitmap */
 static int collapse_scan_bitmap(struct mm_struct *mm, unsigned long address,
-			int referenced, int unmapped, struct collapse_control *cc,
+			struct file *file, int referenced, int unmapped,
+			pgoff_t start, struct collapse_control *cc,
 			bool *mmap_locked, unsigned long enabled_orders)
 {
 	u8 order, next_order;
@@ -1401,10 +1406,14 @@ static int collapse_scan_bitmap(struct mm_struct *mm, unsigned long address,
 
 		/* Check if the region is "almost full" based on the threshold */
 		if (bits_set > threshold_bits || is_pmd_only
-			|| test_bit(order, &huge_anon_orders_always)) {
-			ret = collapse_huge_page(mm, address, referenced, unmapped,
-						 cc, mmap_locked, order,
-						 offset * KHUGEPAGED_MIN_MTHP_NR);
+			|| (!file && test_bit(order, &huge_anon_orders_always))) {
+			if (file)
+				ret = collapse_file(mm, address, file,
+						start + offset * KHUGEPAGED_MIN_MTHP_NR, cc, order);
+			else
+				ret = collapse_huge_page(mm, address, referenced, unmapped,
+						cc, mmap_locked, order,
+						offset * KHUGEPAGED_MIN_MTHP_NR);
 
 			/*
 			 * Analyze failure reason to determine next action:
@@ -1418,6 +1427,7 @@ static int collapse_scan_bitmap(struct mm_struct *mm, unsigned long address,
 				collapsed += (1 << order);
 			case SCAN_PAGE_RO:
 			case SCAN_PTE_MAPPED_HUGEPAGE:
+			case SCAN_PAGE_COMPOUND:
 				continue;
 			/* Cases were lower orders might still succeed */
 			case SCAN_LACK_REFERENCED_PAGE:
@@ -1481,7 +1491,7 @@ static int collapse_scan_pmd(struct mm_struct *mm,
 		goto out;
 
 	bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
-	bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
+	bitmap_zero(cc->mthp_bitmap_temp, HPAGE_PMD_NR);
 	memset(cc->node_load, 0, sizeof(cc->node_load));
 	nodes_clear(cc->alloc_nmask);
 
@@ -1649,8 +1659,8 @@ static int collapse_scan_pmd(struct mm_struct *mm,
 out_unmap:
 	pte_unmap_unlock(pte, ptl);
 	if (result == SCAN_SUCCEED) {
-		result = collapse_scan_bitmap(mm, address, referenced, unmapped, cc,
-					      mmap_locked, enabled_orders);
+		result = collapse_scan_bitmap(mm, address, NULL, referenced, unmapped,
+					      0, cc, mmap_locked, enabled_orders);
 		if (result > 0)
 			result = SCAN_SUCCEED;
 		else
@@ -2067,6 +2077,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			 struct collapse_control *cc,
 			 int order)
 {
+	int max_scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
 	struct address_space *mapping = file->f_mapping;
 	struct page *dst;
 	struct folio *folio, *tmp, *new_folio;
@@ -2128,9 +2139,10 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				}
 				nr_none++;
 
-				if (cc->is_khugepaged && nr_none > khugepaged_max_ptes_none) {
+				if (cc->is_khugepaged && nr_none > max_scaled_none) {
 					result = SCAN_EXCEED_NONE_PTE;
 					count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+					count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE);
 					goto xa_locked;
 				}
 
@@ -2223,6 +2235,18 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			goto out_unlock;
 		}
 
+		/*
+		 * If the folio order is greater than the collapse order, there is
+		 * no need to continue attempting to collapse.
+		 * And should return SCAN_PAGE_COMPOUND instead of SCAN_PTE_MAPPED_HUGEPAGE,
+		 * then we can build the mapping under the control of fault_around
+		 * when refaulting.
+		 */
+		if (folio_order(folio) >= order) {
+			result = SCAN_PAGE_COMPOUND;
+			goto out_unlock;
+		}
+
 		if (folio_mapping(folio) != mapping) {
 			result = SCAN_TRUNCATED;
 			goto out_unlock;
@@ -2443,12 +2467,12 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	xas_unlock_irq(&xas);
 
 	/*
-	 * Remove pte page tables, so we can re-fault the page as huge.
+	 * Remove pte page tables for PMD-sized THP collapse, so we can re-fault
+	 * the page as huge.
 	 * If MADV_COLLAPSE, adjust result to call collapse_pte_mapped_thp().
 	 */
-	retract_page_tables(mapping, start);
-	if (cc && !cc->is_khugepaged)
-		result = SCAN_PTE_MAPPED_HUGEPAGE;
+	if (order == HPAGE_PMD_ORDER)
+		retract_page_tables(mapping, start);
 	folio_unlock(new_folio);
 
 	/*
@@ -2504,21 +2528,35 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 	return result;
 }
 
-static int collapse_scan_file(struct mm_struct *mm, unsigned long addr,
-			      struct file *file, pgoff_t start,
+static int collapse_scan_file(struct mm_struct *mm, struct vm_area_struct *vma,
+			      unsigned long addr, struct file *file, pgoff_t start,
 			      struct collapse_control *cc)
 {
+	int max_scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER);
+	enum tva_type type = cc->is_khugepaged ? TVA_KHUGEPAGED : TVA_FORCED_COLLAPSE;
 	struct folio *folio = NULL;
 	struct address_space *mapping = file->f_mapping;
 	XA_STATE(xas, &mapping->i_pages, start);
-	int present, swap;
+	int present, swap, nr_pages;
+	unsigned long enabled_orders;
 	int node = NUMA_NO_NODE;
 	int result = SCAN_SUCCEED;
+	bool is_pmd_only;
 
 	present = 0;
 	swap = 0;
+	bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
+	bitmap_zero(cc->mthp_bitmap_temp, HPAGE_PMD_NR);
 	memset(cc->node_load, 0, sizeof(cc->node_load));
 	nodes_clear(cc->alloc_nmask);
+
+	if (cc->is_khugepaged)
+		enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
+				type, THP_ORDERS_ALL_FILE_DEFAULT);
+	else
+		enabled_orders = BIT(HPAGE_PMD_ORDER);
+	is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
+
 	rcu_read_lock();
 	xas_for_each(&xas, folio, start + HPAGE_PMD_NR - 1) {
 		if (xas_retry(&xas, folio))
@@ -2587,7 +2625,20 @@ static int collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 		 * is just too costly...
 		 */
 
-		present += folio_nr_pages(folio);
+		nr_pages = folio_nr_pages(folio);
+		present += nr_pages;
+
+		/*
+		 * If there are folios present, keep track of it in the bitmap
+		 * for file/shmem mTHP collapse.
+		 */
+		if (!is_pmd_only) {
+			pgoff_t pgoff = max_t(pgoff_t, start, folio->index) - start;
+
+			nr_pages = min_t(int, HPAGE_PMD_NR - pgoff, nr_pages);
+			bitmap_set(cc->mthp_bitmap_temp, pgoff, nr_pages);
+		}
+
 		folio_put(folio);
 
 		if (need_resched()) {
@@ -2597,16 +2648,46 @@ static int collapse_scan_file(struct mm_struct *mm, unsigned long addr,
 	}
 	rcu_read_unlock();
 
-	if (result == SCAN_SUCCEED) {
-		if (cc->is_khugepaged &&
-		    present < HPAGE_PMD_NR - khugepaged_max_ptes_none) {
-			result = SCAN_EXCEED_NONE_PTE;
-			count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
-		} else {
-			result = collapse_file(mm, addr, file, start, cc, HPAGE_PMD_ORDER);
+	if (result != SCAN_SUCCEED)
+		goto out;
+
+	if (cc->is_khugepaged && is_pmd_only &&
+	    present < HPAGE_PMD_NR - khugepaged_max_ptes_none) {
+		result = SCAN_EXCEED_NONE_PTE;
+		count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+		goto out;
+	}
+
+	/*
+	 * Check each KHUGEPAGED_MIN_MTHP_NR page chunks, and keep track of it
+	 * in the bitmap if this chunk has enough present folios.
+	 */
+	if (!is_pmd_only) {
+		int i;
+
+		for (i = 0; i < HPAGE_PMD_NR; i += KHUGEPAGED_MIN_MTHP_NR) {
+			if (bitmap_weight(cc->mthp_bitmap_temp, KHUGEPAGED_MIN_MTHP_NR) >
+					  KHUGEPAGED_MIN_MTHP_NR - max_scaled_none)
+				bitmap_set(cc->mthp_bitmap, i / KHUGEPAGED_MIN_MTHP_NR, 1);
+
+			bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap_temp,
+					   KHUGEPAGED_MIN_MTHP_NR, HPAGE_PMD_NR);
 		}
+
+		bitmap_zero(cc->mthp_bitmap_temp, HPAGE_PMD_NR);
+	}
+	result = collapse_scan_bitmap(mm, addr, file, 0, 0, start,
+				      cc, NULL, enabled_orders);
+	if (result > 0) {
+		if (cc && !cc->is_khugepaged)
+			result = SCAN_PTE_MAPPED_HUGEPAGE;
+		else
+			result = SCAN_SUCCEED;
+	} else {
+		result = SCAN_FAIL;
 	}
 
+out:
 	trace_mm_khugepaged_scan_file(mm, folio, file, present, swap, result);
 	return result;
 }
@@ -2628,7 +2709,7 @@ static int collapse_single_pmd(unsigned long addr,
 
 		mmap_read_unlock(mm);
 		*mmap_locked = false;
-		result = collapse_scan_file(mm, addr, file, pgoff, cc);
+		result = collapse_scan_file(mm, vma, addr, file, pgoff, cc);
 		fput(file);
 		if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
 			mmap_read_lock(mm);
-- 
2.43.5



* [RFC PATCH 05/11] mm: shmem: kick khugepaged for enabling non-PMD-sized shmem mTHPs
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (3 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 04/11] mm: khugepaged: add shmem/file mTHP collapse support Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 06/11] mm: khugepaged: allow khugepaged to check all shmem/file large orders Baolin Wang
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

When only non-PMD-sized mTHP is enabled (for example, only 64K mTHP is
enabled), we should still allow kicking khugepaged to attempt scanning and
collapsing 64K shmem mTHPs. Modify shmem_hpage_pmd_enabled() to support
shmem mTHP collapse and, while we are at it, rename it to
shmem_hpage_enabled() to make the function name clearer.
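
For example, if only
/sys/kernel/mm/transparent_hugepage/hugepages-64kB/shmem_enabled is set to
'always' while the global and PMD-sized controls are left at 'never',
khugepaged should still be started so that it can collapse shmem folios
into 64K mTHPs.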

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 include/linux/shmem_fs.h |  4 ++--
 mm/khugepaged.c          |  2 +-
 mm/shmem.c               | 10 +++++-----
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 6d0f9c599ff7..cbe46e0c8bce 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -118,7 +118,7 @@ int shmem_unuse(unsigned int type);
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
 				loff_t write_end, bool shmem_huge_force);
-bool shmem_hpage_pmd_enabled(void);
+bool shmem_hpage_enabled(void);
 #else
 static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
@@ -127,7 +127,7 @@ static inline unsigned long shmem_allowable_huge_orders(struct inode *inode,
 	return 0;
 }
 
-static inline bool shmem_hpage_pmd_enabled(void)
+static inline bool shmem_hpage_enabled(void)
 {
 	return false;
 }
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 53ca7bb72fbc..eb0b433d6ccb 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -453,7 +453,7 @@ static bool hugepage_enabled(void)
 	if (READ_ONCE(huge_anon_orders_inherit) &&
 	    hugepage_global_enabled())
 		return true;
-	if (IS_ENABLED(CONFIG_SHMEM) && shmem_hpage_pmd_enabled())
+	if (IS_ENABLED(CONFIG_SHMEM) && shmem_hpage_enabled())
 		return true;
 	return false;
 }
diff --git a/mm/shmem.c b/mm/shmem.c
index 13cc51df3893..a360738ab732 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1791,17 +1791,17 @@ static gfp_t limit_gfp_mask(gfp_t huge_gfp, gfp_t limit_gfp)
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-bool shmem_hpage_pmd_enabled(void)
+bool shmem_hpage_enabled(void)
 {
 	if (shmem_huge == SHMEM_HUGE_DENY)
 		return false;
-	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_always))
+	if (READ_ONCE(huge_shmem_orders_always))
 		return true;
-	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_madvise))
+	if (READ_ONCE(huge_shmem_orders_madvise))
 		return true;
-	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_within_size))
+	if (READ_ONCE(huge_shmem_orders_within_size))
 		return true;
-	if (test_bit(HPAGE_PMD_ORDER, &huge_shmem_orders_inherit) &&
+	if (READ_ONCE(huge_shmem_orders_inherit) &&
 	    shmem_huge != SHMEM_HUGE_NEVER)
 		return true;
 
-- 
2.43.5



* [RFC PATCH 06/11] mm: khugepaged: allow khugepaged to check all shmem/file large orders
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (4 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 05/11] mm: shmem: kick khugepaged for enabling non-PMD-sized shmem mTHPs Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 07/11] mm: khugepaged: skip large folios that don't need to be collapsed Baolin Wang
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

We are now ready to enable shmem/file mTHP collapse, allowing
thp_vma_allowable_orders() to check all permissible file large orders.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/khugepaged.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index eb0b433d6ccb..d5ae2e6c4107 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -496,7 +496,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
 	if (!mm_flags_test(MMF_VM_HUGEPAGE, vma->vm_mm) &&
 	    hugepage_enabled()) {
 		unsigned long orders = vma_is_anonymous(vma) ?
-					THP_ORDERS_ALL_ANON : BIT(PMD_ORDER);
+				THP_ORDERS_ALL_ANON : THP_ORDERS_ALL_FILE_DEFAULT;
 
 		if (thp_vma_allowable_orders(vma, vm_flags, TVA_KHUGEPAGED,
 					    orders))
@@ -2780,7 +2780,7 @@ static unsigned int collapse_scan_mm_slot(unsigned int pages, int *result,
 	vma_iter_init(&vmi, mm, khugepaged_scan.address);
 	for_each_vma(vmi, vma) {
 		unsigned long orders = vma_is_anonymous(vma) ?
-					THP_ORDERS_ALL_ANON : BIT(PMD_ORDER);
+				THP_ORDERS_ALL_ANON : THP_ORDERS_ALL_FILE_DEFAULT;
 		unsigned long hstart, hend;
 
 		cond_resched();
-- 
2.43.5



* [RFC PATCH 07/11] mm: khugepaged: skip large folios that don't need to be collapsed
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (5 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 06/11] mm: khugepaged: allow khugepaged to check all shmem/file large orders Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 08/11] selftests: mm: extend the check_huge() to support mTHP check Baolin Wang
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

If a VMA is already mapped with large folios after a successful mTHP
collapse, we can skip folios whose order already meets or exceeds the
'highest_enabled_order' when scanning the VMA range again, as they can no
longer be collapsed further. This avoids wasting CPU cycles.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 mm/khugepaged.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index d5ae2e6c4107..c25b68b13402 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2537,6 +2537,7 @@ static int collapse_scan_file(struct mm_struct *mm, struct vm_area_struct *vma,
 	struct folio *folio = NULL;
 	struct address_space *mapping = file->f_mapping;
 	XA_STATE(xas, &mapping->i_pages, start);
+	unsigned int highest_enabled_order;
 	int present, swap, nr_pages;
 	unsigned long enabled_orders;
 	int node = NUMA_NO_NODE;
@@ -2556,6 +2557,7 @@ static int collapse_scan_file(struct mm_struct *mm, struct vm_area_struct *vma,
 	else
 		enabled_orders = BIT(HPAGE_PMD_ORDER);
 	is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
+	highest_enabled_order = highest_order(enabled_orders);
 
 	rcu_read_lock();
 	xas_for_each(&xas, folio, start + HPAGE_PMD_NR - 1) {
@@ -2631,8 +2633,11 @@ static int collapse_scan_file(struct mm_struct *mm, struct vm_area_struct *vma,
 		/*
 		 * If there are folios present, keep track of it in the bitmap
 		 * for file/shmem mTHP collapse.
+		 * Skip those folios whose order has already exceeded the
+		 * 'highest_enabled_order', meaning they cannot be collapsed
+		 * into larger order folios.
 		 */
-		if (!is_pmd_only) {
+		if (!is_pmd_only && folio_order(folio) < highest_enabled_order) {
 			pgoff_t pgoff = max_t(pgoff_t, start, folio->index) - start;
 
 			nr_pages = min_t(int, HPAGE_PMD_NR - pgoff, nr_pages);
-- 
2.43.5



* [RFC PATCH 08/11] selftests: mm: extend the check_huge() to support mTHP check
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (6 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 07/11] mm: khugepaged: skip large folios that don't need to be collapsed Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 09/11] selftests: mm: move gather_after_split_folio_orders() into vm_util.c file Baolin Wang
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

To support checking for various mTHP sizes during mTHP collapse, extend
the check_huge() function prototype in preparation for the following
patches.

No functional changes.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 tools/testing/selftests/mm/khugepaged.c       | 66 ++++++++++---------
 .../selftests/mm/split_huge_page_test.c       | 10 +--
 tools/testing/selftests/mm/uffd-common.c      |  4 +-
 tools/testing/selftests/mm/vm_util.c          |  4 +-
 tools/testing/selftests/mm/vm_util.h          |  4 +-
 5 files changed, 48 insertions(+), 40 deletions(-)

diff --git a/tools/testing/selftests/mm/khugepaged.c b/tools/testing/selftests/mm/khugepaged.c
index a18c50d51141..e529074a1fdf 100644
--- a/tools/testing/selftests/mm/khugepaged.c
+++ b/tools/testing/selftests/mm/khugepaged.c
@@ -45,7 +45,7 @@ struct mem_ops {
 	void *(*setup_area)(int nr_hpages);
 	void (*cleanup_area)(void *p, unsigned long size);
 	void (*fault)(void *p, unsigned long start, unsigned long end);
-	bool (*check_huge)(void *addr, int nr_hpages);
+	bool (*check_huge)(void *addr, unsigned long size, int nr_hpages, unsigned long hpage_size);
 	const char *name;
 };
 
@@ -319,7 +319,7 @@ static void *alloc_hpage(struct mem_ops *ops)
 		perror("madvise(MADV_COLLAPSE)");
 		exit(EXIT_FAILURE);
 	}
-	if (!ops->check_huge(p, 1)) {
+	if (!ops->check_huge(p, hpage_pmd_size, 1, hpage_pmd_size)) {
 		perror("madvise(MADV_COLLAPSE)");
 		exit(EXIT_FAILURE);
 	}
@@ -359,9 +359,10 @@ static void anon_fault(void *p, unsigned long start, unsigned long end)
 	fill_memory(p, start, end);
 }
 
-static bool anon_check_huge(void *addr, int nr_hpages)
+static bool anon_check_huge(void *addr, unsigned long size,
+			int nr_hpages, unsigned long hpage_size)
 {
-	return check_huge_anon(addr, nr_hpages, hpage_pmd_size);
+	return check_huge_anon(addr, size, nr_hpages, hpage_size);
 }
 
 static void *file_setup_area(int nr_hpages)
@@ -422,13 +423,14 @@ static void file_fault(void *p, unsigned long start, unsigned long end)
 	}
 }
 
-static bool file_check_huge(void *addr, int nr_hpages)
+static bool file_check_huge(void *addr, unsigned long size,
+			int nr_hpages, unsigned long hpage_size)
 {
 	switch (finfo.type) {
 	case VMA_FILE:
-		return check_huge_file(addr, nr_hpages, hpage_pmd_size);
+		return check_huge_file(addr, nr_hpages, hpage_size);
 	case VMA_SHMEM:
-		return check_huge_shmem(addr, nr_hpages, hpage_pmd_size);
+		return check_huge_shmem(addr, size, nr_hpages, hpage_size);
 	default:
 		exit(EXIT_FAILURE);
 		return false;
@@ -464,9 +466,10 @@ static void shmem_cleanup_area(void *p, unsigned long size)
 	close(finfo.fd);
 }
 
-static bool shmem_check_huge(void *addr, int nr_hpages)
+static bool shmem_check_huge(void *addr, unsigned long size,
+			int nr_hpages, unsigned long hpage_size)
 {
-	return check_huge_shmem(addr, nr_hpages, hpage_pmd_size);
+	return check_huge_shmem(addr, size, nr_hpages, hpage_size);
 }
 
 static struct mem_ops __anon_ops = {
@@ -514,7 +517,7 @@ static void __madvise_collapse(const char *msg, char *p, int nr_hpages,
 	ret = madvise_collapse_retry(p, nr_hpages * hpage_pmd_size);
 	if (((bool)ret) == expect)
 		fail("Fail: Bad return value");
-	else if (!ops->check_huge(p, expect ? nr_hpages : 0))
+	else if (!ops->check_huge(p, nr_hpages * hpage_pmd_size, expect ? nr_hpages : 0, hpage_pmd_size))
 		fail("Fail: check_huge()");
 	else
 		success("OK");
@@ -526,7 +529,7 @@ static void madvise_collapse(const char *msg, char *p, int nr_hpages,
 			     struct mem_ops *ops, bool expect)
 {
 	/* Sanity check */
-	if (!ops->check_huge(p, 0)) {
+	if (!ops->check_huge(p, nr_hpages * hpage_pmd_size, 0, hpage_pmd_size)) {
 		printf("Unexpected huge page\n");
 		exit(EXIT_FAILURE);
 	}
@@ -537,11 +540,12 @@ static void madvise_collapse(const char *msg, char *p, int nr_hpages,
 static bool wait_for_scan(const char *msg, char *p, int nr_hpages,
 			  struct mem_ops *ops)
 {
+	unsigned long size = nr_hpages * hpage_pmd_size;
 	int full_scans;
 	int timeout = 6; /* 3 seconds */
 
 	/* Sanity check */
-	if (!ops->check_huge(p, 0)) {
+	if (!ops->check_huge(p, size, 0, hpage_pmd_size)) {
 		printf("Unexpected huge page\n");
 		exit(EXIT_FAILURE);
 	}
@@ -553,7 +557,7 @@ static bool wait_for_scan(const char *msg, char *p, int nr_hpages,
 
 	printf("%s...", msg);
 	while (timeout--) {
-		if (ops->check_huge(p, nr_hpages))
+		if (ops->check_huge(p, size, nr_hpages, hpage_pmd_size))
 			break;
 		if (thp_read_num("khugepaged/full_scans") >= full_scans)
 			break;
@@ -567,6 +571,8 @@ static bool wait_for_scan(const char *msg, char *p, int nr_hpages,
 static void khugepaged_collapse(const char *msg, char *p, int nr_hpages,
 				struct mem_ops *ops, bool expect)
 {
+	unsigned long size = nr_hpages * hpage_pmd_size;
+
 	if (wait_for_scan(msg, p, nr_hpages, ops)) {
 		if (expect)
 			fail("Timeout");
@@ -583,7 +589,7 @@ static void khugepaged_collapse(const char *msg, char *p, int nr_hpages,
 	if (ops != &__anon_ops)
 		ops->fault(p, 0, nr_hpages * hpage_pmd_size);
 
-	if (ops->check_huge(p, expect ? nr_hpages : 0))
+	if (ops->check_huge(p, size, expect ? nr_hpages : 0, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -622,7 +628,7 @@ static void alloc_at_fault(void)
 	p = alloc_mapping(1);
 	*p = 1;
 	printf("Allocate huge page on fault...");
-	if (check_huge_anon(p, 1, hpage_pmd_size))
+	if (check_huge_anon(p, hpage_pmd_size, 1, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -631,7 +637,7 @@ static void alloc_at_fault(void)
 
 	madvise(p, page_size, MADV_DONTNEED);
 	printf("Split huge PMD on MADV_DONTNEED...");
-	if (check_huge_anon(p, 0, hpage_pmd_size))
+	if (check_huge_anon(p, hpage_pmd_size, 0, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -797,7 +803,7 @@ static void collapse_single_pte_entry_compound(struct collapse_context *c, struc
 	madvise(p, hpage_pmd_size, MADV_NOHUGEPAGE);
 	printf("Split huge page leaving single PTE mapping compound page...");
 	madvise(p + page_size, hpage_pmd_size - page_size, MADV_DONTNEED);
-	if (ops->check_huge(p, 0))
+	if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -817,7 +823,7 @@ static void collapse_full_of_compound(struct collapse_context *c, struct mem_ops
 	printf("Split huge page leaving single PTE page table full of compound pages...");
 	madvise(p, page_size, MADV_NOHUGEPAGE);
 	madvise(p, hpage_pmd_size, MADV_NOHUGEPAGE);
-	if (ops->check_huge(p, 0))
+	if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -840,7 +846,7 @@ static void collapse_compound_extreme(struct collapse_context *c, struct mem_ops
 
 		madvise(BASE_ADDR, hpage_pmd_size, MADV_HUGEPAGE);
 		ops->fault(BASE_ADDR, 0, hpage_pmd_size);
-		if (!ops->check_huge(BASE_ADDR, 1)) {
+		if (!ops->check_huge(BASE_ADDR, hpage_pmd_size, 1, hpage_pmd_size)) {
 			printf("Failed to allocate huge page\n");
 			exit(EXIT_FAILURE);
 		}
@@ -869,7 +875,7 @@ static void collapse_compound_extreme(struct collapse_context *c, struct mem_ops
 
 	ops->cleanup_area(BASE_ADDR, hpage_pmd_size);
 	ops->fault(p, 0, hpage_pmd_size);
-	if (!ops->check_huge(p, 1))
+	if (!ops->check_huge(p, hpage_pmd_size, 1, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -890,7 +896,7 @@ static void collapse_fork(struct collapse_context *c, struct mem_ops *ops)
 
 	printf("Allocate small page...");
 	ops->fault(p, 0, page_size);
-	if (ops->check_huge(p, 0))
+	if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -901,7 +907,7 @@ static void collapse_fork(struct collapse_context *c, struct mem_ops *ops)
 		skip_settings_restore = true;
 		exit_status = 0;
 
-		if (ops->check_huge(p, 0))
+		if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 			success("OK");
 		else
 			fail("Fail");
@@ -919,7 +925,7 @@ static void collapse_fork(struct collapse_context *c, struct mem_ops *ops)
 	exit_status += WEXITSTATUS(wstatus);
 
 	printf("Check if parent still has small page...");
-	if (ops->check_huge(p, 0))
+	if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -939,7 +945,7 @@ static void collapse_fork_compound(struct collapse_context *c, struct mem_ops *o
 		skip_settings_restore = true;
 		exit_status = 0;
 
-		if (ops->check_huge(p, 1))
+		if (ops->check_huge(p, hpage_pmd_size, 1, hpage_pmd_size))
 			success("OK");
 		else
 			fail("Fail");
@@ -947,7 +953,7 @@ static void collapse_fork_compound(struct collapse_context *c, struct mem_ops *o
 		printf("Split huge page PMD in child process...");
 		madvise(p, page_size, MADV_NOHUGEPAGE);
 		madvise(p, hpage_pmd_size, MADV_NOHUGEPAGE);
-		if (ops->check_huge(p, 0))
+		if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 			success("OK");
 		else
 			fail("Fail");
@@ -968,7 +974,7 @@ static void collapse_fork_compound(struct collapse_context *c, struct mem_ops *o
 	exit_status += WEXITSTATUS(wstatus);
 
 	printf("Check if parent still has huge page...");
-	if (ops->check_huge(p, 1))
+	if (ops->check_huge(p, hpage_pmd_size, 1, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
@@ -989,7 +995,7 @@ static void collapse_max_ptes_shared(struct collapse_context *c, struct mem_ops
 		skip_settings_restore = true;
 		exit_status = 0;
 
-		if (ops->check_huge(p, 1))
+		if (ops->check_huge(p, hpage_pmd_size, 1, hpage_pmd_size))
 			success("OK");
 		else
 			fail("Fail");
@@ -997,7 +1003,7 @@ static void collapse_max_ptes_shared(struct collapse_context *c, struct mem_ops
 		printf("Trigger CoW on page %d of %d...",
 				hpage_pmd_nr - max_ptes_shared - 1, hpage_pmd_nr);
 		ops->fault(p, 0, (hpage_pmd_nr - max_ptes_shared - 1) * page_size);
-		if (ops->check_huge(p, 0))
+		if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 			success("OK");
 		else
 			fail("Fail");
@@ -1010,7 +1016,7 @@ static void collapse_max_ptes_shared(struct collapse_context *c, struct mem_ops
 			       hpage_pmd_nr - max_ptes_shared, hpage_pmd_nr);
 			ops->fault(p, 0, (hpage_pmd_nr - max_ptes_shared) *
 				    page_size);
-			if (ops->check_huge(p, 0))
+			if (ops->check_huge(p, hpage_pmd_size, 0, hpage_pmd_size))
 				success("OK");
 			else
 				fail("Fail");
@@ -1028,7 +1034,7 @@ static void collapse_max_ptes_shared(struct collapse_context *c, struct mem_ops
 	exit_status += WEXITSTATUS(wstatus);
 
 	printf("Check if parent still has huge page...");
-	if (ops->check_huge(p, 1))
+	if (ops->check_huge(p, hpage_pmd_size, 1, hpage_pmd_size))
 		success("OK");
 	else
 		fail("Fail");
diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index 80eb1f91261e..cbf190598988 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -311,7 +311,7 @@ static void verify_rss_anon_split_huge_page_all_zeroes(char *one_page, int nr_hp
 	unsigned long rss_anon_before, rss_anon_after;
 	size_t i;
 
-	if (!check_huge_anon(one_page, nr_hpages, pmd_pagesize))
+	if (!check_huge_anon(one_page, nr_hpages * pmd_pagesize, nr_hpages, pmd_pagesize))
 		ksft_exit_fail_msg("No THP is allocated\n");
 
 	rss_anon_before = rss_anon();
@@ -326,7 +326,7 @@ static void verify_rss_anon_split_huge_page_all_zeroes(char *one_page, int nr_hp
 		if (one_page[i] != (char)0)
 			ksft_exit_fail_msg("%ld byte corrupted\n", i);
 
-	if (!check_huge_anon(one_page, 0, pmd_pagesize))
+	if (!check_huge_anon(one_page, nr_hpages * pmd_pagesize, 0, pmd_pagesize))
 		ksft_exit_fail_msg("Still AnonHugePages not split\n");
 
 	rss_anon_after = rss_anon();
@@ -362,7 +362,7 @@ static void split_pmd_thp_to_order(int order)
 	for (i = 0; i < len; i++)
 		one_page[i] = (char)i;
 
-	if (!check_huge_anon(one_page, 4, pmd_pagesize))
+	if (!check_huge_anon(one_page, 4 * pmd_pagesize, 4, pmd_pagesize))
 		ksft_exit_fail_msg("No THP is allocated\n");
 
 	/* split all THPs */
@@ -381,7 +381,7 @@ static void split_pmd_thp_to_order(int order)
 					   (pmd_order + 1)))
 		ksft_exit_fail_msg("Unexpected THP split\n");
 
-	if (!check_huge_anon(one_page, 0, pmd_pagesize))
+	if (!check_huge_anon(one_page, 4 * pmd_pagesize, 0, pmd_pagesize))
 		ksft_exit_fail_msg("Still AnonHugePages not split\n");
 
 	ksft_test_result_pass("Split huge pages to order %d successful\n", order);
@@ -405,7 +405,7 @@ static void split_pte_mapped_thp(void)
 	for (i = 0; i < len; i++)
 		one_page[i] = (char)i;
 
-	if (!check_huge_anon(one_page, 4, pmd_pagesize))
+	if (!check_huge_anon(one_page, 4 * pmd_pagesize, 4, pmd_pagesize))
 		ksft_exit_fail_msg("No THP is allocated\n");
 
 	/* remap the first pagesize of first THP */
diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
index f4e9a5f43e24..b6cfcc6950e1 100644
--- a/tools/testing/selftests/mm/uffd-common.c
+++ b/tools/testing/selftests/mm/uffd-common.c
@@ -191,7 +191,9 @@ static void shmem_alias_mapping(uffd_global_test_opts_t *gopts, __u64 *start,
 static void shmem_check_pmd_mapping(uffd_global_test_opts_t *gopts, void __unused *p,
 				    int expect_nr_hpages)
 {
-	if (!check_huge_shmem(gopts->area_dst_alias, expect_nr_hpages,
+	unsigned long size = expect_nr_hpages * read_pmd_pagesize();
+
+	if (!check_huge_shmem(gopts->area_dst_alias, size, expect_nr_hpages,
 			      read_pmd_pagesize()))
 		err("Did not find expected %d number of hugepages",
 		    expect_nr_hpages);
diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c
index 56e9bd541edd..6058d80c63ef 100644
--- a/tools/testing/selftests/mm/vm_util.c
+++ b/tools/testing/selftests/mm/vm_util.c
@@ -248,7 +248,7 @@ bool __check_huge(void *addr, char *pattern, int nr_hpages,
 	return thp == (nr_hpages * (hpage_size >> 10));
 }
 
-bool check_huge_anon(void *addr, int nr_hpages, uint64_t hpage_size)
+bool check_huge_anon(void *addr, unsigned long size, int nr_hpages, uint64_t hpage_size)
 {
 	return __check_huge(addr, "AnonHugePages: ", nr_hpages, hpage_size);
 }
@@ -258,7 +258,7 @@ bool check_huge_file(void *addr, int nr_hpages, uint64_t hpage_size)
 	return __check_huge(addr, "FilePmdMapped:", nr_hpages, hpage_size);
 }
 
-bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size)
+bool check_huge_shmem(void *addr, unsigned long size, int nr_hpages, uint64_t hpage_size)
 {
 	return __check_huge(addr, "ShmemPmdMapped:", nr_hpages, hpage_size);
 }
diff --git a/tools/testing/selftests/mm/vm_util.h b/tools/testing/selftests/mm/vm_util.h
index 07c4acfd84b6..a1cd446e5140 100644
--- a/tools/testing/selftests/mm/vm_util.h
+++ b/tools/testing/selftests/mm/vm_util.h
@@ -82,9 +82,9 @@ void clear_softdirty(void);
 bool check_for_pattern(FILE *fp, const char *pattern, char *buf, size_t len);
 uint64_t read_pmd_pagesize(void);
 unsigned long rss_anon(void);
-bool check_huge_anon(void *addr, int nr_hpages, uint64_t hpage_size);
+bool check_huge_anon(void *addr, unsigned long size, int nr_hpages, uint64_t hpage_size);
 bool check_huge_file(void *addr, int nr_hpages, uint64_t hpage_size);
-bool check_huge_shmem(void *addr, int nr_hpages, uint64_t hpage_size);
+bool check_huge_shmem(void *addr, unsigned long size, int nr_hpages, uint64_t hpage_size);
 int64_t allocate_transhuge(void *ptr, int pagemap_fd);
 unsigned long default_huge_page_size(void);
 int detect_hugetlb_page_sizes(size_t sizes[], int max);
-- 
2.43.5



* [RFC PATCH 09/11] selftests: mm: move gather_after_split_folio_orders() into vm_util.c file
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (7 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 08/11] selftests: mm: extend the check_huge() to support mTHP check Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 10/11] selftests: mm: implement the mTHP hugepage check helper Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 11/11] selftests: mm: add mTHP collapse test cases Baolin Wang
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

Move gather_after_split_folio_orders() into vm_util.c as a helper function
in preparation for implementing the mTHP collapse checks. While we are at
it, rename it to gather_folio_orders() to indicate that it is not only
used for large folio splits.

No functional changes.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 .../selftests/mm/split_huge_page_test.c       | 125 +-----------------
 tools/testing/selftests/mm/vm_util.c          | 123 +++++++++++++++++
 tools/testing/selftests/mm/vm_util.h          |   2 +
 3 files changed, 126 insertions(+), 124 deletions(-)

diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
index cbf190598988..77cf510f18e0 100644
--- a/tools/testing/selftests/mm/split_huge_page_test.c
+++ b/tools/testing/selftests/mm/split_huge_page_test.c
@@ -104,129 +104,6 @@ static bool is_backed_by_folio(char *vaddr, int order, int pagemap_fd,
 	return false;
 }
 
-static int vaddr_pageflags_get(char *vaddr, int pagemap_fd, int kpageflags_fd,
-		uint64_t *flags)
-{
-	unsigned long pfn;
-
-	pfn = pagemap_get_pfn(pagemap_fd, vaddr);
-
-	/* non-present PFN */
-	if (pfn == -1UL)
-		return 1;
-
-	if (pageflags_get(pfn, kpageflags_fd, flags))
-		return -1;
-
-	return 0;
-}
-
-/*
- * gather_after_split_folio_orders - scan through [vaddr_start, len) and record
- * folio orders
- *
- * @vaddr_start: start vaddr
- * @len: range length
- * @pagemap_fd: file descriptor to /proc/<pid>/pagemap
- * @kpageflags_fd: file descriptor to /proc/kpageflags
- * @orders: output folio order array
- * @nr_orders: folio order array size
- *
- * gather_after_split_folio_orders() scan through [vaddr_start, len) and check
- * all folios within the range and record their orders. All order-0 pages will
- * be recorded. Non-present vaddr is skipped.
- *
- * NOTE: the function is used to check folio orders after a split is performed,
- * so it assumes [vaddr_start, len) fully maps to after-split folios within that
- * range.
- *
- * Return: 0 - no error, -1 - unhandled cases
- */
-static int gather_after_split_folio_orders(char *vaddr_start, size_t len,
-		int pagemap_fd, int kpageflags_fd, int orders[], int nr_orders)
-{
-	uint64_t page_flags = 0;
-	int cur_order = -1;
-	char *vaddr;
-
-	if (pagemap_fd == -1 || kpageflags_fd == -1)
-		return -1;
-	if (!orders)
-		return -1;
-	if (nr_orders <= 0)
-		return -1;
-
-	for (vaddr = vaddr_start; vaddr < vaddr_start + len;) {
-		char *next_folio_vaddr;
-		int status;
-
-		status = vaddr_pageflags_get(vaddr, pagemap_fd, kpageflags_fd,
-					&page_flags);
-		if (status < 0)
-			return -1;
-
-		/* skip non present vaddr */
-		if (status == 1) {
-			vaddr += psize();
-			continue;
-		}
-
-		/* all order-0 pages with possible false postive (non folio) */
-		if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
-			orders[0]++;
-			vaddr += psize();
-			continue;
-		}
-
-		/* skip non thp compound pages */
-		if (!(page_flags & KPF_THP)) {
-			vaddr += psize();
-			continue;
-		}
-
-		/* vpn points to part of a THP at this point */
-		if (page_flags & KPF_COMPOUND_HEAD)
-			cur_order = 1;
-		else {
-			vaddr += psize();
-			continue;
-		}
-
-		next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
-
-		if (next_folio_vaddr >= vaddr_start + len)
-			break;
-
-		while ((status = vaddr_pageflags_get(next_folio_vaddr,
-						     pagemap_fd, kpageflags_fd,
-						     &page_flags)) >= 0) {
-			/*
-			 * non present vaddr, next compound head page, or
-			 * order-0 page
-			 */
-			if (status == 1 ||
-			    (page_flags & KPF_COMPOUND_HEAD) ||
-			    !(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
-				if (cur_order < nr_orders) {
-					orders[cur_order]++;
-					cur_order = -1;
-					vaddr = next_folio_vaddr;
-				}
-				break;
-			}
-
-			cur_order++;
-			next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
-		}
-
-		if (status < 0)
-			return status;
-	}
-	if (cur_order > 0 && cur_order < nr_orders)
-		orders[cur_order]++;
-	return 0;
-}
-
 static int check_after_split_folio_orders(char *vaddr_start, size_t len,
 		int pagemap_fd, int kpageflags_fd, int orders[], int nr_orders)
 {
@@ -240,7 +117,7 @@ static int check_after_split_folio_orders(char *vaddr_start, size_t len,
 		ksft_exit_fail_msg("Cannot allocate memory for vaddr_orders");
 
 	memset(vaddr_orders, 0, sizeof(int) * nr_orders);
-	status = gather_after_split_folio_orders(vaddr_start, len, pagemap_fd,
+	status = gather_folio_orders(vaddr_start, len, pagemap_fd,
 				     kpageflags_fd, vaddr_orders, nr_orders);
 	if (status)
 		ksft_exit_fail_msg("gather folio info failed\n");
diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c
index 6058d80c63ef..853c8a4caa1d 100644
--- a/tools/testing/selftests/mm/vm_util.c
+++ b/tools/testing/selftests/mm/vm_util.c
@@ -195,6 +195,129 @@ unsigned long rss_anon(void)
 	return rss_anon;
 }
 
+static int vaddr_pageflags_get(char *vaddr, int pagemap_fd, int kpageflags_fd,
+		uint64_t *flags)
+{
+	unsigned long pfn;
+
+	pfn = pagemap_get_pfn(pagemap_fd, vaddr);
+
+	/* non-present PFN */
+	if (pfn == -1UL)
+		return 1;
+
+	if (pageflags_get(pfn, kpageflags_fd, flags))
+		return -1;
+
+	return 0;
+}
+
+/*
+ * gather_folio_orders - scan through [vaddr_start, len) and record
+ * folio orders
+ *
+ * @vaddr_start: start vaddr
+ * @len: range length
+ * @pagemap_fd: file descriptor to /proc/<pid>/pagemap
+ * @kpageflags_fd: file descriptor to /proc/kpageflags
+ * @orders: output folio order array
+ * @nr_orders: folio order array size
+ *
+ * gather_after_split_folio_orders() scan through [vaddr_start, len) and check
+ * all folios within the range and record their orders. All order-0 pages will
+ * be recorded. Non-present vaddr is skipped.
+ *
+ * NOTE: the function is used to check folio orders after a split is performed,
+ * so it assumes [vaddr_start, len) fully maps to after-split folios within that
+ * range.
+ *
+ * Return: 0 - no error, -1 - unhandled cases
+ */
+int gather_folio_orders(char *vaddr_start, size_t len,
+		int pagemap_fd, int kpageflags_fd, int orders[], int nr_orders)
+{
+	uint64_t page_flags = 0;
+	int cur_order = -1;
+	char *vaddr;
+
+	if (pagemap_fd == -1 || kpageflags_fd == -1)
+		return -1;
+	if (!orders)
+		return -1;
+	if (nr_orders <= 0)
+		return -1;
+
+	for (vaddr = vaddr_start; vaddr < vaddr_start + len;) {
+		char *next_folio_vaddr;
+		int status;
+
+		status = vaddr_pageflags_get(vaddr, pagemap_fd, kpageflags_fd,
+				&page_flags);
+		if (status < 0)
+			return -1;
+
+		/* skip non present vaddr */
+		if (status == 1) {
+			vaddr += psize();
+			continue;
+		}
+
+		/* all order-0 pages with possible false postive (non folio) */
+		if (!(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
+			orders[0]++;
+			vaddr += psize();
+			continue;
+		}
+
+		/* skip non thp compound pages */
+		if (!(page_flags & KPF_THP)) {
+			vaddr += psize();
+			continue;
+		}
+
+		/* vpn points to part of a THP at this point */
+		if (page_flags & KPF_COMPOUND_HEAD)
+			cur_order = 1;
+		else {
+			vaddr += psize();
+			continue;
+		}
+
+		next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
+
+		if (next_folio_vaddr >= vaddr_start + len)
+			break;
+
+		while ((status = vaddr_pageflags_get(next_folio_vaddr,
+						     pagemap_fd, kpageflags_fd,
+						     &page_flags)) >= 0) {
+			/*
+			 * non present vaddr, next compound head page, or
+			 * order-0 page
+			 */
+			if (status == 1 ||
+			    (page_flags & KPF_COMPOUND_HEAD) ||
+			    !(page_flags & (KPF_COMPOUND_HEAD | KPF_COMPOUND_TAIL))) {
+				if (cur_order < nr_orders) {
+					orders[cur_order]++;
+					cur_order = -1;
+					vaddr = next_folio_vaddr;
+				}
+				break;
+			}
+
+			cur_order++;
+			next_folio_vaddr = vaddr + (1UL << (cur_order + pshift()));
+		}
+
+		if (status < 0)
+			return status;
+	}
+	if (cur_order > 0 && cur_order < nr_orders)
+		orders[cur_order]++;
+	return 0;
+}
+
 char *__get_smap_entry(void *addr, const char *pattern, char *buf, size_t len)
 {
 	int ret;
diff --git a/tools/testing/selftests/mm/vm_util.h b/tools/testing/selftests/mm/vm_util.h
index a1cd446e5140..197a9b69cbba 100644
--- a/tools/testing/selftests/mm/vm_util.h
+++ b/tools/testing/selftests/mm/vm_util.h
@@ -89,6 +89,8 @@ int64_t allocate_transhuge(void *ptr, int pagemap_fd);
 unsigned long default_huge_page_size(void);
 int detect_hugetlb_page_sizes(size_t sizes[], int max);
 int pageflags_get(unsigned long pfn, int kpageflags_fd, uint64_t *flags);
+int gather_folio_orders(char *vaddr_start, size_t len,
+		int pagemap_fd, int kpageflags_fd, int orders[], int nr_orders);
 
 int uffd_register(int uffd, void *addr, uint64_t len,
 		  bool miss, bool wp, bool minor);
-- 
2.43.5



* [RFC PATCH 10/11] selftests: mm: implement the mTHP hugepage check helper
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (8 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 09/11] selftests: mm: move gather_after_split_folio_orders() into vm_util.c file Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  2025-08-20  9:07 ` [RFC PATCH 11/11] selftests: mm: add mTHP collapse test cases Baolin Wang
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

Implement the mTHP hugepage check helper. The smaps counters only account
PMD-mapped hugepages, so for non-PMD-sized hugepages gather the folio orders
of the range from /proc/self/pagemap and /proc/kpageflags, and check that the
expected number of folios of the requested order is present.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 tools/testing/selftests/mm/vm_util.c | 52 +++++++++++++++++++++++++---
 1 file changed, 48 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/mm/vm_util.c b/tools/testing/selftests/mm/vm_util.c
index 853c8a4caa1d..d0f8aa66b988 100644
--- a/tools/testing/selftests/mm/vm_util.c
+++ b/tools/testing/selftests/mm/vm_util.c
@@ -16,6 +16,10 @@
 #define SMAP_FILE_PATH "/proc/self/smaps"
 #define STATUS_FILE_PATH "/proc/self/status"
 #define MAX_LINE_LENGTH 500
+#define PAGEMAP_PATH "/proc/self/pagemap"
+#define KPAGEFLAGS_PATH "/proc/kpageflags"
+#define GET_ORDER(nr_pages)    (31 - __builtin_clz(nr_pages))
+#define NR_ORDERS 20
 
 unsigned int __page_size;
 unsigned int __page_shift;
@@ -353,7 +357,7 @@ char *__get_smap_entry(void *addr, const char *pattern, char *buf, size_t len)
 	return entry;
 }
 
-bool __check_huge(void *addr, char *pattern, int nr_hpages,
+static bool __check_pmd_huge(void *addr, char *pattern, int nr_hpages,
 		  uint64_t hpage_size)
 {
 	char buffer[MAX_LINE_LENGTH];
@@ -371,19 +375,59 @@ bool __check_huge(void *addr, char *pattern, int nr_hpages,
 	return thp == (nr_hpages * (hpage_size >> 10));
 }
 
+static bool check_large_folios(void *addr, unsigned long size, int nr_hpages, uint64_t hpage_size)
+{
+	int pagesize = getpagesize();
+	int order = GET_ORDER(hpage_size / pagesize);
+	int pagemap_fd, kpageflags_fd;
+	int orders[NR_ORDERS], status;
+	bool ret = false;
+
+	memset(orders, 0, sizeof(int) * NR_ORDERS);
+
+	pagemap_fd = open(PAGEMAP_PATH, O_RDONLY);
+	if (pagemap_fd == -1)
+		ksft_exit_fail_msg("open pagemap fail\n");
+
+	kpageflags_fd = open(KPAGEFLAGS_PATH, O_RDONLY);
+	if (kpageflags_fd == -1) {
+		close(pagemap_fd);
+		ksft_exit_fail_msg("open kpageflags fail\n");
+	}
+
+	status = gather_folio_orders(addr, size, pagemap_fd,
+				kpageflags_fd, orders, NR_ORDERS);
+	if (status)
+		goto out;
+
+	if (orders[order] == nr_hpages)
+		ret = true;
+
+out:
+	close(pagemap_fd);
+	close(kpageflags_fd);
+	return ret;
+}
+
 bool check_huge_anon(void *addr, unsigned long size, int nr_hpages, uint64_t hpage_size)
 {
-	return __check_huge(addr, "AnonHugePages: ", nr_hpages, hpage_size);
+	if (hpage_size == read_pmd_pagesize())
+		return __check_pmd_huge(addr, "AnonHugePages: ", nr_hpages, hpage_size);
+
+	return check_large_folios(addr, size, nr_hpages, hpage_size);
 }
 
 bool check_huge_file(void *addr, int nr_hpages, uint64_t hpage_size)
 {
-	return __check_huge(addr, "FilePmdMapped:", nr_hpages, hpage_size);
+	return __check_pmd_huge(addr, "FilePmdMapped:", nr_hpages, hpage_size);
 }
 
 bool check_huge_shmem(void *addr, unsigned long size, int nr_hpages, uint64_t hpage_size)
 {
-	return __check_huge(addr, "ShmemPmdMapped:", nr_hpages, hpage_size);
+	if (hpage_size == read_pmd_pagesize())
+		return __check_pmd_huge(addr, "ShmemPmdMapped:", nr_hpages, hpage_size);
+
+	return check_large_folios(addr, size, nr_hpages, hpage_size);
 }
 
 int64_t allocate_transhuge(void *ptr, int pagemap_fd)
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 14+ messages in thread
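
A hedged worked example of the order arithmetic in check_large_folios() above
(4 KiB base pages assumed; only the GET_ORDER() macro comes from the patch):

#include <stdio.h>

#define GET_ORDER(nr_pages)	(31 - __builtin_clz(nr_pages))

int main(void)
{
	unsigned int pagesize = 4096;		/* getpagesize() on most arches */
	unsigned int hpage_size = 64 * 1024;	/* a 64 KiB mTHP */
	int order = GET_ORDER(hpage_size / pagesize);	/* 16 pages -> order 4 */

	/* check_large_folios() then passes only if orders[order] == nr_hpages */
	printf("order = %d\n", order);
	return 0;
}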

* [RFC PATCH 11/11] selftests: mm: add mTHP collapse test cases
  2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
                   ` (9 preceding siblings ...)
  2025-08-20  9:07 ` [RFC PATCH 10/11] selftests: mm: implement the mTHP hugepage check helper Baolin Wang
@ 2025-08-20  9:07 ` Baolin Wang
  10 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-20  9:07 UTC (permalink / raw)
  To: akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, dev.jain, baohua,
	baolin.wang, linux-mm, linux-kernel

Add mTHP collapse test cases. Introduce a new 'mthp_khugepaged' collapse
context and a '-c' option to specify the collapse order, and wire the new
anon and shmem cases into run_vmtests.sh.

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 tools/testing/selftests/mm/khugepaged.c   | 102 +++++++++++++++++++---
 tools/testing/selftests/mm/run_vmtests.sh |   4 +
 2 files changed, 92 insertions(+), 14 deletions(-)

diff --git a/tools/testing/selftests/mm/khugepaged.c b/tools/testing/selftests/mm/khugepaged.c
index e529074a1fdf..f7081e9e20ec 100644
--- a/tools/testing/selftests/mm/khugepaged.c
+++ b/tools/testing/selftests/mm/khugepaged.c
@@ -26,9 +26,11 @@
 
 #define BASE_ADDR ((void *)(1UL << 30))
 static unsigned long hpage_pmd_size;
+static int hpage_pmd_order;
 static unsigned long page_size;
 static int hpage_pmd_nr;
 static int anon_order;
+static int collapse_order;
 
 #define PID_SMAPS "/proc/self/smaps"
 #define TEST_FILE "collapse_test_file"
@@ -61,6 +63,7 @@ struct collapse_context {
 };
 
 static struct collapse_context *khugepaged_context;
+static struct collapse_context *mthp_khugepaged_context;
 static struct collapse_context *madvise_context;
 
 struct file_info {
@@ -538,26 +541,27 @@ static void madvise_collapse(const char *msg, char *p, int nr_hpages,
 
 #define TICK 500000
 static bool wait_for_scan(const char *msg, char *p, int nr_hpages,
-			  struct mem_ops *ops)
+			  int collap_order, struct mem_ops *ops)
 {
-	unsigned long size = nr_hpages * hpage_pmd_size;
+	unsigned long hpage_size = page_size << collap_order;
+	unsigned long size = nr_hpages * hpage_size;
 	int full_scans;
 	int timeout = 6; /* 3 seconds */
 
 	/* Sanity check */
-	if (!ops->check_huge(p, size, 0, hpage_pmd_size)) {
+	if (!ops->check_huge(p, size, 0, hpage_size)) {
 		printf("Unexpected huge page\n");
 		exit(EXIT_FAILURE);
 	}
 
-	madvise(p, nr_hpages * hpage_pmd_size, MADV_HUGEPAGE);
+	madvise(p, size, MADV_HUGEPAGE);
 
 	/* Wait until the second full_scan completed */
 	full_scans = thp_read_num("khugepaged/full_scans") + 2;
 
 	printf("%s...", msg);
 	while (timeout--) {
-		if (ops->check_huge(p, size, nr_hpages, hpage_pmd_size))
+		if (ops->check_huge(p, size, nr_hpages, hpage_size))
 			break;
 		if (thp_read_num("khugepaged/full_scans") >= full_scans)
 			break;
@@ -573,7 +577,7 @@ static void khugepaged_collapse(const char *msg, char *p, int nr_hpages,
 {
 	unsigned long size = nr_hpages * hpage_pmd_size;
 
-	if (wait_for_scan(msg, p, nr_hpages, ops)) {
+	if (wait_for_scan(msg, p, nr_hpages, hpage_pmd_order, ops)) {
 		if (expect)
 			fail("Timeout");
 		else
@@ -595,12 +599,66 @@ static void khugepaged_collapse(const char *msg, char *p, int nr_hpages,
 		fail("Fail");
 }
 
+static void mthp_khugepaged_collapse(const char *msg, char *p, int nr_hpages,
+				struct mem_ops *ops, bool expect)
+{
+	unsigned long hpage_size = page_size << collapse_order;
+	unsigned long size = nr_hpages * hpage_pmd_size;
+	struct thp_settings settings = *thp_current_settings();
+
+	nr_hpages = size / hpage_size;
+
+	/* Set mTHP settings for mTHP collapse */
+	if (ops == &__anon_ops) {
+		settings.thp_enabled = THP_NEVER;
+		settings.hugepages[collapse_order].enabled = THP_ALWAYS;
+	} else if (ops == &__shmem_ops) {
+		settings.shmem_enabled = SHMEM_NEVER;
+		settings.shmem_hugepages[collapse_order].enabled = SHMEM_ALWAYS;
+	}
+
+	thp_push_settings(&settings);
+
+	if (wait_for_scan(msg, p, nr_hpages, collapse_order, ops)) {
+		if (expect)
+			fail("Timeout");
+		else
+			success("OK");
+
+		/* Restore the original THP settings. */
+		thp_pop_settings();
+		return;
+	}
+
+	/*
+	 * For file and shmem memory, khugepaged only retracts pte entries after
+	 * putting the new hugepage in the page cache. The hugepage must be
+	 * subsequently refaulted to establish the mTHP mapping for the mm.
+	 */
+	if (ops != &__anon_ops)
+		ops->fault(p, 0, size);
+
+	if (ops->check_huge(p, size, expect ? (size / hpage_size) : 0, hpage_size))
+		success("OK");
+	else
+		fail("Fail");
+
+	/* Restore the original THP settings. */
+	thp_pop_settings();
+}
+
 static struct collapse_context __khugepaged_context = {
 	.collapse = &khugepaged_collapse,
 	.enforce_pte_scan_limits = true,
 	.name = "khugepaged",
 };
 
+static struct collapse_context __mthp_khugepaged_context = {
+	.collapse = &mthp_khugepaged_collapse,
+	.enforce_pte_scan_limits = true,
+	.name = "mthp_khugepaged",
+};
+
 static struct collapse_context __madvise_context = {
 	.collapse = &madvise_collapse,
 	.enforce_pte_scan_limits = false,
@@ -650,6 +708,12 @@ static void collapse_full(struct collapse_context *c, struct mem_ops *ops)
 	int nr_hpages = 4;
 	unsigned long size = nr_hpages * hpage_pmd_size;
 
+	/* Only try 1 PMD-sized range for mTHP collapse. */
+	if (c == &__mthp_khugepaged_context) {
+		nr_hpages = 1;
+		size = hpage_pmd_size;
+	}
+
 	p = ops->setup_area(nr_hpages);
 	ops->fault(p, 0, size);
 	c->collapse("Collapse multiple fully populated PTE table", p, nr_hpages,
@@ -1074,7 +1138,7 @@ static void madvise_retracted_page_tables(struct collapse_context *c,
 
 	/* Let khugepaged collapse and leave pmd cleared */
 	if (wait_for_scan("Collapse and leave PMD cleared", p, nr_hpages,
-			  ops)) {
+			  hpage_pmd_order, ops)) {
 		fail("Timeout");
 		return;
 	}
@@ -1089,7 +1153,7 @@ static void usage(void)
 {
 	fprintf(stderr, "\nUsage: ./khugepaged [OPTIONS] <test type> [dir]\n\n");
 	fprintf(stderr, "\t<test type>\t: <context>:<mem_type>\n");
-	fprintf(stderr, "\t<context>\t: [all|khugepaged|madvise]\n");
+	fprintf(stderr, "\t<context>\t: [all|khugepaged|mthp_khugepaged|madvise]\n");
 	fprintf(stderr, "\t<mem_type>\t: [all|anon|file|shmem]\n");
 	fprintf(stderr, "\n\t\"file,all\" mem_type requires [dir] argument\n");
 	fprintf(stderr, "\n\t\"file,all\" mem_type requires kernel built with\n");
@@ -1100,6 +1164,7 @@ static void usage(void)
 	fprintf(stderr,	"\t\t-h: This help message.\n");
 	fprintf(stderr,	"\t\t-s: mTHP size, expressed as page order.\n");
 	fprintf(stderr,	"\t\t    Defaults to 0. Use this size for anon or shmem allocations.\n");
+	fprintf(stderr,	"\t\t-c: collapse order for mTHP collapse, expressed as page order.\n");
 	exit(1);
 }
 
@@ -1109,11 +1174,14 @@ static void parse_test_type(int argc, char **argv)
 	char *buf;
 	const char *token;
 
-	while ((opt = getopt(argc, argv, "s:h")) != -1) {
+	while ((opt = getopt(argc, argv, "s:c:h")) != -1) {
 		switch (opt) {
 		case 's':
 			anon_order = atoi(optarg);
 			break;
+		case 'c':
+			collapse_order = atoi(optarg);
+			break;
 		case 'h':
 		default:
 			usage();
@@ -1139,6 +1207,10 @@ static void parse_test_type(int argc, char **argv)
 		madvise_context =  &__madvise_context;
 	} else if (!strcmp(token, "khugepaged")) {
 		khugepaged_context =  &__khugepaged_context;
+	} else if (!strcmp(token, "mthp_khugepaged")) {
+		mthp_khugepaged_context =  &__mthp_khugepaged_context;
+		if (collapse_order == 0 || collapse_order >= hpage_pmd_order)
+			usage();
 	} else if (!strcmp(token, "madvise")) {
 		madvise_context =  &__madvise_context;
 	} else {
@@ -1173,7 +1245,6 @@ static void parse_test_type(int argc, char **argv)
 
 int main(int argc, char **argv)
 {
-	int hpage_pmd_order;
 	struct thp_settings default_settings = {
 		.thp_enabled = THP_MADVISE,
 		.thp_defrag = THP_DEFRAG_ALWAYS,
@@ -1199,10 +1270,6 @@ int main(int argc, char **argv)
 		return KSFT_SKIP;
 	}
 
-	parse_test_type(argc, argv);
-
-	setbuf(stdout, NULL);
-
 	page_size = getpagesize();
 	hpage_pmd_size = read_pmd_pagesize();
 	if (!hpage_pmd_size) {
@@ -1212,6 +1279,10 @@ int main(int argc, char **argv)
 	hpage_pmd_nr = hpage_pmd_size / page_size;
 	hpage_pmd_order = __builtin_ctz(hpage_pmd_nr);
 
+	parse_test_type(argc, argv);
+
+	setbuf(stdout, NULL);
+
 	default_settings.khugepaged.max_ptes_none = hpage_pmd_nr - 1;
 	default_settings.khugepaged.max_ptes_swap = hpage_pmd_nr / 8;
 	default_settings.khugepaged.max_ptes_shared = hpage_pmd_nr / 2;
@@ -1236,11 +1307,14 @@ int main(int argc, char **argv)
 	TEST(collapse_full, khugepaged_context, anon_ops);
 	TEST(collapse_full, khugepaged_context, file_ops);
 	TEST(collapse_full, khugepaged_context, shmem_ops);
+	TEST(collapse_full, mthp_khugepaged_context, anon_ops);
+	TEST(collapse_full, mthp_khugepaged_context, shmem_ops);
 	TEST(collapse_full, madvise_context, anon_ops);
 	TEST(collapse_full, madvise_context, file_ops);
 	TEST(collapse_full, madvise_context, shmem_ops);
 
 	TEST(collapse_empty, khugepaged_context, anon_ops);
+	TEST(collapse_empty, mthp_khugepaged_context, anon_ops);
 	TEST(collapse_empty, madvise_context, anon_ops);
 
 	TEST(collapse_single_pte_entry, khugepaged_context, anon_ops);
diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
index 75b94fdc915f..12d2a4f28ab5 100755
--- a/tools/testing/selftests/mm/run_vmtests.sh
+++ b/tools/testing/selftests/mm/run_vmtests.sh
@@ -496,6 +496,10 @@ CATEGORY="thp" run_test ./khugepaged all:shmem
 
 CATEGORY="thp" run_test ./khugepaged -s 4 all:shmem
 
+CATEGORY="thp" run_test ./khugepaged -c 4 mthp_khugepaged:anon
+
+CATEGORY="thp" run_test ./khugepaged -c 4 mthp_khugepaged:shmem
+
 CATEGORY="thp" run_test ./transhuge-stress -d 20
 
 # Try to create XFS if not provided
-- 
2.43.5


^ permalink raw reply related	[flat|nested] 14+ messages in thread
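
For the two run_vmtests.sh invocations added above, a hedged sketch of the
size arithmetic that mthp_khugepaged_collapse() ends up doing for
collapse_full() (x86-64 values assumed: 4 KiB pages, 2 MiB PMD; only the
variable names come from the patch):

#include <stdio.h>

int main(void)
{
	unsigned long page_size = 4096;			/* 4 KiB base pages */
	unsigned long hpage_pmd_size = 2UL << 20;	/* 2 MiB PMD range */
	int collapse_order = 4;				/* from "-c 4" */
	unsigned long hpage_size = page_size << collapse_order;	/* 64 KiB */
	unsigned long size = hpage_pmd_size;	/* collapse_full() uses one PMD range */
	int nr_hpages = size / hpage_size;	/* 32 */

	printf("expect %d folios of %lu KiB\n", nr_hpages, hpage_size >> 10);
	return 0;
}

With those numbers, wait_for_scan() and check_huge() are asked to find 32
order-4 folios in the 2 MiB range rather than a single PMD-sized THP.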

* Re: [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file()
  2025-08-20  9:07 ` [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file() Baolin Wang
@ 2025-08-20 10:13   ` Dev Jain
  2025-08-21  1:09     ` Baolin Wang
  0 siblings, 1 reply; 14+ messages in thread
From: Dev Jain @ 2025-08-20 10:13 UTC (permalink / raw)
  To: Baolin Wang, akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, baohua, linux-mm,
	linux-kernel


On 20/08/25 2:37 pm, Baolin Wang wrote:
> Similar to the anonymous folios collapse, we should also check the 'khugepaged_max_ptes_none'
> when trying to collapse shmem/file folios.
>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>   mm/khugepaged.c | 7 +++++++
>   1 file changed, 7 insertions(+)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5a3386043f39..5d4493b77f3c 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2125,6 +2125,13 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
>   					}
>   				}
>   				nr_none++;
> +
> +				if (cc->is_khugepaged && nr_none > khugepaged_max_ptes_none) {
> +					result = SCAN_EXCEED_NONE_PTE;
> +					count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> +					goto xa_locked;
> +				}
> +
>   				index++;
>   				continue;
>   			}

Isn't this already being checked in collapse_scan_file(), in the block
if (cc->is_khugepaged && present < HPAGE_PMD_NR - khugepaged_max_ptes_none)?


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file()
  2025-08-20 10:13   ` Dev Jain
@ 2025-08-21  1:09     ` Baolin Wang
  0 siblings, 0 replies; 14+ messages in thread
From: Baolin Wang @ 2025-08-21  1:09 UTC (permalink / raw)
  To: Dev Jain, akpm, hughd, david, lorenzo.stoakes
  Cc: ziy, Liam.Howlett, npache, ryan.roberts, baohua, linux-mm,
	linux-kernel



On 2025/8/20 18:13, Dev Jain wrote:
> 
> On 20/08/25 2:37 pm, Baolin Wang wrote:
>> Similar to the anonymous folios collapse, we should also check the 
>> 'khugepaged_max_ptes_none'
>> when trying to collapse shmem/file folios.
>>
>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>> ---
>>   mm/khugepaged.c | 7 +++++++
>>   1 file changed, 7 insertions(+)
>>
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 5a3386043f39..5d4493b77f3c 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -2125,6 +2125,13 @@ static int collapse_file(struct mm_struct *mm, 
>> unsigned long addr,
>>                       }
>>                   }
>>                   nr_none++;
>> +
>> +                if (cc->is_khugepaged && nr_none > 
>> khugepaged_max_ptes_none) {
>> +                    result = SCAN_EXCEED_NONE_PTE;
>> +                    count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
>> +                    goto xa_locked;
>> +                }
>> +
>>                   index++;
>>                   continue;
>>               }
> 
> Isn't this already being checked in collapse_scan_file(), in the block
> if (cc->is_khugepaged && present < HPAGE_PMD_NR - 
> khugepaged_max_ptes_none)?

Yes. As I said in the commit message, this follows the same behavior as
for anonymous folios: the folio's present state is checked again before
isolating it, since it can change after the check in
collapse_scan_file().

In addition, and importantly, this prepares for the mTHP collapse check;
see patch 4.

^ permalink raw reply	[flat|nested] 14+ messages in thread
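
To make the two checks being discussed concrete (x86-64 defaults assumed:
HPAGE_PMD_NR = 512 and khugepaged's default max_ptes_none of
HPAGE_PMD_NR - 1 = 511): the scan-time test in collapse_scan_file(),
present < HPAGE_PMD_NR - khugepaged_max_ptes_none, only requires at least
one present page in the PMD range, while the new isolation-time check bails
out once more than 511 holes (nr_none) have been counted while actually
building the new folio, catching entries that were truncated or reclaimed
between the scan and the collapse.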

end of thread, other threads:[~2025-08-21  1:09 UTC | newest]

Thread overview: 14+ messages
2025-08-20  9:07 [RFC PATCH 00/11] add shmem mTHP collapse support Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 01/11] mm: khugepaged: add khugepaged_max_ptes_none check in collapse_file() Baolin Wang
2025-08-20 10:13   ` Dev Jain
2025-08-21  1:09     ` Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 02/11] mm: khugepaged: generalize collapse_file for mTHP support Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 03/11] mm: khugepaged: add an order check for THP statistics Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 04/11] mm: khugepaged: add shmem/file mTHP collapse support Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 05/11] mm: shmem: kick khugepaged for enabling none-PMD-sized shmem mTHPs Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 06/11] mm: khugepaged: allow khugepaged to check all shmem/file large orders Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 07/11] mm: khugepaged: skip large folios that don't need to be collapsed Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 08/11] selftests:mm: extend the check_huge() to support mTHP check Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 09/11] selftests: mm: move gather_after_split_folio_orders() into vm_util.c file Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 10/11] selftests: mm: implement the mTHP hugepage check helper Baolin Wang
2025-08-20  9:07 ` [RFC PATCH 11/11] selftests: mm: add mTHP collapse test cases Baolin Wang
