* [PATCH v4 00/12] khugepaged: mTHP support
@ 2025-04-17 0:02 Nico Pache
2025-04-17 0:02 ` [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse Nico Pache
` (11 more replies)
0 siblings, 12 replies; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
The following series provides khugepaged and madvise collapse with the
capability to collapse regions to mTHPs.
To achieve this we generalize the khugepaged functions to no longer depend
on PMD_ORDER. Then during the PMD scan, we keep track of chunks of pages
(defined by KHUGEPAGED_MIN_MTHP_ORDER) that are utilized. This info is
tracked using a bitmap. After the PMD scan is done, we do binary recursion
on the bitmap to find the optimal mTHP sizes for the PMD range. The
restriction on max_ptes_none is removed during the scan, to make sure we
account for the whole PMD range. When no mTHP size is enabled, the legacy
behavior of khugepaged is maintained. max_ptes_none will be scaled by the
attempted collapse order to determine how full a THP must be to be
eligible. If an mTHP collapse is attempted but the region contains swapped-out
or shared pages, we do not perform the collapse.
With the default max_ptes_none=511, the code should keep most of its
original behavior. To exercise mTHP collapse, max_ptes_none must be set to
255 or lower. With max_ptes_none > HPAGE_PMD_NR/2 you will experience
collapse "creep", where khugepaged constantly promotes mTHPs to the next
available size.
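As a worked example of that scaling (this mirrors the scaled
max_ptes_none computation used in patches 5-7, shown here only for
reference):

    scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);

With 4K pages and the default max_ptes_none=511, an order-4 (64K)
collapse tolerates 511 >> 5 = 15 empty PTEs out of 16, while
max_ptes_none=255 tolerates only 255 >> 5 = 7. A freshly collapsed
order-n mTHP fills exactly half of an order n+1 range, so once roughly
half of the PTEs are allowed to be empty the threshold for the next
order is already met - that is where the "creep" comes from.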
Patch 1: Some refactoring to combine madvise_collapse and khugepaged
Patch 2: Refactor/rename hpage_collapse
Patch 3-5: Generalize khugepaged functions for arbitrary orders
Patch 6-9: The mTHP patches
Patch 10-11: Tracing/stats
Patch 12: Documentation
---------
Testing
---------
- Built for x86_64, aarch64, ppc64le, and s390x
- selftests mm
- I created a test script that I used to push khugepaged to its limits
while monitoring a number of stats and tracepoints. The code is
available here[1] (Run in legacy mode for these changes and set mthp
sizes to inherit)
The summary from my testing was that no significant regression was
noticed through this test. In some cases my changes had better collapse
latencies and were able to scan more pages in the same amount of
time/work, but for the most part the results were consistent.
- redis testing. I tested these changes along with my defer changes
(see followup post for more details).
- some basic testing on 64k page size.
- lots of general use.
Changes since V3:
- Rebased onto mm-unstable
commit 0e68b850b1d3 ("vmalloc: use atomic_long_add_return_relaxed()")
- small changes to Documentation
Changes since V2:
- corrected legacy behavior for khugepaged and madvise_collapse
- added proper mTHP stat tracking
- Minor changes to prevent a nested lock on non-split-lock arches
- Took Dev's version of alloc_charge_folio() as it has the proper stats
- Skip cases where trying to collapse to a lower order would still fail
- Fixed cases where the bitmap was not being updated properly
- Moved Documentation update to this series instead of the defer set
- Minor bugs discovered during testing and review
- Minor "nit" cleanup
Changes since V1 [2]:
- Minor bug fixes discovered during review and testing
- removed dynamic allocations for bitmaps, and made them stack based
- Adjusted bitmap offset from u8 to u16 to support 64k pagesize.
- Updated trace events to include collapsing order info.
- Scaled max_ptes_none by order rather than scaling to a 0-100 scale.
- No longer require a chunk to be fully utilized before setting the bit.
Use the same max_ptes_none scaling principle to achieve this.
- Skip mTHP collapse that requires swapin or shared handling. This helps
prevent some of the "creep" that was discovered in v1.
[1] - https://gitlab.com/npache/khugepaged_mthp_test
[2] - https://lore.kernel.org/lkml/20250108233128.14484-1-npache@redhat.com/
Dev Jain (1):
khugepaged: generalize alloc_charge_folio()
Nico Pache (11):
introduce khugepaged_collapse_single_pmd to unify khugepaged and
madvise_collapse
khugepaged: rename hpage_collapse_* to khugepaged_*
khugepaged: generalize hugepage_vma_revalidate for mTHP support
khugepaged: generalize __collapse_huge_page_* for mTHP support
khugepaged: introduce khugepaged_scan_bitmap for mTHP support
khugepaged: add mTHP support
khugepaged: skip collapsing mTHP to smaller orders
khugepaged: avoid unnecessary mTHP collapse attempts
khugepaged: improve tracepoints for mTHP orders
khugepaged: add per-order mTHP khugepaged stats
Documentation: mm: update the admin guide for mTHP collapse
Documentation/admin-guide/mm/transhuge.rst | 10 +-
include/linux/huge_mm.h | 5 +
include/linux/khugepaged.h | 4 +
include/trace/events/huge_memory.h | 34 +-
mm/huge_memory.c | 11 +
mm/khugepaged.c | 457 ++++++++++++++-------
6 files changed, 369 insertions(+), 152 deletions(-)
--
2.48.1
* [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-23 6:44 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 02/12] khugepaged: rename hpage_collapse_* to khugepaged_* Nico Pache
` (10 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
The khugepaged daemon and madvise_collapse have two different
implementations that do almost the same thing.
Create khugepaged_collapse_single_pmd() to increase code
reuse and provide an entry point for future khugepaged changes.
Refactor madvise_collapse and khugepaged_scan_mm_slot to use
the new khugepaged_collapse_single_pmd function.
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 92 ++++++++++++++++++++++++-------------------------
1 file changed, 46 insertions(+), 46 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b8838ba8207a..cecadc4239e7 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2363,6 +2363,48 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
}
#endif
+/*
+ * Try to collapse a single PMD starting at a PMD aligned addr, and return
+ * the results.
+ */
+static int khugepaged_collapse_single_pmd(unsigned long addr,
+ struct vm_area_struct *vma, bool *mmap_locked,
+ struct collapse_control *cc)
+{
+ int result = SCAN_FAIL;
+ struct mm_struct *mm = vma->vm_mm;
+ unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
+
+ if (thp_vma_allowable_order(vma, vma->vm_flags,
+ tva_flags, PMD_ORDER)) {
+ if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
+ struct file *file = get_file(vma->vm_file);
+ pgoff_t pgoff = linear_page_index(vma, addr);
+
+ mmap_read_unlock(mm);
+ *mmap_locked = false;
+ result = hpage_collapse_scan_file(mm, addr, file, pgoff,
+ cc);
+ fput(file);
+ if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
+ mmap_read_lock(mm);
+ if (hpage_collapse_test_exit_or_disable(mm))
+ goto end;
+ result = collapse_pte_mapped_thp(mm, addr,
+ !cc->is_khugepaged);
+ mmap_read_unlock(mm);
+ }
+ } else {
+ result = hpage_collapse_scan_pmd(mm, vma, addr,
+ mmap_locked, cc);
+ }
+ if (cc->is_khugepaged && result == SCAN_SUCCEED)
+ ++khugepaged_pages_collapsed;
+ }
+end:
+ return result;
+}
+
static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
struct collapse_control *cc)
__releases(&khugepaged_mm_lock)
@@ -2437,33 +2479,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
VM_BUG_ON(khugepaged_scan.address < hstart ||
khugepaged_scan.address + HPAGE_PMD_SIZE >
hend);
- if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
- struct file *file = get_file(vma->vm_file);
- pgoff_t pgoff = linear_page_index(vma,
- khugepaged_scan.address);
- mmap_read_unlock(mm);
- mmap_locked = false;
- *result = hpage_collapse_scan_file(mm,
- khugepaged_scan.address, file, pgoff, cc);
- fput(file);
- if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
- mmap_read_lock(mm);
- if (hpage_collapse_test_exit_or_disable(mm))
- goto breakouterloop;
- *result = collapse_pte_mapped_thp(mm,
- khugepaged_scan.address, false);
- if (*result == SCAN_PMD_MAPPED)
- *result = SCAN_SUCCEED;
- mmap_read_unlock(mm);
- }
- } else {
- *result = hpage_collapse_scan_pmd(mm, vma,
- khugepaged_scan.address, &mmap_locked, cc);
- }
-
- if (*result == SCAN_SUCCEED)
- ++khugepaged_pages_collapsed;
+ *result = khugepaged_collapse_single_pmd(khugepaged_scan.address,
+ vma, &mmap_locked, cc);
/* move to next address */
khugepaged_scan.address += HPAGE_PMD_SIZE;
@@ -2783,36 +2801,18 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
mmap_assert_locked(mm);
memset(cc->node_load, 0, sizeof(cc->node_load));
nodes_clear(cc->alloc_nmask);
- if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
- struct file *file = get_file(vma->vm_file);
- pgoff_t pgoff = linear_page_index(vma, addr);
- mmap_read_unlock(mm);
- mmap_locked = false;
- result = hpage_collapse_scan_file(mm, addr, file, pgoff,
- cc);
- fput(file);
- } else {
- result = hpage_collapse_scan_pmd(mm, vma, addr,
- &mmap_locked, cc);
- }
+ result = khugepaged_collapse_single_pmd(addr, vma, &mmap_locked, cc);
+
if (!mmap_locked)
*prev = NULL; /* Tell caller we dropped mmap_lock */
-handle_result:
switch (result) {
case SCAN_SUCCEED:
case SCAN_PMD_MAPPED:
++thps;
break;
case SCAN_PTE_MAPPED_HUGEPAGE:
- BUG_ON(mmap_locked);
- BUG_ON(*prev);
- mmap_read_lock(mm);
- result = collapse_pte_mapped_thp(mm, addr, true);
- mmap_read_unlock(mm);
- goto handle_result;
- /* Whitelisted set of results where continuing OK */
case SCAN_PMD_NULL:
case SCAN_PTE_NON_PRESENT:
case SCAN_PTE_UFFD_WP:
--
2.48.1
* [PATCH v4 02/12] khugepaged: rename hpage_collapse_* to khugepaged_*
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
2025-04-17 0:02 ` [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-23 6:49 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support Nico Pache
` (9 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
Functions in khugepaged.c use a mix of hpage_collapse and khugepaged
as the function prefix.
Rename all of them to khugepaged to keep things consistent and slightly
shorten the function names.
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 50 ++++++++++++++++++++++++-------------------------
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cecadc4239e7..b6281c04f1e5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -402,14 +402,14 @@ void __init khugepaged_destroy(void)
kmem_cache_destroy(mm_slot_cache);
}
-static inline int hpage_collapse_test_exit(struct mm_struct *mm)
+static inline int khugepaged_test_exit(struct mm_struct *mm)
{
return atomic_read(&mm->mm_users) == 0;
}
-static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
+static inline int khugepaged_test_exit_or_disable(struct mm_struct *mm)
{
- return hpage_collapse_test_exit(mm) ||
+ return khugepaged_test_exit(mm) ||
test_bit(MMF_DISABLE_THP, &mm->flags);
}
@@ -444,7 +444,7 @@ void __khugepaged_enter(struct mm_struct *mm)
int wakeup;
/* __khugepaged_exit() must not run from under us */
- VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
+ VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags)))
return;
@@ -503,7 +503,7 @@ void __khugepaged_exit(struct mm_struct *mm)
} else if (mm_slot) {
/*
* This is required to serialize against
- * hpage_collapse_test_exit() (which is guaranteed to run
+ * khugepaged_test_exit() (which is guaranteed to run
* under mmap sem read mode). Stop here (after we return all
* pagetables will be destroyed) until khugepaged has finished
* working on the pagetables under the mmap_lock.
@@ -851,7 +851,7 @@ struct collapse_control khugepaged_collapse_control = {
.is_khugepaged = true,
};
-static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
+static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
{
int i;
@@ -886,7 +886,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
}
#ifdef CONFIG_NUMA
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int khugepaged_find_target_node(struct collapse_control *cc)
{
int nid, target_node = 0, max_value = 0;
@@ -905,7 +905,7 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
return target_node;
}
#else
-static int hpage_collapse_find_target_node(struct collapse_control *cc)
+static int khugepaged_find_target_node(struct collapse_control *cc)
{
return 0;
}
@@ -925,7 +925,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
struct vm_area_struct *vma;
unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
- if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+ if (unlikely(khugepaged_test_exit_or_disable(mm)))
return SCAN_ANY_PROCESS;
*vmap = vma = find_vma(mm, address);
@@ -992,7 +992,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
/*
* Bring missing pages in from swap, to complete THP collapse.
- * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
+ * Only done if khugepaged_scan_pmd believes it is worthwhile.
*
* Called and returns without pte mapped or spinlocks held.
* Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
@@ -1078,7 +1078,7 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
{
gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
GFP_TRANSHUGE);
- int node = hpage_collapse_find_target_node(cc);
+ int node = khugepaged_find_target_node(cc);
struct folio *folio;
folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
@@ -1264,7 +1264,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
return result;
}
-static int hpage_collapse_scan_pmd(struct mm_struct *mm,
+static int khugepaged_scan_pmd(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long address, bool *mmap_locked,
struct collapse_control *cc)
@@ -1378,7 +1378,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
* hit record.
*/
node = folio_nid(folio);
- if (hpage_collapse_scan_abort(node, cc)) {
+ if (khugepaged_scan_abort(node, cc)) {
result = SCAN_SCAN_ABORT;
goto out_unmap;
}
@@ -1447,7 +1447,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
lockdep_assert_held(&khugepaged_mm_lock);
- if (hpage_collapse_test_exit(mm)) {
+ if (khugepaged_test_exit(mm)) {
/* free mm_slot */
hash_del(&slot->hash);
list_del(&slot->mm_node);
@@ -1742,7 +1742,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
continue;
- if (hpage_collapse_test_exit(mm))
+ if (khugepaged_test_exit(mm))
continue;
/*
* When a vma is registered with uffd-wp, we cannot recycle
@@ -2264,7 +2264,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
return result;
}
-static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
+static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
struct file *file, pgoff_t start,
struct collapse_control *cc)
{
@@ -2309,7 +2309,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
}
node = folio_nid(folio);
- if (hpage_collapse_scan_abort(node, cc)) {
+ if (khugepaged_scan_abort(node, cc)) {
result = SCAN_SCAN_ABORT;
break;
}
@@ -2355,7 +2355,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
return result;
}
#else
-static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
+static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
struct file *file, pgoff_t start,
struct collapse_control *cc)
{
@@ -2383,19 +2383,19 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
mmap_read_unlock(mm);
*mmap_locked = false;
- result = hpage_collapse_scan_file(mm, addr, file, pgoff,
+ result = khugepaged_scan_file(mm, addr, file, pgoff,
cc);
fput(file);
if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
mmap_read_lock(mm);
- if (hpage_collapse_test_exit_or_disable(mm))
+ if (khugepaged_test_exit_or_disable(mm))
goto end;
result = collapse_pte_mapped_thp(mm, addr,
!cc->is_khugepaged);
mmap_read_unlock(mm);
}
} else {
- result = hpage_collapse_scan_pmd(mm, vma, addr,
+ result = khugepaged_scan_pmd(mm, vma, addr,
mmap_locked, cc);
}
if (cc->is_khugepaged && result == SCAN_SUCCEED)
@@ -2443,7 +2443,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
goto breakouterloop_mmap_lock;
progress++;
- if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+ if (unlikely(khugepaged_test_exit_or_disable(mm)))
goto breakouterloop;
vma_iter_init(&vmi, mm, khugepaged_scan.address);
@@ -2451,7 +2451,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
unsigned long hstart, hend;
cond_resched();
- if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
+ if (unlikely(khugepaged_test_exit_or_disable(mm))) {
progress++;
break;
}
@@ -2473,7 +2473,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
bool mmap_locked = true;
cond_resched();
- if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
+ if (unlikely(khugepaged_test_exit_or_disable(mm)))
goto breakouterloop;
VM_BUG_ON(khugepaged_scan.address < hstart ||
@@ -2509,7 +2509,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
* Release the current mm_slot if this mm is about to die, or
* if we scanned all vmas of this mm.
*/
- if (hpage_collapse_test_exit(mm) || !vma) {
+ if (khugepaged_test_exit(mm) || !vma) {
/*
* Make sure that if mm_users is reaching zero while
* khugepaged runs here, khugepaged_exit will find
--
2.48.1
* [PATCH v4 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
2025-04-17 0:02 ` [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse Nico Pache
2025-04-17 0:02 ` [PATCH v4 02/12] khugepaged: rename hpage_collapse_* to khugepaged_* Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-23 6:55 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 04/12] khugepaged: generalize alloc_charge_folio() Nico Pache
` (8 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
For khugepaged to support different mTHP orders, we must generalize
hugepage_vma_revalidate() to take an arbitrary order.
No functional change in this patch.
Co-developed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b6281c04f1e5..54d7f43da69c 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -920,7 +920,7 @@ static int khugepaged_find_target_node(struct collapse_control *cc)
static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
bool expect_anon,
struct vm_area_struct **vmap,
- struct collapse_control *cc)
+ struct collapse_control *cc, int order)
{
struct vm_area_struct *vma;
unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
@@ -932,9 +932,9 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
if (!vma)
return SCAN_VMA_NULL;
- if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
+ if (!thp_vma_suitable_order(vma, address, order))
return SCAN_ADDRESS_RANGE;
- if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
+ if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, order))
return SCAN_VMA_CHECK;
/*
* Anon VMA expected, the address may be unmapped then
@@ -1130,7 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
goto out_nolock;
mmap_read_lock(mm);
- result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
if (result != SCAN_SUCCEED) {
mmap_read_unlock(mm);
goto out_nolock;
@@ -1164,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* mmap_lock.
*/
mmap_write_lock(mm);
- result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
if (result != SCAN_SUCCEED)
goto out_up_write;
/* check if the pmd is still valid */
@@ -2790,7 +2790,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
mmap_read_lock(mm);
mmap_locked = true;
result = hugepage_vma_revalidate(mm, addr, false, &vma,
- cc);
+ cc, HPAGE_PMD_ORDER);
if (result != SCAN_SUCCEED) {
last_fail = result;
goto out_nolock;
--
2.48.1
* [PATCH v4 04/12] khugepaged: generalize alloc_charge_folio()
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (2 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-23 7:06 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support Nico Pache
` (7 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
From: Dev Jain <dev.jain@arm.com>
Pass order to alloc_charge_folio() and update mTHP statistics.
Co-developed-by: Nico Pache <npache@redhat.com>
Signed-off-by: Nico Pache <npache@redhat.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
include/linux/huge_mm.h | 2 ++
mm/huge_memory.c | 4 ++++
mm/khugepaged.c | 17 +++++++++++------
3 files changed, 17 insertions(+), 6 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index f190998b2ebd..55b242335420 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -123,6 +123,8 @@ enum mthp_stat_item {
MTHP_STAT_ANON_FAULT_ALLOC,
MTHP_STAT_ANON_FAULT_FALLBACK,
MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
+ MTHP_STAT_COLLAPSE_ALLOC,
+ MTHP_STAT_COLLAPSE_ALLOC_FAILED,
MTHP_STAT_ZSWPOUT,
MTHP_STAT_SWPIN,
MTHP_STAT_SWPIN_FALLBACK,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e97a97586478..7798c9284533 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -615,6 +615,8 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc, MTHP_STAT_COLLAPSE_ALLOC);
+DEFINE_MTHP_STAT_ATTR(collapse_alloc_failed, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
@@ -680,6 +682,8 @@ static struct attribute *any_stats_attrs[] = {
#endif
&split_attr.attr,
&split_failed_attr.attr,
+ &collapse_alloc_attr.attr,
+ &collapse_alloc_failed_attr.attr,
NULL,
};
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 54d7f43da69c..883e9a46359f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1074,21 +1074,26 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
}
static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
- struct collapse_control *cc)
+ struct collapse_control *cc, u8 order)
{
gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
GFP_TRANSHUGE);
int node = khugepaged_find_target_node(cc);
struct folio *folio;
- folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
+ folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
if (!folio) {
*foliop = NULL;
- count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+ if (order == HPAGE_PMD_ORDER)
+ count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
+ count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
return SCAN_ALLOC_HUGE_PAGE_FAIL;
}
- count_vm_event(THP_COLLAPSE_ALLOC);
+ if (order == HPAGE_PMD_ORDER)
+ count_vm_event(THP_COLLAPSE_ALLOC);
+ count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC);
+
if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
folio_put(folio);
*foliop = NULL;
@@ -1125,7 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
*/
mmap_read_unlock(mm);
- result = alloc_charge_folio(&folio, mm, cc);
+ result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
if (result != SCAN_SUCCEED)
goto out_nolock;
@@ -1849,7 +1854,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
- result = alloc_charge_folio(&new_folio, mm, cc);
+ result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
if (result != SCAN_SUCCEED)
goto out;
--
2.48.1
* [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (3 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 04/12] khugepaged: generalize alloc_charge_folio() Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-23 7:30 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap " Nico Pache
` (6 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
Generalize the order of the __collapse_huge_page_* functions
to support future mTHP collapse.
mTHP collapse can suffer from inconsistent behavior and memory-waste
"creep", so disable swapin and shared-page support for mTHP collapse.
No functional changes in this patch.
Co-developed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 46 ++++++++++++++++++++++++++++------------------
1 file changed, 28 insertions(+), 18 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 883e9a46359f..5e9272ab82da 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -565,15 +565,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
unsigned long address,
pte_t *pte,
struct collapse_control *cc,
- struct list_head *compound_pagelist)
+ struct list_head *compound_pagelist,
+ u8 order)
{
struct page *page = NULL;
struct folio *folio = NULL;
pte_t *_pte;
int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
bool writable = false;
+ int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
- for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
+ for (_pte = pte; _pte < pte + (1 << order);
_pte++, address += PAGE_SIZE) {
pte_t pteval = ptep_get(_pte);
if (pte_none(pteval) || (pte_present(pteval) &&
@@ -581,7 +583,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
++none_or_zero;
if (!userfaultfd_armed(vma) &&
(!cc->is_khugepaged ||
- none_or_zero <= khugepaged_max_ptes_none)) {
+ none_or_zero <= scaled_none)) {
continue;
} else {
result = SCAN_EXCEED_NONE_PTE;
@@ -609,8 +611,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
/* See hpage_collapse_scan_pmd(). */
if (folio_maybe_mapped_shared(folio)) {
++shared;
- if (cc->is_khugepaged &&
- shared > khugepaged_max_ptes_shared) {
+ if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
+ shared > khugepaged_max_ptes_shared)) {
result = SCAN_EXCEED_SHARED_PTE;
count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
goto out;
@@ -711,13 +713,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
struct vm_area_struct *vma,
unsigned long address,
spinlock_t *ptl,
- struct list_head *compound_pagelist)
+ struct list_head *compound_pagelist,
+ u8 order)
{
struct folio *src, *tmp;
pte_t *_pte;
pte_t pteval;
- for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
+ for (_pte = pte; _pte < pte + (1 << order);
_pte++, address += PAGE_SIZE) {
pteval = ptep_get(_pte);
if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
@@ -764,7 +767,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
pmd_t *pmd,
pmd_t orig_pmd,
struct vm_area_struct *vma,
- struct list_head *compound_pagelist)
+ struct list_head *compound_pagelist,
+ u8 order)
{
spinlock_t *pmd_ptl;
@@ -781,7 +785,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
* Release both raw and compound pages isolated
* in __collapse_huge_page_isolate.
*/
- release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
+ release_pte_pages(pte, pte + (1 << order), compound_pagelist);
}
/*
@@ -802,7 +806,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
unsigned long address, spinlock_t *ptl,
- struct list_head *compound_pagelist)
+ struct list_head *compound_pagelist, u8 order)
{
unsigned int i;
int result = SCAN_SUCCEED;
@@ -810,7 +814,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
/*
* Copying pages' contents is subject to memory poison at any iteration.
*/
- for (i = 0; i < HPAGE_PMD_NR; i++) {
+ for (i = 0; i < (1 << order); i++) {
pte_t pteval = ptep_get(pte + i);
struct page *page = folio_page(folio, i);
unsigned long src_addr = address + i * PAGE_SIZE;
@@ -829,10 +833,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
if (likely(result == SCAN_SUCCEED))
__collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
- compound_pagelist);
+ compound_pagelist, order);
else
__collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
- compound_pagelist);
+ compound_pagelist, order);
return result;
}
@@ -1000,11 +1004,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
static int __collapse_huge_page_swapin(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long haddr, pmd_t *pmd,
- int referenced)
+ int referenced, u8 order)
{
int swapped_in = 0;
vm_fault_t ret = 0;
- unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
+ unsigned long address, end = haddr + (PAGE_SIZE << order);
int result;
pte_t *pte = NULL;
spinlock_t *ptl;
@@ -1035,6 +1039,12 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
if (!is_swap_pte(vmf.orig_pte))
continue;
+ /* Don't swapin for mTHP collapse */
+ if (order != HPAGE_PMD_ORDER) {
+ result = SCAN_EXCEED_SWAP_PTE;
+ goto out;
+ }
+
vmf.pte = pte;
vmf.ptl = ptl;
ret = do_swap_page(&vmf);
@@ -1154,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* that case. Continuing to collapse causes inconsistency.
*/
result = __collapse_huge_page_swapin(mm, vma, address, pmd,
- referenced);
+ referenced, HPAGE_PMD_ORDER);
if (result != SCAN_SUCCEED)
goto out_nolock;
}
@@ -1201,7 +1211,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
if (pte) {
result = __collapse_huge_page_isolate(vma, address, pte, cc,
- &compound_pagelist);
+ &compound_pagelist, HPAGE_PMD_ORDER);
spin_unlock(pte_ptl);
} else {
result = SCAN_PMD_NULL;
@@ -1231,7 +1241,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
vma, address, pte_ptl,
- &compound_pagelist);
+ &compound_pagelist, HPAGE_PMD_ORDER);
pte_unmap(pte);
if (unlikely(result != SCAN_SUCCEED))
goto out_up_write;
--
2.48.1
* [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (4 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-27 2:51 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 07/12] khugepaged: add " Nico Pache
` (5 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
khugepaged scans PMD ranges for potential collapse to a hugepage. To add
mTHP support we use this scan to instead record chunks of utilized
sections of the PMD.
khugepaged_scan_bitmap uses a stack struct to recursively scan a bitmap
that represents chunks of utilized regions. We can then determine what
mTHP size fits best and in the following patch, we set this bitmap while
scanning the PMD.
max_ptes_none is used as a scale to determine how "full" an order must
be before being considered for collapse.
When the attempted collapse order is set to "always" in sysfs, collapse
to that order greedily, without considering the number of bits set.
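As a worked example of the threshold used below (assuming 4K pages, so
HPAGE_PMD_ORDER=9 and KHUGEPAGED_MIN_MTHP_ORDER=2, giving a 128-bit
bitmap where each bit covers 4 PTEs), a candidate region is attempted
when the number of set bits exceeds:

    (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
            >> (HPAGE_PMD_ORDER - (order - KHUGEPAGED_MIN_MTHP_ORDER))

With max_ptes_none=255 a full-PMD candidate needs more than 64 of its
128 bits set and an order-4 candidate more than 2 of its 4 bits; with
the default max_ptes_none=511 the threshold is 0, so any region with at
least one bit set is attempted.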
Signed-off-by: Nico Pache <npache@redhat.com>
---
include/linux/khugepaged.h | 4 ++
mm/khugepaged.c | 94 ++++++++++++++++++++++++++++++++++----
2 files changed, 89 insertions(+), 9 deletions(-)
diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
index 1f46046080f5..18fe6eb5051d 100644
--- a/include/linux/khugepaged.h
+++ b/include/linux/khugepaged.h
@@ -1,6 +1,10 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _LINUX_KHUGEPAGED_H
#define _LINUX_KHUGEPAGED_H
+#define KHUGEPAGED_MIN_MTHP_ORDER 2
+#define KHUGEPAGED_MIN_MTHP_NR (1<<KHUGEPAGED_MIN_MTHP_ORDER)
+#define MAX_MTHP_BITMAP_SIZE (1 << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
+#define MTHP_BITMAP_SIZE (1 << (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER))
extern unsigned int khugepaged_max_ptes_none __read_mostly;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5e9272ab82da..83230e9cdf3a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -94,6 +94,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
static struct kmem_cache *mm_slot_cache __ro_after_init;
+struct scan_bit_state {
+ u8 order;
+ u16 offset;
+};
+
struct collapse_control {
bool is_khugepaged;
@@ -102,6 +107,18 @@ struct collapse_control {
/* nodemask for allocation fallback */
nodemask_t alloc_nmask;
+
+ /*
+ * bitmap used to collapse mTHP sizes.
+ * 1bit = order KHUGEPAGED_MIN_MTHP_ORDER mTHP
+ */
+ DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
+ DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
+ struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
+};
+
+struct collapse_control khugepaged_collapse_control = {
+ .is_khugepaged = true,
};
/**
@@ -851,10 +868,6 @@ static void khugepaged_alloc_sleep(void)
remove_wait_queue(&khugepaged_wait, &wait);
}
-struct collapse_control khugepaged_collapse_control = {
- .is_khugepaged = true,
-};
-
static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
{
int i;
@@ -1118,7 +1131,8 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
int referenced, int unmapped,
- struct collapse_control *cc)
+ struct collapse_control *cc, bool *mmap_locked,
+ u8 order, u16 offset)
{
LIST_HEAD(compound_pagelist);
pmd_t *pmd, _pmd;
@@ -1137,8 +1151,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* The allocation can take potentially a long time if it involves
* sync compaction, and we do not need to hold the mmap_lock during
* that. We will recheck the vma after taking it again in write mode.
+ * If collapsing mTHPs we may have already released the read_lock.
*/
- mmap_read_unlock(mm);
+ if (*mmap_locked) {
+ mmap_read_unlock(mm);
+ *mmap_locked = false;
+ }
result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
if (result != SCAN_SUCCEED)
@@ -1273,12 +1291,72 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
out_up_write:
mmap_write_unlock(mm);
out_nolock:
+ *mmap_locked = false;
if (folio)
folio_put(folio);
trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
return result;
}
+// Consume the bitmap via binary recursion using an explicit stack
+static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned long address,
+ int referenced, int unmapped, struct collapse_control *cc,
+ bool *mmap_locked, unsigned long enabled_orders)
+{
+ u8 order, next_order;
+ u16 offset, mid_offset;
+ int num_chunks;
+ int bits_set, threshold_bits;
+ int top = -1;
+ int collapsed = 0;
+ int ret;
+ struct scan_bit_state state;
+ bool is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
+
+ cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+ { HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 };
+
+ while (top >= 0) {
+ state = cc->mthp_bitmap_stack[top--];
+ order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;
+ offset = state.offset;
+ num_chunks = 1 << (state.order);
+ // Skip mTHP orders that are not enabled
+ if (!test_bit(order, &enabled_orders))
+ goto next;
+
+ // copy the relevant section to a new bitmap
+ bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
+ MTHP_BITMAP_SIZE);
+
+ bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
+ threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
+ >> (HPAGE_PMD_ORDER - state.order);
+
+ // Check if the region is "almost full" based on the threshold
+ if (bits_set > threshold_bits || is_pmd_only
+ || test_bit(order, &huge_anon_orders_always)) {
+ ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
+ mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
+ if (ret == SCAN_SUCCEED) {
+ collapsed += (1 << order);
+ continue;
+ }
+ }
+
+next:
+ if (state.order > 0) {
+ next_order = state.order - 1;
+ mid_offset = offset + (num_chunks / 2);
+ cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+ { next_order, mid_offset };
+ cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
+ { next_order, offset };
+ }
+ }
+ return collapsed;
+}
+
static int khugepaged_scan_pmd(struct mm_struct *mm,
struct vm_area_struct *vma,
unsigned long address, bool *mmap_locked,
@@ -1445,9 +1523,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
pte_unmap_unlock(pte, ptl);
if (result == SCAN_SUCCEED) {
result = collapse_huge_page(mm, address, referenced,
- unmapped, cc);
- /* collapse_huge_page will return with the mmap_lock released */
- *mmap_locked = false;
+ unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
}
out:
trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
--
2.48.1
* [PATCH v4 07/12] khugepaged: add mTHP support
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (5 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap " Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-24 12:21 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders Nico Pache
` (4 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
Introduce the ability for khugepaged to collapse to different mTHP sizes.
While scanning PMD ranges for potential collapse candidates, keep track
of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER ptes. If
mTHPs are enabled, we remove the max_ptes_none restriction during the
scan phase so we don't bail out early and miss potential mTHP candidates.
After the scan is complete we will perform binary recursion on the
bitmap to determine which mTHP size would be most efficient to collapse
to. max_ptes_none will be scaled by the attempted collapse order to
determine how full a THP must be to be eligible.
If an mTHP collapse is attempted but the region contains swapped-out or
shared pages, we do not perform the collapse.
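For reference, a worked example of the per-chunk threshold used during
the scan (assuming 4K pages): each bit covers KHUGEPAGED_MIN_MTHP_NR = 4
PTEs, and a chunk is marked as utilized when its none/zero count is at
or below:

    khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER)

With the default max_ptes_none=511 this is 511 >> 7 = 3, so a chunk
counts as utilized if at least one of its 4 PTEs is present; with
max_ptes_none=255 it is 255 >> 7 = 1, so at least 3 of the 4 PTEs must
be present.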
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 122 ++++++++++++++++++++++++++++++++++--------------
1 file changed, 88 insertions(+), 34 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 83230e9cdf3a..ece39fd71fe6 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1136,13 +1136,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
{
LIST_HEAD(compound_pagelist);
pmd_t *pmd, _pmd;
- pte_t *pte;
+ pte_t *pte, mthp_pte;
pgtable_t pgtable;
struct folio *folio;
spinlock_t *pmd_ptl, *pte_ptl;
int result = SCAN_FAIL;
struct vm_area_struct *vma;
struct mmu_notifier_range range;
+ unsigned long _address = address + offset * PAGE_SIZE;
VM_BUG_ON(address & ~HPAGE_PMD_MASK);
@@ -1158,12 +1159,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
*mmap_locked = false;
}
- result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
+ result = alloc_charge_folio(&folio, mm, cc, order);
if (result != SCAN_SUCCEED)
goto out_nolock;
mmap_read_lock(mm);
- result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
+ *mmap_locked = true;
+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
if (result != SCAN_SUCCEED) {
mmap_read_unlock(mm);
goto out_nolock;
@@ -1181,13 +1183,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* released when it fails. So we jump out_nolock directly in
* that case. Continuing to collapse causes inconsistency.
*/
- result = __collapse_huge_page_swapin(mm, vma, address, pmd,
- referenced, HPAGE_PMD_ORDER);
+ result = __collapse_huge_page_swapin(mm, vma, _address, pmd,
+ referenced, order);
if (result != SCAN_SUCCEED)
goto out_nolock;
}
mmap_read_unlock(mm);
+ *mmap_locked = false;
/*
* Prevent all access to pagetables with the exception of
* gup_fast later handled by the ptep_clear_flush and the VM
@@ -1197,7 +1200,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* mmap_lock.
*/
mmap_write_lock(mm);
- result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
if (result != SCAN_SUCCEED)
goto out_up_write;
/* check if the pmd is still valid */
@@ -1208,11 +1211,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
vma_start_write(vma);
anon_vma_lock_write(vma->anon_vma);
- mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
- address + HPAGE_PMD_SIZE);
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address,
+ _address + (PAGE_SIZE << order));
mmu_notifier_invalidate_range_start(&range);
pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
+
/*
* This removes any huge TLB entry from the CPU so we won't allow
* huge and small TLB entries for the same virtual address to
@@ -1226,10 +1230,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
mmu_notifier_invalidate_range_end(&range);
tlb_remove_table_sync_one();
- pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
+ pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl);
if (pte) {
- result = __collapse_huge_page_isolate(vma, address, pte, cc,
- &compound_pagelist, HPAGE_PMD_ORDER);
+ result = __collapse_huge_page_isolate(vma, _address, pte, cc,
+ &compound_pagelist, order);
spin_unlock(pte_ptl);
} else {
result = SCAN_PMD_NULL;
@@ -1258,8 +1262,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
anon_vma_unlock_write(vma->anon_vma);
result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
- vma, address, pte_ptl,
- &compound_pagelist, HPAGE_PMD_ORDER);
+ vma, _address, pte_ptl,
+ &compound_pagelist, order);
pte_unmap(pte);
if (unlikely(result != SCAN_SUCCEED))
goto out_up_write;
@@ -1270,20 +1274,35 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* write.
*/
__folio_mark_uptodate(folio);
- pgtable = pmd_pgtable(_pmd);
-
- _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
- _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
-
- spin_lock(pmd_ptl);
- BUG_ON(!pmd_none(*pmd));
- folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
- folio_add_lru_vma(folio, vma);
- pgtable_trans_huge_deposit(mm, pmd, pgtable);
- set_pmd_at(mm, address, pmd, _pmd);
- update_mmu_cache_pmd(vma, address, pmd);
- deferred_split_folio(folio, false);
- spin_unlock(pmd_ptl);
+ if (order == HPAGE_PMD_ORDER) {
+ pgtable = pmd_pgtable(_pmd);
+ _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
+ _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
+
+ spin_lock(pmd_ptl);
+ BUG_ON(!pmd_none(*pmd));
+ folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
+ folio_add_lru_vma(folio, vma);
+ pgtable_trans_huge_deposit(mm, pmd, pgtable);
+ set_pmd_at(mm, address, pmd, _pmd);
+ update_mmu_cache_pmd(vma, address, pmd);
+ deferred_split_folio(folio, false);
+ spin_unlock(pmd_ptl);
+ } else { /* mTHP */
+ mthp_pte = mk_pte(&folio->page, vma->vm_page_prot);
+ mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
+
+ spin_lock(pmd_ptl);
+ folio_ref_add(folio, (1 << order) - 1);
+ folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
+ folio_add_lru_vma(folio, vma);
+ set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
+ update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
+
+ smp_wmb(); /* make pte visible before pmd */
+ pmd_populate(mm, pmd, pmd_pgtable(_pmd));
+ spin_unlock(pmd_ptl);
+ }
folio = NULL;
@@ -1364,31 +1383,58 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
{
pmd_t *pmd;
pte_t *pte, *_pte;
+ int i;
int result = SCAN_FAIL, referenced = 0;
int none_or_zero = 0, shared = 0;
struct page *page = NULL;
struct folio *folio = NULL;
unsigned long _address;
+ unsigned long enabled_orders;
spinlock_t *ptl;
int node = NUMA_NO_NODE, unmapped = 0;
+ bool is_pmd_only;
bool writable = false;
-
+ int chunk_none_count = 0;
+ int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER);
+ unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
VM_BUG_ON(address & ~HPAGE_PMD_MASK);
result = find_pmd_or_thp_or_none(mm, address, &pmd);
if (result != SCAN_SUCCEED)
goto out;
+ bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
+ bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
memset(cc->node_load, 0, sizeof(cc->node_load));
nodes_clear(cc->alloc_nmask);
+
+ enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
+ tva_flags, THP_ORDERS_ALL_ANON);
+
+ is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
+
pte = pte_offset_map_lock(mm, pmd, address, &ptl);
if (!pte) {
result = SCAN_PMD_NULL;
goto out;
}
- for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
- _pte++, _address += PAGE_SIZE) {
+ for (i = 0; i < HPAGE_PMD_NR; i++) {
+ /*
+ * we are reading in KHUGEPAGED_MIN_MTHP_NR page chunks. if
+ * there are pages in this chunk keep track of it in the bitmap
+ * for mTHP collapsing.
+ */
+ if (i % KHUGEPAGED_MIN_MTHP_NR == 0) {
+ if (chunk_none_count <= scaled_none)
+ bitmap_set(cc->mthp_bitmap,
+ i / KHUGEPAGED_MIN_MTHP_NR, 1);
+
+ chunk_none_count = 0;
+ }
+
+ _pte = pte + i;
+ _address = address + i * PAGE_SIZE;
pte_t pteval = ptep_get(_pte);
if (is_swap_pte(pteval)) {
++unmapped;
@@ -1411,10 +1457,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
}
}
if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
+ ++chunk_none_count;
++none_or_zero;
if (!userfaultfd_armed(vma) &&
- (!cc->is_khugepaged ||
- none_or_zero <= khugepaged_max_ptes_none)) {
+ (!cc->is_khugepaged || !is_pmd_only ||
+ none_or_zero <= khugepaged_max_ptes_none)) {
continue;
} else {
result = SCAN_EXCEED_NONE_PTE;
@@ -1510,6 +1557,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
address)))
referenced++;
}
+
if (!writable) {
result = SCAN_PAGE_RO;
} else if (cc->is_khugepaged &&
@@ -1522,8 +1570,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
out_unmap:
pte_unmap_unlock(pte, ptl);
if (result == SCAN_SUCCEED) {
- result = collapse_huge_page(mm, address, referenced,
- unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
+ result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc,
+ mmap_locked, enabled_orders);
+ if (result > 0)
+ result = SCAN_SUCCEED;
+ else
+ result = SCAN_FAIL;
}
out:
trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
@@ -2479,11 +2531,13 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
fput(file);
if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
mmap_read_lock(mm);
+ *mmap_locked = true;
if (khugepaged_test_exit_or_disable(mm))
goto end;
result = collapse_pte_mapped_thp(mm, addr,
!cc->is_khugepaged);
mmap_read_unlock(mm);
+ *mmap_locked = false;
}
} else {
result = khugepaged_scan_pmd(mm, vma, addr,
--
2.48.1
* [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (6 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 07/12] khugepaged: add " Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-24 7:48 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 09/12] khugepaged: avoid unnecessary mTHP collapse attempts Nico Pache
` (3 subsequent siblings)
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
khugepaged may try to collapse an existing mTHP to a smaller mTHP, resulting
in some pages being unmapped. Skip these cases until we have a way to check
whether it is OK to collapse to a smaller mTHP size (like in the case of a
partially mapped folio).
This patch is inspired by Dev Jain's work on khugepaged mTHP support [1].
[1] https://lore.kernel.org/lkml/20241216165105.56185-11-dev.jain@arm.com/
Co-developed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index ece39fd71fe6..383aff12cd43 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -625,7 +625,12 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
folio = page_folio(page);
VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
- /* See hpage_collapse_scan_pmd(). */
+ if (order != HPAGE_PMD_ORDER && folio_order(folio) >= order) {
+ result = SCAN_PTE_MAPPED_HUGEPAGE;
+ goto out;
+ }
+
+ /* See khugepaged_scan_pmd(). */
if (folio_maybe_mapped_shared(folio)) {
++shared;
if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
--
2.48.1
* [PATCH v4 09/12] khugepaged: avoid unnecessary mTHP collapse attempts
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (7 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-17 0:02 ` [PATCH v4 10/12] khugepaged: improve tracepoints for mTHP orders Nico Pache
` (2 subsequent siblings)
11 siblings, 0 replies; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
There are cases where, if an attempted collapse fails, all subsequent
orders are guaranteed to also fail. Avoid these collapse attempts by
bailing out early.
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 383aff12cd43..738dd9c5751d 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1366,6 +1366,23 @@ static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned long address,
collapsed += (1 << order);
continue;
}
+ /*
+ * Some ret values indicate all lower orders will also
+ * fail, so don't try to collapse to smaller orders
+ */
+ if (ret == SCAN_EXCEED_NONE_PTE ||
+ ret == SCAN_EXCEED_SWAP_PTE ||
+ ret == SCAN_EXCEED_SHARED_PTE ||
+ ret == SCAN_PTE_NON_PRESENT ||
+ ret == SCAN_PTE_UFFD_WP ||
+ ret == SCAN_ALLOC_HUGE_PAGE_FAIL ||
+ ret == SCAN_CGROUP_CHARGE_FAIL ||
+ ret == SCAN_COPY_MC ||
+ ret == SCAN_PAGE_LOCK ||
+ ret == SCAN_PAGE_COUNT)
+ goto next;
+ else
+ break;
}
next:
--
2.48.1
* [PATCH v4 10/12] khugepaged: improve tracepoints for mTHP orders
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (8 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 09/12] khugepaged: avoid unnecessary mTHP collapse attempts Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-24 7:51 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 11/12] khugepaged: add per-order mTHP khugepaged stats Nico Pache
2025-04-17 0:02 ` [PATCH v4 12/12] Documentation: mm: update the admin guide for mTHP collapse Nico Pache
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
Add the order to the tracepoints to give better insight into which order
khugepaged is operating on.
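With this change, an mm_collapse_huge_page event would look roughly like
the line below (illustrative values; the format comes from the TP_printk
in the diff):
  mm=00000000c287c1f2, isolated=1, status=succeeded order=4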
Signed-off-by: Nico Pache <npache@redhat.com>
---
include/trace/events/huge_memory.h | 34 +++++++++++++++++++-----------
mm/khugepaged.c | 10 +++++----
2 files changed, 28 insertions(+), 16 deletions(-)
diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
index 9d5c00b0285c..ea2fe20a39f5 100644
--- a/include/trace/events/huge_memory.h
+++ b/include/trace/events/huge_memory.h
@@ -92,34 +92,37 @@ TRACE_EVENT(mm_khugepaged_scan_pmd,
TRACE_EVENT(mm_collapse_huge_page,
- TP_PROTO(struct mm_struct *mm, int isolated, int status),
+ TP_PROTO(struct mm_struct *mm, int isolated, int status, int order),
- TP_ARGS(mm, isolated, status),
+ TP_ARGS(mm, isolated, status, order),
TP_STRUCT__entry(
__field(struct mm_struct *, mm)
__field(int, isolated)
__field(int, status)
+ __field(int, order)
),
TP_fast_assign(
__entry->mm = mm;
__entry->isolated = isolated;
__entry->status = status;
+ __entry->order = order;
),
- TP_printk("mm=%p, isolated=%d, status=%s",
+ TP_printk("mm=%p, isolated=%d, status=%s order=%d",
__entry->mm,
__entry->isolated,
- __print_symbolic(__entry->status, SCAN_STATUS))
+ __print_symbolic(__entry->status, SCAN_STATUS),
+ __entry->order)
);
TRACE_EVENT(mm_collapse_huge_page_isolate,
TP_PROTO(struct page *page, int none_or_zero,
- int referenced, bool writable, int status),
+ int referenced, bool writable, int status, int order),
- TP_ARGS(page, none_or_zero, referenced, writable, status),
+ TP_ARGS(page, none_or_zero, referenced, writable, status, order),
TP_STRUCT__entry(
__field(unsigned long, pfn)
@@ -127,6 +130,7 @@ TRACE_EVENT(mm_collapse_huge_page_isolate,
__field(int, referenced)
__field(bool, writable)
__field(int, status)
+ __field(int, order)
),
TP_fast_assign(
@@ -135,27 +139,31 @@ TRACE_EVENT(mm_collapse_huge_page_isolate,
__entry->referenced = referenced;
__entry->writable = writable;
__entry->status = status;
+ __entry->order = order;
),
- TP_printk("scan_pfn=0x%lx, none_or_zero=%d, referenced=%d, writable=%d, status=%s",
+ TP_printk("scan_pfn=0x%lx, none_or_zero=%d, referenced=%d, writable=%d, status=%s order=%d",
__entry->pfn,
__entry->none_or_zero,
__entry->referenced,
__entry->writable,
- __print_symbolic(__entry->status, SCAN_STATUS))
+ __print_symbolic(__entry->status, SCAN_STATUS),
+ __entry->order)
);
TRACE_EVENT(mm_collapse_huge_page_swapin,
- TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret),
+ TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret,
+ int order),
- TP_ARGS(mm, swapped_in, referenced, ret),
+ TP_ARGS(mm, swapped_in, referenced, ret, order),
TP_STRUCT__entry(
__field(struct mm_struct *, mm)
__field(int, swapped_in)
__field(int, referenced)
__field(int, ret)
+ __field(int, order)
),
TP_fast_assign(
@@ -163,13 +171,15 @@ TRACE_EVENT(mm_collapse_huge_page_swapin,
__entry->swapped_in = swapped_in;
__entry->referenced = referenced;
__entry->ret = ret;
+ __entry->order = order;
),
- TP_printk("mm=%p, swapped_in=%d, referenced=%d, ret=%d",
+ TP_printk("mm=%p, swapped_in=%d, referenced=%d, ret=%d, order=%d",
__entry->mm,
__entry->swapped_in,
__entry->referenced,
- __entry->ret)
+ __entry->ret,
+ __entry->order)
);
TRACE_EVENT(mm_khugepaged_scan_file,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 738dd9c5751d..67da0950b833 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -721,13 +721,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
} else {
result = SCAN_SUCCEED;
trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
- referenced, writable, result);
+ referenced, writable, result,
+ order);
return result;
}
out:
release_pte_pages(pte, _pte, compound_pagelist);
trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
- referenced, writable, result);
+ referenced, writable, result, order);
return result;
}
@@ -1097,7 +1098,8 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
result = SCAN_SUCCEED;
out:
- trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result);
+ trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result,
+ order);
return result;
}
@@ -1318,7 +1320,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
*mmap_locked = false;
if (folio)
folio_put(folio);
- trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
+ trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result, order);
return result;
}
--
2.48.1
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v4 11/12] khugepaged: add per-order mTHP khugepaged stats
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (9 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 10/12] khugepaged: improve tracepoints for mTHP orders Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-24 7:58 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 12/12] Documentation: mm: update the admin guide for mTHP collapse Nico Pache
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
With mTHP support in place, let's add per-order mTHP stats for
exceeding the NONE, SWAP, and SHARED limits.
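Assuming the usual per-size sysfs layout for mTHP counters, the new stats
should appear under paths such as (illustrative, 64K size on a 4K base page):
  /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/collapse_exceed_none_pte
  /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/collapse_exceed_swap_pte
  /sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/collapse_exceed_shared_pte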
Signed-off-by: Nico Pache <npache@redhat.com>
---
include/linux/huge_mm.h | 3 +++
mm/huge_memory.c | 7 +++++++
mm/khugepaged.c | 16 +++++++++++++---
3 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 55b242335420..782d3a7854b4 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -139,6 +139,9 @@ enum mthp_stat_item {
MTHP_STAT_SPLIT_DEFERRED,
MTHP_STAT_NR_ANON,
MTHP_STAT_NR_ANON_PARTIALLY_MAPPED,
+ MTHP_STAT_COLLAPSE_EXCEED_SWAP,
+ MTHP_STAT_COLLAPSE_EXCEED_NONE,
+ MTHP_STAT_COLLAPSE_EXCEED_SHARED,
__MTHP_STAT_COUNT
};
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7798c9284533..de4704af0022 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -633,6 +633,10 @@ DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);
DEFINE_MTHP_STAT_ATTR(nr_anon_partially_mapped, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED);
+DEFINE_MTHP_STAT_ATTR(collapse_exceed_swap_pte, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
+DEFINE_MTHP_STAT_ATTR(collapse_exceed_none_pte, MTHP_STAT_COLLAPSE_EXCEED_NONE);
+DEFINE_MTHP_STAT_ATTR(collapse_exceed_shared_pte, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
+
static struct attribute *anon_stats_attrs[] = {
&anon_fault_alloc_attr.attr,
@@ -649,6 +653,9 @@ static struct attribute *anon_stats_attrs[] = {
&split_deferred_attr.attr,
&nr_anon_attr.attr,
&nr_anon_partially_mapped_attr.attr,
+ &collapse_exceed_swap_pte_attr.attr,
+ &collapse_exceed_none_pte_attr.attr,
+ &collapse_exceed_shared_pte_attr.attr,
NULL,
};
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 67da0950b833..38643a681ba5 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -604,7 +604,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
continue;
} else {
result = SCAN_EXCEED_NONE_PTE;
- count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+ if (order == HPAGE_PMD_ORDER)
+ count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
+ else
+ count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE);
goto out;
}
}
@@ -633,8 +636,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
/* See khugepaged_scan_pmd(). */
if (folio_maybe_mapped_shared(folio)) {
++shared;
- if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
- shared > khugepaged_max_ptes_shared)) {
+ if (order != HPAGE_PMD_ORDER) {
+ result = SCAN_EXCEED_SHARED_PTE;
+ count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
+ goto out;
+ }
+
+ if (cc->is_khugepaged &&
+ shared > khugepaged_max_ptes_shared) {
result = SCAN_EXCEED_SHARED_PTE;
count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
goto out;
@@ -1060,6 +1069,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
/* Dont swapin for mTHP collapse */
if (order != HPAGE_PMD_ORDER) {
+ count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
result = SCAN_EXCEED_SWAP_PTE;
goto out;
}
--
2.48.1
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH v4 12/12] Documentation: mm: update the admin guide for mTHP collapse
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
` (10 preceding siblings ...)
2025-04-17 0:02 ` [PATCH v4 11/12] khugepaged: add per-order mTHP khugepaged stats Nico Pache
@ 2025-04-17 0:02 ` Nico Pache
2025-04-24 15:03 ` Usama Arif
11 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-17 0:02 UTC (permalink / raw)
To: linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
usamaarif642, sunnanyong, vishal.moola, thomas.hellstrom, yang,
kirill.shutemov, aarcange, raquini, dev.jain, anshuman.khandual,
catalin.marinas, tiwai, will, dave.hansen, jack, cl, jglisse,
surenb, zokeefe, hannes, rientjes, mhocko, rdunlap
Now that we can collapse to mTHPs, let's update the admin guide to
reflect these changes and provide proper guidance on how to use them.
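As a worked example of the scaling (based on the scaled_none calculation
introduced earlier in the series, assuming a 4K base page): a 64K mTHP is
order 4, so the per-attempt threshold is max_ptes_none >> (HPAGE_PMD_ORDER - 4),
i.e. max_ptes_none >> 5. With the default max_ptes_none=511 that allows 15 of
the 16 PTEs to be none, which is what drives the collapse "creep"; with
max_ptes_none=255 it drops to 7 of 16.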
Signed-off-by: Nico Pache <npache@redhat.com>
---
Documentation/admin-guide/mm/transhuge.rst | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
index dff8d5985f0f..06814e05e1d5 100644
--- a/Documentation/admin-guide/mm/transhuge.rst
+++ b/Documentation/admin-guide/mm/transhuge.rst
@@ -63,7 +63,7 @@ often.
THP can be enabled system wide or restricted to certain tasks or even
memory ranges inside task's address space. Unless THP is completely
disabled, there is ``khugepaged`` daemon that scans memory and
-collapses sequences of basic pages into PMD-sized huge pages.
+collapses sequences of basic pages into huge pages.
The THP behaviour is controlled via :ref:`sysfs <thp_sysfs>`
interface and using madvise(2) and prctl(2) system calls.
@@ -144,6 +144,14 @@ hugepage sizes have enabled="never". If enabling multiple hugepage
sizes, the kernel will select the most appropriate enabled size for a
given allocation.
+khugepaged uses max_ptes_none scaled to the order of the enabled mTHP size to
+determine collapses. When using mTHPs it's recommended to set max_ptes_none
+low, ideally less than HPAGE_PMD_NR / 2 (255 on a 4K page size). This will
+prevent undesired "creep" behavior that leads to continuously collapsing to a
+larger mTHP size. max_ptes_shared and max_ptes_swap have no effect when
+collapsing to an mTHP, and mTHP collapse will fail on shared or swapped-out
+pages.
+
It's also possible to limit defrag efforts in the VM to generate
anonymous hugepages in case they're not immediately free to madvise
regions or to never try to defrag memory and simply fallback to regular
--
2.48.1
^ permalink raw reply related [flat|nested] 34+ messages in thread
* Re: [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse
2025-04-17 0:02 ` [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse Nico Pache
@ 2025-04-23 6:44 ` Baolin Wang
2025-04-23 7:06 ` Nico Pache
0 siblings, 1 reply; 34+ messages in thread
From: Baolin Wang @ 2025-04-23 6:44 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> The khugepaged daemon and madvise_collapse have two different
> implementations that do almost the same thing.
>
> Create khugepaged_collapse_single_pmd to increase code
> reuse and create an entry point for future khugepaged changes.
>
> Refactor madvise_collapse and khugepaged_scan_mm_slot to use
> the new khugepaged_collapse_single_pmd function.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
Can you add a prefix 'khugepaged:' for the subject line?
> ---
> mm/khugepaged.c | 92 ++++++++++++++++++++++++-------------------------
> 1 file changed, 46 insertions(+), 46 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b8838ba8207a..cecadc4239e7 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2363,6 +2363,48 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> }
> #endif
>
> +/*
> + * Try to collapse a single PMD starting at a PMD aligned addr, and return
> + * the results.
> + */
> +static int khugepaged_collapse_single_pmd(unsigned long addr,
> + struct vm_area_struct *vma, bool *mmap_locked,
> + struct collapse_control *cc)
> +{
> + int result = SCAN_FAIL;
> + struct mm_struct *mm = vma->vm_mm;
> + unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
> +
> + if (thp_vma_allowable_order(vma, vma->vm_flags,
> + tva_flags, PMD_ORDER)) {
We've already checked the thp_vma_allowable_order() before calling this
function, why check again?
> + if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
> + struct file *file = get_file(vma->vm_file);
> + pgoff_t pgoff = linear_page_index(vma, addr);
> +
> + mmap_read_unlock(mm);
> + *mmap_locked = false;
> + result = hpage_collapse_scan_file(mm, addr, file, pgoff,
> + cc);
> + fput(file);
> + if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> + mmap_read_lock(mm);
> + if (hpage_collapse_test_exit_or_disable(mm))
> + goto end;
> + result = collapse_pte_mapped_thp(mm, addr,
> + !cc->is_khugepaged);
why drop the following check?
if (*result == SCAN_PMD_MAPPED)
*result = SCAN_SUCCEED;
> + mmap_read_unlock(mm);
> + }
> + } else {
> + result = hpage_collapse_scan_pmd(mm, vma, addr,
> + mmap_locked, cc);
> + }
> + if (cc->is_khugepaged && result == SCAN_SUCCEED)
> + ++khugepaged_pages_collapsed;
> + }
> +end:
> + return result;
> +}
> +
> static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> struct collapse_control *cc)
> __releases(&khugepaged_mm_lock)
> @@ -2437,33 +2479,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> VM_BUG_ON(khugepaged_scan.address < hstart ||
> khugepaged_scan.address + HPAGE_PMD_SIZE >
> hend);
> - if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
> - struct file *file = get_file(vma->vm_file);
> - pgoff_t pgoff = linear_page_index(vma,
> - khugepaged_scan.address);
>
> - mmap_read_unlock(mm);
> - mmap_locked = false;
> - *result = hpage_collapse_scan_file(mm,
> - khugepaged_scan.address, file, pgoff, cc);
> - fput(file);
> - if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
> - mmap_read_lock(mm);
> - if (hpage_collapse_test_exit_or_disable(mm))
> - goto breakouterloop;
> - *result = collapse_pte_mapped_thp(mm,
> - khugepaged_scan.address, false);
> - if (*result == SCAN_PMD_MAPPED)
> - *result = SCAN_SUCCEED;
> - mmap_read_unlock(mm);
> - }
> - } else {
> - *result = hpage_collapse_scan_pmd(mm, vma,
> - khugepaged_scan.address, &mmap_locked, cc);
> - }
> -
> - if (*result == SCAN_SUCCEED)
> - ++khugepaged_pages_collapsed;
> + *result = khugepaged_collapse_single_pmd(khugepaged_scan.address,
> + vma, &mmap_locked, cc);
If the khugepaged_collapse_single_pmd() returns a failure caused by
hpage_collapse_test_exit_or_disable(), we should break out of the loop
according to the original logic. But you've changed the action in this
patch, is this intentional?
>
> /* move to next address */
> khugepaged_scan.address += HPAGE_PMD_SIZE;
> @@ -2783,36 +2801,18 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
> mmap_assert_locked(mm);
> memset(cc->node_load, 0, sizeof(cc->node_load));
> nodes_clear(cc->alloc_nmask);
> - if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
> - struct file *file = get_file(vma->vm_file);
> - pgoff_t pgoff = linear_page_index(vma, addr);
>
> - mmap_read_unlock(mm);
> - mmap_locked = false;
> - result = hpage_collapse_scan_file(mm, addr, file, pgoff,
> - cc);
> - fput(file);
> - } else {
> - result = hpage_collapse_scan_pmd(mm, vma, addr,
> - &mmap_locked, cc);
> - }
> + result = khugepaged_collapse_single_pmd(addr, vma, &mmap_locked, cc);
> +
> if (!mmap_locked)
> *prev = NULL; /* Tell caller we dropped mmap_lock */
>
> -handle_result:
> switch (result) {
> case SCAN_SUCCEED:
> case SCAN_PMD_MAPPED:
> ++thps;
> break;
> case SCAN_PTE_MAPPED_HUGEPAGE:
> - BUG_ON(mmap_locked);
> - BUG_ON(*prev);
> - mmap_read_lock(mm);
> - result = collapse_pte_mapped_thp(mm, addr, true);
> - mmap_read_unlock(mm);
> - goto handle_result;
> - /* Whitelisted set of results where continuing OK */
> case SCAN_PMD_NULL:
> case SCAN_PTE_NON_PRESENT:
> case SCAN_PTE_UFFD_WP:
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v4 02/12] khugepaged: rename hpage_collapse_* to khugepaged_*
2025-04-17 0:02 ` [PATCH v4 02/12] khugepaged: rename hpage_collapse_* to khugepaged_* Nico Pache
@ 2025-04-23 6:49 ` Baolin Wang
0 siblings, 0 replies; 34+ messages in thread
From: Baolin Wang @ 2025-04-23 6:49 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> functions in khugepaged.c use a mix of hpage_collapse and khugepaged
> as the function prefix.
>
> rename all of them to khugepaged to keep things consistent and slightly
> shorten the function names.
Yes, makes sense to me.
> Signed-off-by: Nico Pache <npache@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Nit: this renaming cleanup should be put in patch 1.
> ---
> mm/khugepaged.c | 50 ++++++++++++++++++++++++-------------------------
> 1 file changed, 25 insertions(+), 25 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index cecadc4239e7..b6281c04f1e5 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -402,14 +402,14 @@ void __init khugepaged_destroy(void)
> kmem_cache_destroy(mm_slot_cache);
> }
>
> -static inline int hpage_collapse_test_exit(struct mm_struct *mm)
> +static inline int khugepaged_test_exit(struct mm_struct *mm)
> {
> return atomic_read(&mm->mm_users) == 0;
> }
>
> -static inline int hpage_collapse_test_exit_or_disable(struct mm_struct *mm)
> +static inline int khugepaged_test_exit_or_disable(struct mm_struct *mm)
> {
> - return hpage_collapse_test_exit(mm) ||
> + return khugepaged_test_exit(mm) ||
> test_bit(MMF_DISABLE_THP, &mm->flags);
> }
>
> @@ -444,7 +444,7 @@ void __khugepaged_enter(struct mm_struct *mm)
> int wakeup;
>
> /* __khugepaged_exit() must not run from under us */
> - VM_BUG_ON_MM(hpage_collapse_test_exit(mm), mm);
> + VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
> if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags)))
> return;
>
> @@ -503,7 +503,7 @@ void __khugepaged_exit(struct mm_struct *mm)
> } else if (mm_slot) {
> /*
> * This is required to serialize against
> - * hpage_collapse_test_exit() (which is guaranteed to run
> + * khugepaged_test_exit() (which is guaranteed to run
> * under mmap sem read mode). Stop here (after we return all
> * pagetables will be destroyed) until khugepaged has finished
> * working on the pagetables under the mmap_lock.
> @@ -851,7 +851,7 @@ struct collapse_control khugepaged_collapse_control = {
> .is_khugepaged = true,
> };
>
> -static bool hpage_collapse_scan_abort(int nid, struct collapse_control *cc)
> +static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
> {
> int i;
>
> @@ -886,7 +886,7 @@ static inline gfp_t alloc_hugepage_khugepaged_gfpmask(void)
> }
>
> #ifdef CONFIG_NUMA
> -static int hpage_collapse_find_target_node(struct collapse_control *cc)
> +static int khugepaged_find_target_node(struct collapse_control *cc)
> {
> int nid, target_node = 0, max_value = 0;
>
> @@ -905,7 +905,7 @@ static int hpage_collapse_find_target_node(struct collapse_control *cc)
> return target_node;
> }
> #else
> -static int hpage_collapse_find_target_node(struct collapse_control *cc)
> +static int khugepaged_find_target_node(struct collapse_control *cc)
> {
> return 0;
> }
> @@ -925,7 +925,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> struct vm_area_struct *vma;
> unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
>
> - if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> + if (unlikely(khugepaged_test_exit_or_disable(mm)))
> return SCAN_ANY_PROCESS;
>
> *vmap = vma = find_vma(mm, address);
> @@ -992,7 +992,7 @@ static int check_pmd_still_valid(struct mm_struct *mm,
>
> /*
> * Bring missing pages in from swap, to complete THP collapse.
> - * Only done if hpage_collapse_scan_pmd believes it is worthwhile.
> + * Only done if khugepaged_scan_pmd believes it is worthwhile.
> *
> * Called and returns without pte mapped or spinlocks held.
> * Returns result: if not SCAN_SUCCEED, mmap_lock has been released.
> @@ -1078,7 +1078,7 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> {
> gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> GFP_TRANSHUGE);
> - int node = hpage_collapse_find_target_node(cc);
> + int node = khugepaged_find_target_node(cc);
> struct folio *folio;
>
> folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
> @@ -1264,7 +1264,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> return result;
> }
>
> -static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> +static int khugepaged_scan_pmd(struct mm_struct *mm,
> struct vm_area_struct *vma,
> unsigned long address, bool *mmap_locked,
> struct collapse_control *cc)
> @@ -1378,7 +1378,7 @@ static int hpage_collapse_scan_pmd(struct mm_struct *mm,
> * hit record.
> */
> node = folio_nid(folio);
> - if (hpage_collapse_scan_abort(node, cc)) {
> + if (khugepaged_scan_abort(node, cc)) {
> result = SCAN_SCAN_ABORT;
> goto out_unmap;
> }
> @@ -1447,7 +1447,7 @@ static void collect_mm_slot(struct khugepaged_mm_slot *mm_slot)
>
> lockdep_assert_held(&khugepaged_mm_lock);
>
> - if (hpage_collapse_test_exit(mm)) {
> + if (khugepaged_test_exit(mm)) {
> /* free mm_slot */
> hash_del(&slot->hash);
> list_del(&slot->mm_node);
> @@ -1742,7 +1742,7 @@ static void retract_page_tables(struct address_space *mapping, pgoff_t pgoff)
> if (find_pmd_or_thp_or_none(mm, addr, &pmd) != SCAN_SUCCEED)
> continue;
>
> - if (hpage_collapse_test_exit(mm))
> + if (khugepaged_test_exit(mm))
> continue;
> /*
> * When a vma is registered with uffd-wp, we cannot recycle
> @@ -2264,7 +2264,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> return result;
> }
>
> -static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> +static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
> struct file *file, pgoff_t start,
> struct collapse_control *cc)
> {
> @@ -2309,7 +2309,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> }
>
> node = folio_nid(folio);
> - if (hpage_collapse_scan_abort(node, cc)) {
> + if (khugepaged_scan_abort(node, cc)) {
> result = SCAN_SCAN_ABORT;
> break;
> }
> @@ -2355,7 +2355,7 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> return result;
> }
> #else
> -static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> +static int khugepaged_scan_file(struct mm_struct *mm, unsigned long addr,
> struct file *file, pgoff_t start,
> struct collapse_control *cc)
> {
> @@ -2383,19 +2383,19 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
>
> mmap_read_unlock(mm);
> *mmap_locked = false;
> - result = hpage_collapse_scan_file(mm, addr, file, pgoff,
> + result = khugepaged_scan_file(mm, addr, file, pgoff,
> cc);
> fput(file);
> if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> mmap_read_lock(mm);
> - if (hpage_collapse_test_exit_or_disable(mm))
> + if (khugepaged_test_exit_or_disable(mm))
> goto end;
> result = collapse_pte_mapped_thp(mm, addr,
> !cc->is_khugepaged);
> mmap_read_unlock(mm);
> }
> } else {
> - result = hpage_collapse_scan_pmd(mm, vma, addr,
> + result = khugepaged_scan_pmd(mm, vma, addr,
> mmap_locked, cc);
> }
> if (cc->is_khugepaged && result == SCAN_SUCCEED)
> @@ -2443,7 +2443,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> goto breakouterloop_mmap_lock;
>
> progress++;
> - if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> + if (unlikely(khugepaged_test_exit_or_disable(mm)))
> goto breakouterloop;
>
> vma_iter_init(&vmi, mm, khugepaged_scan.address);
> @@ -2451,7 +2451,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> unsigned long hstart, hend;
>
> cond_resched();
> - if (unlikely(hpage_collapse_test_exit_or_disable(mm))) {
> + if (unlikely(khugepaged_test_exit_or_disable(mm))) {
> progress++;
> break;
> }
> @@ -2473,7 +2473,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> bool mmap_locked = true;
>
> cond_resched();
> - if (unlikely(hpage_collapse_test_exit_or_disable(mm)))
> + if (unlikely(khugepaged_test_exit_or_disable(mm)))
> goto breakouterloop;
>
> VM_BUG_ON(khugepaged_scan.address < hstart ||
> @@ -2509,7 +2509,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> * Release the current mm_slot if this mm is about to die, or
> * if we scanned all vmas of this mm.
> */
> - if (hpage_collapse_test_exit(mm) || !vma) {
> + if (khugepaged_test_exit(mm) || !vma) {
> /*
> * Make sure that if mm_users is reaching zero while
> * khugepaged runs here, khugepaged_exit will find
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v4 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support
2025-04-17 0:02 ` [PATCH v4 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support Nico Pache
@ 2025-04-23 6:55 ` Baolin Wang
0 siblings, 0 replies; 34+ messages in thread
From: Baolin Wang @ 2025-04-23 6:55 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> For khugepaged to support different mTHP orders, we must generalize this
> function for arbitrary orders.
>
> No functional change in this patch.
>
> Co-developed-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Nico Pache <npache@redhat.com>
LGTM.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> mm/khugepaged.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b6281c04f1e5..54d7f43da69c 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -920,7 +920,7 @@ static int khugepaged_find_target_node(struct collapse_control *cc)
> static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> bool expect_anon,
> struct vm_area_struct **vmap,
> - struct collapse_control *cc)
> + struct collapse_control *cc, int order)
> {
> struct vm_area_struct *vma;
> unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
> @@ -932,9 +932,9 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> if (!vma)
> return SCAN_VMA_NULL;
>
> - if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
> + if (!thp_vma_suitable_order(vma, address, order))
> return SCAN_ADDRESS_RANGE;
> - if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
> + if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, order))
> return SCAN_VMA_CHECK;
> /*
> * Anon VMA expected, the address may be unmapped then
> @@ -1130,7 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> goto out_nolock;
>
> mmap_read_lock(mm);
> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED) {
> mmap_read_unlock(mm);
> goto out_nolock;
> @@ -1164,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * mmap_lock.
> */
> mmap_write_lock(mm);
> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> goto out_up_write;
> /* check if the pmd is still valid */
> @@ -2790,7 +2790,7 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
> mmap_read_lock(mm);
> mmap_locked = true;
> result = hugepage_vma_revalidate(mm, addr, false, &vma,
> - cc);
> + cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED) {
> last_fail = result;
> goto out_nolock;
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse
2025-04-23 6:44 ` Baolin Wang
@ 2025-04-23 7:06 ` Nico Pache
0 siblings, 0 replies; 34+ messages in thread
From: Nico Pache @ 2025-04-23 7:06 UTC (permalink / raw)
To: Baolin Wang
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On Wed, Apr 23, 2025 at 12:44 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/4/17 08:02, Nico Pache wrote:
> > The khugepaged daemon and madvise_collapse have two different
> > implementations that do almost the same thing.
> >
> > Create khugepaged_collapse_single_pmd to increase code
> > reuse and create an entry point for future khugepaged changes.
> >
> > Refactor madvise_collapse and khugepaged_scan_mm_slot to use
> > the new khugepaged_collapse_single_pmd function.
> >
> > Signed-off-by: Nico Pache <npache@redhat.com>
>
> Can you add a prefix 'khugepaged:' for the subject line?
I had that originally but the subject line is already extremely long.
>
> > ---
> > mm/khugepaged.c | 92 ++++++++++++++++++++++++-------------------------
> > 1 file changed, 46 insertions(+), 46 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index b8838ba8207a..cecadc4239e7 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -2363,6 +2363,48 @@ static int hpage_collapse_scan_file(struct mm_struct *mm, unsigned long addr,
> > }
> > #endif
> >
> > +/*
> > + * Try to collapse a single PMD starting at a PMD aligned addr, and return
> > + * the results.
> > + */
> > +static int khugepaged_collapse_single_pmd(unsigned long addr,
> > + struct vm_area_struct *vma, bool *mmap_locked,
> > + struct collapse_control *cc)
> > +{
> > + int result = SCAN_FAIL;
> > + struct mm_struct *mm = vma->vm_mm;
> > + unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
> > +
> > + if (thp_vma_allowable_order(vma, vma->vm_flags,
> > + tva_flags, PMD_ORDER)) {
>
> We've already checked the thp_vma_allowable_order() before calling this
> function, why check again?
>
> > + if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
> > + struct file *file = get_file(vma->vm_file);
> > + pgoff_t pgoff = linear_page_index(vma, addr);
> > +
> > + mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > + result = hpage_collapse_scan_file(mm, addr, file, pgoff,
> > + cc);
> > + fput(file);
> > + if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> > + mmap_read_lock(mm);
> > + if (hpage_collapse_test_exit_or_disable(mm))
> > + goto end;
> > + result = collapse_pte_mapped_thp(mm, addr,
> > + !cc->is_khugepaged);
>
> why drop the following check?
> if (*result == SCAN_PMD_MAPPED)
> *result = SCAN_SUCCEED;
Good catch! When generalizing this for madvise_collapse I forgot to
properly handle the khugepaged case of PMD_MAPPED==SUCCEED.
>
> > + mmap_read_unlock(mm);
> > + }
> > + } else {
> > + result = hpage_collapse_scan_pmd(mm, vma, addr,
> > + mmap_locked, cc);
> > + }
> > + if (cc->is_khugepaged && result == SCAN_SUCCEED)
> > + ++khugepaged_pages_collapsed;
> > + }
> > +end:
> > + return result;
> > +}
> > +
> > static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> > struct collapse_control *cc)
> > __releases(&khugepaged_mm_lock)
> > @@ -2437,33 +2479,9 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
> > VM_BUG_ON(khugepaged_scan.address < hstart ||
> > khugepaged_scan.address + HPAGE_PMD_SIZE >
> > hend);
> > - if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
> > - struct file *file = get_file(vma->vm_file);
> > - pgoff_t pgoff = linear_page_index(vma,
> > - khugepaged_scan.address);
> >
> > - mmap_read_unlock(mm);
> > - mmap_locked = false;
> > - *result = hpage_collapse_scan_file(mm,
> > - khugepaged_scan.address, file, pgoff, cc);
> > - fput(file);
> > - if (*result == SCAN_PTE_MAPPED_HUGEPAGE) {
> > - mmap_read_lock(mm);
> > - if (hpage_collapse_test_exit_or_disable(mm))
> > - goto breakouterloop;
> > - *result = collapse_pte_mapped_thp(mm,
> > - khugepaged_scan.address, false);
> > - if (*result == SCAN_PMD_MAPPED)
> > - *result = SCAN_SUCCEED;
> > - mmap_read_unlock(mm);
> > - }
> > - } else {
> > - *result = hpage_collapse_scan_pmd(mm, vma,
> > - khugepaged_scan.address, &mmap_locked, cc);
> > - }
> > -
> > - if (*result == SCAN_SUCCEED)
> > - ++khugepaged_pages_collapsed;
> > + *result = khugepaged_collapse_single_pmd(khugepaged_scan.address,
> > + vma, &mmap_locked, cc);
>
> If the khugepaged_collapse_single_pmd() returns a failure caused by
> hpage_collapse_test_exit_or_disable(), we should break out of the loop
> according to the original logic. But you've changed the action in this
> patch, is this intentional?
Nope, not intentional! Thanks for pointing that out. I'll get that fixed!
Thanks for the in-depth review! I'll work on cleaning up these corner cases.
>
> >
> > /* move to next address */
> > khugepaged_scan.address += HPAGE_PMD_SIZE;
> > @@ -2783,36 +2801,18 @@ int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev,
> > mmap_assert_locked(mm);
> > memset(cc->node_load, 0, sizeof(cc->node_load));
> > nodes_clear(cc->alloc_nmask);
> > - if (IS_ENABLED(CONFIG_SHMEM) && !vma_is_anonymous(vma)) {
> > - struct file *file = get_file(vma->vm_file);
> > - pgoff_t pgoff = linear_page_index(vma, addr);
> >
> > - mmap_read_unlock(mm);
> > - mmap_locked = false;
> > - result = hpage_collapse_scan_file(mm, addr, file, pgoff,
> > - cc);
> > - fput(file);
> > - } else {
> > - result = hpage_collapse_scan_pmd(mm, vma, addr,
> > - &mmap_locked, cc);
> > - }
> > + result = khugepaged_collapse_single_pmd(addr, vma, &mmap_locked, cc);
> > +
> > if (!mmap_locked)
> > *prev = NULL; /* Tell caller we dropped mmap_lock */
> >
> > -handle_result:
> > switch (result) {
> > case SCAN_SUCCEED:
> > case SCAN_PMD_MAPPED:
> > ++thps;
> > break;
> > case SCAN_PTE_MAPPED_HUGEPAGE:
> > - BUG_ON(mmap_locked);
> > - BUG_ON(*prev);
> > - mmap_read_lock(mm);
> > - result = collapse_pte_mapped_thp(mm, addr, true);
> > - mmap_read_unlock(mm);
> > - goto handle_result;
> > - /* Whitelisted set of results where continuing OK */
> > case SCAN_PMD_NULL:
> > case SCAN_PTE_NON_PRESENT:
> > case SCAN_PTE_UFFD_WP:
>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v4 04/12] khugepaged: generalize alloc_charge_folio()
2025-04-17 0:02 ` [PATCH v4 04/12] khugepaged: generalize alloc_charge_folio() Nico Pache
@ 2025-04-23 7:06 ` Baolin Wang
0 siblings, 0 replies; 34+ messages in thread
From: Baolin Wang @ 2025-04-23 7:06 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> From: Dev Jain <dev.jain@arm.com>
>
> Pass order to alloc_charge_folio() and update mTHP statistics.
>
> Co-developed-by: Nico Pache <npache@redhat.com>
> Signed-off-by: Nico Pache <npache@redhat.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
LGTM.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> include/linux/huge_mm.h | 2 ++
> mm/huge_memory.c | 4 ++++
> mm/khugepaged.c | 17 +++++++++++------
> 3 files changed, 17 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index f190998b2ebd..55b242335420 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -123,6 +123,8 @@ enum mthp_stat_item {
> MTHP_STAT_ANON_FAULT_ALLOC,
> MTHP_STAT_ANON_FAULT_FALLBACK,
> MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE,
> + MTHP_STAT_COLLAPSE_ALLOC,
> + MTHP_STAT_COLLAPSE_ALLOC_FAILED,
> MTHP_STAT_ZSWPOUT,
> MTHP_STAT_SWPIN,
> MTHP_STAT_SWPIN_FALLBACK,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index e97a97586478..7798c9284533 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -615,6 +615,8 @@ static struct kobj_attribute _name##_attr = __ATTR_RO(_name)
> DEFINE_MTHP_STAT_ATTR(anon_fault_alloc, MTHP_STAT_ANON_FAULT_ALLOC);
> DEFINE_MTHP_STAT_ATTR(anon_fault_fallback, MTHP_STAT_ANON_FAULT_FALLBACK);
> DEFINE_MTHP_STAT_ATTR(anon_fault_fallback_charge, MTHP_STAT_ANON_FAULT_FALLBACK_CHARGE);
> +DEFINE_MTHP_STAT_ATTR(collapse_alloc, MTHP_STAT_COLLAPSE_ALLOC);
> +DEFINE_MTHP_STAT_ATTR(collapse_alloc_failed, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
> DEFINE_MTHP_STAT_ATTR(zswpout, MTHP_STAT_ZSWPOUT);
> DEFINE_MTHP_STAT_ATTR(swpin, MTHP_STAT_SWPIN);
> DEFINE_MTHP_STAT_ATTR(swpin_fallback, MTHP_STAT_SWPIN_FALLBACK);
> @@ -680,6 +682,8 @@ static struct attribute *any_stats_attrs[] = {
> #endif
> &split_attr.attr,
> &split_failed_attr.attr,
> + &collapse_alloc_attr.attr,
> + &collapse_alloc_failed_attr.attr,
> NULL,
> };
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 54d7f43da69c..883e9a46359f 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1074,21 +1074,26 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> }
>
> static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> - struct collapse_control *cc)
> + struct collapse_control *cc, u8 order)
> {
> gfp_t gfp = (cc->is_khugepaged ? alloc_hugepage_khugepaged_gfpmask() :
> GFP_TRANSHUGE);
> int node = khugepaged_find_target_node(cc);
> struct folio *folio;
>
> - folio = __folio_alloc(gfp, HPAGE_PMD_ORDER, node, &cc->alloc_nmask);
> + folio = __folio_alloc(gfp, order, node, &cc->alloc_nmask);
> if (!folio) {
> *foliop = NULL;
> - count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
> + if (order == HPAGE_PMD_ORDER)
> + count_vm_event(THP_COLLAPSE_ALLOC_FAILED);
> + count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC_FAILED);
> return SCAN_ALLOC_HUGE_PAGE_FAIL;
> }
>
> - count_vm_event(THP_COLLAPSE_ALLOC);
> + if (order == HPAGE_PMD_ORDER)
> + count_vm_event(THP_COLLAPSE_ALLOC);
> + count_mthp_stat(order, MTHP_STAT_COLLAPSE_ALLOC);
> +
> if (unlikely(mem_cgroup_charge(folio, mm, gfp))) {
Nit: while we are at it, why not also add an
'MTHP_STAT_COLLAPSE_CHARGE_FAILED' counter, matching what anonymous and
shmem mTHP allocation already have?
> folio_put(folio);
> *foliop = NULL;
> @@ -1125,7 +1130,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> */
> mmap_read_unlock(mm);
>
> - result = alloc_charge_folio(&folio, mm, cc);
> + result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> goto out_nolock;
>
> @@ -1849,7 +1854,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
> VM_BUG_ON(!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && !is_shmem);
> VM_BUG_ON(start & (HPAGE_PMD_NR - 1));
>
> - result = alloc_charge_folio(&new_folio, mm, cc);
> + result = alloc_charge_folio(&new_folio, mm, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> goto out;
>
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support
2025-04-17 0:02 ` [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support Nico Pache
@ 2025-04-23 7:30 ` Baolin Wang
2025-04-23 8:00 ` Nico Pache
0 siblings, 1 reply; 34+ messages in thread
From: Baolin Wang @ 2025-04-23 7:30 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> generalize the order of the __collapse_huge_page_* functions
> to support future mTHP collapse.
>
> mTHP collapse can suffer from incosistant behavior, and memory waste
> "creep". disable swapin and shared support for mTHP collapse.
>
> No functional changes in this patch.
>
> Co-developed-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> mm/khugepaged.c | 46 ++++++++++++++++++++++++++++------------------
> 1 file changed, 28 insertions(+), 18 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 883e9a46359f..5e9272ab82da 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -565,15 +565,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> unsigned long address,
> pte_t *pte,
> struct collapse_control *cc,
> - struct list_head *compound_pagelist)
> + struct list_head *compound_pagelist,
> + u8 order)
> {
> struct page *page = NULL;
> struct folio *folio = NULL;
> pte_t *_pte;
> int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
> bool writable = false;
> + int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
>
> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> + for (_pte = pte; _pte < pte + (1 << order);
> _pte++, address += PAGE_SIZE) {
> pte_t pteval = ptep_get(_pte);
> if (pte_none(pteval) || (pte_present(pteval) &&
> @@ -581,7 +583,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> ++none_or_zero;
> if (!userfaultfd_armed(vma) &&
> (!cc->is_khugepaged ||
> - none_or_zero <= khugepaged_max_ptes_none)) {
> + none_or_zero <= scaled_none)) {
> continue;
> } else {
> result = SCAN_EXCEED_NONE_PTE;
> @@ -609,8 +611,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> /* See hpage_collapse_scan_pmd(). */
> if (folio_maybe_mapped_shared(folio)) {
> ++shared;
> - if (cc->is_khugepaged &&
> - shared > khugepaged_max_ptes_shared) {
> + if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
> + shared > khugepaged_max_ptes_shared)) {
> result = SCAN_EXCEED_SHARED_PTE;
> count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
> goto out;
> @@ -711,13 +713,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
> struct vm_area_struct *vma,
> unsigned long address,
> spinlock_t *ptl,
> - struct list_head *compound_pagelist)
> + struct list_head *compound_pagelist,
> + u8 order)
> {
> struct folio *src, *tmp;
> pte_t *_pte;
> pte_t pteval;
>
> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> + for (_pte = pte; _pte < pte + (1 << order);
> _pte++, address += PAGE_SIZE) {
> pteval = ptep_get(_pte);
> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> @@ -764,7 +767,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> pmd_t *pmd,
> pmd_t orig_pmd,
> struct vm_area_struct *vma,
> - struct list_head *compound_pagelist)
> + struct list_head *compound_pagelist,
> + u8 order)
> {
> spinlock_t *pmd_ptl;
>
> @@ -781,7 +785,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> * Release both raw and compound pages isolated
> * in __collapse_huge_page_isolate.
> */
> - release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
> + release_pte_pages(pte, pte + (1 << order), compound_pagelist);
> }
>
> /*
> @@ -802,7 +806,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
> unsigned long address, spinlock_t *ptl,
> - struct list_head *compound_pagelist)
> + struct list_head *compound_pagelist, u8 order)
> {
> unsigned int i;
> int result = SCAN_SUCCEED;
> @@ -810,7 +814,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> /*
> * Copying pages' contents is subject to memory poison at any iteration.
> */
> - for (i = 0; i < HPAGE_PMD_NR; i++) {
> + for (i = 0; i < (1 << order); i++) {
> pte_t pteval = ptep_get(pte + i);
> struct page *page = folio_page(folio, i);
> unsigned long src_addr = address + i * PAGE_SIZE;
> @@ -829,10 +833,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>
> if (likely(result == SCAN_SUCCEED))
> __collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
> - compound_pagelist);
> + compound_pagelist, order);
> else
> __collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
> - compound_pagelist);
> + compound_pagelist, order);
>
> return result;
> }
> @@ -1000,11 +1004,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
> static int __collapse_huge_page_swapin(struct mm_struct *mm,
> struct vm_area_struct *vma,
> unsigned long haddr, pmd_t *pmd,
> - int referenced)
> + int referenced, u8 order)
> {
> int swapped_in = 0;
> vm_fault_t ret = 0;
> - unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
> + unsigned long address, end = haddr + (PAGE_SIZE << order);
> int result;
> pte_t *pte = NULL;
> spinlock_t *ptl;
> @@ -1035,6 +1039,12 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> if (!is_swap_pte(vmf.orig_pte))
> continue;
>
> + /* Dont swapin for mTHP collapse */
> + if (order != HPAGE_PMD_ORDER) {
> + result = SCAN_EXCEED_SWAP_PTE;
> + goto out;
> + }
IMO, this check should move into hpage_collapse_scan_pmd(), that means
if we scan the swap ptes for mTHP collapse, then we can return
'SCAN_EXCEED_SWAP_PTE' to abort the collapse earlier.
The logic is the same as how you handle the shared ptes for mTHP.
> vmf.pte = pte;
> vmf.ptl = ptl;
> ret = do_swap_page(&vmf);
> @@ -1154,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * that case. Continuing to collapse causes inconsistency.
> */
> result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> - referenced);
> + referenced, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> goto out_nolock;
> }
> @@ -1201,7 +1211,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> if (pte) {
> result = __collapse_huge_page_isolate(vma, address, pte, cc,
> - &compound_pagelist);
> + &compound_pagelist, HPAGE_PMD_ORDER);
> spin_unlock(pte_ptl);
> } else {
> result = SCAN_PMD_NULL;
> @@ -1231,7 +1241,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
>
> result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> vma, address, pte_ptl,
> - &compound_pagelist);
> + &compound_pagelist, HPAGE_PMD_ORDER);
> pte_unmap(pte);
> if (unlikely(result != SCAN_SUCCEED))
> goto out_up_write;
^ permalink raw reply [flat|nested] 34+ messages in thread
* Re: [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support
2025-04-23 7:30 ` Baolin Wang
@ 2025-04-23 8:00 ` Nico Pache
2025-04-23 8:25 ` Baolin Wang
0 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-23 8:00 UTC (permalink / raw)
To: Baolin Wang
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On Wed, Apr 23, 2025 at 1:30 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/4/17 08:02, Nico Pache wrote:
> > generalize the order of the __collapse_huge_page_* functions
> > to support future mTHP collapse.
> >
> > mTHP collapse can suffer from incosistant behavior, and memory waste
> > "creep". disable swapin and shared support for mTHP collapse.
> >
> > No functional changes in this patch.
> >
> > Co-developed-by: Dev Jain <dev.jain@arm.com>
> > Signed-off-by: Dev Jain <dev.jain@arm.com>
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> > mm/khugepaged.c | 46 ++++++++++++++++++++++++++++------------------
> > 1 file changed, 28 insertions(+), 18 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 883e9a46359f..5e9272ab82da 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -565,15 +565,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > unsigned long address,
> > pte_t *pte,
> > struct collapse_control *cc,
> > - struct list_head *compound_pagelist)
> > + struct list_head *compound_pagelist,
> > + u8 order)
> > {
> > struct page *page = NULL;
> > struct folio *folio = NULL;
> > pte_t *_pte;
> > int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
> > bool writable = false;
> > + int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
> >
> > - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > + for (_pte = pte; _pte < pte + (1 << order);
> > _pte++, address += PAGE_SIZE) {
> > pte_t pteval = ptep_get(_pte);
> > if (pte_none(pteval) || (pte_present(pteval) &&
> > @@ -581,7 +583,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > ++none_or_zero;
> > if (!userfaultfd_armed(vma) &&
> > (!cc->is_khugepaged ||
> > - none_or_zero <= khugepaged_max_ptes_none)) {
> > + none_or_zero <= scaled_none)) {
> > continue;
> > } else {
> > result = SCAN_EXCEED_NONE_PTE;
> > @@ -609,8 +611,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > /* See hpage_collapse_scan_pmd(). */
> > if (folio_maybe_mapped_shared(folio)) {
> > ++shared;
> > - if (cc->is_khugepaged &&
> > - shared > khugepaged_max_ptes_shared) {
> > + if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
> > + shared > khugepaged_max_ptes_shared)) {
> > result = SCAN_EXCEED_SHARED_PTE;
> > count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
> > goto out;
> > @@ -711,13 +713,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
> > struct vm_area_struct *vma,
> > unsigned long address,
> > spinlock_t *ptl,
> > - struct list_head *compound_pagelist)
> > + struct list_head *compound_pagelist,
> > + u8 order)
> > {
> > struct folio *src, *tmp;
> > pte_t *_pte;
> > pte_t pteval;
> >
> > - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > + for (_pte = pte; _pte < pte + (1 << order);
> > _pte++, address += PAGE_SIZE) {
> > pteval = ptep_get(_pte);
> > if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > @@ -764,7 +767,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> > pmd_t *pmd,
> > pmd_t orig_pmd,
> > struct vm_area_struct *vma,
> > - struct list_head *compound_pagelist)
> > + struct list_head *compound_pagelist,
> > + u8 order)
> > {
> > spinlock_t *pmd_ptl;
> >
> > @@ -781,7 +785,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> > * Release both raw and compound pages isolated
> > * in __collapse_huge_page_isolate.
> > */
> > - release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
> > + release_pte_pages(pte, pte + (1 << order), compound_pagelist);
> > }
> >
> > /*
> > @@ -802,7 +806,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> > static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> > pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
> > unsigned long address, spinlock_t *ptl,
> > - struct list_head *compound_pagelist)
> > + struct list_head *compound_pagelist, u8 order)
> > {
> > unsigned int i;
> > int result = SCAN_SUCCEED;
> > @@ -810,7 +814,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> > /*
> > * Copying pages' contents is subject to memory poison at any iteration.
> > */
> > - for (i = 0; i < HPAGE_PMD_NR; i++) {
> > + for (i = 0; i < (1 << order); i++) {
> > pte_t pteval = ptep_get(pte + i);
> > struct page *page = folio_page(folio, i);
> > unsigned long src_addr = address + i * PAGE_SIZE;
> > @@ -829,10 +833,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> >
> > if (likely(result == SCAN_SUCCEED))
> > __collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
> > - compound_pagelist);
> > + compound_pagelist, order);
> > else
> > __collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
> > - compound_pagelist);
> > + compound_pagelist, order);
> >
> > return result;
> > }
> > @@ -1000,11 +1004,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
> > static int __collapse_huge_page_swapin(struct mm_struct *mm,
> > struct vm_area_struct *vma,
> > unsigned long haddr, pmd_t *pmd,
> > - int referenced)
> > + int referenced, u8 order)
> > {
> > int swapped_in = 0;
> > vm_fault_t ret = 0;
> > - unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
> > + unsigned long address, end = haddr + (PAGE_SIZE << order);
> > int result;
> > pte_t *pte = NULL;
> > spinlock_t *ptl;
> > @@ -1035,6 +1039,12 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> > if (!is_swap_pte(vmf.orig_pte))
> > continue;
> >
> > + /* Dont swapin for mTHP collapse */
> > + if (order != HPAGE_PMD_ORDER) {
> > + result = SCAN_EXCEED_SWAP_PTE;
> > + goto out;
> > + }
>
> IMO, this check should move into hpage_collapse_scan_pmd(), that means
> if we scan the swap ptes for mTHP collapse, then we can return
> 'SCAN_EXCEED_SWAP_PTE' to abort the collapse earlier.
I don't think this is correct. We currently abort if the global
max_swap_ptes or max_shared_ptes is exceeded during the PMD scan.
However, if those pass (and we don't collapse at the PMD level), we will
continue on to mTHP collapses. Then, during the isolate function, we check
for shared ptes in this specific mTHP range and abort if there is a
shared pte. For swap we only know that some pages in the PMD are
unmapped, but we aren't sure which, so we have to try to fault in the
PTEs, and if one is a swap pte and we are doing an mTHP collapse, we abort
the collapse attempt. So having swap/shared PTEs in the PMD scan does
not indicate that ALL mTHP collapses will fail, but some will.
This may make more sense as you continue to review the series!
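If it helps to see the split, here is a rough userspace sketch of the policy
I am describing (not the kernel code; the struct, helpers and example limits
below are made up for illustration): the PMD scan only enforces the global
swap/shared limits, while each individual mTHP attempt bails out as soon as
its own sub-range hits a swap or shared PTE.

/*
 * Toy model only: scan_pmd() mirrors the global-limit check done during
 * the PMD scan, try_mthp_range() mirrors the per-range abort done while
 * isolating/faulting for one mTHP candidate.
 */
#include <stdbool.h>
#include <stdio.h>

enum scan_result { SCAN_SUCCEED, SCAN_EXCEED_SWAP_PTE, SCAN_EXCEED_SHARED_PTE };

struct pte_info { bool is_swap; bool is_shared; };

static enum scan_result scan_pmd(const struct pte_info *ptes, int nr,
                                 int max_swap, int max_shared)
{
        int swap = 0, shared = 0;

        for (int i = 0; i < nr; i++) {
                if (ptes[i].is_swap && ++swap > max_swap)
                        return SCAN_EXCEED_SWAP_PTE;
                if (ptes[i].is_shared && ++shared > max_shared)
                        return SCAN_EXCEED_SHARED_PTE;
        }
        return SCAN_SUCCEED;
}

static enum scan_result try_mthp_range(const struct pte_info *ptes,
                                       int offset, int nr)
{
        for (int i = offset; i < offset + nr; i++) {
                if (ptes[i].is_swap)
                        return SCAN_EXCEED_SWAP_PTE;
                if (ptes[i].is_shared)
                        return SCAN_EXCEED_SHARED_PTE;
        }
        return SCAN_SUCCEED;
}

int main(void)
{
        struct pte_info ptes[512] = { 0 };

        ptes[300].is_swap = true;               /* one swapped-out PTE */

        /* arbitrary example limits, not the real sysfs defaults */
        printf("pmd scan:        %d\n", scan_pmd(ptes, 512, 64, 8));
        printf("range [0,16):    %d\n", try_mthp_range(ptes, 0, 16));
        printf("range [288,304): %d\n", try_mthp_range(ptes, 288, 16));
        return 0;
}

The PMD scan still passes the global limits, the first mTHP range can still
collapse, and only the range that actually contains the swap PTE reports
SCAN_EXCEED_SWAP_PTE.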
>
> The logic is the same as how you handle the shared ptes for mTHP.
> > vmf.pte = pte;
> > vmf.ptl = ptl;
> > ret = do_swap_page(&vmf);
> > @@ -1154,7 +1164,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * that case. Continuing to collapse causes inconsistency.
> > */
> > result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> > - referenced);
> > + referenced, HPAGE_PMD_ORDER);
> > if (result != SCAN_SUCCEED)
> > goto out_nolock;
> > }
> > @@ -1201,7 +1211,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> > if (pte) {
> > result = __collapse_huge_page_isolate(vma, address, pte, cc,
> > - &compound_pagelist);
> > + &compound_pagelist, HPAGE_PMD_ORDER);
> > spin_unlock(pte_ptl);
> > } else {
> > result = SCAN_PMD_NULL;
> > @@ -1231,7 +1241,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >
> > result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> > vma, address, pte_ptl,
> > - &compound_pagelist);
> > + &compound_pagelist, HPAGE_PMD_ORDER);
> > pte_unmap(pte);
> > if (unlikely(result != SCAN_SUCCEED))
> > goto out_up_write;
>
* Re: [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support
2025-04-23 8:00 ` Nico Pache
@ 2025-04-23 8:25 ` Baolin Wang
0 siblings, 0 replies; 34+ messages in thread
From: Baolin Wang @ 2025-04-23 8:25 UTC (permalink / raw)
To: Nico Pache
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/23 16:00, Nico Pache wrote:
> On Wed, Apr 23, 2025 at 1:30 AM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>>
>>
>> On 2025/4/17 08:02, Nico Pache wrote:
>>> generalize the order of the __collapse_huge_page_* functions
>>> to support future mTHP collapse.
>>>
>>> mTHP collapse can suffer from inconsistent behavior and memory waste
>>> "creep". Disable swapin and shared support for mTHP collapse.
>>>
>>> No functional changes in this patch.
>>>
>>> Co-developed-by: Dev Jain <dev.jain@arm.com>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> Signed-off-by: Nico Pache <npache@redhat.com>
>>> ---
>>> mm/khugepaged.c | 46 ++++++++++++++++++++++++++++------------------
>>> 1 file changed, 28 insertions(+), 18 deletions(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index 883e9a46359f..5e9272ab82da 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -565,15 +565,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>> unsigned long address,
>>> pte_t *pte,
>>> struct collapse_control *cc,
>>> - struct list_head *compound_pagelist)
>>> + struct list_head *compound_pagelist,
>>> + u8 order)
>>> {
>>> struct page *page = NULL;
>>> struct folio *folio = NULL;
>>> pte_t *_pte;
>>> int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
>>> bool writable = false;
>>> + int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
>>>
>>> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>>> + for (_pte = pte; _pte < pte + (1 << order);
>>> _pte++, address += PAGE_SIZE) {
>>> pte_t pteval = ptep_get(_pte);
>>> if (pte_none(pteval) || (pte_present(pteval) &&
>>> @@ -581,7 +583,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>> ++none_or_zero;
>>> if (!userfaultfd_armed(vma) &&
>>> (!cc->is_khugepaged ||
>>> - none_or_zero <= khugepaged_max_ptes_none)) {
>>> + none_or_zero <= scaled_none)) {
>>> continue;
>>> } else {
>>> result = SCAN_EXCEED_NONE_PTE;
>>> @@ -609,8 +611,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>> /* See hpage_collapse_scan_pmd(). */
>>> if (folio_maybe_mapped_shared(folio)) {
>>> ++shared;
>>> - if (cc->is_khugepaged &&
>>> - shared > khugepaged_max_ptes_shared) {
>>> + if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
>>> + shared > khugepaged_max_ptes_shared)) {
>>> result = SCAN_EXCEED_SHARED_PTE;
>>> count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
>>> goto out;
>>> @@ -711,13 +713,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
>>> struct vm_area_struct *vma,
>>> unsigned long address,
>>> spinlock_t *ptl,
>>> - struct list_head *compound_pagelist)
>>> + struct list_head *compound_pagelist,
>>> + u8 order)
>>> {
>>> struct folio *src, *tmp;
>>> pte_t *_pte;
>>> pte_t pteval;
>>>
>>> - for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
>>> + for (_pte = pte; _pte < pte + (1 << order);
>>> _pte++, address += PAGE_SIZE) {
>>> pteval = ptep_get(_pte);
>>> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
>>> @@ -764,7 +767,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
>>> pmd_t *pmd,
>>> pmd_t orig_pmd,
>>> struct vm_area_struct *vma,
>>> - struct list_head *compound_pagelist)
>>> + struct list_head *compound_pagelist,
>>> + u8 order)
>>> {
>>> spinlock_t *pmd_ptl;
>>>
>>> @@ -781,7 +785,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
>>> * Release both raw and compound pages isolated
>>> * in __collapse_huge_page_isolate.
>>> */
>>> - release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
>>> + release_pte_pages(pte, pte + (1 << order), compound_pagelist);
>>> }
>>>
>>> /*
>>> @@ -802,7 +806,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
>>> static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>>> pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
>>> unsigned long address, spinlock_t *ptl,
>>> - struct list_head *compound_pagelist)
>>> + struct list_head *compound_pagelist, u8 order)
>>> {
>>> unsigned int i;
>>> int result = SCAN_SUCCEED;
>>> @@ -810,7 +814,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>>> /*
>>> * Copying pages' contents is subject to memory poison at any iteration.
>>> */
>>> - for (i = 0; i < HPAGE_PMD_NR; i++) {
>>> + for (i = 0; i < (1 << order); i++) {
>>> pte_t pteval = ptep_get(pte + i);
>>> struct page *page = folio_page(folio, i);
>>> unsigned long src_addr = address + i * PAGE_SIZE;
>>> @@ -829,10 +833,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
>>>
>>> if (likely(result == SCAN_SUCCEED))
>>> __collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
>>> - compound_pagelist);
>>> + compound_pagelist, order);
>>> else
>>> __collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
>>> - compound_pagelist);
>>> + compound_pagelist, order);
>>>
>>> return result;
>>> }
>>> @@ -1000,11 +1004,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
>>> static int __collapse_huge_page_swapin(struct mm_struct *mm,
>>> struct vm_area_struct *vma,
>>> unsigned long haddr, pmd_t *pmd,
>>> - int referenced)
>>> + int referenced, u8 order)
>>> {
>>> int swapped_in = 0;
>>> vm_fault_t ret = 0;
>>> - unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
>>> + unsigned long address, end = haddr + (PAGE_SIZE << order);
>>> int result;
>>> pte_t *pte = NULL;
>>> spinlock_t *ptl;
>>> @@ -1035,6 +1039,12 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
>>> if (!is_swap_pte(vmf.orig_pte))
>>> continue;
>>>
>>> + /* Dont swapin for mTHP collapse */
>>> + if (order != HPAGE_PMD_ORDER) {
>>> + result = SCAN_EXCEED_SWAP_PTE;
>>> + goto out;
>>> + }
>>
>> IMO, this check should move into hpage_collapse_scan_pmd(), that means
>> if we scan the swap ptes for mTHP collapse, then we can return
>> 'SCAN_EXCEED_SWAP_PTE' to abort the collapse earlier.
> I don't think this is correct. We currently abort if the global
> max_swap_ptes or max_shared_ptes limit is exceeded during the PMD scan.
> However, if those checks pass (and we don't collapse at the PMD level),
> we continue on to mTHP collapses. Then, during the isolate function, we
> check for shared PTEs in that specific mTHP range and abort if any are
> found. For swap we only know that some pages in the PMD are unmapped,
> but we aren't sure which ones, so we have to try to fault in the PTEs;
> if one turns out to be a swap PTE and we are doing an mTHP collapse, we
> abort that collapse attempt. So having swap/shared PTEs in the PMD scan
> does not indicate that ALL mTHP collapses will fail, only that some will.
Yes, you are right! I misread the code (I thought the changes were in
hpage_collapse_scan_pmd()). Sorry for the noise. Feel free to add:
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
* Re: [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders
2025-04-17 0:02 ` [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders Nico Pache
@ 2025-04-24 7:48 ` Baolin Wang
2025-04-28 15:44 ` Nico Pache
0 siblings, 1 reply; 34+ messages in thread
From: Baolin Wang @ 2025-04-24 7:48 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> khugepaged may try to collapse a mTHP to a smaller mTHP, resulting in
> some pages being unmapped. Skip these cases until we have a way to check
> if its ok to collapse to a smaller mTHP size (like in the case of a
> partially mapped folio).
>
> This patch is inspired by Dev Jain's work on khugepaged mTHP support [1].
>
> [1] https://lore.kernel.org/lkml/20241216165105.56185-11-dev.jain@arm.com/
>
> Co-developed-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> mm/khugepaged.c | 7 ++++++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index ece39fd71fe6..383aff12cd43 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -625,7 +625,12 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> folio = page_folio(page);
> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>
> - /* See hpage_collapse_scan_pmd(). */
> + if (order != HPAGE_PMD_ORDER && folio_order(folio) >= order) {
> + result = SCAN_PTE_MAPPED_HUGEPAGE;
> + goto out;
> + }
Should we also add this check in hpage_collapse_scan_pmd() to abort the
scan early?
* Re: [PATCH v4 10/12] khugepaged: improve tracepoints for mTHP orders
2025-04-17 0:02 ` [PATCH v4 10/12] khugepaged: improve tracepoints for mTHP orders Nico Pache
@ 2025-04-24 7:51 ` Baolin Wang
0 siblings, 0 replies; 34+ messages in thread
From: Baolin Wang @ 2025-04-24 7:51 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> Add the order to the tracepoints to give better insight into what order
> is being operated at for khugepaged.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
LGTM.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
> include/trace/events/huge_memory.h | 34 +++++++++++++++++++-----------
> mm/khugepaged.c | 10 +++++----
> 2 files changed, 28 insertions(+), 16 deletions(-)
>
> diff --git a/include/trace/events/huge_memory.h b/include/trace/events/huge_memory.h
> index 9d5c00b0285c..ea2fe20a39f5 100644
> --- a/include/trace/events/huge_memory.h
> +++ b/include/trace/events/huge_memory.h
> @@ -92,34 +92,37 @@ TRACE_EVENT(mm_khugepaged_scan_pmd,
>
> TRACE_EVENT(mm_collapse_huge_page,
>
> - TP_PROTO(struct mm_struct *mm, int isolated, int status),
> + TP_PROTO(struct mm_struct *mm, int isolated, int status, int order),
>
> - TP_ARGS(mm, isolated, status),
> + TP_ARGS(mm, isolated, status, order),
>
> TP_STRUCT__entry(
> __field(struct mm_struct *, mm)
> __field(int, isolated)
> __field(int, status)
> + __field(int, order)
> ),
>
> TP_fast_assign(
> __entry->mm = mm;
> __entry->isolated = isolated;
> __entry->status = status;
> + __entry->order = order;
> ),
>
> - TP_printk("mm=%p, isolated=%d, status=%s",
> + TP_printk("mm=%p, isolated=%d, status=%s order=%d",
> __entry->mm,
> __entry->isolated,
> - __print_symbolic(__entry->status, SCAN_STATUS))
> + __print_symbolic(__entry->status, SCAN_STATUS),
> + __entry->order)
> );
>
> TRACE_EVENT(mm_collapse_huge_page_isolate,
>
> TP_PROTO(struct page *page, int none_or_zero,
> - int referenced, bool writable, int status),
> + int referenced, bool writable, int status, int order),
>
> - TP_ARGS(page, none_or_zero, referenced, writable, status),
> + TP_ARGS(page, none_or_zero, referenced, writable, status, order),
>
> TP_STRUCT__entry(
> __field(unsigned long, pfn)
> @@ -127,6 +130,7 @@ TRACE_EVENT(mm_collapse_huge_page_isolate,
> __field(int, referenced)
> __field(bool, writable)
> __field(int, status)
> + __field(int, order)
> ),
>
> TP_fast_assign(
> @@ -135,27 +139,31 @@ TRACE_EVENT(mm_collapse_huge_page_isolate,
> __entry->referenced = referenced;
> __entry->writable = writable;
> __entry->status = status;
> + __entry->order = order;
> ),
>
> - TP_printk("scan_pfn=0x%lx, none_or_zero=%d, referenced=%d, writable=%d, status=%s",
> + TP_printk("scan_pfn=0x%lx, none_or_zero=%d, referenced=%d, writable=%d, status=%s order=%d",
> __entry->pfn,
> __entry->none_or_zero,
> __entry->referenced,
> __entry->writable,
> - __print_symbolic(__entry->status, SCAN_STATUS))
> + __print_symbolic(__entry->status, SCAN_STATUS),
> + __entry->order)
> );
>
> TRACE_EVENT(mm_collapse_huge_page_swapin,
>
> - TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret),
> + TP_PROTO(struct mm_struct *mm, int swapped_in, int referenced, int ret,
> + int order),
>
> - TP_ARGS(mm, swapped_in, referenced, ret),
> + TP_ARGS(mm, swapped_in, referenced, ret, order),
>
> TP_STRUCT__entry(
> __field(struct mm_struct *, mm)
> __field(int, swapped_in)
> __field(int, referenced)
> __field(int, ret)
> + __field(int, order)
> ),
>
> TP_fast_assign(
> @@ -163,13 +171,15 @@ TRACE_EVENT(mm_collapse_huge_page_swapin,
> __entry->swapped_in = swapped_in;
> __entry->referenced = referenced;
> __entry->ret = ret;
> + __entry->order = order;
> ),
>
> - TP_printk("mm=%p, swapped_in=%d, referenced=%d, ret=%d",
> + TP_printk("mm=%p, swapped_in=%d, referenced=%d, ret=%d, order=%d",
> __entry->mm,
> __entry->swapped_in,
> __entry->referenced,
> - __entry->ret)
> + __entry->ret,
> + __entry->order)
> );
>
> TRACE_EVENT(mm_khugepaged_scan_file,
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 738dd9c5751d..67da0950b833 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -721,13 +721,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> } else {
> result = SCAN_SUCCEED;
> trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> - referenced, writable, result);
> + referenced, writable, result,
> + order);
> return result;
> }
> out:
> release_pte_pages(pte, _pte, compound_pagelist);
> trace_mm_collapse_huge_page_isolate(&folio->page, none_or_zero,
> - referenced, writable, result);
> + referenced, writable, result, order);
> return result;
> }
>
> @@ -1097,7 +1098,8 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
>
> result = SCAN_SUCCEED;
> out:
> - trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result);
> + trace_mm_collapse_huge_page_swapin(mm, swapped_in, referenced, result,
> + order);
> return result;
> }
>
> @@ -1318,7 +1320,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> *mmap_locked = false;
> if (folio)
> folio_put(folio);
> - trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> + trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result, order);
> return result;
> }
>
* Re: [PATCH v4 11/12] khugepaged: add per-order mTHP khugepaged stats
2025-04-17 0:02 ` [PATCH v4 11/12] khugepaged: add per-order mTHP khugepaged stats Nico Pache
@ 2025-04-24 7:58 ` Baolin Wang
2025-04-28 15:45 ` Nico Pache
0 siblings, 1 reply; 34+ messages in thread
From: Baolin Wang @ 2025-04-24 7:58 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> With mTHP support in place, let's add the per-order mTHP stats for
> exceeding NONE, SWAP, and SHARED.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> include/linux/huge_mm.h | 3 +++
> mm/huge_memory.c | 7 +++++++
> mm/khugepaged.c | 16 +++++++++++++---
> 3 files changed, 23 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 55b242335420..782d3a7854b4 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -139,6 +139,9 @@ enum mthp_stat_item {
> MTHP_STAT_SPLIT_DEFERRED,
> MTHP_STAT_NR_ANON,
> MTHP_STAT_NR_ANON_PARTIALLY_MAPPED,
> + MTHP_STAT_COLLAPSE_EXCEED_SWAP,
> + MTHP_STAT_COLLAPSE_EXCEED_NONE,
> + MTHP_STAT_COLLAPSE_EXCEED_SHARED,
> __MTHP_STAT_COUNT
> };
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 7798c9284533..de4704af0022 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -633,6 +633,10 @@ DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
> DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
> DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);
> DEFINE_MTHP_STAT_ATTR(nr_anon_partially_mapped, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED);
> +DEFINE_MTHP_STAT_ATTR(collapse_exceed_swap_pte, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
> +DEFINE_MTHP_STAT_ATTR(collapse_exceed_none_pte, MTHP_STAT_COLLAPSE_EXCEED_NONE);
> +DEFINE_MTHP_STAT_ATTR(collapse_exceed_shared_pte, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
> +
>
> static struct attribute *anon_stats_attrs[] = {
> &anon_fault_alloc_attr.attr,
> @@ -649,6 +653,9 @@ static struct attribute *anon_stats_attrs[] = {
> &split_deferred_attr.attr,
> &nr_anon_attr.attr,
> &nr_anon_partially_mapped_attr.attr,
> + &collapse_exceed_swap_pte_attr.attr,
> + &collapse_exceed_none_pte_attr.attr,
> + &collapse_exceed_shared_pte_attr.attr,
> NULL,
> };
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 67da0950b833..38643a681ba5 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -604,7 +604,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> continue;
> } else {
> result = SCAN_EXCEED_NONE_PTE;
> - count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> + if (order == HPAGE_PMD_ORDER)
> + count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> + else
> + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE);
> goto out;
> }
> }
> @@ -633,8 +636,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> /* See khugepaged_scan_pmd(). */
> if (folio_maybe_mapped_shared(folio)) {
> ++shared;
> - if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
> - shared > khugepaged_max_ptes_shared)) {
> + if (order != HPAGE_PMD_ORDER) {
> + result = SCAN_EXCEED_SHARED_PTE;
> + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
> + goto out;
> + }
> +
> + if (cc->is_khugepaged &&
> + shared > khugepaged_max_ptes_shared) {
> result = SCAN_EXCEED_SHARED_PTE;
> count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
> goto out;
> @@ -1060,6 +1069,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
>
> /* Dont swapin for mTHP collapse */
> if (order != HPAGE_PMD_ORDER) {
> + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
Should be MTHP_STAT_COLLAPSE_EXCEED_SWAP?
> result = SCAN_EXCEED_SWAP_PTE;
> goto out;
> }
* Re: [PATCH v4 07/12] khugepaged: add mTHP support
2025-04-17 0:02 ` [PATCH v4 07/12] khugepaged: add " Nico Pache
@ 2025-04-24 12:21 ` Baolin Wang
2025-04-28 15:14 ` Nico Pache
0 siblings, 1 reply; 34+ messages in thread
From: Baolin Wang @ 2025-04-24 12:21 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> Introduce the ability for khugepaged to collapse to different mTHP sizes.
> While scanning PMD ranges for potential collapse candidates, keep track
> of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
> represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER ptes. If
> mTHPs are enabled we remove the restriction of max_ptes_none during the
> scan phase so we dont bailout early and miss potential mTHP candidates.
>
> After the scan is complete we will perform binary recursion on the
> bitmap to determine which mTHP size would be most efficient to collapse
> to. max_ptes_none will be scaled by the attempted collapse order to
> determine how full a THP must be to be eligible.
>
> If a mTHP collapse is attempted, but contains swapped out, or shared
> pages, we dont perform the collapse.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> mm/khugepaged.c | 122 ++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 88 insertions(+), 34 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 83230e9cdf3a..ece39fd71fe6 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1136,13 +1136,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> {
> LIST_HEAD(compound_pagelist);
> pmd_t *pmd, _pmd;
> - pte_t *pte;
> + pte_t *pte, mthp_pte;
> pgtable_t pgtable;
> struct folio *folio;
> spinlock_t *pmd_ptl, *pte_ptl;
> int result = SCAN_FAIL;
> struct vm_area_struct *vma;
> struct mmu_notifier_range range;
> + unsigned long _address = address + offset * PAGE_SIZE;
>
> VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>
> @@ -1158,12 +1159,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> *mmap_locked = false;
> }
>
> - result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> + result = alloc_charge_folio(&folio, mm, cc, order);
> if (result != SCAN_SUCCEED)
> goto out_nolock;
>
> mmap_read_lock(mm);
> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> + *mmap_locked = true;
> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> if (result != SCAN_SUCCEED) {
> mmap_read_unlock(mm);
> goto out_nolock;
> @@ -1181,13 +1183,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * released when it fails. So we jump out_nolock directly in
> * that case. Continuing to collapse causes inconsistency.
> */
> - result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> - referenced, HPAGE_PMD_ORDER);
> + result = __collapse_huge_page_swapin(mm, vma, _address, pmd,
> + referenced, order);
> if (result != SCAN_SUCCEED)
> goto out_nolock;
> }
>
> mmap_read_unlock(mm);
> + *mmap_locked = false;
> /*
> * Prevent all access to pagetables with the exception of
> * gup_fast later handled by the ptep_clear_flush and the VM
> @@ -1197,7 +1200,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * mmap_lock.
> */
> mmap_write_lock(mm);
> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> if (result != SCAN_SUCCEED)
> goto out_up_write;
> /* check if the pmd is still valid */
> @@ -1208,11 +1211,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> vma_start_write(vma);
> anon_vma_lock_write(vma->anon_vma);
>
> - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
> - address + HPAGE_PMD_SIZE);
> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address,
> + _address + (PAGE_SIZE << order));
> mmu_notifier_invalidate_range_start(&range);
>
> pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> +
> /*
> * This removes any huge TLB entry from the CPU so we won't allow
> * huge and small TLB entries for the same virtual address to
> @@ -1226,10 +1230,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> mmu_notifier_invalidate_range_end(&range);
> tlb_remove_table_sync_one();
>
> - pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> + pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl);
> if (pte) {
> - result = __collapse_huge_page_isolate(vma, address, pte, cc,
> - &compound_pagelist, HPAGE_PMD_ORDER);
> + result = __collapse_huge_page_isolate(vma, _address, pte, cc,
> + &compound_pagelist, order);
> spin_unlock(pte_ptl);
> } else {
> result = SCAN_PMD_NULL;
> @@ -1258,8 +1262,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> anon_vma_unlock_write(vma->anon_vma);
>
> result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> - vma, address, pte_ptl,
> - &compound_pagelist, HPAGE_PMD_ORDER);
> + vma, _address, pte_ptl,
> + &compound_pagelist, order);
> pte_unmap(pte);
pte is unmapped here, but...
> if (unlikely(result != SCAN_SUCCEED))
> goto out_up_write;
> @@ -1270,20 +1274,35 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * write.
> */
> __folio_mark_uptodate(folio);
> - pgtable = pmd_pgtable(_pmd);
> -
> - _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
> - _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> -
> - spin_lock(pmd_ptl);
> - BUG_ON(!pmd_none(*pmd));
> - folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
> - folio_add_lru_vma(folio, vma);
> - pgtable_trans_huge_deposit(mm, pmd, pgtable);
> - set_pmd_at(mm, address, pmd, _pmd);
> - update_mmu_cache_pmd(vma, address, pmd);
> - deferred_split_folio(folio, false);
> - spin_unlock(pmd_ptl);
> + if (order == HPAGE_PMD_ORDER) {
> + pgtable = pmd_pgtable(_pmd);
> + _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
> + _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> +
> + spin_lock(pmd_ptl);
> + BUG_ON(!pmd_none(*pmd));
> + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> + folio_add_lru_vma(folio, vma);
> + pgtable_trans_huge_deposit(mm, pmd, pgtable);
> + set_pmd_at(mm, address, pmd, _pmd);
> + update_mmu_cache_pmd(vma, address, pmd);
> + deferred_split_folio(folio, false);
> + spin_unlock(pmd_ptl);
> + } else { //mTHP
(Nit: use '/* xxx */' format)
> + mthp_pte = mk_pte(&folio->page, vma->vm_page_prot);
> + mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
> +
> + spin_lock(pmd_ptl);
> + folio_ref_add(folio, (1 << order) - 1);
> + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> + folio_add_lru_vma(folio, vma);
> + set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
You still used the unmapped pte? Looks incorrect to me.
> + update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
> +
> + smp_wmb(); /* make pte visible before pmd */
> + pmd_populate(mm, pmd, pmd_pgtable(_pmd));
> + spin_unlock(pmd_ptl);
> + }
>
> folio = NULL;
>
> @@ -1364,31 +1383,58 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> {
> pmd_t *pmd;
> pte_t *pte, *_pte;
> + int i;
> int result = SCAN_FAIL, referenced = 0;
> int none_or_zero = 0, shared = 0;
> struct page *page = NULL;
> struct folio *folio = NULL;
> unsigned long _address;
> + unsigned long enabled_orders;
> spinlock_t *ptl;
> int node = NUMA_NO_NODE, unmapped = 0;
> + bool is_pmd_only;
> bool writable = false;
> -
> + int chunk_none_count = 0;
> + int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER);
> + unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
> VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>
> result = find_pmd_or_thp_or_none(mm, address, &pmd);
> if (result != SCAN_SUCCEED)
> goto out;
>
> + bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> + bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> memset(cc->node_load, 0, sizeof(cc->node_load));
> nodes_clear(cc->alloc_nmask);
> +
> + enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
> + tva_flags, THP_ORDERS_ALL_ANON);
> +
> + is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
> +
> pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> if (!pte) {
> result = SCAN_PMD_NULL;
> goto out;
> }
>
> - for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> - _pte++, _address += PAGE_SIZE) {
> + for (i = 0; i < HPAGE_PMD_NR; i++) {
> + /*
> + * we are reading in KHUGEPAGED_MIN_MTHP_NR page chunks. if
> + * there are pages in this chunk keep track of it in the bitmap
> + * for mTHP collapsing.
> + */
> + if (i % KHUGEPAGED_MIN_MTHP_NR == 0) {
> + if (chunk_none_count <= scaled_none)
> + bitmap_set(cc->mthp_bitmap,
> + i / KHUGEPAGED_MIN_MTHP_NR, 1);
> +
> + chunk_none_count = 0;
> + }
> +
> + _pte = pte + i;
> + _address = address + i * PAGE_SIZE;
> pte_t pteval = ptep_get(_pte);
> if (is_swap_pte(pteval)) {
> ++unmapped;
> @@ -1411,10 +1457,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> }
> }
> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> + ++chunk_none_count;
> ++none_or_zero;
> if (!userfaultfd_armed(vma) &&
> - (!cc->is_khugepaged ||
> - none_or_zero <= khugepaged_max_ptes_none)) {
> + (!cc->is_khugepaged || !is_pmd_only ||
> + none_or_zero <= khugepaged_max_ptes_none)) {
> continue;
> } else {
> result = SCAN_EXCEED_NONE_PTE;
> @@ -1510,6 +1557,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> address)))
> referenced++;
> }
> +
> if (!writable) {
> result = SCAN_PAGE_RO;
> } else if (cc->is_khugepaged &&
> @@ -1522,8 +1570,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> out_unmap:
> pte_unmap_unlock(pte, ptl);
> if (result == SCAN_SUCCEED) {
> - result = collapse_huge_page(mm, address, referenced,
> - unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> + result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc,
> + mmap_locked, enabled_orders);
> + if (result > 0)
> + result = SCAN_SUCCEED;
> + else
> + result = SCAN_FAIL;
> }
> out:
> trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> @@ -2479,11 +2531,13 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
> fput(file);
> if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> mmap_read_lock(mm);
> + *mmap_locked = true;
> if (khugepaged_test_exit_or_disable(mm))
> goto end;
> result = collapse_pte_mapped_thp(mm, addr,
> !cc->is_khugepaged);
> mmap_read_unlock(mm);
> + *mmap_locked = false;
> }
> } else {
> result = khugepaged_scan_pmd(mm, vma, addr,
* Re: [PATCH v4 12/12] Documentation: mm: update the admin guide for mTHP collapse
2025-04-17 0:02 ` [PATCH v4 12/12] Documentation: mm: update the admin guide for mTHP collapse Nico Pache
@ 2025-04-24 15:03 ` Usama Arif
2025-04-28 14:54 ` Nico Pache
0 siblings, 1 reply; 34+ messages in thread
From: Usama Arif @ 2025-04-24 15:03 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 17/04/2025 01:02, Nico Pache wrote:
> Now that we can collapse to mTHPs lets update the admin guide to
> reflect these changes and provide proper guidence on how to utilize it.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> Documentation/admin-guide/mm/transhuge.rst | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> index dff8d5985f0f..06814e05e1d5 100644
> --- a/Documentation/admin-guide/mm/transhuge.rst
> +++ b/Documentation/admin-guide/mm/transhuge.rst
> @@ -63,7 +63,7 @@ often.
> THP can be enabled system wide or restricted to certain tasks or even
> memory ranges inside task's address space. Unless THP is completely
> disabled, there is ``khugepaged`` daemon that scans memory and
> -collapses sequences of basic pages into PMD-sized huge pages.
> +collapses sequences of basic pages into huge pages.
>
> The THP behaviour is controlled via :ref:`sysfs <thp_sysfs>`
> interface and using madvise(2) and prctl(2) system calls.
> @@ -144,6 +144,14 @@ hugepage sizes have enabled="never". If enabling multiple hugepage
> sizes, the kernel will select the most appropriate enabled size for a
> given allocation.
>
> +khugepaged uses max_ptes_none scaled to the order of the enabled mTHP size to
> +determine collapses. When using mTHPs it's recommended to set max_ptes_none
> +low-- ideally less than HPAGE_PMD_NR / 2 (255 on 4k page size). This will
> +prevent undesired "creep" behavior that leads to continuously collapsing to a
> +larger mTHP size. max_ptes_shared and max_ptes_swap have no effect when
> +collapsing to a mTHP, and mTHP collapse will fail on shared or swapped out
> +pages.
> +
Hi Nico,
Could you add a bit more explanation of the creep behaviour here in the documentation?
I remember you explained in one of the earlier versions that if more than half of the
collapsed mTHP is zero-filled, it for some reason becomes eligible for collapsing to a
larger order, but if less than half is zero-filled it's not eligible? I can't exactly
remember what the reason was :) It would be good to have it documented in more detail if possible.
Thanks
> It's also possible to limit defrag efforts in the VM to generate
> anonymous hugepages in case they're not immediately free to madvise
> regions or to never try to defrag memory and simply fallback to regular
* Re: [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
2025-04-17 0:02 ` [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap " Nico Pache
@ 2025-04-27 2:51 ` Baolin Wang
2025-04-28 14:47 ` Nico Pache
0 siblings, 1 reply; 34+ messages in thread
From: Baolin Wang @ 2025-04-27 2:51 UTC (permalink / raw)
To: Nico Pache, linux-mm, linux-doc, linux-kernel, linux-trace-kernel
Cc: akpm, corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/17 08:02, Nico Pache wrote:
> khugepaged scans PMD ranges for potential collapse to a hugepage. To add
> mTHP support we use this scan to instead record chunks of utilized
> sections of the PMD.
>
> khugepaged_scan_bitmap uses a stack struct to recursively scan a bitmap
> that represents chunks of utilized regions. We can then determine what
> mTHP size fits best and in the following patch, we set this bitmap while
> scanning the PMD.
>
> max_ptes_none is used as a scale to determine how "full" an order must
> be before being considered for collapse.
>
> When attempting to collapse an order that has its order set to "always"
> lets always collapse to that order in a greedy manner without
> considering the number of bits set.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> include/linux/khugepaged.h | 4 ++
> mm/khugepaged.c | 94 ++++++++++++++++++++++++++++++++++----
> 2 files changed, 89 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> index 1f46046080f5..18fe6eb5051d 100644
> --- a/include/linux/khugepaged.h
> +++ b/include/linux/khugepaged.h
> @@ -1,6 +1,10 @@
> /* SPDX-License-Identifier: GPL-2.0 */
> #ifndef _LINUX_KHUGEPAGED_H
> #define _LINUX_KHUGEPAGED_H
> +#define KHUGEPAGED_MIN_MTHP_ORDER 2
Why is the minimum mTHP order set to 2? IMO, file large folios can
support order 1, so shouldn't we be able to collapse small exec file
folios to order 1 where possible?
(PS: I need more time to understand your logic in this patch, and any
additional explanation would be helpful :) )
> +#define KHUGEPAGED_MIN_MTHP_NR (1<<KHUGEPAGED_MIN_MTHP_ORDER)
> +#define MAX_MTHP_BITMAP_SIZE (1 << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
> +#define MTHP_BITMAP_SIZE (1 << (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER))
>
> extern unsigned int khugepaged_max_ptes_none __read_mostly;
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5e9272ab82da..83230e9cdf3a 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -94,6 +94,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
>
> static struct kmem_cache *mm_slot_cache __ro_after_init;
>
> +struct scan_bit_state {
> + u8 order;
> + u16 offset;
> +};
> +
> struct collapse_control {
> bool is_khugepaged;
>
> @@ -102,6 +107,18 @@ struct collapse_control {
>
> /* nodemask for allocation fallback */
> nodemask_t alloc_nmask;
> +
> + /*
> + * bitmap used to collapse mTHP sizes.
> + * 1bit = order KHUGEPAGED_MIN_MTHP_ORDER mTHP
> + */
> + DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> + DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> + struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
> +};
> +
> +struct collapse_control khugepaged_collapse_control = {
> + .is_khugepaged = true,
> };
>
> /**
> @@ -851,10 +868,6 @@ static void khugepaged_alloc_sleep(void)
> remove_wait_queue(&khugepaged_wait, &wait);
> }
>
> -struct collapse_control khugepaged_collapse_control = {
> - .is_khugepaged = true,
> -};
> -
> static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
> {
> int i;
> @@ -1118,7 +1131,8 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
>
> static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> int referenced, int unmapped,
> - struct collapse_control *cc)
> + struct collapse_control *cc, bool *mmap_locked,
> + u8 order, u16 offset)
> {
> LIST_HEAD(compound_pagelist);
> pmd_t *pmd, _pmd;
> @@ -1137,8 +1151,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * The allocation can take potentially a long time if it involves
> * sync compaction, and we do not need to hold the mmap_lock during
> * that. We will recheck the vma after taking it again in write mode.
> + * If collapsing mTHPs we may have already released the read_lock.
> */
> - mmap_read_unlock(mm);
> + if (*mmap_locked) {
> + mmap_read_unlock(mm);
> + *mmap_locked = false;
> + }
>
> result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> @@ -1273,12 +1291,72 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> out_up_write:
> mmap_write_unlock(mm);
> out_nolock:
> + *mmap_locked = false;
> if (folio)
> folio_put(folio);
> trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> return result;
> }
>
> +// Recursive function to consume the bitmap
> +static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned long address,
> + int referenced, int unmapped, struct collapse_control *cc,
> + bool *mmap_locked, unsigned long enabled_orders)
> +{
> + u8 order, next_order;
> + u16 offset, mid_offset;
> + int num_chunks;
> + int bits_set, threshold_bits;
> + int top = -1;
> + int collapsed = 0;
> + int ret;
> + struct scan_bit_state state;
> + bool is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
> +
> + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> + { HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 };
> +
> + while (top >= 0) {
> + state = cc->mthp_bitmap_stack[top--];
> + order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;
> + offset = state.offset;
> + num_chunks = 1 << (state.order);
> + // Skip mTHP orders that are not enabled
> + if (!test_bit(order, &enabled_orders))
> + goto next;
> +
> + // copy the relavant section to a new bitmap
> + bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
> + MTHP_BITMAP_SIZE);
> +
> + bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
> + threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
> + >> (HPAGE_PMD_ORDER - state.order);
> +
> + //Check if the region is "almost full" based on the threshold
> + if (bits_set > threshold_bits || is_pmd_only
> + || test_bit(order, &huge_anon_orders_always)) {
> + ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
> + mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
> + if (ret == SCAN_SUCCEED) {
> + collapsed += (1 << order);
> + continue;
> + }
> + }
> +
> +next:
> + if (state.order > 0) {
> + next_order = state.order - 1;
> + mid_offset = offset + (num_chunks / 2);
> + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> + { next_order, mid_offset };
> + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> + { next_order, offset };
> + }
> + }
> + return collapsed;
> +}
> +
> static int khugepaged_scan_pmd(struct mm_struct *mm,
> struct vm_area_struct *vma,
> unsigned long address, bool *mmap_locked,
> @@ -1445,9 +1523,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> pte_unmap_unlock(pte, ptl);
> if (result == SCAN_SUCCEED) {
> result = collapse_huge_page(mm, address, referenced,
> - unmapped, cc);
> - /* collapse_huge_page will return with the mmap_lock released */
> - *mmap_locked = false;
> + unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> }
> out:
> trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
* Re: [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
2025-04-27 2:51 ` Baolin Wang
@ 2025-04-28 14:47 ` Nico Pache
2025-04-29 7:16 ` Baolin Wang
0 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-28 14:47 UTC (permalink / raw)
To: Baolin Wang
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On Sat, Apr 26, 2025 at 8:52 PM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/4/17 08:02, Nico Pache wrote:
> > khugepaged scans PMD ranges for potential collapse to a hugepage. To add
> > mTHP support we use this scan to instead record chunks of utilized
> > sections of the PMD.
> >
> > khugepaged_scan_bitmap uses a stack struct to recursively scan a bitmap
> > that represents chunks of utilized regions. We can then determine what
> > mTHP size fits best and in the following patch, we set this bitmap while
> > scanning the PMD.
> >
> > max_ptes_none is used as a scale to determine how "full" an order must
> > be before being considered for collapse.
> >
> > When attempting to collapse an order that has its order set to "always"
> > lets always collapse to that order in a greedy manner without
> > considering the number of bits set.
> >
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> > include/linux/khugepaged.h | 4 ++
> > mm/khugepaged.c | 94 ++++++++++++++++++++++++++++++++++----
> > 2 files changed, 89 insertions(+), 9 deletions(-)
> >
> > diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> > index 1f46046080f5..18fe6eb5051d 100644
> > --- a/include/linux/khugepaged.h
> > +++ b/include/linux/khugepaged.h
> > @@ -1,6 +1,10 @@
> > /* SPDX-License-Identifier: GPL-2.0 */
> > #ifndef _LINUX_KHUGEPAGED_H
> > #define _LINUX_KHUGEPAGED_H
> > +#define KHUGEPAGED_MIN_MTHP_ORDER 2
>
> Why is the minimum mTHP order set to 2? IMO, the file large folios can
> support order 1, so we don't expect to collapse exec file small folios
> to order 1 if possible?
I should have been more specific in the patch notes, but this affects
anonymous only. I'll go over my commit messages and make sure this is
reflected in the next version.
>
> (PS: I need more time to understand your logic in this patch, and any
> additional explanation would be helpful:) )
We are currently scanning PTEs within a PMD range. The core principle
behind the bitmap is to keep the single PMD scan while saving its state.
We then use this bitmap to determine which chunks of the PMD are
utilized and are the best candidates for mTHP collapse. We start at the
PMD level and recursively break the bitmap down to find the appropriate
collapse sizes.
Looking at a simplified example: we scan a PMD and get the following
bitmap, 1111101101101011 (in this example MIN_MTHP_ORDER = 5, so each
bit == 32 PTEs; in the actual patch set each bit == 4 PTEs).
We would first attempt a PMD collapse, checking the number of bits set
against the max_ptes_none tunable. If that collapse is not performed,
we try the next enabled mTHP order on each half of the bitmap,
i.e. an order-8 attempt on 11111011 and an order-8 attempt on 01101011.
If a collapse succeeds we don't keep recursing on that portion of the
bitmap; if not, we continue attempting lower orders.
Hopefully that helps you understand my logic here! Let me know if you
need more clarification.
I gave a presentation on this that might help too:
https://docs.google.com/presentation/d/1w9NYLuC2kRcMAwhcashU1LWTfmI5TIZRaTWuZq-CHEg/edit?usp=sharing&resourcekey=0-nBAGld8cP1kW26XE6i0Bpg
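If a runnable illustration is useful, below is a stand-alone toy model of
that recursion in plain C. It is not the kernel code: it uses the 16-bit
example scale from above (each bit covering 32 PTEs), pretends every
attempted collapse succeeds, and leaves out the "always"/PMD-only
shortcuts, the stack-based iteration, and all locking.

#include <stdio.h>

#define PMD_ORDER       9       /* 512 PTEs per PMD */
#define MIN_ORDER       5       /* 32 PTEs per bit, as in the example */

static int bits_in_range(unsigned int bitmap, int offset, int nbits)
{
        return __builtin_popcount((bitmap >> offset) & ((1u << nbits) - 1));
}

/* Returns the number of PTEs covered by (simulated) successful collapses. */
static int scan_bitmap(unsigned int bitmap, int order, int offset,
                       unsigned long enabled_orders, int max_ptes_none)
{
        int nbits = 1 << (order - MIN_ORDER);
        int collapsed = 0;

        if (enabled_orders & (1ul << order)) {
                int bits_set = bits_in_range(bitmap, offset, nbits);
                /* occupancy threshold, max_ptes_none scaled to this order */
                int threshold = ((1 << PMD_ORDER) - max_ptes_none - 1)
                                        >> (PMD_ORDER - order + MIN_ORDER);

                if (bits_set > threshold) {
                        printf("collapse order %d at chunk %d\n", order, offset);
                        return 1 << order;      /* pretend it worked */
                }
        }
        if (order > MIN_ORDER) {        /* recurse into the two halves */
                collapsed += scan_bitmap(bitmap, order - 1, offset,
                                         enabled_orders, max_ptes_none);
                collapsed += scan_bitmap(bitmap, order - 1, offset + nbits / 2,
                                         enabled_orders, max_ptes_none);
        }
        return collapsed;
}

int main(void)
{
        const char *pattern = "1111101101101011";       /* chunk 0 first */
        unsigned int bitmap = 0;
        unsigned long enabled_orders = 0;

        for (int i = 0; pattern[i]; i++)
                if (pattern[i] == '1')
                        bitmap |= 1u << i;
        for (int o = MIN_ORDER; o <= PMD_ORDER; o++)
                enabled_orders |= 1ul << o;

        /* deliberately tight max_ptes_none so the fallback is visible */
        printf("collapsed %d PTEs\n",
               scan_bitmap(bitmap, PMD_ORDER, 0, enabled_orders, 64));
        return 0;
}

With the example bitmap it collapses the densely populated left half at
order 8 and then falls back to smaller orders on the sparser right half,
which is the same shape of behaviour the series aims for.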
Cheers,
-- Nico
>
> > +#define KHUGEPAGED_MIN_MTHP_NR (1<<KHUGEPAGED_MIN_MTHP_ORDER)
> > +#define MAX_MTHP_BITMAP_SIZE (1 << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
> > +#define MTHP_BITMAP_SIZE (1 << (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER))
> >
> > extern unsigned int khugepaged_max_ptes_none __read_mostly;
> > #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 5e9272ab82da..83230e9cdf3a 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -94,6 +94,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
> >
> > static struct kmem_cache *mm_slot_cache __ro_after_init;
> >
> > +struct scan_bit_state {
> > + u8 order;
> > + u16 offset;
> > +};
> > +
> > struct collapse_control {
> > bool is_khugepaged;
> >
> > @@ -102,6 +107,18 @@ struct collapse_control {
> >
> > /* nodemask for allocation fallback */
> > nodemask_t alloc_nmask;
> > +
> > + /*
> > + * bitmap used to collapse mTHP sizes.
> > + * 1bit = order KHUGEPAGED_MIN_MTHP_ORDER mTHP
> > + */
> > + DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> > + DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> > + struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
> > +};
> > +
> > +struct collapse_control khugepaged_collapse_control = {
> > + .is_khugepaged = true,
> > };
> >
> > /**
> > @@ -851,10 +868,6 @@ static void khugepaged_alloc_sleep(void)
> > remove_wait_queue(&khugepaged_wait, &wait);
> > }
> >
> > -struct collapse_control khugepaged_collapse_control = {
> > - .is_khugepaged = true,
> > -};
> > -
> > static bool khugepaged_scan_abort(int nid, struct collapse_control *cc)
> > {
> > int i;
> > @@ -1118,7 +1131,8 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
> >
> > static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > int referenced, int unmapped,
> > - struct collapse_control *cc)
> > + struct collapse_control *cc, bool *mmap_locked,
> > + u8 order, u16 offset)
> > {
> > LIST_HEAD(compound_pagelist);
> > pmd_t *pmd, _pmd;
> > @@ -1137,8 +1151,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * The allocation can take potentially a long time if it involves
> > * sync compaction, and we do not need to hold the mmap_lock during
> > * that. We will recheck the vma after taking it again in write mode.
> > + * If collapsing mTHPs we may have already released the read_lock.
> > */
> > - mmap_read_unlock(mm);
> > + if (*mmap_locked) {
> > + mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > + }
> >
> > result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> > if (result != SCAN_SUCCEED)
> > @@ -1273,12 +1291,72 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > out_up_write:
> > mmap_write_unlock(mm);
> > out_nolock:
> > + *mmap_locked = false;
> > if (folio)
> > folio_put(folio);
> > trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> > return result;
> > }
> >
> > +// Recursive function to consume the bitmap
> > +static int khugepaged_scan_bitmap(struct mm_struct *mm, unsigned long address,
> > + int referenced, int unmapped, struct collapse_control *cc,
> > + bool *mmap_locked, unsigned long enabled_orders)
> > +{
> > + u8 order, next_order;
> > + u16 offset, mid_offset;
> > + int num_chunks;
> > + int bits_set, threshold_bits;
> > + int top = -1;
> > + int collapsed = 0;
> > + int ret;
> > + struct scan_bit_state state;
> > + bool is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
> > +
> > + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> > + { HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 };
> > +
> > + while (top >= 0) {
> > + state = cc->mthp_bitmap_stack[top--];
> > + order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;
> > + offset = state.offset;
> > + num_chunks = 1 << (state.order);
> > + // Skip mTHP orders that are not enabled
> > + if (!test_bit(order, &enabled_orders))
> > + goto next;
> > +
> > + // copy the relavant section to a new bitmap
> > + bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
> > + MTHP_BITMAP_SIZE);
> > +
> > + bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
> > + threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
> > + >> (HPAGE_PMD_ORDER - state.order);
> > +
> > + //Check if the region is "almost full" based on the threshold
> > + if (bits_set > threshold_bits || is_pmd_only
> > + || test_bit(order, &huge_anon_orders_always)) {
> > + ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
> > + mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
> > + if (ret == SCAN_SUCCEED) {
> > + collapsed += (1 << order);
> > + continue;
> > + }
> > + }
> > +
> > +next:
> > + if (state.order > 0) {
> > + next_order = state.order - 1;
> > + mid_offset = offset + (num_chunks / 2);
> > + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> > + { next_order, mid_offset };
> > + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> > + { next_order, offset };
> > + }
> > + }
> > + return collapsed;
> > +}
> > +
> > static int khugepaged_scan_pmd(struct mm_struct *mm,
> > struct vm_area_struct *vma,
> > unsigned long address, bool *mmap_locked,
> > @@ -1445,9 +1523,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> > pte_unmap_unlock(pte, ptl);
> > if (result == SCAN_SUCCEED) {
> > result = collapse_huge_page(mm, address, referenced,
> > - unmapped, cc);
> > - /* collapse_huge_page will return with the mmap_lock released */
> > - *mmap_locked = false;
> > + unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> > }
> > out:
> > trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
>
* Re: [PATCH v4 12/12] Documentation: mm: update the admin guide for mTHP collapse
2025-04-24 15:03 ` Usama Arif
@ 2025-04-28 14:54 ` Nico Pache
0 siblings, 0 replies; 34+ messages in thread
From: Nico Pache @ 2025-04-28 14:54 UTC (permalink / raw)
To: Usama Arif
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
baolin.wang, ryan.roberts, willy, peterx, ziy, wangkefeng.wang,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On Thu, Apr 24, 2025 at 9:04 AM Usama Arif <usamaarif642@gmail.com> wrote:
>
>
>
> On 17/04/2025 01:02, Nico Pache wrote:
> > Now that we can collapse to mTHPs lets update the admin guide to
> > reflect these changes and provide proper guidence on how to utilize it.
> >
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> > Documentation/admin-guide/mm/transhuge.rst | 10 +++++++++-
> > 1 file changed, 9 insertions(+), 1 deletion(-)
> >
> > diff --git a/Documentation/admin-guide/mm/transhuge.rst b/Documentation/admin-guide/mm/transhuge.rst
> > index dff8d5985f0f..06814e05e1d5 100644
> > --- a/Documentation/admin-guide/mm/transhuge.rst
> > +++ b/Documentation/admin-guide/mm/transhuge.rst
> > @@ -63,7 +63,7 @@ often.
> > THP can be enabled system wide or restricted to certain tasks or even
> > memory ranges inside task's address space. Unless THP is completely
> > disabled, there is ``khugepaged`` daemon that scans memory and
> > -collapses sequences of basic pages into PMD-sized huge pages.
> > +collapses sequences of basic pages into huge pages.
> >
> > The THP behaviour is controlled via :ref:`sysfs <thp_sysfs>`
> > interface and using madvise(2) and prctl(2) system calls.
> > @@ -144,6 +144,14 @@ hugepage sizes have enabled="never". If enabling multiple hugepage
> > sizes, the kernel will select the most appropriate enabled size for a
> > given allocation.
> >
> > +khugepaged uses max_ptes_none scaled to the order of the enabled mTHP size to
> > +determine collapses. When using mTHPs it's recommended to set max_ptes_none
> > +low-- ideally less than HPAGE_PMD_NR / 2 (255 on 4k page size). This will
> > +prevent undesired "creep" behavior that leads to continuously collapsing to a
> > +larger mTHP size. max_ptes_shared and max_ptes_swap have no effect when
> > +collapsing to a mTHP, and mTHP collapse will fail on shared or swapped out
> > +pages.
> > +
>
> Hi Nico,
>
> Could you add a bit more explanation of the creep behaviour here in the documentation?
> I remember you explained in one of the earlier versions that if more than half of the
> collapsed mTHP is zero-filled, it for some reason becomes eligible for collapsing to a
> larger order, but if less than half is zero-filled it's not eligible? I can't exactly
> remember what the reason was :) It would be good to have it documented in more detail if possible.
Hi Usama,
You can think of the creep as a byproduct of a collapse introducing N
new non-zero pages alongside an N-sized mTHP, essentially doubling its
size. On a second pass the same condition holds for the new mTHP,
leading to constant promotion to the next available size. If we allow
khugepaged to double the size of an mTHP by introducing non-zero pages,
it will keep doubling.
I'll see how I can incorporate this description into the admin guide.
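For what it's worth, the arithmetic behind that can be checked with a small
userspace snippet (not kernel code; it only mirrors the scaled max_ptes_none
comparison and ignores the bitmap granularity and the other collapse checks):
for each order, ask whether a range that is only half populated, i.e. an
existing mTHP of the next lower order plus an empty neighbour, already
qualifies for collapse to that order.

#include <stdio.h>
#include <stdbool.h>

#define HPAGE_PMD_ORDER 9
#define HPAGE_PMD_NR    (1 << HPAGE_PMD_ORDER)

static bool half_full_qualifies(int order, int max_ptes_none)
{
        int nr_ptes = 1 << order;
        int scaled_none = max_ptes_none >> (HPAGE_PMD_ORDER - order);
        int none_ptes = nr_ptes / 2;    /* lower half populated, upper half empty */

        return none_ptes <= scaled_none;
}

int main(void)
{
        int limits[] = { 255, 511 };

        for (int i = 0; i < 2; i++)
                for (int order = 4; order <= HPAGE_PMD_ORDER; order++)
                        printf("max_ptes_none=%d order=%d: half-empty range %s\n",
                               limits[i], order,
                               half_full_qualifies(order, limits[i]) ?
                               "collapses (creep)" : "is skipped");
        return 0;
}

With max_ptes_none=511 every order reports creep, while with 255 none of
them do, which matches the documentation patch's recommendation to keep
the value below HPAGE_PMD_NR / 2.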
-- Nico
>
> Thanks
>
> > It's also possible to limit defrag efforts in the VM to generate
> > anonymous hugepages in case they're not immediately free to madvise
> > regions or to never try to defrag memory and simply fallback to regular
>
* Re: [PATCH v4 07/12] khugepaged: add mTHP support
2025-04-24 12:21 ` Baolin Wang
@ 2025-04-28 15:14 ` Nico Pache
0 siblings, 0 replies; 34+ messages in thread
From: Nico Pache @ 2025-04-28 15:14 UTC (permalink / raw)
To: Baolin Wang
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On Thu, Apr 24, 2025 at 6:22 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/4/17 08:02, Nico Pache wrote:
> > Introduce the ability for khugepaged to collapse to different mTHP sizes.
> > While scanning PMD ranges for potential collapse candidates, keep track
> > of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
> > represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER ptes. If
> > mTHPs are enabled we remove the restriction of max_ptes_none during the
> > scan phase so we don't bail out early and miss potential mTHP candidates.
> >
> > After the scan is complete we will perform binary recursion on the
> > bitmap to determine which mTHP size would be most efficient to collapse
> > to. max_ptes_none will be scaled by the attempted collapse order to
> > determine how full a THP must be to be eligible.
> >
> > If a mTHP collapse is attempted, but contains swapped out, or shared
> > pages, we don't perform the collapse.
> >
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> > mm/khugepaged.c | 122 ++++++++++++++++++++++++++++++++++--------------
> > 1 file changed, 88 insertions(+), 34 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 83230e9cdf3a..ece39fd71fe6 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -1136,13 +1136,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > {
> > LIST_HEAD(compound_pagelist);
> > pmd_t *pmd, _pmd;
> > - pte_t *pte;
> > + pte_t *pte, mthp_pte;
> > pgtable_t pgtable;
> > struct folio *folio;
> > spinlock_t *pmd_ptl, *pte_ptl;
> > int result = SCAN_FAIL;
> > struct vm_area_struct *vma;
> > struct mmu_notifier_range range;
> > + unsigned long _address = address + offset * PAGE_SIZE;
> >
> > VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> >
> > @@ -1158,12 +1159,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > *mmap_locked = false;
> > }
> >
> > - result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> > + result = alloc_charge_folio(&folio, mm, cc, order);
> > if (result != SCAN_SUCCEED)
> > goto out_nolock;
> >
> > mmap_read_lock(mm);
> > - result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> > + *mmap_locked = true;
> > + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> > if (result != SCAN_SUCCEED) {
> > mmap_read_unlock(mm);
> > goto out_nolock;
> > @@ -1181,13 +1183,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * released when it fails. So we jump out_nolock directly in
> > * that case. Continuing to collapse causes inconsistency.
> > */
> > - result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> > - referenced, HPAGE_PMD_ORDER);
> > + result = __collapse_huge_page_swapin(mm, vma, _address, pmd,
> > + referenced, order);
> > if (result != SCAN_SUCCEED)
> > goto out_nolock;
> > }
> >
> > mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > /*
> > * Prevent all access to pagetables with the exception of
> > * gup_fast later handled by the ptep_clear_flush and the VM
> > @@ -1197,7 +1200,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * mmap_lock.
> > */
> > mmap_write_lock(mm);
> > - result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> > + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> > if (result != SCAN_SUCCEED)
> > goto out_up_write;
> > /* check if the pmd is still valid */
> > @@ -1208,11 +1211,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > vma_start_write(vma);
> > anon_vma_lock_write(vma->anon_vma);
> >
> > - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
> > - address + HPAGE_PMD_SIZE);
> > + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address,
> > + _address + (PAGE_SIZE << order));
> > mmu_notifier_invalidate_range_start(&range);
> >
> > pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> > +
> > /*
> > * This removes any huge TLB entry from the CPU so we won't allow
> > * huge and small TLB entries for the same virtual address to
> > @@ -1226,10 +1230,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > mmu_notifier_invalidate_range_end(&range);
> > tlb_remove_table_sync_one();
> >
> > - pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> > + pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl);
> > if (pte) {
> > - result = __collapse_huge_page_isolate(vma, address, pte, cc,
> > - &compound_pagelist, HPAGE_PMD_ORDER);
> > + result = __collapse_huge_page_isolate(vma, _address, pte, cc,
> > + &compound_pagelist, order);
> > spin_unlock(pte_ptl);
> > } else {
> > result = SCAN_PMD_NULL;
> > @@ -1258,8 +1262,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > anon_vma_unlock_write(vma->anon_vma);
> >
> > result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> > - vma, address, pte_ptl,
> > - &compound_pagelist, HPAGE_PMD_ORDER);
> > + vma, _address, pte_ptl,
> > + &compound_pagelist, order);
> > pte_unmap(pte);
>
> pte is unmapped here, but...
>
> > if (unlikely(result != SCAN_SUCCEED))
> > goto out_up_write;
> > @@ -1270,20 +1274,35 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> > * write.
> > */
> > __folio_mark_uptodate(folio);
> > - pgtable = pmd_pgtable(_pmd);
> > -
> > - _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
> > - _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> > -
> > - spin_lock(pmd_ptl);
> > - BUG_ON(!pmd_none(*pmd));
> > - folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
> > - folio_add_lru_vma(folio, vma);
> > - pgtable_trans_huge_deposit(mm, pmd, pgtable);
> > - set_pmd_at(mm, address, pmd, _pmd);
> > - update_mmu_cache_pmd(vma, address, pmd);
> > - deferred_split_folio(folio, false);
> > - spin_unlock(pmd_ptl);
> > + if (order == HPAGE_PMD_ORDER) {
> > + pgtable = pmd_pgtable(_pmd);
> > + _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
> > + _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> > +
> > + spin_lock(pmd_ptl);
> > + BUG_ON(!pmd_none(*pmd));
> > + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> > + folio_add_lru_vma(folio, vma);
> > + pgtable_trans_huge_deposit(mm, pmd, pgtable);
> > + set_pmd_at(mm, address, pmd, _pmd);
> > + update_mmu_cache_pmd(vma, address, pmd);
> > + deferred_split_folio(folio, false);
> > + spin_unlock(pmd_ptl);
> > + } else { //mTHP
>
> (Nit: use '/* xxx */' format)
>
> > + mthp_pte = mk_pte(&folio->page, vma->vm_page_prot);
> > + mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
> > +
> > + spin_lock(pmd_ptl);
> > + folio_ref_add(folio, (1 << order) - 1);
> > + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> > + folio_add_lru_vma(folio, vma);
> > + set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
>
> You still used the unmapped pte? Looks incorrect to me.
Ah, I need to move the unmap to after we collapse. It only affects
highmem, but it should be an easy fix!
Thanks!
>
> > + update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
> > +
> > + smp_wmb(); /* make pte visible before pmd */
> > + pmd_populate(mm, pmd, pmd_pgtable(_pmd));
> > + spin_unlock(pmd_ptl);
> > + }
> >
> > folio = NULL;
> >
> > @@ -1364,31 +1383,58 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> > {
> > pmd_t *pmd;
> > pte_t *pte, *_pte;
> > + int i;
> > int result = SCAN_FAIL, referenced = 0;
> > int none_or_zero = 0, shared = 0;
> > struct page *page = NULL;
> > struct folio *folio = NULL;
> > unsigned long _address;
> > + unsigned long enabled_orders;
> > spinlock_t *ptl;
> > int node = NUMA_NO_NODE, unmapped = 0;
> > + bool is_pmd_only;
> > bool writable = false;
> > -
> > + int chunk_none_count = 0;
> > + int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER);
> > + unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
> > VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> >
> > result = find_pmd_or_thp_or_none(mm, address, &pmd);
> > if (result != SCAN_SUCCEED)
> > goto out;
> >
> > + bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> > + bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> > memset(cc->node_load, 0, sizeof(cc->node_load));
> > nodes_clear(cc->alloc_nmask);
> > +
> > + enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
> > + tva_flags, THP_ORDERS_ALL_ANON);
> > +
> > + is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
> > +
> > pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> > if (!pte) {
> > result = SCAN_PMD_NULL;
> > goto out;
> > }
> >
> > - for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> > - _pte++, _address += PAGE_SIZE) {
> > + for (i = 0; i < HPAGE_PMD_NR; i++) {
> > + /*
> > + * we are reading in KHUGEPAGED_MIN_MTHP_NR page chunks. if
> > + * there are pages in this chunk keep track of it in the bitmap
> > + * for mTHP collapsing.
> > + */
> > + if (i % KHUGEPAGED_MIN_MTHP_NR == 0) {
> > + if (chunk_none_count <= scaled_none)
> > + bitmap_set(cc->mthp_bitmap,
> > + i / KHUGEPAGED_MIN_MTHP_NR, 1);
> > +
> > + chunk_none_count = 0;
> > + }
> > +
> > + _pte = pte + i;
> > + _address = address + i * PAGE_SIZE;
> > pte_t pteval = ptep_get(_pte);
> > if (is_swap_pte(pteval)) {
> > ++unmapped;
> > @@ -1411,10 +1457,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> > }
> > }
> > if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > + ++chunk_none_count;
> > ++none_or_zero;
> > if (!userfaultfd_armed(vma) &&
> > - (!cc->is_khugepaged ||
> > - none_or_zero <= khugepaged_max_ptes_none)) {
> > + (!cc->is_khugepaged || !is_pmd_only ||
> > + none_or_zero <= khugepaged_max_ptes_none)) {
> > continue;
> > } else {
> > result = SCAN_EXCEED_NONE_PTE;
> > @@ -1510,6 +1557,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> > address)))
> > referenced++;
> > }
> > +
> > if (!writable) {
> > result = SCAN_PAGE_RO;
> > } else if (cc->is_khugepaged &&
> > @@ -1522,8 +1570,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> > out_unmap:
> > pte_unmap_unlock(pte, ptl);
> > if (result == SCAN_SUCCEED) {
> > - result = collapse_huge_page(mm, address, referenced,
> > - unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> > + result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc,
> > + mmap_locked, enabled_orders);
> > + if (result > 0)
> > + result = SCAN_SUCCEED;
> > + else
> > + result = SCAN_FAIL;
> > }
> > out:
> > trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> > @@ -2479,11 +2531,13 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
> > fput(file);
> > if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> > mmap_read_lock(mm);
> > + *mmap_locked = true;
> > if (khugepaged_test_exit_or_disable(mm))
> > goto end;
> > result = collapse_pte_mapped_thp(mm, addr,
> > !cc->is_khugepaged);
> > mmap_read_unlock(mm);
> > + *mmap_locked = false;
> > }
> > } else {
> > result = khugepaged_scan_pmd(mm, vma, addr,
>
* Re: [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders
2025-04-24 7:48 ` Baolin Wang
@ 2025-04-28 15:44 ` Nico Pache
2025-04-29 6:53 ` Baolin Wang
0 siblings, 1 reply; 34+ messages in thread
From: Nico Pache @ 2025-04-28 15:44 UTC (permalink / raw)
To: Baolin Wang
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On Thu, Apr 24, 2025 at 1:49 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/4/17 08:02, Nico Pache wrote:
> > khugepaged may try to collapse a mTHP to a smaller mTHP, resulting in
> > some pages being unmapped. Skip these cases until we have a way to check
> > if it's ok to collapse to a smaller mTHP size (like in the case of a
> > partially mapped folio).
> >
> > This patch is inspired by Dev Jain's work on khugepaged mTHP support [1].
> >
> > [1] https://lore.kernel.org/lkml/20241216165105.56185-11-dev.jain@arm.com/
> >
> > Co-developed-by: Dev Jain <dev.jain@arm.com>
> > Signed-off-by: Dev Jain <dev.jain@arm.com>
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> > mm/khugepaged.c | 7 ++++++-
> > 1 file changed, 6 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index ece39fd71fe6..383aff12cd43 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -625,7 +625,12 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > folio = page_folio(page);
> > VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
> >
> > - /* See hpage_collapse_scan_pmd(). */
> > + if (order != HPAGE_PMD_ORDER && folio_order(folio) >= order) {
> > + result = SCAN_PTE_MAPPED_HUGEPAGE;
> > + goto out;
> > + }
>
> Should we also add this check in hpage_collapse_scan_pmd() to abort the
> scan early?
No, I don't think so. We can't abort there because we don't know the
attempted collapse order, and we don't want to miss potential mTHP
collapses (by bailing out early and not populating the bitmap).
-- Nico
>
* Re: [PATCH v4 11/12] khugepaged: add per-order mTHP khugepaged stats
2025-04-24 7:58 ` Baolin Wang
@ 2025-04-28 15:45 ` Nico Pache
0 siblings, 0 replies; 34+ messages in thread
From: Nico Pache @ 2025-04-28 15:45 UTC (permalink / raw)
To: Baolin Wang
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On Thu, Apr 24, 2025 at 1:58 AM Baolin Wang
<baolin.wang@linux.alibaba.com> wrote:
>
>
>
> On 2025/4/17 08:02, Nico Pache wrote:
> > With mTHP support in place, let's add the per-order mTHP stats for
> > exceeding NONE, SWAP, and SHARED.
> >
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> > include/linux/huge_mm.h | 3 +++
> > mm/huge_memory.c | 7 +++++++
> > mm/khugepaged.c | 16 +++++++++++++---
> > 3 files changed, 23 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> > index 55b242335420..782d3a7854b4 100644
> > --- a/include/linux/huge_mm.h
> > +++ b/include/linux/huge_mm.h
> > @@ -139,6 +139,9 @@ enum mthp_stat_item {
> > MTHP_STAT_SPLIT_DEFERRED,
> > MTHP_STAT_NR_ANON,
> > MTHP_STAT_NR_ANON_PARTIALLY_MAPPED,
> > + MTHP_STAT_COLLAPSE_EXCEED_SWAP,
> > + MTHP_STAT_COLLAPSE_EXCEED_NONE,
> > + MTHP_STAT_COLLAPSE_EXCEED_SHARED,
> > __MTHP_STAT_COUNT
> > };
> >
> > diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> > index 7798c9284533..de4704af0022 100644
> > --- a/mm/huge_memory.c
> > +++ b/mm/huge_memory.c
> > @@ -633,6 +633,10 @@ DEFINE_MTHP_STAT_ATTR(split_failed, MTHP_STAT_SPLIT_FAILED);
> > DEFINE_MTHP_STAT_ATTR(split_deferred, MTHP_STAT_SPLIT_DEFERRED);
> > DEFINE_MTHP_STAT_ATTR(nr_anon, MTHP_STAT_NR_ANON);
> > DEFINE_MTHP_STAT_ATTR(nr_anon_partially_mapped, MTHP_STAT_NR_ANON_PARTIALLY_MAPPED);
> > +DEFINE_MTHP_STAT_ATTR(collapse_exceed_swap_pte, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
> > +DEFINE_MTHP_STAT_ATTR(collapse_exceed_none_pte, MTHP_STAT_COLLAPSE_EXCEED_NONE);
> > +DEFINE_MTHP_STAT_ATTR(collapse_exceed_shared_pte, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
> > +
> >
> > static struct attribute *anon_stats_attrs[] = {
> > &anon_fault_alloc_attr.attr,
> > @@ -649,6 +653,9 @@ static struct attribute *anon_stats_attrs[] = {
> > &split_deferred_attr.attr,
> > &nr_anon_attr.attr,
> > &nr_anon_partially_mapped_attr.attr,
> > + &collapse_exceed_swap_pte_attr.attr,
> > + &collapse_exceed_none_pte_attr.attr,
> > + &collapse_exceed_shared_pte_attr.attr,
> > NULL,
> > };
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index 67da0950b833..38643a681ba5 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -604,7 +604,10 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > continue;
> > } else {
> > result = SCAN_EXCEED_NONE_PTE;
> > - count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> > + if (order == HPAGE_PMD_ORDER)
> > + count_vm_event(THP_SCAN_EXCEED_NONE_PTE);
> > + else
> > + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_NONE);
> > goto out;
> > }
> > }
> > @@ -633,8 +636,14 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> > /* See khugepaged_scan_pmd(). */
> > if (folio_maybe_mapped_shared(folio)) {
> > ++shared;
> > - if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
> > - shared > khugepaged_max_ptes_shared)) {
> > + if (order != HPAGE_PMD_ORDER) {
> > + result = SCAN_EXCEED_SHARED_PTE;
> > + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
> > + goto out;
> > + }
> > +
> > + if (cc->is_khugepaged &&
> > + shared > khugepaged_max_ptes_shared) {
> > result = SCAN_EXCEED_SHARED_PTE;
> > count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
> > goto out;
> > @@ -1060,6 +1069,7 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> >
> > /* Dont swapin for mTHP collapse */
> > if (order != HPAGE_PMD_ORDER) {
> > + count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SHARED);
>
> Should be MTHP_STAT_COLLAPSE_EXCEED_SWAP?
Yes! Thank you, I will fix this :)
>
> > result = SCAN_EXCEED_SWAP_PTE;
> > goto out;
> > }
>
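As a usage note for the new counters, here is a small sketch for reading
them once the series lands (the sysfs location is an assumption based on
where the existing per-order mTHP stats such as anon_fault_alloc are
exposed, and the 64kB directory is just an example size):

#include <stdio.h>

int main(void)
{
	const char *names[] = {
		"collapse_exceed_none_pte",
		"collapse_exceed_swap_pte",
		"collapse_exceed_shared_pte",
	};
	char path[256];
	char buf[64];

	for (int i = 0; i < 3; i++) {
		/* per-order stats directory; adjust the size to the mTHP of interest */
		snprintf(path, sizeof(path),
			 "/sys/kernel/mm/transparent_hugepage/hugepages-64kB/stats/%s",
			 names[i]);
		FILE *f = fopen(path, "r");
		if (!f)
			continue;	/* kernel without this series, or size not present */
		if (fgets(buf, sizeof(buf), f))
			printf("%s: %s", names[i], buf);
		fclose(f);
	}
	return 0;
}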
* Re: [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders
2025-04-28 15:44 ` Nico Pache
@ 2025-04-29 6:53 ` Baolin Wang
0 siblings, 0 replies; 34+ messages in thread
From: Baolin Wang @ 2025-04-29 6:53 UTC (permalink / raw)
To: Nico Pache
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/28 23:44, Nico Pache wrote:
> On Thu, Apr 24, 2025 at 1:49 AM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>>
>>
>> On 2025/4/17 08:02, Nico Pache wrote:
>>> khugepaged may try to collapse a mTHP to a smaller mTHP, resulting in
>>> some pages being unmapped. Skip these cases until we have a way to check
>>> if it's ok to collapse to a smaller mTHP size (like in the case of a
>>> partially mapped folio).
>>>
>>> This patch is inspired by Dev Jain's work on khugepaged mTHP support [1].
>>>
>>> [1] https://lore.kernel.org/lkml/20241216165105.56185-11-dev.jain@arm.com/
>>>
>>> Co-developed-by: Dev Jain <dev.jain@arm.com>
>>> Signed-off-by: Dev Jain <dev.jain@arm.com>
>>> Signed-off-by: Nico Pache <npache@redhat.com>
>>> ---
>>> mm/khugepaged.c | 7 ++++++-
>>> 1 file changed, 6 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index ece39fd71fe6..383aff12cd43 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -625,7 +625,12 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
>>> folio = page_folio(page);
>>> VM_BUG_ON_FOLIO(!folio_test_anon(folio), folio);
>>>
>>> - /* See hpage_collapse_scan_pmd(). */
>>> + if (order != HPAGE_PMD_ORDER && folio_order(folio) >= order) {
>>> + result = SCAN_PTE_MAPPED_HUGEPAGE;
>>> + goto out;
>>> + }
>>
>> Should we also add this check in hpage_collapse_scan_pmd() to abort the
>> scan early?
> No, I don't think so. We can't abort there because we don't know the
> attempted collapse order, and we don't want to miss potential mTHP
> collapses (by bailing out early and not populating the bitmap).
OK. That makes sense.
* Re: [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap for mTHP support
2025-04-28 14:47 ` Nico Pache
@ 2025-04-29 7:16 ` Baolin Wang
0 siblings, 0 replies; 34+ messages in thread
From: Baolin Wang @ 2025-04-29 7:16 UTC (permalink / raw)
To: Nico Pache
Cc: linux-mm, linux-doc, linux-kernel, linux-trace-kernel, akpm,
corbet, rostedt, mhiramat, mathieu.desnoyers, david, baohua,
ryan.roberts, willy, peterx, ziy, wangkefeng.wang, usamaarif642,
sunnanyong, vishal.moola, thomas.hellstrom, yang, kirill.shutemov,
aarcange, raquini, dev.jain, anshuman.khandual, catalin.marinas,
tiwai, will, dave.hansen, jack, cl, jglisse, surenb, zokeefe,
hannes, rientjes, mhocko, rdunlap
On 2025/4/28 22:47, Nico Pache wrote:
> On Sat, Apr 26, 2025 at 8:52 PM Baolin Wang
> <baolin.wang@linux.alibaba.com> wrote:
>>
>>
>>
>> On 2025/4/17 08:02, Nico Pache wrote:
>>> khugepaged scans PMD ranges for potential collapse to a hugepage. To add
>>> mTHP support we use this scan to instead record chunks of utilized
>>> sections of the PMD.
>>>
>>> khugepaged_scan_bitmap uses a stack struct to recursively scan a bitmap
>>> that represents chunks of utilized regions. We can then determine what
>>> mTHP size fits best and in the following patch, we set this bitmap while
>>> scanning the PMD.
>>>
>>> max_ptes_none is used as a scale to determine how "full" an order must
>>> be before being considered for collapse.
>>>
>>> When attempting to collapse an order that is set to "always", always
>>> collapse to that order in a greedy manner without considering the
>>> number of bits set.
>>>
>>> Signed-off-by: Nico Pache <npache@redhat.com>
>>> ---
>>> include/linux/khugepaged.h | 4 ++
>>> mm/khugepaged.c | 94 ++++++++++++++++++++++++++++++++++----
>>> 2 files changed, 89 insertions(+), 9 deletions(-)
>>>
>>> diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
>>> index 1f46046080f5..18fe6eb5051d 100644
>>> --- a/include/linux/khugepaged.h
>>> +++ b/include/linux/khugepaged.h
>>> @@ -1,6 +1,10 @@
>>> /* SPDX-License-Identifier: GPL-2.0 */
>>> #ifndef _LINUX_KHUGEPAGED_H
>>> #define _LINUX_KHUGEPAGED_H
>>> +#define KHUGEPAGED_MIN_MTHP_ORDER 2
>>
>> Why is the minimum mTHP order set to 2? IMO, the file large folios can
>> support order 1, so we don't expect to collapse exec file small folios
>> to order 1 if possible?
> I should have been more specific in the patch notes, but this affects
> anonymous memory only. I'll go over my commit messages and make sure this is
> reflected in the next version.
OK. I am looking into how to support shmem mTHP collapse based on your
patch series.
>> (PS: I need more time to understand your logic in this patch, and any
>> additional explanation would be helpful:) )
>
> We are currently scanning ptes in a PMD. The core principle behind
> the bitmap is to keep the single PMD scan while saving its state. We
> then use this bitmap to determine which chunks of the PMD are active
> and are the best candidates for mTHP collapse. We start at the PMD
> level, and recursively break down the bitmap to find the appropriate
> mTHP sizes to collapse to.
>
> Looking at a simplified example: we scan a PMD and get the following
> bitmap, 1111101101101011 (in this case MIN_MTHP_ORDER = 5, so each bit
> == 32 ptes; in the actual patch set each bit == 4 ptes).
> We would first attempt a PMD collapse, checking the number of bits
> set against the max_ptes_none tunable. If those conditions aren't
> met, we try the next enabled mTHP order on each half of the bitmap.
>
> i.e., an order 8 attempt on 11111011 and an order 8 attempt on 01101011.
>
> If a collapse succeeds we don't keep recursing on that portion of the
> bitmap. If not, we continue attempting lower orders.
>
> Hopefully that helps you understand my logic here! Let me know if you
> need more clarification.
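For reference, a minimal userspace sketch of the descent described above
(illustrative only: the real patch drives this with a stack structure and
the kernel's bitmap helpers, and the scaled max_ptes_none threshold below
is an assumption based on the cover letter):

#include <stdio.h>

#define PTES_PER_CHUNK	32	/* simplified example: 16 chunks of 32 ptes = 512 ptes */
#define HPAGE_PMD_ORDER	9

static int max_ptes_none = 64;	/* example value of the tunable */

static int utilized(const char *bitmap, int start, int nr)
{
	int set = 0;

	for (int i = start; i < start + nr; i++)
		if (bitmap[i] == '1')
			set++;
	return set;
}

static void scan(const char *bitmap, int start, int nr_chunks, int order)
{
	int none_ptes = (nr_chunks - utilized(bitmap, start, nr_chunks)) * PTES_PER_CHUNK;
	int scaled_none = max_ptes_none >> (HPAGE_PMD_ORDER - order);

	if (none_ptes <= scaled_none) {
		printf("collapse order %d covering chunks [%d, %d)\n",
		       order, start, start + nr_chunks);
		return;		/* collapsed: stop recursing into this range */
	}
	if (nr_chunks == 1)
		return;		/* reached the minimum mTHP order, give up here */
	scan(bitmap, start, nr_chunks / 2, order - 1);
	scan(bitmap, start + nr_chunks / 2, nr_chunks / 2, order - 1);
}

int main(void)
{
	/* the 1111101101101011 bitmap from the example above */
	scan("1111101101101011", 0, 16, HPAGE_PMD_ORDER);
	return 0;
}

With the example bitmap this collapses the left half to order 8 and picks
smaller orders out of the right half, which is the behaviour described above.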
Thanks for your explanation. That's pretty much how I understand it. :)
I'll give your new version a test.
>
> I gave a presentation on this that might help too:
> https://docs.google.com/presentation/d/1w9NYLuC2kRcMAwhcashU1LWTfmI5TIZRaTWuZq-CHEg/edit?usp=sharing&resourcekey=0-nBAGld8cP1kW26XE6i0Bpg
Unfortunately, this link requires access permission.
end of thread
Thread overview: 34+ messages
2025-04-17 0:02 [PATCH v4 00/12] khugepaged: mTHP support Nico Pache
2025-04-17 0:02 ` [PATCH v4 01/12] introduce khugepaged_collapse_single_pmd to unify khugepaged and madvise_collapse Nico Pache
2025-04-23 6:44 ` Baolin Wang
2025-04-23 7:06 ` Nico Pache
2025-04-17 0:02 ` [PATCH v4 02/12] khugepaged: rename hpage_collapse_* to khugepaged_* Nico Pache
2025-04-23 6:49 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 03/12] khugepaged: generalize hugepage_vma_revalidate for mTHP support Nico Pache
2025-04-23 6:55 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 04/12] khugepaged: generalize alloc_charge_folio() Nico Pache
2025-04-23 7:06 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 05/12] khugepaged: generalize __collapse_huge_page_* for mTHP support Nico Pache
2025-04-23 7:30 ` Baolin Wang
2025-04-23 8:00 ` Nico Pache
2025-04-23 8:25 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 06/12] khugepaged: introduce khugepaged_scan_bitmap " Nico Pache
2025-04-27 2:51 ` Baolin Wang
2025-04-28 14:47 ` Nico Pache
2025-04-29 7:16 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 07/12] khugepaged: add " Nico Pache
2025-04-24 12:21 ` Baolin Wang
2025-04-28 15:14 ` Nico Pache
2025-04-17 0:02 ` [PATCH v4 08/12] khugepaged: skip collapsing mTHP to smaller orders Nico Pache
2025-04-24 7:48 ` Baolin Wang
2025-04-28 15:44 ` Nico Pache
2025-04-29 6:53 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 09/12] khugepaged: avoid unnecessary mTHP collapse attempts Nico Pache
2025-04-17 0:02 ` [PATCH v4 10/12] khugepaged: improve tracepoints for mTHP orders Nico Pache
2025-04-24 7:51 ` Baolin Wang
2025-04-17 0:02 ` [PATCH v4 11/12] khugepaged: add per-order mTHP khugepaged stats Nico Pache
2025-04-24 7:58 ` Baolin Wang
2025-04-28 15:45 ` Nico Pache
2025-04-17 0:02 ` [PATCH v4 12/12] Documentation: mm: update the admin guide for mTHP collapse Nico Pache
2025-04-24 15:03 ` Usama Arif
2025-04-28 14:54 ` Nico Pache