From: "Liam R. Howlett" <Liam.Howlett@oracle.com>
To: Nico Pache <npache@redhat.com>
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
david@redhat.com, ziy@nvidia.com, baolin.wang@linux.alibaba.com,
lorenzo.stoakes@oracle.com, ryan.roberts@arm.com,
dev.jain@arm.com, corbet@lwn.net, rostedt@goodmis.org,
mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
akpm@linux-foundation.org, baohua@kernel.org,
willy@infradead.org, peterx@redhat.com,
wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
sunnanyong@huawei.com, vishal.moola@gmail.com,
thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
kirill.shutemov@linux.intel.com, aarcange@redhat.com,
raquini@redhat.com, anshuman.khandual@arm.com,
catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org,
dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org,
jglisse@google.com, surenb@google.com, zokeefe@google.com,
hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com,
rdunlap@infradead.org, hughd@google.com
Subject: Re: [PATCH v9 06/14] khugepaged: introduce collapse_scan_bitmap for mTHP support
Date: Wed, 16 Jul 2025 11:38:24 -0400
Message-ID: <omxfj73o756gyd2kbdggbcxnk6xomtxwrgr7p6j4gvts4272tj@zuklefzbuotf>
In-Reply-To: <20250714003207.113275-7-npache@redhat.com>
* Nico Pache <npache@redhat.com> [250713 20:34]:
> khugepaged scans an anon PMD range for potential collapse to a
> hugepage. To add mTHP support, we use this scan to instead record
> which chunks of the PMD range are utilized.
>
> collapse_scan_bitmap uses a stack of scan states to walk a bitmap
> whose bits represent utilized chunks of the region. From this we can
> determine which mTHP size fits best; the following patch sets this
> bitmap while scanning the anon PMD. A minimum collapse order of 2 is
> used, as this is the lowest order supported by anon memory.
>
> max_ptes_none is scaled down by order to determine how "full" a
> region of that order must be before it is considered for collapse.
>
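
Sanity-checking that scaling (my numbers, assuming 4K pages so
HPAGE_PMD_NR == 512, and max_ptes_none == 255 as an example): the full
PMD region (state.order == 7) needs bits_set > (512 - 255 - 1) >> 2
== 64 of its 128 chunks, while an order-4 region (state.order == 2)
needs bits_set > (512 - 255 - 1) >> 7 == 2 of its 4 chunks - the same
proportional threshold at every order, modulo shift truncation. That
matches my reading of the threshold_bits computation below.
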
> When attempting to collapse an order that is set to "always", always
> collapse to that order in a greedy manner, without considering the
> number of bits set.
>
v7 had discussion about adding selftests for this code. You mention in
the cover letter that you ran the mm selftests, but it seems you did
not add the reproducer that Baolin had?
Maybe I missed it?
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> include/linux/khugepaged.h | 4 ++
> mm/khugepaged.c | 94 ++++++++++++++++++++++++++++++++++----
> 2 files changed, 89 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
> index ff6120463745..0f957711a117 100644
> --- a/include/linux/khugepaged.h
> +++ b/include/linux/khugepaged.h
> @@ -1,6 +1,10 @@
> /* SPDX-License-Identifier: GPL-2.0 */
> #ifndef _LINUX_KHUGEPAGED_H
> #define _LINUX_KHUGEPAGED_H
> +#define KHUGEPAGED_MIN_MTHP_ORDER 2
> +#define KHUGEPAGED_MIN_MTHP_NR (1<<KHUGEPAGED_MIN_MTHP_ORDER)
> +#define MAX_MTHP_BITMAP_SIZE (1 << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
> +#define MTHP_BITMAP_SIZE (1 << (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER))
>
> extern unsigned int khugepaged_max_ptes_none __read_mostly;
> #ifdef CONFIG_TRANSPARENT_HUGEPAGE
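
Sanity-checking the sizing here (my numbers, assuming 4K pages so
HPAGE_PMD_ORDER == 9, and MAX_PTRS_PER_PTE == 512):

	KHUGEPAGED_MIN_MTHP_NR  = 1 << 2                = 4 PTEs/chunk
	MTHP_BITMAP_SIZE        = 1 << (9 - 2)          = 128 bits
	MAX_MTHP_BITMAP_SIZE    = 1 << (ilog2(512) - 2) = 128 bits

so each collapse_control below carries two 128-bit bitmaps plus a
128-entry stack of struct scan_bit_state (4 bytes each with padding) -
roughly half a kilobyte per instance, which seems fine.
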
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index ee54e3c1db4e..59b2431ca616 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -94,6 +94,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
>
> static struct kmem_cache *mm_slot_cache __ro_after_init;
>
> +struct scan_bit_state {
> + u8 order;
> + u16 offset;
> +};
> +
> struct collapse_control {
> bool is_khugepaged;
>
> @@ -102,6 +107,18 @@ struct collapse_control {
>
> /* nodemask for allocation fallback */
> nodemask_t alloc_nmask;
> +
> + /*
> + * bitmap used to collapse mTHP sizes.
> + * 1bit = order KHUGEPAGED_MIN_MTHP_ORDER mTHP
> + */
> + DECLARE_BITMAP(mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> + DECLARE_BITMAP(mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> + struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_SIZE];
> +};
> +
> +struct collapse_control khugepaged_collapse_control = {
> + .is_khugepaged = true,
> };
>
> /**
> @@ -838,10 +855,6 @@ static void khugepaged_alloc_sleep(void)
> remove_wait_queue(&khugepaged_wait, &wait);
> }
>
> -struct collapse_control khugepaged_collapse_control = {
> - .is_khugepaged = true,
> -};
> -
> static bool collapse_scan_abort(int nid, struct collapse_control *cc)
> {
> int i;
> @@ -1115,7 +1128,8 @@ static int alloc_charge_folio(struct folio **foliop, struct mm_struct *mm,
>
> static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> int referenced, int unmapped,
> - struct collapse_control *cc)
> + struct collapse_control *cc, bool *mmap_locked,
> + u8 order, u16 offset)
> {
> LIST_HEAD(compound_pagelist);
> pmd_t *pmd, _pmd;
> @@ -1134,8 +1148,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * The allocation can take potentially a long time if it involves
> * sync compaction, and we do not need to hold the mmap_lock during
> * that. We will recheck the vma after taking it again in write mode.
> + * If collapsing mTHPs we may have already released the read_lock.
> */
> - mmap_read_unlock(mm);
> + if (*mmap_locked) {
> + mmap_read_unlock(mm);
> + *mmap_locked = false;
> + }
>
> result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> if (result != SCAN_SUCCEED)
> @@ -1272,12 +1290,72 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> out_up_write:
> mmap_write_unlock(mm);
> out_nolock:
> + *mmap_locked = false;
> if (folio)
> folio_put(folio);
> trace_mm_collapse_huge_page(mm, result == SCAN_SUCCEED, result);
> return result;
> }
>
> +/* Bisect the bitmap via an explicit stack, consuming collapsible regions */
> +static int collapse_scan_bitmap(struct mm_struct *mm, unsigned long address,
> + int referenced, int unmapped, struct collapse_control *cc,
> + bool *mmap_locked, unsigned long enabled_orders)
> +{
> + u8 order, next_order;
> + u16 offset, mid_offset;
> + int num_chunks;
> + int bits_set, threshold_bits;
> + int top = -1;
> + int collapsed = 0;
> + int ret;
> + struct scan_bit_state state;
> + bool is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
> +
> + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> + { HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0 };
> +
> + while (top >= 0) {
> + state = cc->mthp_bitmap_stack[top--];
> + order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;
> + offset = state.offset;
> + num_chunks = 1 << (state.order);
> + // Skip mTHP orders that are not enabled
> + if (!test_bit(order, &enabled_orders))
> + goto next;
> +
> +		// copy the relevant section to a new bitmap
> + bitmap_shift_right(cc->mthp_bitmap_temp, cc->mthp_bitmap, offset,
> + MTHP_BITMAP_SIZE);
> +
> + bits_set = bitmap_weight(cc->mthp_bitmap_temp, num_chunks);
> + threshold_bits = (HPAGE_PMD_NR - khugepaged_max_ptes_none - 1)
> + >> (HPAGE_PMD_ORDER - state.order);
> +
> +		// Check if the region is "almost full" based on the threshold
> + if (bits_set > threshold_bits || is_pmd_only
> + || test_bit(order, &huge_anon_orders_always)) {
> + ret = collapse_huge_page(mm, address, referenced, unmapped, cc,
> + mmap_locked, order, offset * KHUGEPAGED_MIN_MTHP_NR);
> + if (ret == SCAN_SUCCEED) {
> + collapsed += (1 << order);
> + continue;
> + }
> + }
> +
> +next:
> + if (state.order > 0) {
> + next_order = state.order - 1;
> + mid_offset = offset + (num_chunks / 2);
> + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> + { next_order, mid_offset };
> + cc->mthp_bitmap_stack[++top] = (struct scan_bit_state)
> + { next_order, offset };
> + }
> + }
> + return collapsed;
> +}
> +
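
To convince myself of the traversal order, I mocked the walk up in
userspace. This is only a sketch of my reading of the algorithm - no
mmap_lock handling, no enabled_orders/"always"/PMD-only short-circuits,
and PMD_ORDER/MIN_ORDER/NCHUNKS are stand-ins for the kernel macros:

/* bisect_sim.c - userspace model of the collapse_scan_bitmap walk.
 * Build with: cc -o bisect_sim bisect_sim.c
 */
#include <stdio.h>

#define PMD_ORDER	9	/* 4K pages: 512 PTEs per PMD */
#define MIN_ORDER	2	/* lowest anon mTHP order */
#define NCHUNKS		(1 << (PMD_ORDER - MIN_ORDER))	/* 128 chunks */

struct state { int order; int offset; };

int main(void)
{
	unsigned char bits[NCHUNKS] = { 0 };	/* toy occupancy bitmap */
	struct state stack[NCHUNKS];
	int top = -1, i;
	int max_ptes_none = 255;	/* example tunable value */

	/* Populate the first half of the PMD range. */
	for (i = 0; i < NCHUNKS / 2; i++)
		bits[i] = 1;

	stack[++top] = (struct state){ PMD_ORDER - MIN_ORDER, 0 };
	while (top >= 0) {
		struct state s = stack[top--];
		int nchunks = 1 << s.order;
		int set = 0, threshold;

		for (i = 0; i < nchunks; i++)
			set += bits[s.offset + i];
		threshold = ((1 << PMD_ORDER) - max_ptes_none - 1)
				>> (PMD_ORDER - s.order);
		if (set > threshold) {
			printf("collapse order %d at PTE offset %d (%d/%d chunks)\n",
			       s.order + MIN_ORDER, s.offset << MIN_ORDER,
			       set, nchunks);
			continue;	/* region consumed, don't bisect */
		}
		if (s.order > 0) {	/* bisect; left half is scanned first */
			stack[++top] = (struct state){ s.order - 1,
						       s.offset + nchunks / 2 };
			stack[++top] = (struct state){ s.order - 1, s.offset };
		}
	}
	return 0;
}

With half the chunks populated this prints a single order-8 collapse
covering the populated half and leaves the empty half alone, which is
what I'd expect from the bisection. It also suggests the stack cannot
overflow: each pop pushes at most two entries one level down, so the
depth is bounded by PMD_ORDER - MIN_ORDER + 1.
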
> static int collapse_scan_pmd(struct mm_struct *mm,
> struct vm_area_struct *vma,
> unsigned long address, bool *mmap_locked,
> @@ -1444,9 +1522,7 @@ static int collapse_scan_pmd(struct mm_struct *mm,
> pte_unmap_unlock(pte, ptl);
> if (result == SCAN_SUCCEED) {
> result = collapse_huge_page(mm, address, referenced,
> - unmapped, cc);
> - /* collapse_huge_page will return with the mmap_lock released */
> - *mmap_locked = false;
> + unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> }
> out:
> trace_mm_khugepaged_scan_pmd(mm, folio, writable, referenced,
> --
> 2.50.0
>