linux-doc.vger.kernel.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Nico Pache <npache@redhat.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Dev Jain <dev.jain@arm.com>,
	linux-mm@kvack.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	ziy@nvidia.com, baolin.wang@linux.alibaba.com,
	Liam.Howlett@oracle.com, ryan.roberts@arm.com, corbet@lwn.net,
	rostedt@goodmis.org, mhiramat@kernel.org,
	mathieu.desnoyers@efficios.com, akpm@linux-foundation.org,
	baohua@kernel.org, willy@infradead.org, peterx@redhat.com,
	wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
	sunnanyong@huawei.com, vishal.moola@gmail.com,
	thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	raquini@redhat.com, anshuman.khandual@arm.com,
	catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org,
	dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org,
	jglisse@google.com, surenb@google.com, zokeefe@google.com,
	hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com,
	rdunlap@infradead.org, hughd@google.com
Subject: Re: [PATCH v10 00/13] khugepaged: mTHP support
Date: Thu, 4 Sep 2025 20:56:45 +0200	[thread overview]
Message-ID: <26e6828e-78f7-4454-abaa-334257a8f8c2@redhat.com> (raw)
In-Reply-To: <CAA1CXcBVR=L5_6x5FGeR693AB_YqEF=4KAX7_2fRgGNa1j1j9A@mail.gmail.com>

On 04.09.25 04:44, Nico Pache wrote:
> On Thu, Aug 21, 2025 at 10:55 AM Lorenzo Stoakes
> <lorenzo.stoakes@oracle.com> wrote:
>>
>> On Thu, Aug 21, 2025 at 10:46:18AM -0600, Nico Pache wrote:
>>>>>>> Thanks and I'll have a look, but this series is unmergeable with a broken
>>>>>>> default in
>>>>>>> /sys/kernel/mm/transparent_hugepage/khugepaged/mthp_max_ptes_none_ratio
>>>>>>> sorry.
>>>>>>>
>>>>>>> We need to have a new tunable as far as I can tell. I also find the use of
>>>>>>> this PMD-specific value as an arbitrary way of expressing a ratio pretty
>>>>>>> gross.
>>>>>> The first thing that comes to mind is that we can pin max_ptes_none to
>>>>>> 255 if it exceeds 255. It's worth noting that the issue occurs only
>>>>>> for adjacently enabled mTHP sizes.
>>>>
>>>> No! Presumably the default of 511 (for PMDs with 512 entries) is set
>>>> for a reason; arbitrarily changing this to suit a specific case seems
>>>> crazy, no?
>>> We wouldn't be changing it for PMD collapse, just for the new
>>> behavior. At 511, no mTHP collapses would ever occur anyway, unless
>>> you have 2MB disabled and other mTHP sizes enabled. Technically, at
>>> 511 only the highest enabled order ever gets collapsed.
>>>
>>> I've also argued in the past that 511 is a terrible default for
>>> anything other than thp.enabled=always, but that's a whole other can
>>> of worms we don't need to discuss now.
>>>
>>> With this cap of 255, the PMD scan/collapse would work as intended,
>>> while mTHP collapses would never introduce this undesired behavior.
>>> We've discussed before that this would be a hard problem to solve
>>> without introducing some expensive way of tracking what has already
>>> been through a collapse, and that doesn't even consider the case
>>> where things change or are unmapped and rescanning that section
>>> would be helpful. So a strictly enforced limit of 255 actually seems
>>> like a good idea to me, as it completely avoids the undesired
>>> behavior and does not require admins to be aware of the issue.
>>>
>>> Another thought, similar to what (IIRC) Dev has mentioned before: if
>>> max_ptes_none > 255, we only consider collapses to the largest
>>> enabled order. That way no creep toward the largest enabled order
>>> would occur in the first place; we would get there straight away.
>>>
>>> To me, one of these two solutions seems sane in the context of what
>>> we are dealing with.
>>>>
>>>>>>
>>>>>> i.e.)
>>>>>> if (order != HPAGE_PMD_ORDER && khugepaged_max_ptes_none > 255)
>>>>>>        temp_max_ptes_none = 255;
>>>>> Oh and my second point, introducing a new tunable to control mTHP
>>>>> collapse may become exceedingly complex from a tuning and code
>>>>> management standpoint.
>>>>
>>>> Umm, right now you have a ratio expressed in PTEs per mTHP * ((PTEs
>>>> per PMD) / PMD), 'except please don't set it to the usual default
>>>> when using mTHP', and it's currently default-broken.
>>>>
>>>> I'm really not sure how that is simpler than a separate tunable that
>>>> can be expressed as a ratio (e.g. a percentage) that actually makes
>>>> some kind of sense?
>>> I agree that the current tunable wasn't designed for this, but we
>>> tried to come up with something that leverages the tunable we have to
>>> avoid new tunables and added complexity.
>>>>
>>>> And we can make anything workable from a code management point of view by
>>>> refactoring/developing appropriately.
>>> What happens if max_ptes_none = 0 and the ratio is 50% - 1 PTE
>>> (ideally the max number)? It seems like we would be saying we want no
>>> new none pages, but also to allow new none pages. To me that seems
>>> equally broken and more confusing than just taking a scale of the
>>> current number (now with a cap).
>>>
>>>
>>
>> The one thing we absolutely cannot have is a default that causes this
>> 'creeping' behaviour. This feels like shipping something that is broken and
>> alluding to it in the documentation.
> Ok I've put a lot of thought and time into this and came up with a solution.
> 
> Here is what I have currently tested and would like to propose:
> 
> - Expand the bitmap to HPAGE_PMD_NR (512)*; this increases the accuracy
> of the max_ptes_none handling and removes a lot of the inaccuracies
> caused by the previous compression into 128 bits. This also makes the
> code a lot easier to understand.

That sounds good to me. Should make the code easier as well.

> 
> - When attempting mTHP-level collapses, cap max_ptes_none at 255 to
> prevent the creep issue.

I guess the documentation would then state something like

* When collapsing smaller THPs, "max_ptes_none" is scaled proportional
   to the THP size.
* When collapsing smaller THPs, "max_ptes_none" may be internally
   capped at 255 if it exceeds 255 but is not set to the default (511).

Not 100% a fan of all of that, but maybe the only option when wanting to 
avoid other toggles.

The only alternative would really be respecting only 0/511 for mTHP, and 
not doing any scaling. That would obviously make the documentation 
easier and would allow us to revisit that later. The documentation would be:

* When collapsing smaller THPs, "max_ptes_none" may be interpreted as
   "0" when set to a value different from the default (511). This
   behavior might change in the future.

> 
> I've tested this and found it performs better than my previous
> version, allows more granular control via max_ptes_none, and prevents
> the creep issue without requiring any admin awareness.

How would this interact with the shrinker once extended to mTHP? Would
your RFC patch be sufficient for that, or would we actually also want to
cap there? I haven't fully thought this through yet. I'd assume we would
not want to cap here. Which makes the doc weird as well, lol.

-- 
Cheers

David / dhildenb


