From: Usama Arif <usamaarif642@gmail.com>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
linux-doc@vger.kernel.org, Jonathan Corbet <corbet@lwn.net>,
Andrew Morton <akpm@linux-foundation.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Zi Yan <ziy@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Vlastimil Babka <vbabka@suse.cz>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>, SeongJae Park <sj@kernel.org>,
Jann Horn <jannh@google.com>, Yafang Shao <laoar.shao@gmail.com>,
Matthew Wilcox <willy@infradead.org>,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [PATCH POC] prctl: extend PR_SET_THP_DISABLE to optionally exclude VM_HUGEPAGE
Date: Thu, 24 Jul 2025 23:27:29 +0100
Message-ID: <99e25828-641b-490b-baab-35df860760b4@gmail.com>
In-Reply-To: <601e015b-1f61-45e8-9db8-4e0d2bc1505e@redhat.com>
> Hi!
>
>>
>> Over here, with MMF_DISABLE_THP_EXCEPT_ADVISED, MADV_HUGEPAGE will succeed as vm_flags has
>> VM_HUGEPAGE set, but MADV_COLLAPSE will fail to give a hugepage (as VM_HUGEPAGE is not set
>> and MMF_DISABLE_THP_EXCEPT_ADVISED is set) which I feel might not be the right behaviour
>> as MADV_COLLAPSE is "advise" and the prctl flag is PR_THP_DISABLE_EXCEPT_ADVISED?
>
> THPs are disabled for these regions, so it's at least consistent with the "disable all", but ...
>
>>
>> This will be checked in multiple places in madvise_collapse: thp_vma_allowable_order,
>> hugepage_vma_revalidate which calls thp_vma_allowable_order and hpage_collapse_scan_pmd
>> which also ends up calling hugepage_vma_revalidate.
>>
>> A hacky way would be to save and overwrite vma->vm_flags with VM_HUGEPAGE at the start of madvise_collapse
>> if VM_NOHUGEPAGE is not set, and reset vma->vm_flags to its original value at the end of madvise_collapse
>> (Not something I am recommending, just throwing it out there).
>
> Gah.
>
>>
>> Another possibility is to pass the fact that you are in madvise_collapse to these functions
>> as an argument, this might look ugly, although maybe not as ugly as hugepage_vma_revalidate
>> already has collapse control arg, so just need to take care of thp_vma_allowable_orders.
>
> Likely this.
>
>>
>> Any preference or better suggestions?
>
> What you are asking for is not MMF_DISABLE_THP_EXCEPT_ADVISED as I planned it, but MMF_DISABLE_THP_EXCEPT_ADVISED_OR_MADV_COLLAPSE.
>
> Now, one could consider MADV_COLLAPSE an "advise". (I am not opposed to that change)
>
lol yeah, I always think of MADV_COLLAPSE as an extreme version of MADV_HUGEPAGE (more of a demand
than advice :)), even though it's not persistent.
Which is why I think it might be unexpected if MADV_HUGEPAGE gives hugepages but MADV_COLLAPSE doesn't
(but that could just be my opinion).
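To make the asymmetry concrete, the scenario looks something like the below (a sketch
only; the PR_THP_DISABLE_EXCEPT_ADVISED name and its placement in arg3 follow this POC,
and the fallback define is an assumption, not final ABI):

#include <sys/mman.h>
#include <sys/prctl.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25
#endif
#ifndef PR_THP_DISABLE_EXCEPT_ADVISED
#define PR_THP_DISABLE_EXCEPT_ADVISED (1 << 1)	/* assumed encoding */
#endif

#define LEN (2UL * 1024 * 1024)

int main(void)
{
	char *advised, *plain;

	/* Disable THPs for this process except where explicitly advised. */
	prctl(PR_SET_THP_DISABLE, 1, PR_THP_DISABLE_EXCEPT_ADVISED, 0, 0);

	advised = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	plain = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
		     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* VM_HUGEPAGE is set: the fault path may hand out THPs here. */
	madvise(advised, LEN, MADV_HUGEPAGE);
	advised[0] = 1;

	/*
	 * VM_HUGEPAGE is not set: without the change discussed here, this
	 * fails, as MMF_DISABLE_THP_EXCEPT_ADVISED is honoured even for an
	 * explicit collapse request.
	 */
	return madvise(plain, LEN, MADV_COLLAPSE) ? 1 : 0;
}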
> Indeed, the right way might be telling vma_thp_disabled() whether we are in collapse.
>
> Can you try implementing that on top of my patch to see how it looks?
>
My reasoning is that a process running with system policy always but with
PR_THP_DISABLE_EXCEPT_ADVISED set gets THPs with exactly the same behaviour as a process running
with system policy madvise. This will help us achieve (3), which you mentioned in the
commit message:
(3) Switch from THP=madvise to THP=always, but keep the old behavior
(THP only when advised) for selected workloads.
I have now written quite a few selftests for prctl PR_SET_THP_DISABLE, both with and without
PR_THP_DISABLE_EXCEPT_ADVISED, incorporating your feedback. All of them pass
with the below diff. The diff is slightly ugly, but very simple and hopefully acceptable. If it
looks good, I can send a series with everything. It's probably best to make the below diff a separate
patch on top of this one, since it mostly adds an extra argument to functions, and that would keep the
review easier? I can squash it into this patch as well if that's better.
Thanks!
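For reference, the selftests are roughly the shape below (illustrative only, not the
actual selftest code; it assumes THP=always, a 2M PMD size, the same assumed
PR_THP_DISABLE_EXCEPT_ADVISED encoding as above, and the smaps helper is just for
the sketch):

#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/prctl.h>

#ifndef MADV_COLLAPSE
#define MADV_COLLAPSE 25
#endif
#ifndef PR_THP_DISABLE_EXCEPT_ADVISED
#define PR_THP_DISABLE_EXCEPT_ADVISED (1 << 1)	/* assumed encoding */
#endif

#define PMD_LEN (2UL * 1024 * 1024)

/* AnonHugePages of the VMA containing addr, in kB. */
static unsigned long anon_thp_kb(void *addr)
{
	unsigned long start, end, kb = 0;
	char line[256];
	int in_vma = 0;
	FILE *f = fopen("/proc/self/smaps", "r");

	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%lx-%lx", &start, &end) == 2)
			in_vma = (unsigned long)addr >= start &&
				 (unsigned long)addr < end;
		else if (in_vma &&
			 sscanf(line, "AnonHugePages: %lu kB", &kb) == 1)
			break;
	}
	fclose(f);
	return kb;
}

int main(void)
{
	char *map, *mem;

	assert(!prctl(PR_SET_THP_DISABLE, 1,
		      PR_THP_DISABLE_EXCEPT_ADVISED, 0, 0));

	/* Over-allocate so a PMD-aligned 2M range can be tested. */
	map = mmap(NULL, PMD_LEN * 2, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	mem = (char *)(((unsigned long)map + PMD_LEN - 1) & ~(PMD_LEN - 1));

	/* Not advised: the fault path must not hand out THPs. */
	memset(mem, 1, PMD_LEN);
	assert(anon_thp_kb(mem) == 0);

	/*
	 * Explicit collapse without a prior MADV_HUGEPAGE: fails without
	 * the diff below, succeeds with it applied.
	 */
	assert(!madvise(mem, PMD_LEN, MADV_COLLAPSE));
	assert(anon_thp_kb(mem) > 0);
	return 0;
}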
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 3d6d8a9f13fc..bb5f1dedbd2c 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1294,7 +1294,7 @@ static int show_smap(struct seq_file *m, void *v)
seq_printf(m, "THPeligible: %8u\n",
!!thp_vma_allowable_orders(vma, vma->vm_flags,
- TVA_SMAPS | TVA_ENFORCE_SYSFS, THP_ORDERS_ALL));
+ TVA_SMAPS | TVA_ENFORCE_SYSFS, THP_ORDERS_ALL, 0));
if (arch_pkeys_enabled())
seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 71db243a002e..82066721b161 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -98,8 +98,8 @@ extern struct kobj_attribute thpsize_shmem_enabled_attr;
#define TVA_IN_PF (1 << 1) /* Page fault handler */
#define TVA_ENFORCE_SYSFS (1 << 2) /* Obey sysfs configuration */
-#define thp_vma_allowable_order(vma, vm_flags, tva_flags, order) \
- (!!thp_vma_allowable_orders(vma, vm_flags, tva_flags, BIT(order)))
+#define thp_vma_allowable_order(vma, vm_flags, tva_flags, order, in_collapse) \
+ (!!thp_vma_allowable_orders(vma, vm_flags, tva_flags, BIT(order), in_collapse))
#define split_folio(f) split_folio_to_list(f, NULL)
@@ -265,7 +265,8 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
vm_flags_t vm_flags,
unsigned long tva_flags,
- unsigned long orders);
+ unsigned long orders,
+ bool in_collapse);
/**
* thp_vma_allowable_orders - determine hugepage orders that are allowed for vma
@@ -273,6 +274,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
* @vm_flags: use these vm_flags instead of vma->vm_flags
* @tva_flags: Which TVA flags to honour
* @orders: bitfield of all orders to consider
+ * @in_collapse: whether we are being called from MADV_COLLAPSE
*
* Calculates the intersection of the requested hugepage orders and the allowed
* hugepage orders for the provided vma. Permitted orders are encoded as a set
@@ -286,7 +288,8 @@ static inline
unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
vm_flags_t vm_flags,
unsigned long tva_flags,
- unsigned long orders)
+ unsigned long orders,
+ bool in_collapse)
{
/* Optimization to check if required orders are enabled early. */
if ((tva_flags & TVA_ENFORCE_SYSFS) && vma_is_anonymous(vma)) {
@@ -303,7 +306,7 @@ unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
return 0;
}
- return __thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders);
+ return __thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders, in_collapse);
}
struct thpsize {
@@ -323,7 +326,7 @@ struct thpsize {
* through madvise or prctl.
*/
static inline bool vma_thp_disabled(struct vm_area_struct *vma,
- vm_flags_t vm_flags)
+ vm_flags_t vm_flags, bool in_collapse)
{
/* Are THPs disabled for this VMA? */
if (vm_flags & VM_NOHUGEPAGE)
@@ -331,6 +334,9 @@ static inline bool vma_thp_disabled(struct vm_area_struct *vma,
/* Are THPs disabled for all VMAs in the whole process? */
if (test_bit(MMF_DISABLE_THP_COMPLETELY, &vma->vm_mm->flags))
return true;
+ /* Are we being called from madvise_collapse? */
+ if (in_collapse)
+ return false;
/*
* Are THPs disabled only for VMAs where we didn't get an explicit
* advise to use them?
@@ -537,7 +543,8 @@ static inline unsigned long thp_vma_suitable_orders(struct vm_area_struct *vma,
static inline unsigned long thp_vma_allowable_orders(struct vm_area_struct *vma,
vm_flags_t vm_flags,
unsigned long tva_flags,
- unsigned long orders)
+ unsigned long orders,
+ bool in_collapse)
{
return 0;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2b4ea5a2ce7d..ecf48a922530 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -100,7 +100,8 @@ static inline bool file_thp_enabled(struct vm_area_struct *vma)
unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
vm_flags_t vm_flags,
unsigned long tva_flags,
- unsigned long orders)
+ unsigned long orders,
+ bool in_collapse)
{
bool smaps = tva_flags & TVA_SMAPS;
bool in_pf = tva_flags & TVA_IN_PF;
@@ -122,7 +123,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
if (!vma->vm_mm) /* vdso */
return 0;
- if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags))
+ if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags, in_collapse))
return 0;
/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 2c9008246785..ba707ce5a00a 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -475,7 +475,7 @@ void khugepaged_enter_vma(struct vm_area_struct *vma,
if (!test_bit(MMF_VM_HUGEPAGE, &vma->vm_mm->flags) &&
hugepage_pmd_enabled()) {
if (thp_vma_allowable_order(vma, vm_flags, TVA_ENFORCE_SYSFS,
- PMD_ORDER))
+ PMD_ORDER, 0))
__khugepaged_enter(vma->vm_mm);
}
}
@@ -932,7 +932,7 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
return SCAN_ADDRESS_RANGE;
- if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER))
+ if (!thp_vma_allowable_order(vma, vma->vm_flags, tva_flags, PMD_ORDER, 1))
return SCAN_VMA_CHECK;
/*
* Anon VMA expected, the address may be unmapped then
@@ -1534,7 +1534,7 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
* and map it by a PMD, regardless of sysfs THP settings. As such, let's
* analogously elide sysfs THP settings here.
*/
- if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER))
+ if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER, 1))
return SCAN_VMA_CHECK;
/* Keep pmd pgtable for uffd-wp; see comment in retract_page_tables() */
@@ -2432,7 +2432,7 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages, int *result,
break;
}
if (!thp_vma_allowable_order(vma, vma->vm_flags,
- TVA_ENFORCE_SYSFS, PMD_ORDER)) {
+ TVA_ENFORCE_SYSFS, PMD_ORDER, 0)) {
skip:
progress++;
continue;
@@ -2766,7 +2766,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
BUG_ON(vma->vm_start > start);
BUG_ON(vma->vm_end < end);
- if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER))
+ if (!thp_vma_allowable_order(vma, vma->vm_flags, 0, PMD_ORDER, 1))
return -EINVAL;
cc = kmalloc(sizeof(*cc), GFP_KERNEL);
diff --git a/mm/memory.c b/mm/memory.c
index 92fd18a5d8d1..da5ab2dc1797 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4370,7 +4370,7 @@ static struct folio *alloc_swap_folio(struct vm_fault *vmf)
* and suitable for swapping THP.
*/
orders = thp_vma_allowable_orders(vma, vma->vm_flags,
- TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1);
+ TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1, 0);
orders = thp_vma_suitable_orders(vma, vmf->address, orders);
orders = thp_swap_suitable_orders(swp_offset(entry),
vmf->address, orders);
@@ -4918,7 +4918,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
* the faulting address and still be fully contained in the vma.
*/
orders = thp_vma_allowable_orders(vma, vma->vm_flags,
- TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1);
+ TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1, 0);
orders = thp_vma_suitable_orders(vma, vmf->address, orders);
if (!orders)
@@ -5188,7 +5188,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
* PMD mappings, but PTE-mapped THP are fine. So let's simply refuse any
* PMD mappings if THPs are disabled.
*/
- if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags))
+ if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags, 0))
return ret;
if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
@@ -6109,7 +6109,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
retry_pud:
if (pud_none(*vmf.pud) &&
thp_vma_allowable_order(vma, vm_flags,
- TVA_IN_PF | TVA_ENFORCE_SYSFS, PUD_ORDER)) {
+ TVA_IN_PF | TVA_ENFORCE_SYSFS, PUD_ORDER, 0)) {
ret = create_huge_pud(&vmf);
if (!(ret & VM_FAULT_FALLBACK))
return ret;
@@ -6144,7 +6144,7 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
if (pmd_none(*vmf.pmd) &&
thp_vma_allowable_order(vma, vm_flags,
- TVA_IN_PF | TVA_ENFORCE_SYSFS, PMD_ORDER)) {
+ TVA_IN_PF | TVA_ENFORCE_SYSFS, PMD_ORDER, 0)) {
ret = create_huge_pmd(&vmf);
if (!(ret & VM_FAULT_FALLBACK))
return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index e6cdfda08aed..1960cf87b077 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1816,7 +1816,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
unsigned int global_orders;
- if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags)))
+ if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags, 0)))
return 0;
global_orders = shmem_huge_global_enabled(inode, index, write_end,