From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: akpm@linux-foundation.org, hughd@google.com, david@redhat.com,
Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
dev.jain@arm.com, ziy@nvidia.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/2] mm: shmem: disallow hugepages if the system-wide shmem THP sysfs settings are disabled
Date: Sat, 7 Jun 2025 13:14:41 +0100
Message-ID: <b6ae32e5-60e0-44dd-a1e8-37c162d04ed3@lucifer.local>
In-Reply-To: <39d7617a6142c6091f233357171c5793e0992d36.1749109709.git.baolin.wang@linux.alibaba.com>
On Thu, Jun 05, 2025 at 04:00:59PM +0800, Baolin Wang wrote:
> MADV_COLLAPSE will ignore the system-wide shmem THP sysfs settings, which
> means that even though we have disabled the shmem THP configuration,
> MADV_COLLAPSE will still attempt to collapse into a shmem THP. This
> violates the rule we have agreed upon: never means never.
Ugh, it's unfortunate that we have separate shmem logic, split between
huge_memory.c and shmem.c too :)) Again, not your fault, just a general
moan about existing stuff :P
>
> Another rule for madvise, referring to David's suggestion: "allowing for
> collapsing in a VM without VM_HUGEPAGE in the 'madvise' mode would be fine".
Hmm, I'm not sure this is actually enforced anywhere, is it? I may have
missed something here, however.
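If I'm reading the series right, the SHMEM_HUGE_ADVISE hunk further down
is where that rule would land, roughly (a sketch paraphrasing the diff
below, so take the exact naming with a pinch of salt):

	case SHMEM_HUGE_ADVISE:
		/* MADV_COLLAPSE doesn't enforce sysfs, so shmem_huge_force
		 * is set and we allow collapse even without VM_HUGEPAGE;
		 * khugepaged still requires VM_HUGEPAGE here. */
		if (shmem_huge_force || (vm_flags & VM_HUGEPAGE))
			return maybe_pmd_order;

If that's the intent, it'd be worth spelling it out in the commit message.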
>
> Then the current strategy is:
>
> For shmem, if none of always, madvise, within_size, and inherit have enabled
> PMD-sized THP, then MADV_COLLAPSE will be prohibited from collapsing PMD-sized THP.
Again, is this just MADV_COLLAPSE? Surely this is a general change?
We should be clear that we are not explicitly limiting ourselves to
MADV_COLLAPSE here.
You should clearly indicate that the MADV_COLLAPSE case DOESN'T set
TVA_ENFORCE_SYSFS, as that's the key difference here.
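For reference, the call site I have in mind is, from memory (so the exact
shape in mm/khugepaged.c may differ slightly):

	/* khugepaged enforces the sysfs settings; MADV_COLLAPSE does not */
	if (!thp_vma_allowable_order(vma, vma->vm_flags,
				     cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0,
				     PMD_ORDER))
		return SCAN_VMA_CHECK;

So anything keyed off TVA_ENFORCE_SYSFS is what distinguishes the
MADV_COLLAPSE path from khugepaged.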
>
> For tmpfs, if the mount option is set with the 'huge=never' parameter, then
> MADV_COLLAPSE will be prohibited from collapsing PMD-sized THP.
>
> Acked-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> ---
>  mm/huge_memory.c | 2 +-
>  mm/shmem.c       | 6 +++---
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d3e66136e41a..a8cfa37cae72 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -166,7 +166,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
>  	 * own flags.
>  	 */
>  	if (!in_pf && shmem_file(vma->vm_file))
> -		return shmem_allowable_huge_orders(file_inode(vma->vm_file),
> +		return orders & shmem_allowable_huge_orders(file_inode(vma->vm_file),
>  						   vma, vma->vm_pgoff, 0,
>  						   !enforce_sysfs);
Did you mean to do &&?
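(To spell out the distinction, since orders is a bitmask - a sketch, not
tree code:

	orders & mask;	/* keeps only the orders both sides permit */
	orders && mask;	/* collapses to 0 or 1, losing the mask entirely */

so which is right depends on whether the intent here is a filtered mask or
a boolean.)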
Also, does this achieve what you intend? Is it even necessary? The changes
in patch 1/2 enforce the global settings, and before we reach this code in
__thp_vma_allowable_orders():
unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
					 unsigned long vm_flags,
					 unsigned long tva_flags,
					 unsigned long orders)
{
	... (no early exits) ...

	orders &= supported_orders;
	if (!orders)
		return 0;
	...
}
So if orders == 0 due to the changes in thp_vma_allowable_orders(), which
is the only caller of __thp_vma_allowable_orders(), then we _always_ exit
early here, before we ever reach this shmem_allowable_huge_orders() code.
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 4b42419ce6b2..9af45d4e27e6 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -625,7 +625,7 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
>  		return 0;
>  	if (shmem_huge == SHMEM_HUGE_DENY)
>  		return 0;
> -	if (shmem_huge_force || shmem_huge == SHMEM_HUGE_FORCE)
> +	if (shmem_huge == SHMEM_HUGE_FORCE)
>  		return maybe_pmd_order;
>
> /*
> @@ -660,7 +660,7 @@ static unsigned int shmem_huge_global_enabled(struct inode *inode, pgoff_t index
>
>  		fallthrough;
>  	case SHMEM_HUGE_ADVISE:
> -		if (vm_flags & VM_HUGEPAGE)
> +		if (shmem_huge_force || (vm_flags & VM_HUGEPAGE))
>  			return maybe_pmd_order;
>  		fallthrough;
>  	default:
> @@ -1790,7 +1790,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
>  	/* Allow mTHP that will be fully within i_size. */
>  	mask |= shmem_get_orders_within_size(inode, within_size_orders, index, 0);
>
> -	if (vm_flags & VM_HUGEPAGE)
> +	if (shmem_huge_force || (vm_flags & VM_HUGEPAGE))
>  		mask |= READ_ONCE(huge_shmem_orders_madvise);
I'm also not sure these changes are necessary: the only path that can set
shmem_huge_force is __thp_vma_allowable_orders() ->
shmem_allowable_huge_orders() -> shmem_huge_global_enabled(), and then only
if !(tva_flags & TVA_ENFORCE_SYSFS). As stated above, we already cover off
this case by early exiting __thp_vma_allowable_orders() when orders == 0,
as established in patch 1/2.
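That is, the chain is roughly (a sketch from memory, parameter names may
differ):

	__thp_vma_allowable_orders(vma, vm_flags, tva_flags, orders)
	  -> shmem_allowable_huge_orders(inode, vma, index, 0,
					 /* shmem_huge_force = */ !enforce_sysfs)
	    -> shmem_huge_global_enabled(..., shmem_huge_force, ...)

i.e. shmem_huge_force is just the inverted TVA_ENFORCE_SYSFS bit propagated
down.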
>
>  	if (global_orders > 0)
> --
> 2.43.5
>