* [PATCH v4 0/9] mm: thp: always enable mTHP support
@ 2026-05-01 19:18 Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 1/9] docs: tmpfs: remove implementation detail reference Luiz Capitulino
` (9 more replies)
0 siblings, 10 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
Today, if an architecture implements has_transparent_hugepage() and the CPU
lacks support for PMD-sized pages, the THP code disables all THP, including
mTHP. In addition, the kernel lacks a well-defined API to check for
PMD-sized page support. It currently relies on has_transparent_hugepage()
and thp_disabled_by_hw(), but these helpers have loosely defined semantics
and are tied to THP support.
This series addresses both issues by introducing a new, well-defined API
to query PMD-sized page support: pgtable_has_pmd_leaves(). Using this
new helper, we ensure that mTHP remains enabled even when the
architecture or CPU doesn't support PMD-sized pages.
Thanks to David Hildenbrand for suggesting this improvement and for
providing guidance (all bugs and misconceptions are mine).
This applies to Linus tree 08d0d3466664 ("Merge tag 'net-7.1-rc2'
of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net")
NOTE: I used Claude Code Opus 4.6 to *review* the series before
posting. It did find one issue where a pgtable_has_pmd_leaves()
check was missing when assigning huge_shmem_orders_inherit in
shmem_init().
v4
--
- Use static key for pgtable_has_pmd_leaves() API (Lance)
- Moved shmem pgtable_has_pmd_leaves() check to
shmem_allowable_huge_orders() (Baolin)
- Default pgtable_has_pmd_leaves() implementation to
IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE) (Zi)
- Dropped patch “mm: thp: x86: cleanup PSE feature bit usage” (Dave)
v3
--
- Rebased on top of latest Linus tree
- Removed i915 patch as driver dropped has_transparent_hugepage() usage
- Moved init_arch_has_pmd_leaves() call in start_kernel() to avoid conflict
with early_param handlers clearing CPU feature flags
- Fixed build error with CONFIG_MMU=n (kernel test robot)
- Fixed huge_anon_orders_inherit default setting when !pgtable_has_pmd_leaves() (Baolin)
- Small commit changelog improvements
v2
--
- Added support for always enabling mTHPs for shmem (Baolin)
- Improved commit changelogs & added Reviewed-by tags
v1
--
- Call init_arch_has_pmd_leaves() from start_kernel()
- Keep pgtable_has_pmd_leaves() calls tied to CONFIG_TRANSPARENT_HUGEPAGE (David)
- Clear PUD_ORDER when clearing PMD_ORDER (David)
- Small changelog improvements (David)
- Rebased on top of latest mm-new
Luiz Capitulino (9):
docs: tmpfs: remove implementation detail reference
mm: introduce pgtable_has_pmd_leaves()
drivers: dax: use pgtable_has_pmd_leaves()
drivers: nvdimm: use pgtable_has_pmd_leaves()
mm: debug_vm_pgtable: use pgtable_has_pmd_leaves()
mm: shmem: drop has_transparent_hugepage() usage
treewide: introduce arch_has_pmd_leaves()
mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves()
mm: thp: always enable mTHP support
Documentation/filesystems/tmpfs.rst | 5 ++--
arch/mips/include/asm/pgtable.h | 4 +--
arch/mips/mm/tlb-r4k.c | 4 +--
arch/powerpc/include/asm/book3s/64/hash-4k.h | 2 +-
arch/powerpc/include/asm/book3s/64/hash-64k.h | 2 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 ++++----
arch/powerpc/include/asm/book3s/64/radix.h | 2 +-
arch/powerpc/mm/book3s64/hash_pgtable.c | 4 +--
arch/s390/include/asm/pgtable.h | 4 +--
arch/x86/include/asm/pgtable.h | 4 +--
drivers/dax/dax-private.h | 2 +-
drivers/nvdimm/pfn_devs.c | 6 +++--
include/linux/huge_mm.h | 7 ------
include/linux/pgtable.h | 19 ++++++++++++--
init/main.c | 1 +
mm/debug_vm_pgtable.c | 20 +++++++--------
mm/huge_memory.c | 25 +++++++++++++------
mm/memory.c | 11 +++++++-
mm/shmem.c | 21 +++++++++-------
19 files changed, 93 insertions(+), 60 deletions(-)
--
2.53.0
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v4 1/9] docs: tmpfs: remove implementation detail reference
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 2/9] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
` (8 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
The tmpfs.rst doc references the has_transparent_hugepage() helper, which
is an implementation detail in the kernel and not relevant for users
wishing to properly configure THP support for tmpfs. Remove it.
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
Documentation/filesystems/tmpfs.rst | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
index d677e0428c3f..46fc986c3388 100644
--- a/Documentation/filesystems/tmpfs.rst
+++ b/Documentation/filesystems/tmpfs.rst
@@ -109,9 +109,8 @@ noswap Disables swap. Remounts must respect the original settings.
====== ===========================================================
tmpfs also supports Transparent Huge Pages which requires a kernel
-configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge supported for
-your system (has_transparent_hugepage(), which is architecture specific).
-The mount options for this are:
+configured with CONFIG_TRANSPARENT_HUGEPAGE and with huge pages
+supported for your system. The mount options for this are:
================ ==============================================================
huge=never Do not allocate huge pages. This is the default.
--
2.53.0
* [PATCH v4 2/9] mm: introduce pgtable_has_pmd_leaves()
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 1/9] docs: tmpfs: remove implementation detail reference Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 3/9] drivers: dax: use pgtable_has_pmd_leaves() Luiz Capitulino
` (7 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
Currently, we have two helpers that check for PMD-sized pages but have
different names and slightly different semantics:
- has_transparent_hugepage(): the name suggests it checks if THP is
enabled, but when CONFIG_TRANSPARENT_HUGEPAGE=y and the architecture
implements this helper, it actually checks if the CPU supports
PMD-sized pages
- thp_disabled_by_hw(): the name suggests it checks if THP is disabled
by the hardware, but it just returns a cached value acquired with
has_transparent_hugepage(). This helper is used in fast paths
This commit introduces a new helper called pgtable_has_pmd_leaves()
which is intended to replace both has_transparent_hugepage() and
thp_disabled_by_hw(). pgtable_has_pmd_leaves() has very clear semantics:
it returns true if the CPU supports PMD-sized pages and false otherwise.
It always returns a cached value, so it can be used in fast paths.
The new helper requires an initialization step which is performed by
init_arch_has_pmd_leaves(). We call init_arch_has_pmd_leaves() early
during boot in start_kernel() right after parse_early_param() but before
parse_args(). This allows early_param() handlers to change CPU flags if
needed (e.g. parse_memopt() on x86-32) while also allowing users to use
the API from __setup() handlers.
The next commits will convert users of both has_transparent_hugepage()
and thp_disabled_by_hw() to pgtable_has_pmd_leaves().
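As a userspace analogy (not kernel code), the probe-once-then-cache pattern
described above can be sketched as follows. probe_cpu_pmd_leaves() is a
hypothetical stand-in for arch_has_pmd_leaves(), and a plain boolean stands
in for the static key, which in the kernel patches the branch itself rather
than reading a variable:

```c
#include <stdbool.h>

/*
 * Sketch of the pattern: probe the capability once at init time,
 * cache the answer, and have every later query read only the
 * cached value.  The kernel uses a static key instead of a boolean.
 */

/* Stand-in for arch_has_pmd_leaves(); real code queries CPU features. */
static bool probe_cpu_pmd_leaves(void)
{
	return false;	/* pretend this CPU lacks PMD-sized pages */
}

/* Default-true, mirroring DEFINE_STATIC_KEY_TRUE(). */
static bool pmd_leaves_supported = true;

/* Analogy of init_arch_has_pmd_leaves(): called once, early in boot. */
static void init_pmd_leaves(void)
{
	if (!probe_cpu_pmd_leaves())
		pmd_leaves_supported = false;	/* static_branch_disable() */
}

/* Analogy of pgtable_has_pmd_leaves(): cheap enough for fast paths. */
static bool has_pmd_leaves(void)
{
	return pmd_leaves_supported;
}
```

Because the probe runs exactly once, later callers never repeat the hardware
check, which is what makes the helper safe to call in fast paths.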
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
include/linux/pgtable.h | 15 +++++++++++++++
init/main.c | 1 +
mm/memory.c | 9 +++++++++
3 files changed, 25 insertions(+)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index cdd68ed3ae1a..b365be3516bf 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -2243,6 +2243,21 @@ static inline const char *pgtable_level_to_str(enum pgtable_level level)
}
}
+#ifdef CONFIG_MMU
+DECLARE_STATIC_KEY_TRUE(__arch_has_pmd_leaves_key);
+static inline bool pgtable_has_pmd_leaves(void)
+{
+ return static_branch_likely(&__arch_has_pmd_leaves_key);
+}
+void __init init_arch_has_pmd_leaves(void);
+#else
+static inline bool pgtable_has_pmd_leaves(void)
+{
+ return false;
+}
+static inline void __init init_arch_has_pmd_leaves(void) { }
+#endif
+
#endif /* !__ASSEMBLY__ */
#if !defined(MAX_POSSIBLE_PHYSMEM_BITS) && !defined(CONFIG_64BIT)
diff --git a/init/main.c b/init/main.c
index 96f93bb06c49..eea7c5bdddf7 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1053,6 +1053,7 @@ void start_kernel(void)
print_kernel_cmdline(saved_command_line);
/* parameters may set static keys */
parse_early_param();
+ init_arch_has_pmd_leaves();
after_dashes = parse_args("Booting kernel",
static_command_line, __start___param,
__stop___param - __start___param,
diff --git a/mm/memory.c b/mm/memory.c
index ea6568571131..90b2d9e84320 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -164,6 +164,15 @@ __setup("norandmaps", disable_randmaps);
unsigned long highest_memmap_pfn __read_mostly;
+DEFINE_STATIC_KEY_TRUE(__arch_has_pmd_leaves_key);
+EXPORT_SYMBOL(__arch_has_pmd_leaves_key);
+
+void __init init_arch_has_pmd_leaves(void)
+{
+ if (!has_transparent_hugepage())
+ static_branch_disable(&__arch_has_pmd_leaves_key);
+}
+
void mm_trace_rss_stat(struct mm_struct *mm, int member)
{
trace_rss_stat(mm, member);
--
2.53.0
* [PATCH v4 3/9] drivers: dax: use pgtable_has_pmd_leaves()
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 1/9] docs: tmpfs: remove implementation detail reference Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 2/9] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 4/9] drivers: nvdimm: " Luiz Capitulino
` (6 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
dax_align_valid() uses has_transparent_hugepage() to check if PMD-sized
pages are supported; use pgtable_has_pmd_leaves() instead.
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
drivers/dax/dax-private.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index 81e4af49e39c..35744ff6592a 100644
--- a/drivers/dax/dax-private.h
+++ b/drivers/dax/dax-private.h
@@ -123,7 +123,7 @@ static inline bool dax_align_valid(unsigned long align)
{
if (align == PUD_SIZE && IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD))
return true;
- if (align == PMD_SIZE && has_transparent_hugepage())
+ if (align == PMD_SIZE && pgtable_has_pmd_leaves())
return true;
if (align == PAGE_SIZE)
return true;
--
2.53.0
* [PATCH v4 4/9] drivers: nvdimm: use pgtable_has_pmd_leaves()
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
` (2 preceding siblings ...)
2026-05-01 19:18 ` [PATCH v4 3/9] drivers: dax: use pgtable_has_pmd_leaves() Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 5/9] mm: debug_vm_pgtable: " Luiz Capitulino
` (5 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
nd_pfn_supported_alignments() and nd_pfn_default_alignment() use
has_transparent_hugepage() to check if THP is supported with PMD-sized
pages. Use pgtable_has_pmd_leaves() instead. Also, check for
IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) to preserve the current
implementation semantics.
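For reference, IS_ENABLED() resolves to a constant 0 or 1 during
preprocessing, so the added check costs nothing at runtime when THP is
built out. A minimal userspace re-creation of the kernel's kconfig.h
machinery, with hypothetical CONFIG_ names used purely for illustration:

```c
/*
 * Userspace re-creation of IS_ENABLED() from include/linux/kconfig.h:
 * expands to 1 when the option is defined to 1 and to 0 when it is
 * not defined at all, entirely at preprocessing time.
 */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

/* Hypothetical options for illustration only. */
#define CONFIG_DEMO_THP 1
/* CONFIG_DEMO_OTHER is intentionally left undefined. */
```

When the option is defined to 1, the pasted `__ARG_PLACEHOLDER_1` injects an
extra argument so `__take_second_arg` picks 1; when it is undefined, no extra
argument appears and the macro picks 0.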
Acked-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
drivers/nvdimm/pfn_devs.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index 8fa9c16aba7e..457eb54e7ab6 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -94,7 +94,8 @@ static unsigned long *nd_pfn_supported_alignments(unsigned long *alignments)
alignments[0] = PAGE_SIZE;
- if (has_transparent_hugepage()) {
+ if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+ pgtable_has_pmd_leaves()) {
alignments[1] = HPAGE_PMD_SIZE;
if (has_transparent_pud_hugepage())
alignments[2] = HPAGE_PUD_SIZE;
@@ -109,7 +110,8 @@ static unsigned long *nd_pfn_supported_alignments(unsigned long *alignments)
static unsigned long nd_pfn_default_alignment(void)
{
- if (has_transparent_hugepage())
+ if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+ pgtable_has_pmd_leaves())
return HPAGE_PMD_SIZE;
return PAGE_SIZE;
}
--
2.53.0
* [PATCH v4 5/9] mm: debug_vm_pgtable: use pgtable_has_pmd_leaves()
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
` (3 preceding siblings ...)
2026-05-01 19:18 ` [PATCH v4 4/9] drivers: nvdimm: " Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 6/9] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
` (4 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
debug_vm_pgtable calls has_transparent_hugepage() in multiple places to
check if PMD-sized pages are supported; use pgtable_has_pmd_leaves()
instead.
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
mm/debug_vm_pgtable.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 23dc3ee09561..bd53417dde2f 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -177,7 +177,7 @@ static void __init pmd_basic_tests(struct pgtable_debug_args *args, int idx)
unsigned long val = idx, *ptr = &val;
pmd_t pmd;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
pr_debug("Validating PMD basic (%pGv)\n", ptr);
@@ -222,7 +222,7 @@ static void __init pmd_advanced_tests(struct pgtable_debug_args *args)
pmd_t pmd;
unsigned long vaddr = args->vaddr;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
page = (args->pmd_pfn != ULONG_MAX) ? pfn_to_page(args->pmd_pfn) : NULL;
@@ -283,7 +283,7 @@ static void __init pmd_leaf_tests(struct pgtable_debug_args *args)
{
pmd_t pmd;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
pr_debug("Validating PMD leaf\n");
@@ -688,7 +688,7 @@ static void __init pmd_protnone_tests(struct pgtable_debug_args *args)
if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
return;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
pr_debug("Validating PMD protnone\n");
@@ -737,7 +737,7 @@ static void __init pmd_soft_dirty_tests(struct pgtable_debug_args *args)
if (!pgtable_supports_soft_dirty())
return;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
pr_debug("Validating PMD soft dirty\n");
@@ -754,7 +754,7 @@ static void __init pmd_leaf_soft_dirty_tests(struct pgtable_debug_args *args)
!IS_ENABLED(CONFIG_ARCH_ENABLE_THP_MIGRATION))
return;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
pr_debug("Validating PMD swap soft dirty\n");
@@ -825,7 +825,7 @@ static void __init pmd_softleaf_tests(struct pgtable_debug_args *args)
swp_entry_t arch_entry;
pmd_t pmd1, pmd2;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
pr_debug("Validating PMD swap\n");
@@ -906,7 +906,7 @@ static void __init pmd_thp_tests(struct pgtable_debug_args *args)
{
pmd_t pmd;
- if (!has_transparent_hugepage())
+ if (!pgtable_has_pmd_leaves())
return;
pr_debug("Validating PMD based THP\n");
@@ -997,7 +997,7 @@ static void __init destroy_args(struct pgtable_debug_args *args)
}
if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
- has_transparent_hugepage() &&
+ pgtable_has_pmd_leaves() &&
args->pmd_pfn != ULONG_MAX) {
debug_vm_pgtable_free_huge_page(args, args->pmd_pfn, HPAGE_PMD_ORDER);
args->pmd_pfn = ULONG_MAX;
@@ -1249,7 +1249,7 @@ static int __init init_args(struct pgtable_debug_args *args)
}
if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
- has_transparent_hugepage()) {
+ pgtable_has_pmd_leaves()) {
page = debug_vm_pgtable_alloc_huge_page(args, HPAGE_PMD_ORDER);
if (page) {
args->pmd_pfn = page_to_pfn(page);
--
2.53.0
* [PATCH v4 6/9] mm: shmem: drop has_transparent_hugepage() usage
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
` (4 preceding siblings ...)
2026-05-01 19:18 ` [PATCH v4 5/9] mm: debug_vm_pgtable: " Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 7/9] treewide: introduce arch_has_pmd_leaves() Luiz Capitulino
` (3 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
Shmem uses has_transparent_hugepage() in the following ways:
- shmem_parse_one() and shmem_parse_huge(): Check if THP is built-in and
if the CPU supports PMD-sized pages
- shmem_init(): Since the CONFIG_TRANSPARENT_HUGEPAGE guard is outside
the code block calling has_transparent_hugepage(), the
has_transparent_hugepage() call is exclusively checking if the CPU
supports PMD-sized pages
While it's necessary to check if CONFIG_TRANSPARENT_HUGEPAGE is enabled
in all cases, shmem can determine mTHP size support at folio allocation
time. Therefore, drop has_transparent_hugepage() usage while keeping the
CONFIG_TRANSPARENT_HUGEPAGE checks.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Acked-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
mm/shmem.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 3b5dc21b323c..1948d73fb1e3 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -689,7 +689,7 @@ static int shmem_parse_huge(const char *str)
else
return -EINVAL;
- if (!has_transparent_hugepage() &&
+ if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
huge != SHMEM_HUGE_NEVER && huge != SHMEM_HUGE_DENY)
return -EINVAL;
@@ -4656,8 +4656,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
case Opt_huge:
ctx->huge = result.uint_32;
if (ctx->huge != SHMEM_HUGE_NEVER &&
- !(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
- has_transparent_hugepage()))
+ !IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
goto unsupported_parameter;
ctx->seen |= SHMEM_SEEN_HUGE;
break;
@@ -5449,7 +5448,7 @@ void __init shmem_init(void)
#endif
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
- if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)
+ if (shmem_huge > SHMEM_HUGE_DENY)
SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
else
shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
--
2.53.0
* [PATCH v4 7/9] treewide: introduce arch_has_pmd_leaves()
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
` (5 preceding siblings ...)
2026-05-01 19:18 ` [PATCH v4 6/9] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 8/9] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves() Luiz Capitulino
` (2 subsequent siblings)
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
Now that all the has_transparent_hugepage() callers have been converted
to pgtable_has_pmd_leaves(), this commit does two things:
1. Rename has_transparent_hugepage() arch implementations to
arch_has_pmd_leaves(), since that's what the helper checks for
2. Introduce the default implementation of arch_has_pmd_leaves() as
IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE). This means that if
the arch doesn't implement arch_has_pmd_leaves() we default to checking
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE as a way to determine if
PMD-sized pages are supported
Note that arch_has_pmd_leaves() is supposed to be called only by
init_arch_has_pmd_leaves(). The remaining exception is hugepage_init(),
which will be converted in a future commit.
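The override mechanism used here is the usual one for arch hooks: an
architecture both defines the function and #defines its own name, so the
generic header's #ifndef default is never used. A compilable userspace
sketch of that pattern (names and return values are illustrative only):

```c
/*
 * "Arch header": an architecture that provides its own probe defines
 * both the function and a macro of the same name.
 */
static int arch_has_pmd_leaves(void)
{
	return 0;	/* e.g. the required CPU feature bit is absent */
}
#define arch_has_pmd_leaves arch_has_pmd_leaves

/*
 * "Generic header": the compile-time default is used only when the
 * architecture did not define the macro above.  In the series this
 * default is IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE).
 */
#ifndef arch_has_pmd_leaves
#define arch_has_pmd_leaves() 1		/* stand-in for IS_ENABLED(...) */
#endif
```

With the macro defined, the #ifndef branch is skipped and callers reach the
architecture's function; without it, they get the compile-time constant.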
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
arch/mips/include/asm/pgtable.h | 4 ++--
arch/mips/mm/tlb-r4k.c | 4 ++--
arch/powerpc/include/asm/book3s/64/hash-4k.h | 2 +-
arch/powerpc/include/asm/book3s/64/hash-64k.h | 2 +-
arch/powerpc/include/asm/book3s/64/pgtable.h | 10 +++++-----
arch/powerpc/include/asm/book3s/64/radix.h | 2 +-
arch/powerpc/mm/book3s64/hash_pgtable.c | 4 ++--
arch/s390/include/asm/pgtable.h | 4 ++--
arch/x86/include/asm/pgtable.h | 4 ++--
include/linux/pgtable.h | 4 ++--
mm/huge_memory.c | 2 +-
mm/memory.c | 2 +-
12 files changed, 22 insertions(+), 22 deletions(-)
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index fa7b935f947c..a97b788315e2 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -615,8 +615,8 @@ unsigned long io_remap_pfn_range_pfn(unsigned long pfn, unsigned long size);
/* We don't have hardware dirty/accessed bits, generic_pmdp_establish is fine.*/
#define pmdp_establish generic_pmdp_establish
-#define has_transparent_hugepage has_transparent_hugepage
-extern int has_transparent_hugepage(void);
+#define arch_has_pmd_leaves arch_has_pmd_leaves
+extern int arch_has_pmd_leaves(void);
static inline int pmd_trans_huge(pmd_t pmd)
{
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index 24fe85fa169d..c423b5784337 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -434,7 +434,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-int has_transparent_hugepage(void)
+int arch_has_pmd_leaves(void)
{
static unsigned int mask = -1;
@@ -450,7 +450,7 @@ int has_transparent_hugepage(void)
}
return mask == PM_HUGE_MASK;
}
-EXPORT_SYMBOL(has_transparent_hugepage);
+EXPORT_SYMBOL(arch_has_pmd_leaves);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 8e5bd9902bed..6744c2287199 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -165,7 +165,7 @@ extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pmd_t *pmdp);
-extern int hash__has_transparent_hugepage(void);
+extern int hash__arch_has_pmd_leaves(void);
#endif
#endif /* !__ASSEMBLER__ */
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 7deb3a66890b..9392aba5e5dc 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -278,7 +278,7 @@ extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pmd_t *pmdp);
-extern int hash__has_transparent_hugepage(void);
+extern int hash__arch_has_pmd_leaves(void);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLER__ */
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index e67e64ac6e8c..b6629c041e75 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1121,14 +1121,14 @@ static inline void update_mmu_cache_pud(struct vm_area_struct *vma,
{
}
-extern int hash__has_transparent_hugepage(void);
-static inline int has_transparent_hugepage(void)
+extern int hash__arch_has_pmd_leaves(void);
+static inline int arch_has_pmd_leaves(void)
{
if (radix_enabled())
- return radix__has_transparent_hugepage();
- return hash__has_transparent_hugepage();
+ return radix__arch_has_pmd_leaves();
+ return hash__arch_has_pmd_leaves();
}
-#define has_transparent_hugepage has_transparent_hugepage
+#define arch_has_pmd_leaves arch_has_pmd_leaves
static inline int has_transparent_pud_hugepage(void)
{
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index da954e779744..c884a119cbd9 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -298,7 +298,7 @@ extern pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm,
pud_t radix__pudp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr, pud_t *pudp);
-static inline int radix__has_transparent_hugepage(void)
+static inline int radix__arch_has_pmd_leaves(void)
{
/* For radix 2M at PMD level means thp */
if (mmu_psize_defs[MMU_PAGE_2M].shift == PMD_SHIFT)
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index d9b5b751d7b7..88a4a2eab513 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -391,7 +391,7 @@ pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
return old_pmd;
}
-int hash__has_transparent_hugepage(void)
+int hash__arch_has_pmd_leaves(void)
{
if (!mmu_has_feature(MMU_FTR_16M_PAGE))
@@ -420,7 +420,7 @@ int hash__has_transparent_hugepage(void)
return 1;
}
-EXPORT_SYMBOL_GPL(hash__has_transparent_hugepage);
+EXPORT_SYMBOL_GPL(hash__arch_has_pmd_leaves);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 2c6cee8241e0..33b165dbf3db 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1799,8 +1799,8 @@ static inline int pmd_trans_huge(pmd_t pmd)
return pmd_leaf(pmd);
}
-#define has_transparent_hugepage has_transparent_hugepage
-static inline int has_transparent_hugepage(void)
+#define arch_has_pmd_leaves arch_has_pmd_leaves
+static inline int arch_has_pmd_leaves(void)
{
return cpu_has_edat1() ? 1 : 0;
}
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 2187e9cfcefa..2edd6c9d789c 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -306,8 +306,8 @@ static inline int pud_trans_huge(pud_t pud)
}
#endif
-#define has_transparent_hugepage has_transparent_hugepage
-static inline int has_transparent_hugepage(void)
+#define arch_has_pmd_leaves arch_has_pmd_leaves
+static inline int arch_has_pmd_leaves(void)
{
return boot_cpu_has(X86_FEATURE_PSE);
}
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index b365be3516bf..3d7eeb50c183 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -2273,8 +2273,8 @@ static inline void __init init_arch_has_pmd_leaves(void) { }
#endif
#endif
-#ifndef has_transparent_hugepage
-#define has_transparent_hugepage() IS_BUILTIN(CONFIG_TRANSPARENT_HUGEPAGE)
+#ifndef arch_has_pmd_leaves
+#define arch_has_pmd_leaves() IS_ENABLED(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE)
#endif
#ifndef has_transparent_pud_hugepage
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 970e077019b7..4da10e94bbb6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -969,7 +969,7 @@ static int __init hugepage_init(void)
int err;
struct kobject *hugepage_kobj;
- if (!has_transparent_hugepage()) {
+ if (!arch_has_pmd_leaves()) {
transparent_hugepage_flags = 1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED;
return -EINVAL;
}
diff --git a/mm/memory.c b/mm/memory.c
index 90b2d9e84320..c62fce83b8d0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -169,7 +169,7 @@ EXPORT_SYMBOL(__arch_has_pmd_leaves_key);
void __init init_arch_has_pmd_leaves(void)
{
- if (!has_transparent_hugepage())
+ if (!arch_has_pmd_leaves())
static_branch_disable(&__arch_has_pmd_leaves_key);
}
--
2.53.0
* [PATCH v4 8/9] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves()
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
` (6 preceding siblings ...)
2026-05-01 19:18 ` [PATCH v4 7/9] treewide: introduce arch_has_pmd_leaves() Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 9/9] mm: thp: always enable mTHP support Luiz Capitulino
2026-05-03 15:02 ` [PATCH v4 0/9] " Andrew Morton
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
Despite its name, thp_disabled_by_hw() just checks whether the
architecture supports PMD-sized pages. It returns true when
TRANSPARENT_HUGEPAGE_UNSUPPORTED is set in transparent_hugepage_flags,
which only occurs if the architecture implements arch_has_pmd_leaves()
and that function returns false.
Since pgtable_has_pmd_leaves() provides the same semantics, use it
instead.
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
include/linux/huge_mm.h | 7 -------
mm/huge_memory.c | 6 ++----
mm/memory.c | 2 +-
mm/shmem.c | 2 +-
4 files changed, 4 insertions(+), 13 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 2949e5acff35..da048aa06761 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -47,7 +47,6 @@ vm_fault_t vmf_insert_folio_pud(struct vm_fault *vmf, struct folio *folio,
bool write);
enum transparent_hugepage_flag {
- TRANSPARENT_HUGEPAGE_UNSUPPORTED,
TRANSPARENT_HUGEPAGE_FLAG,
TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
TRANSPARENT_HUGEPAGE_DEFRAG_DIRECT_FLAG,
@@ -352,12 +351,6 @@ static inline bool vma_thp_disabled(struct vm_area_struct *vma,
return mm_flags_test(MMF_DISABLE_THP_EXCEPT_ADVISED, vma->vm_mm);
}
-static inline bool thp_disabled_by_hw(void)
-{
- /* If the hardware/firmware marked hugepage support disabled. */
- return transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED);
-}
-
unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
unsigned long len, unsigned long pgoff, unsigned long flags);
unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4da10e94bbb6..32254febe097 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -133,7 +133,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
if (!vma->vm_mm) /* vdso */
return 0;
- if (thp_disabled_by_hw() || vma_thp_disabled(vma, vm_flags, forced_collapse))
+ if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
return 0;
/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
@@ -969,10 +969,8 @@ static int __init hugepage_init(void)
int err;
struct kobject *hugepage_kobj;
- if (!arch_has_pmd_leaves()) {
- transparent_hugepage_flags = 1 << TRANSPARENT_HUGEPAGE_UNSUPPORTED;
+ if (!pgtable_has_pmd_leaves())
return -EINVAL;
- }
/*
* hugepages can't be allocated by the buddy allocator
diff --git a/mm/memory.c b/mm/memory.c
index c62fce83b8d0..483af476d9b2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5522,7 +5522,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct folio *folio, struct page *pa
* PMD mappings if THPs are disabled. As we already have a THP,
* behave as if we are forcing a collapse.
*/
- if (thp_disabled_by_hw() || vma_thp_disabled(vma, vma->vm_flags,
+ if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vma->vm_flags,
/* forced_collapse=*/ true))
return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 1948d73fb1e3..a48f034830cd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1842,7 +1842,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
unsigned int global_orders;
- if (thp_disabled_by_hw() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
+ if (!pgtable_has_pmd_leaves() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
return 0;
global_orders = shmem_huge_global_enabled(inode, index, write_end,
--
2.53.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [PATCH v4 9/9] mm: thp: always enable mTHP support
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
` (7 preceding siblings ...)
2026-05-01 19:18 ` [PATCH v4 8/9] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves() Luiz Capitulino
@ 2026-05-01 19:18 ` Luiz Capitulino
2026-05-03 15:02 ` [PATCH v4 0/9] " Andrew Morton
9 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-01 19:18 UTC (permalink / raw)
To: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang
Cc: corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang, akpm,
lorenzo.stoakes
If PMD-sized pages are not supported on an architecture (i.e. the
arch implements arch_has_pmd_leaves() and it returns false), then the
current code disables all THP, including mTHP.
This commit fixes this by allowing mTHP to always be enabled on all
archs. When PMD-sized pages are not supported, the PMD-sized sysfs
entry won't be created and PMD mappings will be disallowed at
page-fault time.
Similarly, this commit implements the following changes for shmem in
shmem_allowable_huge_orders():
- Drop the pgtable_has_pmd_leaves() check so that mTHP sizes are
considered
- Filter out PMD and PUD orders from allowable orders when
PMD-sized pages are not supported by the CPU
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
---
mm/huge_memory.c | 23 ++++++++++++++++++-----
mm/shmem.c | 14 +++++++++-----
2 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 32254febe097..c1765c8e3dc6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -126,6 +126,14 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
else
supported_orders = THP_ORDERS_ALL_FILE_DEFAULT;
+ if (!pgtable_has_pmd_leaves()) {
+ /*
+ * The CPU doesn't support PMD-sized pages, assume it
+ * doesn't support PUD-sized pages either.
+ */
+ supported_orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+ }
+
orders &= supported_orders;
if (!orders)
return 0;
@@ -133,7 +141,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
if (!vma->vm_mm) /* vdso */
return 0;
- if (!pgtable_has_pmd_leaves() || vma_thp_disabled(vma, vm_flags, forced_collapse))
+ if (vma_thp_disabled(vma, vm_flags, forced_collapse))
return 0;
/* khugepaged doesn't collapse DAX vma, but page fault is fine. */
@@ -848,7 +856,7 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
* disable all other sizes. powerpc's PMD_ORDER isn't a compile-time
* constant so we have to do this here.
*/
- if (!anon_orders_configured)
+ if (!anon_orders_configured && pgtable_has_pmd_leaves())
huge_anon_orders_inherit = BIT(PMD_ORDER);
*hugepage_kobj = kobject_create_and_add("transparent_hugepage", mm_kobj);
@@ -870,6 +878,14 @@ static int __init hugepage_init_sysfs(struct kobject **hugepage_kobj)
}
orders = THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE_DEFAULT;
+ if (!pgtable_has_pmd_leaves()) {
+ /*
+ * The CPU doesn't support PMD-sized pages, assume it
+ * doesn't support PUD-sized pages either.
+ */
+ orders &= ~(BIT(PMD_ORDER) | BIT(PUD_ORDER));
+ }
+
order = highest_order(orders);
while (orders) {
thpsize = thpsize_create(order, *hugepage_kobj);
@@ -969,9 +985,6 @@ static int __init hugepage_init(void)
int err;
struct kobject *hugepage_kobj;
- if (!pgtable_has_pmd_leaves())
- return -EINVAL;
-
/*
* hugepages can't be allocated by the buddy allocator
*/
diff --git a/mm/shmem.c b/mm/shmem.c
index a48f034830cd..23893c2bc2dd 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1840,16 +1840,19 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
unsigned long mask = READ_ONCE(huge_shmem_orders_always);
unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
vm_flags_t vm_flags = vma ? vma->vm_flags : 0;
- unsigned int global_orders;
+ unsigned int global_orders, filter_orders = 0;
- if (!pgtable_has_pmd_leaves() || (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force)))
+ if (vma && vma_thp_disabled(vma, vm_flags, shmem_huge_force))
return 0;
+ if (!pgtable_has_pmd_leaves())
+ filter_orders = BIT(PMD_ORDER) | BIT(PUD_ORDER);
+
global_orders = shmem_huge_global_enabled(inode, index, write_end,
shmem_huge_force, vma, vm_flags);
/* Tmpfs huge pages allocation */
if (!vma || !vma_is_anon_shmem(vma))
- return global_orders;
+ return global_orders & ~filter_orders;
/*
* Following the 'deny' semantics of the top level, force the huge
@@ -1863,7 +1866,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
* means non-PMD sized THP can not override 'huge' mount option now.
*/
if (shmem_huge == SHMEM_HUGE_FORCE)
- return READ_ONCE(huge_shmem_orders_inherit);
+ return READ_ONCE(huge_shmem_orders_inherit) & ~filter_orders;
/* Allow mTHP that will be fully within i_size. */
mask |= shmem_get_orders_within_size(inode, within_size_orders, index, 0);
@@ -1874,6 +1877,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
if (global_orders > 0)
mask |= READ_ONCE(huge_shmem_orders_inherit);
+ mask &= ~filter_orders;
return THP_ORDERS_ALL_FILE_DEFAULT & mask;
}
@@ -5457,7 +5461,7 @@ void __init shmem_init(void)
* Default to setting PMD-sized THP to inherit the global setting and
* disable all other multi-size THPs.
*/
- if (!shmem_orders_configured)
+ if (!shmem_orders_configured && pgtable_has_pmd_leaves())
huge_shmem_orders_inherit = BIT(HPAGE_PMD_ORDER);
#endif
return;
--
2.53.0
* Re: [PATCH v4 0/9] mm: thp: always enable mTHP support
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
` (8 preceding siblings ...)
2026-05-01 19:18 ` [PATCH v4 9/9] mm: thp: always enable mTHP support Luiz Capitulino
@ 2026-05-03 15:02 ` Andrew Morton
2026-05-04 19:11 ` Luiz Capitulino
9 siblings, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2026-05-03 15:02 UTC (permalink / raw)
To: Luiz Capitulino
Cc: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang,
corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang,
lorenzo.stoakes
On Fri, 1 May 2026 15:18:42 -0400 Luiz Capitulino <luizcap@redhat.com> wrote:
> Today, if an architecture implements has_transparent_hugepage() and the CPU
> lacks support for PMD-sized pages, the THP code disables all THP, including
> mTHP. In addition, the kernel lacks a well defined API to check for
> PMD-sized page support. It currently relies on has_transparent_hugepage()
> and thp_disabled_by_hw(), but they are not well defined and are tied to
> THP support.
>
> This series addresses both issues by introducing a new well defined API
> to query PMD-sized page support: pgtable_has_pmd_leaves(). Using this
> new helper, we ensure that mTHP remains enabled even when the
> architecture or CPU doesn't support PMD-sized pages.
>
> Thanks to David Hildenbrand for suggesting this improvement and for
> providing guidance (all bugs and misconceptions are mine).
>
> This applies to Linus tree 08d0d3466664 ("Merge tag 'net-7.1-rc2'
> of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net")
>
> NOTE: I used Claude Code Opus 4.6 to *review* the series before
> posting. It did find one issue where a pgtable_has_pmd_leaves()
> check was missing when assigning huge_shmem_orders_inherit in
> shmem_init().
Thanks.
Sashiko found a few other things to ask about:
https://sashiko.dev/#/patchset/cover.1777663129.git.luizcap@redhat.com
* Re: [PATCH v4 0/9] mm: thp: always enable mTHP support
2026-05-03 15:02 ` [PATCH v4 0/9] " Andrew Morton
@ 2026-05-04 19:11 ` Luiz Capitulino
0 siblings, 0 replies; 12+ messages in thread
From: Luiz Capitulino @ 2026-05-04 19:11 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-kernel, linux-mm, david, baolin.wang, ziy, lance.yang,
corbet, tsbogend, maddy, mpe, agordeev, gerald.schaefer, hca, gor,
x86, dave.hansen, djbw, vishal.l.verma, dave.jiang,
lorenzo.stoakes
On 2026-05-03 11:02, Andrew Morton wrote:
> On Fri, 1 May 2026 15:18:42 -0400 Luiz Capitulino <luizcap@redhat.com> wrote:
>
>> Today, if an architecture implements has_transparent_hugepage() and the CPU
>> lacks support for PMD-sized pages, the THP code disables all THP, including
>> mTHP. In addition, the kernel lacks a well defined API to check for
>> PMD-sized page support. It currently relies on has_transparent_hugepage()
>> and thp_disabled_by_hw(), but they are not well defined and are tied to
>> THP support.
>>
>> This series addresses both issues by introducing a new well defined API
>> to query PMD-sized page support: pgtable_has_pmd_leaves(). Using this
>> new helper, we ensure that mTHP remains enabled even when the
>> architecture or CPU doesn't support PMD-sized pages.
>>
>> Thanks to David Hildenbrand for suggesting this improvement and for
>> providing guidance (all bugs and misconceptions are mine).
>>
>> This applies to Linus tree 08d0d3466664 ("Merge tag 'net-7.1-rc2'
>> of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net")
>>
>> NOTE: I used Claude Code Opus 4.6 to *review* the series before
>> posting. It did find one issue where a pgtable_has_pmd_leaves()
>> check was missing when assigning huge_shmem_orders_inherit in
>> shmem_init().
>
> Thanks.
>
> Sashiko found a few other things to ask about:
> https://sashiko.dev/#/patchset/cover.1777663129.git.luizcap@redhat.com
Thanks, I'll go over those soon.
end of thread, other threads:[~2026-05-04 19:11 UTC | newest]
Thread overview: 12+ messages
-- links below jump to the message on this page --
2026-05-01 19:18 [PATCH v4 0/9] mm: thp: always enable mTHP support Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 1/9] docs: tmpfs: remove implementation detail reference Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 2/9] mm: introduce pgtable_has_pmd_leaves() Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 3/9] drivers: dax: use pgtable_has_pmd_leaves() Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 4/9] drivers: nvdimm: " Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 5/9] mm: debug_vm_pgtable: " Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 6/9] mm: shmem: drop has_transparent_hugepage() usage Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 7/9] treewide: introduce arch_has_pmd_leaves() Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 8/9] mm: replace thp_disabled_by_hw() with pgtable_has_pmd_leaves() Luiz Capitulino
2026-05-01 19:18 ` [PATCH v4 9/9] mm: thp: always enable mTHP support Luiz Capitulino
2026-05-03 15:02 ` [PATCH v4 0/9] " Andrew Morton
2026-05-04 19:11 ` Luiz Capitulino