* [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
@ 2026-04-29 5:08 Hugh Dickins
2026-04-29 5:54 ` Lance Yang
0 siblings, 1 reply; 12+ messages in thread
From: Hugh Dickins @ 2026-04-29 5:08 UTC (permalink / raw)
To: Andrew Morton
Cc: Baolin Wang, Barry Song, David Hildenbrand, Dev Jain, Lance Yang,
Liam Howlett, Lorenzo Stoakes, Michal Hocko, Mike Rapoport,
Nico Pache, Qi Zheng, Ryan Roberts, Suren Baghdasaryan, Zi Yan,
linux-mm
On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
"WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
when reclaim gets to call shrink_huge_zero_folio_scan().
It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
and indeed, whereas pte_special() and pte_mkspecial() are subject to a
dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
on any 32-bit architecture.
Add CONFIG_ARCH_HAS_PMD_SPECIAL? Perhaps; but I think it's better just
to observe the huge_zero pmd in the fallback version of pmd_special().
Fixes: d80a9cb1a64a ("mm/huge_memory: add and use normal_or_softleaf_folio_pmd()")
Signed-off-by: Hugh Dickins <hughd@google.com>
---
include/linux/mm.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0b776907152e..3b02ac43bcb7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3422,7 +3422,7 @@ static inline pte_t pte_mkspecial(pte_t pte)
#ifndef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
static inline bool pmd_special(pmd_t pmd)
{
- return false;
+ return is_huge_zero_pmd(pmd);
}
static inline pmd_t pmd_mkspecial(pmd_t pmd)
--
2.51.0
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-29 5:08 [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero Hugh Dickins
@ 2026-04-29 5:54 ` Lance Yang
2026-04-29 6:12 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 12+ messages in thread
From: Lance Yang @ 2026-04-29 5:54 UTC (permalink / raw)
To: hughd
Cc: akpm, baolin.wang, baohua, david, dev.jain, lance.yang,
liam.howlett, ljs, mhocko, rppt, npache, zhengqi.arch,
ryan.roberts, surenb, ziy, linux-mm
On Tue, Apr 28, 2026 at 10:08:37PM -0700, Hugh Dickins wrote:
>On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
>"WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
>VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
>by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
>when reclaim gets to call shrink_huge_zero_folio_scan().
Good catch!
>It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
>and indeed, whereas pte_special() and pte_mkspecial() are subject to a
>dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
>are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
>on any 32-bit architecture.
>
>Add CONFIG_ARCH_HAS_PMD_SPECIAL? Perhaps; but I think it's better just
>to observe the huge_zero pmd in the fallback version of pmd_special().
>
>Fixes: d80a9cb1a64a ("mm/huge_memory: add and use normal_or_softleaf_folio_pmd()")
>Signed-off-by: Hugh Dickins <hughd@google.com>
>---
> include/linux/mm.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/include/linux/mm.h b/include/linux/mm.h
>index 0b776907152e..3b02ac43bcb7 100644
>--- a/include/linux/mm.h
>+++ b/include/linux/mm.h
>@@ -3422,7 +3422,7 @@ static inline pte_t pte_mkspecial(pte_t pte)
> #ifndef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
> static inline bool pmd_special(pmd_t pmd)
> {
>- return false;
>+ return is_huge_zero_pmd(pmd);
> }
Emm ... feels a bit odd to me ...
On these configs pmd_mkspecial() is still a no-op, so pmd_special()
would no longer really mean that the PMD was made special :)
Could we handle the huge zero PMD in vm_normal_page_pmd() instead?
---8<---
diff --git a/mm/memory.c b/mm/memory.c
index 199214f8de08..3d9ed41a46b8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -793,6 +793,9 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t pmd)
{
+ if (is_huge_zero_pmd(pmd))
+ return NULL;
+
return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
pmd_val(pmd), PGTABLE_LEVEL_PMD);
}
--
zap_huge_pmd()
-> normal_or_softleaf_folio_pmd()
-> vm_normal_folio_pmd()
-> vm_normal_page_pmd()
-> is_huge_zero_pmd(pmd)
-> return NULL
-> return NULL
That's where we ask whether the PMD maps a normal page, and for the huge
zero PMD the answer is simply "no".
-> has_deposited_pgtable()
-> is_huge_zero_pmd(pmd)
-> true
-> zap_deposited_table()
-> skip zap_huge_pmd_folio()
Then zap_huge_pmd() would still withdraw the deposited pgtable (except
for DAX huge zero PMDs), but skip zap_huge_pmd_folio() - no normal folio
rmap/RSS accounting.
Just a thought :)
Thanks,
Lance
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-29 5:54 ` Lance Yang
@ 2026-04-29 6:12 ` David Hildenbrand (Arm)
2026-04-29 6:57 ` Lance Yang
0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-29 6:12 UTC (permalink / raw)
To: Lance Yang, hughd
Cc: akpm, baolin.wang, baohua, dev.jain, liam.howlett, ljs, mhocko,
rppt, npache, zhengqi.arch, ryan.roberts, surenb, ziy, linux-mm
On 4/29/26 07:54, Lance Yang wrote:
>
> On Tue, Apr 28, 2026 at 10:08:37PM -0700, Hugh Dickins wrote:
>> On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
>> "WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
>> by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
>> when reclaim gets to call shrink_huge_zero_folio_scan().
>
> Good catch!
>
>> It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
>> and indeed, whereas pte_special() and pte_mkspecial() are subject to a
>> dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
>> are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
>> on any 32-bit architecture.
>>
>> Add CONFIG_ARCH_HAS_PMD_SPECIAL? Perhaps; but I think it's better just
>> to observe the huge_zero pmd in the fallback version of pmd_special().
>>
>> Fixes: d80a9cb1a64a ("mm/huge_memory: add and use normal_or_softleaf_folio_pmd()")
Likely it should be
Fixes: d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
Because vm_normal_page_pmd() would return the wrong thing.
But I am surprised that we didn't run into the
VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
earlier.
>> Signed-off-by: Hugh Dickins <hughd@google.com>
>> ---
>> include/linux/mm.h | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>> index 0b776907152e..3b02ac43bcb7 100644
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -3422,7 +3422,7 @@ static inline pte_t pte_mkspecial(pte_t pte)
>> #ifndef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
>> static inline bool pmd_special(pmd_t pmd)
>> {
>> - return false;
>> + return is_huge_zero_pmd(pmd);
>> }
>
> Emm ... feels a bit odd to me ...
Agreed. But it could be a temporary fix until we fixed up relevant architectures.
>
> On these configs pmd_mkspecial() is still a no-op, so pmd_special()
> would no longer really mean that the PMD was made special :)
>
> Could we handle the huge zero PMD in vm_normal_page_pmd() instead?
That adds unnecessary checks for architectures that properly implement pmd_special.
pmd_special() should be fixed longterm on architectures that support THP
and CONFIG_ARCH_HAS_PTE_SPECIAL. It should not be glued to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP.
arch/arc/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARC_MMU_V4
arch/arm/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARM_LPAE
arch/arm64/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
arch/loongarch/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
arch/mips/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES
arch/powerpc/platforms/Kconfig.cputype: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
arch/riscv/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if 64BIT && MMU
arch/s390/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
arch/sparc/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
arch/x86/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
arch/arc/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/arm/Kconfig: select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
arch/arm64/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/loongarch/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/mips/Kconfig: select ARCH_HAS_PTE_SPECIAL if !(32BIT && CPU_HAS_RIXI)
arch/parisc/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/powerpc/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/riscv/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/s390/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/sh/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/sparc/Kconfig: select ARCH_HAS_PTE_SPECIAL
arch/x86/Kconfig: select ARCH_HAS_PTE_SPECIAL
That's a bit of work given that only arm64, powerpc (64), riscv and x86 (64)
properly implement pmd_special().
So I think Hugh's patch here makes sense for now.
--
Cheers,
David
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-29 6:12 ` David Hildenbrand (Arm)
@ 2026-04-29 6:57 ` Lance Yang
2026-04-29 7:14 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 12+ messages in thread
From: Lance Yang @ 2026-04-29 6:57 UTC (permalink / raw)
To: david, hughd
Cc: akpm, baolin.wang, baohua, dev.jain, liam.howlett, ljs, mhocko,
rppt, npache, zhengqi.arch, ryan.roberts, surenb, ziy, linux-mm,
Lance Yang
On Wed, Apr 29, 2026 at 08:12:55AM +0200, David Hildenbrand (Arm) wrote:
>On 4/29/26 07:54, Lance Yang wrote:
>>
>> On Tue, Apr 28, 2026 at 10:08:37PM -0700, Hugh Dickins wrote:
>>> On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
>>> "WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
>>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
>>> by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
>>> when reclaim gets to call shrink_huge_zero_folio_scan().
>>
>> Good catch!
>>
>>> It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
>>> and indeed, whereas pte_special() and pte_mkspecial() are subject to a
>>> dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
>>> are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
>>> on any 32-bit architecture.
>>>
>>> Add CONFIG_ARCH_HAS_PMD_SPECIAL? Perhaps; but I think it's better just
>>> to observe the huge_zero pmd in the fallback version of pmd_special().
>>>
>>> Fixes: d80a9cb1a64a ("mm/huge_memory: add and use normal_or_softleaf_folio_pmd()")
>
>Likely it should be
>
> Fixes: d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
>
>Because vm_normal_page_pmd() would return the wrong thing.
Right.
>But I am surprised that we didn't run into the
>
> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
>
>earlier.
The history seems to be:
2025-09-13 d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
2025-09-13 af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
After d82d09e48219, vm_normal_page_pmd() still had the explicit huge
zero check before returning the page:
--8<---
struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t pmd)
{
unsigned long pfn = pmd_pfn(pmd);
if (unlikely(pmd_special(pmd)))
return NULL;
if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
if (vma->vm_flags & VM_MIXEDMAP) {
if (!pfn_valid(pfn))
return NULL;
goto out;
} else {
unsigned long off;
off = (addr - vma->vm_start) >> PAGE_SHIFT;
if (pfn == vma->vm_pgoff + off)
return NULL;
if (!is_cow_mapping(vma->vm_flags))
return NULL;
}
}
if (is_huge_zero_pfn(pfn))
return NULL;
if (unlikely(pfn > highest_memmap_pfn))
return NULL;
/*
* NOTE! We still have PageReserved() pages in the page tables.
* eg. VDSO mappings can cause them to exist.
*/
out:
return pfn_to_page(pfn);
}
---
So even if pmd_mkspecial() was a no-op and pmd_special() stayed false,
we would still return NULL there.
Then af38538801c6 moved the PMD path into __vm_normal_page():
---8<---
struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t pmd)
{
return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
pmd_val(pmd), PGTABLE_LEVEL_PMD);
}
---
For CONFIG_ARCH_HAS_PTE_SPECIAL=y, __vm_normal_page() only returns NULL
for the huge zero PFN if special == true. On x86 32-bit, pmd_special()
stays false, so this can now fall through to VM_WARN_ON_ONCE():
---8<---
if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
if (unlikely(special)) {
if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
return NULL;
...
}
...
} else {
...
if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
return NULL;
}
...
VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
...
---
So my guess is that the warning above became possible after
af38538801c6, IIUC.
>
>>> Signed-off-by: Hugh Dickins <hughd@google.com>
>>> ---
>>> include/linux/mm.h | 2 +-
>>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index 0b776907152e..3b02ac43bcb7 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -3422,7 +3422,7 @@ static inline pte_t pte_mkspecial(pte_t pte)
>>> #ifndef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
>>> static inline bool pmd_special(pmd_t pmd)
>>> {
>>> - return false;
>>> + return is_huge_zero_pmd(pmd);
>>> }
>>
>> Emm ... feels a bit odd to me ...
>
>Agreed. But it could be a temporary fix until we fixed up relevant architectures.
Ah, got it :D
>>
>> On these configs pmd_mkspecial() is still a no-op, so pmd_special()
>> would no longer really mean that the PMD was made special :)
>>
>> Could we handle the huge zero PMD in vm_normal_page_pmd() instead?
>
>That adds unnecessary checks for architectures that properly implement pmd_special.
>
>pmd_special() should be fixed longterm on architectures that support THP
>and CONFIG_ARCH_HAS_PTE_SPECIAL. It should not be glued to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP.
>
>
>arch/arc/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARC_MMU_V4
>arch/arm/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if ARM_LPAE
>arch/arm64/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>arch/loongarch/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>arch/mips/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES
>arch/powerpc/platforms/Kconfig.cputype: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>arch/riscv/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE if 64BIT && MMU
>arch/s390/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>arch/sparc/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>arch/x86/Kconfig: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
>
>arch/arc/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/arm/Kconfig: select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
>arch/arm64/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/loongarch/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/mips/Kconfig: select ARCH_HAS_PTE_SPECIAL if !(32BIT && CPU_HAS_RIXI)
>arch/parisc/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/powerpc/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/riscv/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/s390/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/sh/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/sparc/Kconfig: select ARCH_HAS_PTE_SPECIAL
>arch/x86/Kconfig: select ARCH_HAS_PTE_SPECIAL
>
>That's a bit of work given that only arm64, powerpc (64), riscv and x86 (64)
>properly implement pmd_special().
>
>
>So I think Hugh's patch here makes sense for now.
Lesson learned :D thanks!
Lance
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-29 6:57 ` Lance Yang
@ 2026-04-29 7:14 ` David Hildenbrand (Arm)
2026-04-29 7:33 ` Lance Yang
0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-29 7:14 UTC (permalink / raw)
To: Lance Yang, hughd
Cc: akpm, baolin.wang, baohua, dev.jain, liam.howlett, ljs, mhocko,
rppt, npache, zhengqi.arch, ryan.roberts, surenb, ziy, linux-mm
On 4/29/26 08:57, Lance Yang wrote:
>
> On Wed, Apr 29, 2026 at 08:12:55AM +0200, David Hildenbrand (Arm) wrote:
>> On 4/29/26 07:54, Lance Yang wrote:
>>>
>>>
>>> Good catch!
>>>
>>
>> Likely it should be
>>
>> Fixes: d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
>>
>> Because vm_normal_page_pmd() would return the wrong thing.
>
> Right.
>
>> But I am surprised that we didn't run into the
>>
>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
>>
>> earlier.
>
> The history seems to be:
>
> 2025-09-13 d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
> 2025-09-13 af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
>
> After d82d09e48219, vm_normal_page_pmd() still had the explicit huge
> zero check before returning the page:
>
> --8<---
> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> pmd_t pmd)
> {
> unsigned long pfn = pmd_pfn(pmd);
>
> if (unlikely(pmd_special(pmd)))
> return NULL;
>
> if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
> if (vma->vm_flags & VM_MIXEDMAP) {
> if (!pfn_valid(pfn))
> return NULL;
> goto out;
> } else {
> unsigned long off;
> off = (addr - vma->vm_start) >> PAGE_SHIFT;
> if (pfn == vma->vm_pgoff + off)
> return NULL;
> if (!is_cow_mapping(vma->vm_flags))
> return NULL;
> }
> }
>
> if (is_huge_zero_pfn(pfn))
> return NULL;
> if (unlikely(pfn > highest_memmap_pfn))
> return NULL;
>
> /*
> * NOTE! We still have PageReserved() pages in the page tables.
> * eg. VDSO mappings can cause them to exist.
> */
> out:
> return pfn_to_page(pfn);
> }
> ---
>
> So even if pmd_mkspecial() was a no-op and pmd_special() stayed false,
> we would still return NULL there.
>
> Then af38538801c6 moved the PMD path into __vm_normal_page():
>
> ---8<---
> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> pmd_t pmd)
> {
> return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
> pmd_val(pmd), PGTABLE_LEVEL_PMD);
> }
> ---
>
> For CONFIG_ARCH_HAS_PTE_SPECIAL=y, __vm_normal_page() only returns NULL
> for the huge zero PFN if special == true. On x86 32-bit, pmd_special()
> stays false, so this can now fall through to VM_WARN_ON_ONCE():
>
> ---8<---
> if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
> if (unlikely(special)) {
> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
> return NULL;
> ...
> }
> ...
> } else {
> ...
> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
> return NULL;
> }
>
> ...
> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
> ...
> ---
>
> So my guess is that the warning above became possible after
> af38538801c6, IIUC.
Yes, I think you are right about af38538801c6.
What about the following then as a temporary solution:
diff --git a/mm/memory.c b/mm/memory.c
index 199214f8de08..bf447c8b2f57 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -684,7 +684,9 @@ static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
unsigned long addr, unsigned long pfn, bool special,
unsigned long long entry, enum pgtable_level level)
{
- if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
+ if ((IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && level == PGTABLE_LEVEL_PTE) ||
+ (IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP) && level == PGTABLE_LEVEL_PMD) ||
+ (IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP) && level == PGTABLE_LEVEL_PUD)) {
if (unlikely(special)) {
#ifdef CONFIG_FIND_NORMAL_PAGE
if (vma->vm_ops && vma->vm_ops->find_normal_page)
We could wrap the check in a fancy helper.
--
Cheers,
David
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-29 7:14 ` David Hildenbrand (Arm)
@ 2026-04-29 7:33 ` Lance Yang
2026-04-30 5:53 ` David Hildenbrand (Arm)
0 siblings, 1 reply; 12+ messages in thread
From: Lance Yang @ 2026-04-29 7:33 UTC (permalink / raw)
To: David Hildenbrand (Arm), hughd
Cc: akpm, baolin.wang, baohua, dev.jain, liam.howlett, ljs, mhocko,
rppt, npache, zhengqi.arch, ryan.roberts, surenb, ziy, linux-mm
On 2026/4/29 15:14, David Hildenbrand (Arm) wrote:
> On 4/29/26 08:57, Lance Yang wrote:
>>
>> On Wed, Apr 29, 2026 at 08:12:55AM +0200, David Hildenbrand (Arm) wrote:
>>> On 4/29/26 07:54, Lance Yang wrote:
>>>>
>>>>
>>>> Good catch!
>>>>
>>>
>>> Likely it should be
>>>
>>> Fixes: d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
>>>
>>> Because vm_normal_page_pmd() would return the wrong thing.
>>
>> Right.
>>
>>> But I am surprised that we didn't run into the
>>>
>>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
>>>
>>> earlier.
>>
>> The history seems to be:
>>
>> 2025-09-13 d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge zero folio special")
>> 2025-09-13 af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
>>
>> After d82d09e48219, vm_normal_page_pmd() still had the explicit huge
>> zero check before returning the page:
>>
>> --8<---
>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>> pmd_t pmd)
>> {
>> unsigned long pfn = pmd_pfn(pmd);
>>
>> if (unlikely(pmd_special(pmd)))
>> return NULL;
>>
>> if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>> if (vma->vm_flags & VM_MIXEDMAP) {
>> if (!pfn_valid(pfn))
>> return NULL;
>> goto out;
>> } else {
>> unsigned long off;
>> off = (addr - vma->vm_start) >> PAGE_SHIFT;
>> if (pfn == vma->vm_pgoff + off)
>> return NULL;
>> if (!is_cow_mapping(vma->vm_flags))
>> return NULL;
>> }
>> }
>>
>> if (is_huge_zero_pfn(pfn))
>> return NULL;
>> if (unlikely(pfn > highest_memmap_pfn))
>> return NULL;
>>
>> /*
>> * NOTE! We still have PageReserved() pages in the page tables.
>> * eg. VDSO mappings can cause them to exist.
>> */
>> out:
>> return pfn_to_page(pfn);
>> }
>> ---
>>
>> So even if pmd_mkspecial() was a no-op and pmd_special() stayed false,
>> we would still return NULL there.
>>
>> Then af38538801c6 moved the PMD path into __vm_normal_page():
>>
>> ---8<---
>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>> pmd_t pmd)
>> {
>> return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
>> pmd_val(pmd), PGTABLE_LEVEL_PMD);
>> }
>> ---
>>
>> For CONFIG_ARCH_HAS_PTE_SPECIAL=y, __vm_normal_page() only returns NULL
>> for the huge zero PFN if special == true. On x86 32-bit, pmd_special()
>> stays false, so this can now fall through to VM_WARN_ON_ONCE():
>>
>> ---8<---
>> if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
>> if (unlikely(special)) {
>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>> return NULL;
>> ...
>> }
>> ...
>> } else {
>> ...
>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>> return NULL;
>> }
>>
>> ...
>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
>> ...
>> ---
>>
>> So my guess is that the warning above became possible after
>> af38538801c6, IIUC.
>
> Yes, I think you are right about af38538801c6.
>
> What about the following then as a temporary solution:
Nice, that works for me :)
> diff --git a/mm/memory.c b/mm/memory.c
> index 199214f8de08..bf447c8b2f57 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -684,7 +684,9 @@ static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
> unsigned long addr, unsigned long pfn, bool special,
> unsigned long long entry, enum pgtable_level level)
> {
> - if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
> + if ((IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL) && level == PGTABLE_LEVEL_PTE) ||
> + (IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP) && level == PGTABLE_LEVEL_PMD) ||
> + (IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP) && level == PGTABLE_LEVEL_PUD)) {
> if (unlikely(special)) {
> #ifdef CONFIG_FIND_NORMAL_PAGE
> if (vma->vm_ops && vma->vm_ops->find_normal_page)
>
>
> We could wrap the check in a fancy helper.
Cheers,
Lance
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-29 7:33 ` Lance Yang
@ 2026-04-30 5:53 ` David Hildenbrand (Arm)
2026-04-30 6:46 ` Lance Yang
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-30 5:53 UTC (permalink / raw)
To: Lance Yang, hughd
Cc: akpm, baolin.wang, baohua, dev.jain, liam.howlett, ljs, mhocko,
rppt, npache, zhengqi.arch, ryan.roberts, surenb, ziy, linux-mm
On 4/29/26 09:33, Lance Yang wrote:
>
>
> On 2026/4/29 15:14, David Hildenbrand (Arm) wrote:
>> On 4/29/26 08:57, Lance Yang wrote:
>>>
>>>
>>> Right.
>>>
>>>
>>> The history seems to be:
>>>
>>> 2025-09-13 d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge
>>> zero folio special")
>>> 2025-09-13 af38538801c6 ("mm/memory: factor out common code from
>>> vm_normal_page_*()")
>>>
>>> After d82d09e48219, vm_normal_page_pmd() still had the explicit huge
>>> zero check before returning the page:
>>>
>>> --8<---
>>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>> pmd_t pmd)
>>> {
>>> unsigned long pfn = pmd_pfn(pmd);
>>>
>>> if (unlikely(pmd_special(pmd)))
>>> return NULL;
>>>
>>> if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>>> if (vma->vm_flags & VM_MIXEDMAP) {
>>> if (!pfn_valid(pfn))
>>> return NULL;
>>> goto out;
>>> } else {
>>> unsigned long off;
>>> off = (addr - vma->vm_start) >> PAGE_SHIFT;
>>> if (pfn == vma->vm_pgoff + off)
>>> return NULL;
>>> if (!is_cow_mapping(vma->vm_flags))
>>> return NULL;
>>> }
>>> }
>>>
>>> if (is_huge_zero_pfn(pfn))
>>> return NULL;
>>> if (unlikely(pfn > highest_memmap_pfn))
>>> return NULL;
>>>
>>> /*
>>> * NOTE! We still have PageReserved() pages in the page tables.
>>> * eg. VDSO mappings can cause them to exist.
>>> */
>>> out:
>>> return pfn_to_page(pfn);
>>> }
>>> ---
>>>
>>> So even if pmd_mkspecial() was a no-op and pmd_special() stayed false,
>>> we would still return NULL there.
>>>
>>> Then af38538801c6 moved the PMD path into __vm_normal_page():
>>>
>>> ---8<---
>>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>> pmd_t pmd)
>>> {
>>> return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
>>> pmd_val(pmd), PGTABLE_LEVEL_PMD);
>>> }
>>> ---
>>>
>>> For CONFIG_ARCH_HAS_PTE_SPECIAL=y, __vm_normal_page() only returns NULL
>>> for the huge zero PFN if special == true. On x86 32-bit, pmd_special()
>>> stays false, so this can now fall through to VM_WARN_ON_ONCE():
>>>
>>> ---8<---
>>> if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
>>> if (unlikely(special)) {
>>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>>> return NULL;
>>> ...
>>> }
>>> ...
>>> } else {
>>> ...
>>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>>> return NULL;
>>> }
>>>
>>> ...
>>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
>>> ...
>>> ---
>>>
>>> So my guess is that the warning above became possible after
>>> af38538801c6, IIUC.
>>
>> Yes, I think you are right about af38538801c6.
>>
>> What about the following then as a temporary solution:
>
> Nice, that works for me :)
Okay, I'd say we do the following:
From fd9ead548f102f7c257980ccc7b96cce7e42a570 Mon Sep 17 00:00:00 2001
From: Hugh Dickins <hughd@google.com>
Date: Thu, 30 Apr 2026 07:35:31 +0200
Subject: [PATCH] mm: fix pmd_special() fallback to observe huge_zero
On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
"WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
when reclaim gets to call shrink_huge_zero_folio_scan().
It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
and indeed, whereas pte_special() and pte_mkspecial() are subject to a
dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
on any 32-bit architecture.
While the problem was exposed through d80a9cb1a64a ("mm/huge_memory: add
and use normal_or_softleaf_folio_pmd()"), it was an oversight in
af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
and would result in other problems:
* huge zero folio accounted in smaps, pagemap (PAGE_IS_FILE) and
numamaps as file-backed THP
* folio_walk_start() returning the folio even without FW_ZEROPAGE set.
Callers seem to tolerate that, though.
... and triggering the VM_WARN_ON_ONCE(), although never reported so far.
To fix it, teach vm_normal_page_pmd()/vm_normal_page_pud() whether
pmd_special/pud_special is actually implemented.
Fixes: af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Co-developed-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/memory.c | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 7322a40e73b9..a60bc07b48b2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -612,6 +612,21 @@ static void print_bad_page_map(struct vm_area_struct *vma,
dump_stack();
add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
}
+
+static inline bool pgtable_level_has_pxx_special(enum pgtable_level level)
+{
+ switch (level) {
+ case PGTABLE_LEVEL_PTE:
+ return IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL);
+ case PGTABLE_LEVEL_PMD:
+ return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP);
+ case PGTABLE_LEVEL_PUD:
+ return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP);
+ default:
+ return false;
+ }
+}
+
#define print_bad_pte(vma, addr, pte, page) \
print_bad_page_map(vma, addr, pte_val(pte), page, PGTABLE_LEVEL_PTE)
@@ -684,7 +699,7 @@ static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
unsigned long addr, unsigned long pfn, bool special,
unsigned long long entry, enum pgtable_level level)
{
- if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
+ if (pgtable_level_has_pxx_special(level)) {
if (unlikely(special)) {
#ifdef CONFIG_FIND_NORMAL_PAGE
if (vma->vm_ops && vma->vm_ops->find_normal_page)
--
2.43.0
--
Cheers,
David
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-30 5:53 ` David Hildenbrand (Arm)
@ 2026-04-30 6:46 ` Lance Yang
2026-04-30 8:30 ` Lance Yang
2026-04-30 8:48 ` Hugh Dickins
2 siblings, 0 replies; 12+ messages in thread
From: Lance Yang @ 2026-04-30 6:46 UTC (permalink / raw)
To: David Hildenbrand (Arm), hughd
Cc: akpm, baolin.wang, baohua, dev.jain, liam.howlett, ljs, mhocko,
rppt, npache, zhengqi.arch, ryan.roberts, surenb, ziy, linux-mm
On 2026/4/30 13:53, David Hildenbrand (Arm) wrote:
> On 4/29/26 09:33, Lance Yang wrote:
>>
>>
>> On 2026/4/29 15:14, David Hildenbrand (Arm) wrote:
>>> On 4/29/26 08:57, Lance Yang wrote:
>>>>
>>>>
>>>> Right.
>>>>
>>>>
>>>> The history seems to be:
>>>>
>>>> 2025-09-13 d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge
>>>> zero folio special")
>>>> 2025-09-13 af38538801c6 ("mm/memory: factor out common code from
>>>> vm_normal_page_*()")
>>>>
>>>> After d82d09e48219, vm_normal_page_pmd() still had the explicit huge
>>>> zero check before returning the page:
>>>>
>>>> --8<---
>>>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>>> pmd_t pmd)
>>>> {
>>>> unsigned long pfn = pmd_pfn(pmd);
>>>>
>>>> if (unlikely(pmd_special(pmd)))
>>>> return NULL;
>>>>
>>>> if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>>>> if (vma->vm_flags & VM_MIXEDMAP) {
>>>> if (!pfn_valid(pfn))
>>>> return NULL;
>>>> goto out;
>>>> } else {
>>>> unsigned long off;
>>>> off = (addr - vma->vm_start) >> PAGE_SHIFT;
>>>> if (pfn == vma->vm_pgoff + off)
>>>> return NULL;
>>>> if (!is_cow_mapping(vma->vm_flags))
>>>> return NULL;
>>>> }
>>>> }
>>>>
>>>> if (is_huge_zero_pfn(pfn))
>>>> return NULL;
>>>> if (unlikely(pfn > highest_memmap_pfn))
>>>> return NULL;
>>>>
>>>> /*
>>>> * NOTE! We still have PageReserved() pages in the page tables.
>>>> * eg. VDSO mappings can cause them to exist.
>>>> */
>>>> out:
>>>> return pfn_to_page(pfn);
>>>> }
>>>> ---
>>>>
>>>> So even if pmd_mkspecial() was a no-op and pmd_special() stayed false,
>>>> we would still return NULL there.
>>>>
>>>> Then af38538801c6 moved the PMD path into __vm_normal_page():
>>>>
>>>> ---8<---
>>>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>>> pmd_t pmd)
>>>> {
>>>> return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
>>>> pmd_val(pmd), PGTABLE_LEVEL_PMD);
>>>> }
>>>> ---
>>>>
>>>> For CONFIG_ARCH_HAS_PTE_SPECIAL=y, __vm_normal_page() only returns NULL
>>>> for the huge zero PFN if special == true. On x86 32-bit, pmd_special()
>>>> stays false, so this can now fall through to VM_WARN_ON_ONCE():
>>>>
>>>> ---8<---
>>>> if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
>>>> if (unlikely(special)) {
>>>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>>>> return NULL;
>>>> ...
>>>> }
>>>> ...
>>>> } else {
>>>> ...
>>>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>>>> return NULL;
>>>> }
>>>>
>>>> ...
>>>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
>>>> ...
>>>> ---
>>>>
>>>> So my guess is that the warning above became possible after
>>>> af38538801c6, IIUC.
>>>
>>> Yes, I think you are right about af38538801c6.
>>>
>>> What about the following then as a temporary solution:
>>
>> Nice, that works for me :)
>
> Okay, I'd say we do the following:
>
> From fd9ead548f102f7c257980ccc7b96cce7e42a570 Mon Sep 17 00:00:00 2001
> From: Hugh Dickins <hughd@google.com>
> Date: Thu, 30 Apr 2026 07:35:31 +0200
> Subject: [PATCH] mm: fix pmd_special() fallback to observe huge_zero
>
> On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
> "WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
> by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
> when reclaim gets to call shrink_huge_zero_folio_scan().
>
> It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
> and indeed, whereas pte_special() and pte_mkspecial() are subject to a
> dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
> are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
> on any 32-bit architecture.
>
> While the problem was exposed through d80a9cb1a64a ("mm/huge_memory: add
> and use normal_or_softleaf_folio_pmd()"), it was an oversight in
> af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
> and would result in other problems:
> * huge zero folio accounted in smaps, pagemap (PAGE_IS_FILE) and
> numamaps as file-backed THP
> * folio_walk_start() returning the folio even without FW_ZEROPAGE set.
> Callers seem to tolerate that, though.
>
> ... and triggering the VM_WARN_ON_ONCE(), although never reported so far.
>
> To fix it, teach vm_normal_page_pmd()/vm_normal_page_pud() whether
> pmd_special/pud_special is actually implemented.
>
> Fixes: af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Co-developed-by: David Hildenbrand (Arm) <david@kernel.org>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> ---
Thanks, LGTM! Feel free to add:
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Cheers, Lance
> mm/memory.c | 17 ++++++++++++++++-
> 1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 7322a40e73b9..a60bc07b48b2 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -612,6 +612,21 @@ static void print_bad_page_map(struct vm_area_struct *vma,
> dump_stack();
> add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
> }
> +
> +static inline bool pgtable_level_has_pxx_special(enum pgtable_level level)
> +{
> + switch (level) {
> + case PGTABLE_LEVEL_PTE:
> + return IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL);
> + case PGTABLE_LEVEL_PMD:
> + return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP);
> + case PGTABLE_LEVEL_PUD:
> + return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP);
> + default:
> + return false;
> + }
> +}
> +
> #define print_bad_pte(vma, addr, pte, page) \
> print_bad_page_map(vma, addr, pte_val(pte), page, PGTABLE_LEVEL_PTE)
>
> @@ -684,7 +699,7 @@ static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
> unsigned long addr, unsigned long pfn, bool special,
> unsigned long long entry, enum pgtable_level level)
> {
> - if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
> + if (pgtable_level_has_pxx_special(level)) {
> if (unlikely(special)) {
> #ifdef CONFIG_FIND_NORMAL_PAGE
> if (vma->vm_ops && vma->vm_ops->find_normal_page)
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-30 5:53 ` David Hildenbrand (Arm)
2026-04-30 6:46 ` Lance Yang
@ 2026-04-30 8:30 ` Lance Yang
2026-04-30 8:48 ` Hugh Dickins
2 siblings, 0 replies; 12+ messages in thread
From: Lance Yang @ 2026-04-30 8:30 UTC (permalink / raw)
To: david, maobibo
Cc: lance.yang, hughd, akpm, baolin.wang, baohua, dev.jain,
liam.howlett, ljs, mhocko, rppt, npache, zhengqi.arch,
ryan.roberts, surenb, ziy, linux-mm
Cc Bibo Mao
On Thu, Apr 30, 2026 at 07:53:05AM +0200, David Hildenbrand (Arm) wrote:
>On 4/29/26 09:33, Lance Yang wrote:
>>
>>
>> On 2026/4/29 15:14, David Hildenbrand (Arm) wrote:
>>> On 4/29/26 08:57, Lance Yang wrote:
>>>>
>>>>
>>>> Right.
>>>>
>>>>
>>>> The history seems to be:
>>>>
>>>> 2025-09-13 d82d09e48219 ("mm/huge_memory: mark PMD mappings of the huge
>>>> zero folio special")
>>>> 2025-09-13 af38538801c6 ("mm/memory: factor out common code from
>>>> vm_normal_page_*()")
>>>>
>>>> After d82d09e48219, vm_normal_page_pmd() still had the explicit huge
>>>> zero check before returning the page:
>>>>
>>>> --8<---
>>>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>>> pmd_t pmd)
>>>> {
>>>> unsigned long pfn = pmd_pfn(pmd);
>>>>
>>>> if (unlikely(pmd_special(pmd)))
>>>> return NULL;
>>>>
>>>> if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>>>> if (vma->vm_flags & VM_MIXEDMAP) {
>>>> if (!pfn_valid(pfn))
>>>> return NULL;
>>>> goto out;
>>>> } else {
>>>> unsigned long off;
>>>> off = (addr - vma->vm_start) >> PAGE_SHIFT;
>>>> if (pfn == vma->vm_pgoff + off)
>>>> return NULL;
>>>> if (!is_cow_mapping(vma->vm_flags))
>>>> return NULL;
>>>> }
>>>> }
>>>>
>>>> if (is_huge_zero_pfn(pfn))
>>>> return NULL;
>>>> if (unlikely(pfn > highest_memmap_pfn))
>>>> return NULL;
>>>>
>>>> /*
>>>> * NOTE! We still have PageReserved() pages in the page tables.
>>>> * eg. VDSO mappings can cause them to exist.
>>>> */
>>>> out:
>>>> return pfn_to_page(pfn);
>>>> }
>>>> ---
>>>>
>>>> So even if pmd_mkspecial() was a no-op and pmd_special() stayed false,
>>>> we would still return NULL there.
>>>>
>>>> Then af38538801c6 moved the PMD path into __vm_normal_page():
>>>>
>>>> ---8<---
>>>> struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>>> pmd_t pmd)
>>>> {
>>>> return __vm_normal_page(vma, addr, pmd_pfn(pmd), pmd_special(pmd),
>>>> pmd_val(pmd), PGTABLE_LEVEL_PMD);
>>>> }
>>>> ---
>>>>
>>>> For CONFIG_ARCH_HAS_PTE_SPECIAL=y, __vm_normal_page() only returns NULL
>>>> for the huge zero PFN if special == true. On x86 32-bit, pmd_special()
>>>> stays false, so this can now fall through to VM_WARN_ON_ONCE():
>>>>
>>>> ---8<---
>>>> if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
>>>> if (unlikely(special)) {
>>>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>>>> return NULL;
>>>> ...
>>>> }
>>>> ...
>>>> } else {
>>>> ...
>>>> if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
>>>> return NULL;
>>>> }
>>>>
>>>> ...
>>>> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
>>>> ...
>>>> ---
>>>>
>>>> So my guess is that the warning above became possible after
>>>> af38538801c6, IIUC.
>>>
>>> Yes, I think you are right about af38538801c6.
>>>
>>> What about the following then as a temporary solution:
>>
>> Nice, that works for me :)
>
>Okay, I'd say we do the following:
>
>From fd9ead548f102f7c257980ccc7b96cce7e42a570 Mon Sep 17 00:00:00 2001
>From: Hugh Dickins <hughd@google.com>
>Date: Thu, 30 Apr 2026 07:35:31 +0200
>Subject: [PATCH] mm: fix pmd_special() fallback to observe huge_zero
>
>On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
>"WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
>VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
>by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
>when reclaim gets to call shrink_huge_zero_folio_scan().
>
>It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
>and indeed, whereas pte_special() and pte_mkspecial() are subject to a
>dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
>are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
>on any 32-bit architecture.
>
>While the problem was exposed through d80a9cb1a64a ("mm/huge_memory: add
>and use normal_or_softleaf_folio_pmd()"), it was an oversight in
>af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
>and would result in other problems:
>* huge zero folio accounted in smaps, pagemap (PAGE_IS_FILE) and
> numamaps as file-backed THP
>* folio_walk_start() returning the folio even without FW_ZEROPAGE set.
> Callers seem to tolerate that, though.
>
>... and triggering the VM_WARN_ON_ONCE(), although never reported so far.
>
>To fix it, teach vm_normal_page_pmd()/vm_normal_page_pud() whether
>pmd_special/pud_special is actually implemented.
>
>Fixes: af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
>Signed-off-by: Hugh Dickins <hughd@google.com>
>Co-developed-by: David Hildenbrand (Arm) <david@kernel.org>
>Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
>---
Bibo Mao also hit the same bad rss-counter state issue while running
QEMU "make check" on LoongArch, and confirmed[1] that this patch fixes
it.
[1] https://lore.kernel.org/linux-mm/4807181d-c111-5568-b040-140706e56b4f@loongson.cn/
Cheers, Lance
> mm/memory.c | 17 ++++++++++++++++-
> 1 file changed, 16 insertions(+), 1 deletion(-)
>
>diff --git a/mm/memory.c b/mm/memory.c
>index 7322a40e73b9..a60bc07b48b2 100644
>--- a/mm/memory.c
>+++ b/mm/memory.c
>@@ -612,6 +612,21 @@ static void print_bad_page_map(struct vm_area_struct *vma,
> dump_stack();
> add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
> }
>+
>+static inline bool pgtable_level_has_pxx_special(enum pgtable_level level)
>+{
>+ switch (level) {
>+ case PGTABLE_LEVEL_PTE:
>+ return IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL);
>+ case PGTABLE_LEVEL_PMD:
>+ return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP);
>+ case PGTABLE_LEVEL_PUD:
>+ return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP);
>+ default:
>+ return false;
>+ }
>+}
>+
> #define print_bad_pte(vma, addr, pte, page) \
> print_bad_page_map(vma, addr, pte_val(pte), page, PGTABLE_LEVEL_PTE)
>
>@@ -684,7 +699,7 @@ static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
> unsigned long addr, unsigned long pfn, bool special,
> unsigned long long entry, enum pgtable_level level)
> {
>- if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
>+ if (pgtable_level_has_pxx_special(level)) {
> if (unlikely(special)) {
> #ifdef CONFIG_FIND_NORMAL_PAGE
> if (vma->vm_ops && vma->vm_ops->find_normal_page)
>--
>2.43.0
>
>
>
>--
>Cheers,
>
>David
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-30 5:53 ` David Hildenbrand (Arm)
2026-04-30 6:46 ` Lance Yang
2026-04-30 8:30 ` Lance Yang
@ 2026-04-30 8:48 ` Hugh Dickins
2026-04-30 8:54 ` David Hildenbrand (Arm)
2 siblings, 1 reply; 12+ messages in thread
From: Hugh Dickins @ 2026-04-30 8:48 UTC (permalink / raw)
To: David Hildenbrand (Arm)
Cc: Lance Yang, hughd, akpm, baolin.wang, baohua, dev.jain,
liam.howlett, ljs, mhocko, rppt, npache, zhengqi.arch,
ryan.roberts, surenb, ziy, linux-mm
On Thu, 30 Apr 2026, David Hildenbrand (Arm) wrote:
>
> Okay, I'd say we do the following:
>
> From fd9ead548f102f7c257980ccc7b96cce7e42a570 Mon Sep 17 00:00:00 2001
> From: Hugh Dickins <hughd@google.com>
> Date: Thu, 30 Apr 2026 07:35:31 +0200
> Subject: [PATCH] mm: fix pmd_special() fallback to observe huge_zero
No to those lines, this patch of yours is quite different.
>
> On x86 32-bit with THP enabled, zap_huge_pmd() is seen to generate a
> "WARNING: mm/memory.c:735 at __vm_normal_page+0x6a/0x7d", from the
> VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn)); followed
> by "BUG: Bad rss-counter state"s, then later "BUG: Bad page state"s
> when reclaim gets to call shrink_huge_zero_folio_scan().
>
> It's as if the _PAGE_SPECIAL bit never got set in the huge_zero pmd:
> and indeed, whereas pte_special() and pte_mkspecial() are subject to a
> dedicated CONFIG_ARCH_HAS_PTE_SPECIAL, pmd_special() and pmd_mkspecial()
> are subject to CONFIG_ARCH_SUPPORTS_PMD_PFNMAP, which is never enabled
> on any 32-bit architecture.
>
> While the problem was exposed through d80a9cb1a64a ("mm/huge_memory: add
> and use normal_or_softleaf_folio_pmd()"), it was an oversight in
> af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
> and would result in other problems:
> * huge zero folio accounted in smaps, pagemap (PAGE_IS_FILE) and
> numamaps as file-backed THP
> * folio_walk_start() returning the folio even without FW_ZEROPAGE set.
> Callers seem to tolerate that, though.
Yes, I hadn't thought to check other uses when I posted yesterday;
but later took a look through, and was coming to exactly the conclusion
you reach in that paragraph (well, I gave up before following through
far enough on damon: perhaps it could also have been affected).
>
> ... and triggering the VM_WARN_ON_ONCE(), although never reported so far.
>
> To fix it, teach vm_normal_page_pmd()/vm_normal_page_pud() whether
> pmd_special/pud_special is actually implemented.
>
> Fixes: af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
Agreed. My Fixes tag (where my bisection arrived) was correct for the
common zap_huge_pmd() symptom I was seeing (Lorenzo's commit removed
an independent is_huge_zero_pmd() check from it, so it now relies on
vm_normal_folio_pmd() to give the right answer). But you've chased
up other usages, and realized it goes back further. You could just as
well blame the other commit mentioned in this thread, the d82d09e48219
("mm/huge_memory: mark PMD mappings of the huge zero folio special"),
because in some configs it is not doing what it expects to be doing.
But af38538801c6 is where effects start appearing, so fine to blame
it (and both come from the same 6.18 series, so it doesn't matter).
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Co-developed-by: David Hildenbrand (Arm) <david@kernel.org>
That's generous, but the patch is not mine at all, and I'll
happily let you grab my two paragraphs above. Please, just
Reported-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
> ---
> mm/memory.c | 17 ++++++++++++++++-
> 1 file changed, 16 insertions(+), 1 deletion(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 7322a40e73b9..a60bc07b48b2 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -612,6 +612,21 @@ static void print_bad_page_map(struct vm_area_struct *vma,
> dump_stack();
> add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
> }
> +
> +static inline bool pgtable_level_has_pxx_special(enum pgtable_level level)
> +{
> + switch (level) {
> + case PGTABLE_LEVEL_PTE:
> + return IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL);
> + case PGTABLE_LEVEL_PMD:
> + return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP);
> + case PGTABLE_LEVEL_PUD:
> + return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP);
> + default:
> + return false;
> + }
> +}
> +
> #define print_bad_pte(vma, addr, pte, page) \
> print_bad_page_map(vma, addr, pte_val(pte), page, PGTABLE_LEVEL_PTE)
>
> @@ -684,7 +699,7 @@ static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
> unsigned long addr, unsigned long pfn, bool special,
> unsigned long long entry, enum pgtable_level level)
> {
> - if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
> + if (pgtable_level_has_pxx_special(level)) {
> if (unlikely(special)) {
> #ifdef CONFIG_FIND_NORMAL_PAGE
> if (vma->vm_ops && vma->vm_ops->find_normal_page)
That block ends with a comment on CONFIG_ARCH_HAS_PTE_SPECIAL:
perhaps better reworded now - but I don't know what to suggest!
This patch seems okay, but TBH I have no enthusiasm for it -
it forces me to think too hard, and I prefer my own one-liner
(which Lance found odd: odd if you're thinking pmd_special(pmd)
means pmd has the _PAGE_SPECIAL bit set, yes, but not odd if you
think it means that pmd is of a special folio).
But whatever, you and Lance prefer this one: thanks for the fix!
Hugh
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-30 8:48 ` Hugh Dickins
@ 2026-04-30 8:54 ` David Hildenbrand (Arm)
2026-04-30 9:10 ` Lance Yang
0 siblings, 1 reply; 12+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-30 8:54 UTC (permalink / raw)
To: Hugh Dickins
Cc: Lance Yang, akpm, baolin.wang, baohua, dev.jain, liam.howlett,
ljs, mhocko, rppt, npache, zhengqi.arch, ryan.roberts, surenb,
ziy, linux-mm
>> ... and triggering the VM_WARN_ON_ONCE(), although never reported so far.
>>
>> To fix it, teach vm_normal_page_pmd()/vm_normal_page_pud() whether
>> pmd_special/pud_special is actually implemented.
>>
>> Fixes: af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
>
> Agreed. My Fixes tag (where my bisection arrived) was correct for the
> common zap_huge_pmd() symptom I was seeing (Lorenzo's commit removed
> an independent is_huge_zero_pmd() check from it, so it now relies on
> vm_normal_folio_pmd() to give the right answer). But you've chased
> up other usages, and realized it goes back further. You could just as
> well blame the other commit mentioned in this thread, the d82d09e48219
> ("mm/huge_memory: mark PMD mappings of the huge zero folio special"),
> because in some configs it is not doing what it expects to be doing.
> But af38538801c6 is where effects start appearing, so fine to blame
> it (and both come from the same 6.18 series, do it doesn't matter).
>
>> Signed-off-by: Hugh Dickins <hughd@google.com>
>> Co-developed-by: David Hildenbrand (Arm) <david@kernel.org>
>
> That's generous, but the patch is not mine at all, and I'll
> happily let you grab my two paragraphs above. Please, just
Okay, I didn't want to undermine your involvement.
> Reported-by: Hugh Dickins <hughd@google.com>
Then, I'd also add a
Debugged-by: Hugh Dickins <hughd@google.com>
>
>> Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
>> ---
>> mm/memory.c | 17 ++++++++++++++++-
>> 1 file changed, 16 insertions(+), 1 deletion(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 7322a40e73b9..a60bc07b48b2 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -612,6 +612,21 @@ static void print_bad_page_map(struct vm_area_struct *vma,
>> dump_stack();
>> add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
>> }
>> +
>> +static inline bool pgtable_level_has_pxx_special(enum pgtable_level level)
>> +{
>> + switch (level) {
>> + case PGTABLE_LEVEL_PTE:
>> + return IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL);
>> + case PGTABLE_LEVEL_PMD:
>> + return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PMD_PFNMAP);
>> + case PGTABLE_LEVEL_PUD:
>> + return IS_ENABLED(CONFIG_ARCH_SUPPORTS_PUD_PFNMAP);
>> + default:
>> + return false;
>> + }
>> +}
>> +
>> #define print_bad_pte(vma, addr, pte, page) \
>> print_bad_page_map(vma, addr, pte_val(pte), page, PGTABLE_LEVEL_PTE)
>>
>> @@ -684,7 +699,7 @@ static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
>> unsigned long addr, unsigned long pfn, bool special,
>> unsigned long long entry, enum pgtable_level level)
>> {
>> - if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
>> + if (pgtable_level_has_pxx_special(level)) {
>> if (unlikely(special)) {
>> #ifdef CONFIG_FIND_NORMAL_PAGE
>> if (vma->vm_ops && vma->vm_ops->find_normal_page)
>
> That block ends with a comment on CONFIG_ARCH_HAS_PTE_SPECIAL:
> perhaps better reworded now - but I don't know what to suggest!
Good point, let me take a look.
>
> This patch seems okay, but TBH I have no enthusiasm for it -
> it forces me to think too hard, and I prefer my own one-liner
> (which Lance found odd: odd if you're thinking pmd_special(pmd)
> means pmd has the _PAGE_SPECIAL bit set, yes, but not odd if you
> think it means that pmd is of a special folio).
If pmd_mkspecial() is a nop I'd prefer that pmd_special() is similarly a nop.
>
> But whatever, you and Lance prefer this one: thanks for the fix!
Thank you Hugh. I'll send out my version officially, and if there are strong
opinions against it we can just use your variant.
--
Cheers,
David
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero
2026-04-30 8:54 ` David Hildenbrand (Arm)
@ 2026-04-30 9:10 ` Lance Yang
0 siblings, 0 replies; 12+ messages in thread
From: Lance Yang @ 2026-04-30 9:10 UTC (permalink / raw)
To: Hugh Dickins, David Hildenbrand (Arm)
Cc: akpm, baolin.wang, baohua, dev.jain, liam.howlett, ljs, mhocko,
rppt, npache, zhengqi.arch, ryan.roberts, surenb, ziy, linux-mm
On 2026/4/30 16:54, David Hildenbrand (Arm) wrote:
>>> ... and triggering the VM_WARN_ON_ONCE(), although never reported so far.
>>>
>>> To fix it, teach vm_normal_page_pmd()/vm_normal_page_pud() whether
>>> pmd_special/pud_special is actually implemented.
>>>
>>> Fixes: af38538801c6 ("mm/memory: factor out common code from vm_normal_page_*()")
>>
>> Agreed. My Fixes tag (where my bisection arrived) was correct for the
>> common zap_huge_pmd() symptom I was seeing (Lorenzo's commit removed
>> an independent is_huge_zero_pmd() check from it, so it now relies on
>> vm_normal_folio_pmd() to give the right answer). But you've chased
>> up other usages, and realized it goes back further. You could just as
>> well blame the other commit mentioned in this thread, the d82d09e48219
>> ("mm/huge_memory: mark PMD mappings of the huge zero folio special"),
>> because in some configs it is not doing what it expects to be doing.
>> But af38538801c6 is where effects start appearing, so fine to blame
>> it (and both come from the same 6.18 series, so it doesn't matter).
>>
>>> Signed-off-by: Hugh Dickins <hughd@google.com>
>>> Co-developed-by: David Hildenbrand (Arm) <david@kernel.org>
>>
>> That's generous, but the patch is not mine at all, and I'll
>> happily let you grab my two paragraphs above. Please, just
>
> Okay, I didn't want to undermine your involvement.
>
>> Reported-by: Hugh Dickins <hughd@google.com>
>
> Then, I'd also add a
>
> Debugged-by: Hugh Dickins <hughd@google.com>
Yeah. Thanks to Hugh for the bisection, debugging and detailed
analysis!
Cheers, Lance
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2026-04-30 9:11 UTC | newest]
Thread overview: 12+ messages
2026-04-29 5:08 [PATCH hotfix] mm: fix pmd_special() fallback to observe huge_zero Hugh Dickins
2026-04-29 5:54 ` Lance Yang
2026-04-29 6:12 ` David Hildenbrand (Arm)
2026-04-29 6:57 ` Lance Yang
2026-04-29 7:14 ` David Hildenbrand (Arm)
2026-04-29 7:33 ` Lance Yang
2026-04-30 5:53 ` David Hildenbrand (Arm)
2026-04-30 6:46 ` Lance Yang
2026-04-30 8:30 ` Lance Yang
2026-04-30 8:48 ` Hugh Dickins
2026-04-30 8:54 ` David Hildenbrand (Arm)
2026-04-30 9:10 ` Lance Yang