* [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables
@ 2025-07-25 1:21 Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 1/5] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Harry Yoo @ 2025-07-25 1:21 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, Andy Lutomirski, Peter Zijlstra, H . Peter Anvin
Cc: Andrey Ryabinin, Arnd Bergmann, Dennis Zhou, Tejun Heo,
Christoph Lameter, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
Alistair Popple, Mike Rapoport, David Hildenbrand,
Gwan-gyeong Mun, Aneesh Kumar K . V, Uladzislau Rezki,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Qi Zheng, Ard Biesheuvel, Thomas Huth, John Hubbard,
Ryan Roberts, Peter Xu, Dev Jain, Bibo Mao, Anshuman Khandual,
Joerg Roedel, x86, linux-kernel, linux-arch, linux-mm, Harry Yoo
v2: https://lore.kernel.org/linux-mm/20250720234203.9126-1-harry.yoo@oracle.com/
v2 -> v3:
- Rebased onto mm-hotfixes-unstable (e89f90f1a588 ("sprintf.h requires stdarg.h"))
- Fixed kernel test robot reports
- Moved arch-independent ARCH_PAGE_TABLE_SYNC_MASK and
arch_sync_kernel_mappings() declarations to <linux/pgtable.h>.
Moved the x86-64 version of ARCH_PAGE_TABLE_SYNC_MASK
from asm/pgalloc.h to arch/x86/include/asm/pgtable_64_types.h.
Now, any code that wants to use ARCH_PAGE_TABLE_SYNC_MASK and
arch_sync_kernel_mappings() will include <linux/pgtable.h>.
- Dropped Cc: stable from patches 4-5 as technically they are not fixing
bugs.
# The problem: It is easy to miss/overlook page table synchronization
Hi all,
During our internal testing, we started observing intermittent boot
failures when the machine uses 4-level paging and has a large amount
of persistent memory:
BUG: unable to handle page fault for address: ffffe70000000034
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP NOPTI
RIP: 0010:__init_single_page+0x9/0x6d
Call Trace:
<TASK>
__init_zone_device_page+0x17/0x5d
memmap_init_zone_device+0x154/0x1bb
pagemap_range+0x2e0/0x40f
memremap_pages+0x10b/0x2f0
devm_memremap_pages+0x1e/0x60
dev_dax_probe+0xce/0x2ec [device_dax]
dax_bus_probe+0x6d/0xc9
[... snip ...]
</TASK>
It turns out that the kernel panics while initializing vmemmap
(struct page array) when the vmemmap region spans two PGD entries,
because the new PGD entry is only installed in init_mm.pgd,
but not in the page tables of other tasks.
And looking at __populate_section_memmap():
if (vmemmap_can_optimize(altmap, pgmap))
// does not sync top level page tables
r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
else
// sync top level page tables in x86
r = vmemmap_populate(start, end, nid, altmap);
In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c
synchronizes the top level page table (See commit 9b861528a801
("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping
changes")) so that all tasks in the system can see the new vmemmap area.
However, when vmemmap_can_optimize() returns true, the optimized path
skips synchronization of top-level page tables. This is because
vmemmap_populate_compound_pages() is implemented in core MM code, which
does not handle synchronization of the top-level page tables. Instead,
the core MM has historically relied on each architecture to perform this
synchronization manually.
We're not the first to encounter a crash caused by unsynchronized
top-level page tables: earlier this year, Gwan-gyeong Mun attempted to
address the issue [1] [2] after hitting a kernel panic when x86 code
accessed the vmemmap area before the corresponding top-level entries
were synced. At that time, the issue was believed to be triggered
only when struct page was enlarged for debugging purposes, and the patch
did not get further updates.
It turns out that the current approach of relying on each arch to handle
the page table sync manually is fragile because 1) it's easy to forget
to sync the top-level page table, and 2) it's also easy to overlook that
the kernel must not access the vmemmap and direct mapping areas before
the sync.
# The solution: Make page table sync code more robust
To address this, Dave Hansen suggested [3] [4] introducing
{pgd,p4d}_populate_kernel() for updating the kernel portion
of the page tables, allowing each architecture to explicitly perform
synchronization when installing top-level entries. With this approach,
we no longer need to worry about missing the sync step, reducing the risk
of future regressions.
The new interface reuses the existing ARCH_PAGE_TABLE_SYNC_MASK,
PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facilities used by
vmalloc and ioremap to synchronize page tables.
pgd_populate_kernel() looks like this:
#define pgd_populate_kernel(addr, pgd, p4d) \
do { \
pgd_populate(&init_mm, pgd, p4d); \
if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED) \
arch_sync_kernel_mappings(addr, addr); \
} while (0)
It is worth noting that vmalloc() and apply_to_range() already
synchronize page tables carefully by calling p*d_alloc_track() and
arch_sync_kernel_mappings(), and thus they are not affected by
this patch series.
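For reference, the vmalloc-side pattern looks roughly like this
(paraphrased from p4d_alloc_track() in <linux/mm.h>; details may differ
between kernel versions):

static inline p4d_t *p4d_alloc_track(struct mm_struct *mm, pgd_t *pgd,
				     unsigned long address,
				     pgtbl_mod_mask *mod_mask)
{
	if (unlikely(pgd_none(*pgd))) {
		if (__p4d_alloc(mm, pgd, address))
			return NULL;
		/* record that a PGD entry was installed */
		*mod_mask |= PGTBL_PGD_MODIFIED;
	}
	return p4d_offset(pgd, address);
}

Callers accumulate the PGTBL_*_MODIFIED bits in the mask and invoke
arch_sync_kernel_mappings() once over the whole affected range when
(mask & ARCH_PAGE_TABLE_SYNC_MASK) is non-zero.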
This patch series was hugely inspired by Dave Hansen's suggestion and
hence adds Suggested-by: Dave Hansen.
Cc'ing stable because the lack of this series opens the door to
intermittent boot failures.
[1] https://lore.kernel.org/linux-mm/20250220064105.808339-1-gwan-gyeong.mun@intel.com
[2] https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com
[3] https://lore.kernel.org/linux-mm/d1da214c-53d3-45ac-a8b6-51821c5416e4@intel.com
[4] https://lore.kernel.org/linux-mm/4d800744-7b88-41aa-9979-b245e8bf794b@intel.com
Harry Yoo (5):
mm: move page table sync declarations to linux/pgtable.h
mm: introduce and use {pgd,p4d}_populate_kernel()
x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and
arch_sync_kernel_mappings()
x86/mm/64: convert p*d_populate{,_init} to _kernel variants
x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its
sole user
arch/x86/include/asm/pgalloc.h | 20 +++++++++++++
arch/x86/include/asm/pgtable_64_types.h | 3 ++
arch/x86/mm/init_64.c | 37 ++++++++++++++-----------
arch/x86/mm/kasan_init_64.c | 8 +++---
include/asm-generic/pgalloc.h | 16 +++++++++++
include/linux/pgtable.h | 17 ++++++++++++
include/linux/vmalloc.h | 16 -----------
mm/kasan/init.c | 10 +++----
mm/percpu.c | 4 +--
mm/sparse-vmemmap.c | 4 +--
10 files changed, 90 insertions(+), 45 deletions(-)
--
2.43.0
* [PATCH v3 mm-hotfixes 1/5] mm: move page table sync declarations to linux/pgtable.h
2025-07-25 1:21 [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
@ 2025-07-25 1:21 ` Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
` (4 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2025-07-25 1:21 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, Andy Lutomirski, Peter Zijlstra, H . Peter Anvin
Cc: Andrey Ryabinin, Arnd Bergmann, Dennis Zhou, Tejun Heo,
Christoph Lameter, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
Alistair Popple, Mike Rapoport, David Hildenbrand,
Gwan-gyeong Mun, Aneesh Kumar K . V, Uladzislau Rezki,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Qi Zheng, Ard Biesheuvel, Thomas Huth, John Hubbard,
Ryan Roberts, Peter Xu, Dev Jain, Bibo Mao, Anshuman Khandual,
Joerg Roedel, x86, linux-kernel, linux-arch, linux-mm, Harry Yoo,
stable
Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
linux/pgtable.h so that they can be used outside of vmalloc and ioremap.
Cc: stable@vger.kernel.org
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
include/linux/pgtable.h | 17 +++++++++++++++++
include/linux/vmalloc.h | 16 ----------------
2 files changed, 17 insertions(+), 16 deletions(-)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 0b6e1f781d86..e564f338c758 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1329,6 +1329,23 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
__ptep_modify_prot_commit(vma, addr, ptep, pte);
}
#endif /* __HAVE_ARCH_PTEP_MODIFY_PROT_TRANSACTION */
+
+/*
+ * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
+ * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
+ * needs to be called.
+ */
+#ifndef ARCH_PAGE_TABLE_SYNC_MASK
+#define ARCH_PAGE_TABLE_SYNC_MASK 0
+#endif
+
+/*
+ * There is no default implementation for arch_sync_kernel_mappings(). It is
+ * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
+ * is 0.
+ */
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
+
#endif /* CONFIG_MMU */
/*
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index fdc9aeb74a44..2759dac6be44 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -219,22 +219,6 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
struct page **pages, unsigned int page_shift);
-/*
- * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
- */
-#ifndef ARCH_PAGE_TABLE_SYNC_MASK
-#define ARCH_PAGE_TABLE_SYNC_MASK 0
-#endif
-
-/*
- * There is no default implementation for arch_sync_kernel_mappings(). It is
- * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
- * is 0.
- */
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
-
/*
* Lowlevel-APIs (not for driver use!)
*/
--
2.43.0
* [PATCH v3 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-07-25 1:21 [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 1/5] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
@ 2025-07-25 1:21 ` Harry Yoo
2025-07-29 7:59 ` Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 3/5] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
` (3 subsequent siblings)
5 siblings, 1 reply; 9+ messages in thread
From: Harry Yoo @ 2025-07-25 1:21 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, Andy Lutomirski, Peter Zijlstra, H . Peter Anvin
Cc: Andrey Ryabinin, Arnd Bergmann, Dennis Zhou, Tejun Heo,
Christoph Lameter, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
Alistair Popple, Mike Rapoport, David Hildenbrand,
Gwan-gyeong Mun, Aneesh Kumar K . V, Uladzislau Rezki,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Qi Zheng, Ard Biesheuvel, Thomas Huth, John Hubbard,
Ryan Roberts, Peter Xu, Dev Jain, Bibo Mao, Anshuman Khandual,
Joerg Roedel, x86, linux-kernel, linux-arch, linux-mm, Harry Yoo,
stable
Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
populating PGD and P4D entries for the kernel address space.
These helpers ensure proper synchronization of page tables when
updating the kernel portion of top-level page tables.
Until now, the kernel has relied on each architecture to handle
synchronization of top-level page tables in an ad-hoc manner.
For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
direct mapping and vmemmap mapping changes").
However, this approach has proven fragile for the following reasons:
1) It is easy to forget to perform the necessary page table
synchronization when introducing new changes.
For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
savings for compound devmaps") overlooked the need to synchronize
page tables for the vmemmap area.
2) It is also easy to overlook that the vmemmap and direct mapping areas
must not be accessed before explicit page table synchronization.
For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
sub-pmd ranges") caused crashes by accessing the vmemmap area
before calling sync_global_pgds().
To address this, as suggested by Dave Hansen, introduce _kernel() variants
of the page table population helpers, which invoke architecture-specific
hooks to properly synchronize page tables.
They reuse existing infrastructure for vmalloc and ioremap.
Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
and the actual synchronization is performed by arch_sync_kernel_mappings().
This change currently targets only x86_64, so only PGD and P4D level
helpers are introduced. In theory, PUD and PMD level helpers can be added
later if needed by other architectures.
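For illustration only (hypothetical, not added by this series), a
PUD-level variant would follow the same pattern:

#define pud_populate_kernel(addr, pud, pmd)				\
do {									\
	pud_populate(&init_mm, pud, pmd);				\
	if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PUD_MODIFIED)		\
		arch_sync_kernel_mappings(addr, addr);			\
} while (0)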
Currently this is a no-op, since no architecture sets
PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
Cc: stable@vger.kernel.org
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
include/asm-generic/pgalloc.h | 16 ++++++++++++++++
include/linux/pgtable.h | 4 ++--
mm/kasan/init.c | 10 +++++-----
mm/percpu.c | 4 ++--
mm/sparse-vmemmap.c | 4 ++--
5 files changed, 27 insertions(+), 11 deletions(-)
diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 3c8ec3bfea44..fc0ab8eed5a6 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -4,6 +4,8 @@
#ifdef CONFIG_MMU
+#include <linux/pgtable.h>
+
#define GFP_PGTABLE_KERNEL (GFP_KERNEL | __GFP_ZERO)
#define GFP_PGTABLE_USER (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)
@@ -296,6 +298,20 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
}
#endif
+#define pgd_populate_kernel(addr, pgd, p4d) \
+do { \
+ pgd_populate(&init_mm, pgd, p4d); \
+ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED) \
+ arch_sync_kernel_mappings(addr, addr); \
+} while (0)
+
+#define p4d_populate_kernel(addr, p4d, pud) \
+do { \
+ p4d_populate(&init_mm, p4d, pud); \
+ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED) \
+ arch_sync_kernel_mappings(addr, addr); \
+} while (0)
+
#endif /* CONFIG_MMU */
#endif /* __ASM_GENERIC_PGALLOC_H */
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index e564f338c758..2e24514ab6d0 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1332,8 +1332,8 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma,
/*
* Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
+ * and let generic vmalloc, ioremap and page table update code know when
+ * arch_sync_kernel_mappings() needs to be called.
*/
#ifndef ARCH_PAGE_TABLE_SYNC_MASK
#define ARCH_PAGE_TABLE_SYNC_MASK 0
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ced6b29fcf76..43de820ee282 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -191,7 +191,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
pud_t *pud;
pmd_t *pmd;
- p4d_populate(&init_mm, p4d,
+ p4d_populate_kernel(addr, p4d,
lm_alias(kasan_early_shadow_pud));
pud = pud_offset(p4d, addr);
pud_populate(&init_mm, pud,
@@ -212,7 +212,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
} else {
p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
pud_init(p);
- p4d_populate(&init_mm, p4d, p);
+ p4d_populate_kernel(addr, p4d, p);
}
}
zero_pud_populate(p4d, addr, next);
@@ -251,10 +251,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
* puds,pmds, so pgd_populate(), pud_populate()
* is noops.
*/
- pgd_populate(&init_mm, pgd,
+ pgd_populate_kernel(addr, pgd,
lm_alias(kasan_early_shadow_p4d));
p4d = p4d_offset(pgd, addr);
- p4d_populate(&init_mm, p4d,
+ p4d_populate_kernel(addr, p4d,
lm_alias(kasan_early_shadow_pud));
pud = pud_offset(p4d, addr);
pud_populate(&init_mm, pud,
@@ -273,7 +273,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
if (!p)
return -ENOMEM;
} else {
- pgd_populate(&init_mm, pgd,
+ pgd_populate_kernel(addr, pgd,
early_alloc(PAGE_SIZE, NUMA_NO_NODE));
}
}
diff --git a/mm/percpu.c b/mm/percpu.c
index b35494c8ede2..1615dc3b3af5 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3134,13 +3134,13 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
if (pgd_none(*pgd)) {
p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
- pgd_populate(&init_mm, pgd, p4d);
+ pgd_populate_kernel(addr, pgd, p4d);
}
p4d = p4d_offset(pgd, addr);
if (p4d_none(*p4d)) {
pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
- p4d_populate(&init_mm, p4d, pud);
+ p4d_populate_kernel(addr, p4d, pud);
}
pud = pud_offset(p4d, addr);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index fd2ab5118e13..e275310ac708 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -229,7 +229,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
if (!p)
return NULL;
pud_init(p);
- p4d_populate(&init_mm, p4d, p);
+ p4d_populate_kernel(addr, p4d, p);
}
return p4d;
}
@@ -241,7 +241,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
if (!p)
return NULL;
- pgd_populate(&init_mm, pgd, p);
+ pgd_populate_kernel(addr, pgd, p);
}
return pgd;
}
--
2.43.0
* [PATCH v3 mm-hotfixes 3/5] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
2025-07-25 1:21 [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 1/5] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
@ 2025-07-25 1:21 ` Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 4/5] x86/mm/64: convert p*d_populate{,_init} to _kernel variants Harry Yoo
` (2 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2025-07-25 1:21 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, Andy Lutomirski, Peter Zijlstra, H . Peter Anvin
Cc: Andrey Ryabinin, Arnd Bergmann, Dennis Zhou, Tejun Heo,
Christoph Lameter, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
Alistair Popple, Mike Rapoport, David Hildenbrand,
Gwan-gyeong Mun, Aneesh Kumar K . V, Uladzislau Rezki,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Qi Zheng, Ard Biesheuvel, Thomas Huth, John Hubbard,
Ryan Roberts, Peter Xu, Dev Jain, Bibo Mao, Anshuman Khandual,
Joerg Roedel, x86, linux-kernel, linux-arch, linux-mm, Harry Yoo,
stable
Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to ensure
page tables are properly synchronized when calling p*d_populate_kernel().
Page tables are synchronized via pgd_populate_kernel() when 5-level
paging is in use, and via p4d_populate_kernel() when 4-level paging
is used.
This fixes intermittent boot failures on systems using 4-level paging
and a large amount of persistent memory:
BUG: unable to handle page fault for address: ffffe70000000034
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP NOPTI
RIP: 0010:__init_single_page+0x9/0x6d
Call Trace:
<TASK>
__init_zone_device_page+0x17/0x5d
memmap_init_zone_device+0x154/0x1bb
pagemap_range+0x2e0/0x40f
memremap_pages+0x10b/0x2f0
devm_memremap_pages+0x1e/0x60
dev_dax_probe+0xce/0x2ec [device_dax]
dax_bus_probe+0x6d/0xc9
[... snip ...]
</TASK>
It also fixes a crash in vmemmap_set_pmd() caused by accessing the
vmemmap area before sync_global_pgds() is called [1]:
BUG: unable to handle page fault for address: ffffeb3ff1200000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
Tainted: [W]=WARN
RIP: 0010:vmemmap_set_pmd+0xff/0x230
<TASK>
vmemmap_populate_hugepages+0x176/0x180
vmemmap_populate+0x34/0x80
__populate_section_memmap+0x41/0x90
sparse_add_section+0x121/0x3e0
__add_pages+0xba/0x150
add_pages+0x1d/0x70
memremap_pages+0x3dc/0x810
devm_memremap_pages+0x1c/0x60
xe_devm_add+0x8b/0x100 [xe]
xe_tile_init_noalloc+0x6a/0x70 [xe]
xe_device_probe+0x48c/0x740 [xe]
[... snip ...]
Cc: stable@vger.kernel.org
Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Closes: https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com [1]
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
arch/x86/include/asm/pgtable_64_types.h | 3 +++
arch/x86/mm/init_64.c | 5 +++++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 4604f924d8b8..7eb61ef6a185 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -36,6 +36,9 @@ static inline bool pgtable_l5_enabled(void)
#define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
#endif /* USE_EARLY_PGTABLE_L5 */
+#define ARCH_PAGE_TABLE_SYNC_MASK \
+ (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
+
extern unsigned int pgdir_shift;
extern unsigned int ptrs_per_p4d;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index fdb6cab524f0..3800479022e4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -223,6 +223,11 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
sync_global_pgds_l4(start, end);
}
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+ sync_global_pgds(start, end);
+}
+
/*
* NOTE: This function is marked __ref because it calls __init function
* (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
--
2.43.0
* [PATCH v3 mm-hotfixes 4/5] x86/mm/64: convert p*d_populate{,_init} to _kernel variants
2025-07-25 1:21 [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
` (2 preceding siblings ...)
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 3/5] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
@ 2025-07-25 1:21 ` Harry Yoo
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 5/5] x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its sole user Harry Yoo
2025-07-25 23:51 ` [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Andrew Morton
5 siblings, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2025-07-25 1:21 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, Andy Lutomirski, Peter Zijlstra, H . Peter Anvin
Cc: Andrey Ryabinin, Arnd Bergmann, Dennis Zhou, Tejun Heo,
Christoph Lameter, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
Alistair Popple, Mike Rapoport, David Hildenbrand,
Gwan-gyeong Mun, Aneesh Kumar K . V, Uladzislau Rezki,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Qi Zheng, Ard Biesheuvel, Thomas Huth, John Hubbard,
Ryan Roberts, Peter Xu, Dev Jain, Bibo Mao, Anshuman Khandual,
Joerg Roedel, x86, linux-kernel, linux-arch, linux-mm, Harry Yoo
Introduce p*d_populate_kernel_safe() and convert p*d_populate{,_init}()
to p*d_populate_kernel{,_init}() to ensure synchronization of
kernel mappings when populating PGD and P4D entries.
By converting them, we eliminate the risk of forgetting to synchronize
top-level page tables after populating these entries.
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
arch/x86/include/asm/pgalloc.h | 20 ++++++++++++++++++++
arch/x86/mm/init_64.c | 25 +++++++++++++++++++------
arch/x86/mm/kasan_init_64.c | 8 ++++----
3 files changed, 43 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index c88691b15f3c..1d5af9fc4557 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -120,6 +120,15 @@ static inline void p4d_populate_safe(struct mm_struct *mm, p4d_t *p4d, pud_t *pu
set_p4d_safe(p4d, __p4d(_PAGE_TABLE | __pa(pud)));
}
+static inline void p4d_populate_kernel_safe(unsigned long addr,
+ p4d_t *p4d, pud_t *pud)
+{
+ paravirt_alloc_pud(&init_mm, __pa(pud) >> PAGE_SHIFT);
+ set_p4d_safe(p4d, __p4d(_PAGE_TABLE | __pa(pud)));
+ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
+ arch_sync_kernel_mappings(addr, addr);
+}
+
extern void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud);
static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
@@ -145,6 +154,17 @@ static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4
set_pgd_safe(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
}
+static inline void pgd_populate_kernel_safe(unsigned long addr,
+ pgd_t *pgd, p4d_t *p4d)
+{
+ if (!pgtable_l5_enabled())
+ return;
+ paravirt_alloc_p4d(&init_mm, __pa(p4d) >> PAGE_SHIFT);
+ set_pgd_safe(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
+ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
+ arch_sync_kernel_mappings(addr, addr);
+}
+
extern void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d);
static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 3800479022e4..e4922b9c8403 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -75,6 +75,19 @@ DEFINE_POPULATE(pgd_populate, pgd, p4d, init)
DEFINE_POPULATE(pud_populate, pud, pmd, init)
DEFINE_POPULATE(pmd_populate_kernel, pmd, pte, init)
+#define DEFINE_POPULATE_KERNEL(fname, type1, type2, init) \
+static inline void fname##_init(unsigned long addr, \
+ type1##_t *arg1, type2##_t *arg2, bool init) \
+{ \
+ if (init) \
+ fname##_safe(addr, arg1, arg2); \
+ else \
+ fname(addr, arg1, arg2); \
+}
+
+DEFINE_POPULATE_KERNEL(pgd_populate_kernel, pgd, p4d, init)
+DEFINE_POPULATE_KERNEL(p4d_populate_kernel, p4d, pud, init)
+
#define DEFINE_ENTRY(type1, type2, init) \
static inline void set_##type1##_init(type1##_t *arg1, \
type2##_t arg2, bool init) \
@@ -255,7 +268,7 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
{
if (pgd_none(*pgd)) {
p4d_t *p4d = (p4d_t *)spp_getpage();
- pgd_populate(&init_mm, pgd, p4d);
+ pgd_populate_kernel(vaddr, pgd, p4d);
if (p4d != p4d_offset(pgd, 0))
printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
p4d, p4d_offset(pgd, 0));
@@ -267,7 +280,7 @@ static pud_t *fill_pud(p4d_t *p4d, unsigned long vaddr)
{
if (p4d_none(*p4d)) {
pud_t *pud = (pud_t *)spp_getpage();
- p4d_populate(&init_mm, p4d, pud);
+ p4d_populate_kernel(vaddr, p4d, pud);
if (pud != pud_offset(p4d, 0))
printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
pud, pud_offset(p4d, 0));
@@ -720,7 +733,7 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
- p4d_populate_init(&init_mm, p4d, pud, init);
+ p4d_populate_kernel_init(vaddr, p4d, pud, init);
spin_unlock(&init_mm.page_table_lock);
}
@@ -762,10 +775,10 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
spin_lock(&init_mm.page_table_lock);
if (pgtable_l5_enabled())
- pgd_populate_init(&init_mm, pgd, p4d, init);
+ pgd_populate_kernel_init(vaddr, pgd, p4d, init);
else
- p4d_populate_init(&init_mm, p4d_offset(pgd, vaddr),
- (pud_t *) p4d, init);
+ p4d_populate_kernel_init(vaddr, p4d_offset(pgd, vaddr),
+ (pud_t *) p4d, init);
spin_unlock(&init_mm.page_table_lock);
pgd_changed = true;
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d216..e825952d25b2 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -108,7 +108,7 @@ static void __init kasan_populate_p4d(p4d_t *p4d, unsigned long addr,
if (p4d_none(*p4d)) {
void *p = early_alloc(PAGE_SIZE, nid, true);
- p4d_populate(&init_mm, p4d, p);
+ p4d_populate_kernel(addr, p4d, p);
}
pud = pud_offset(p4d, addr);
@@ -128,7 +128,7 @@ static void __init kasan_populate_pgd(pgd_t *pgd, unsigned long addr,
if (pgd_none(*pgd)) {
p = early_alloc(PAGE_SIZE, nid, true);
- pgd_populate(&init_mm, pgd, p);
+ pgd_populate_kernel(addr, pgd, p);
}
p4d = p4d_offset(pgd, addr);
@@ -255,7 +255,7 @@ static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
if (p4d_none(*p4d)) {
p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
- p4d_populate(&init_mm, p4d, p);
+ p4d_populate_kernel(addr, p4d, p);
}
} while (p4d++, addr = next, addr != end);
}
@@ -273,7 +273,7 @@ static void __init kasan_shallow_populate_pgds(void *start, void *end)
if (pgd_none(*pgd)) {
p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
- pgd_populate(&init_mm, pgd, p);
+ pgd_populate_kernel(addr, pgd, p);
}
/*
--
2.43.0
* [PATCH v3 mm-hotfixes 5/5] x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its sole user
2025-07-25 1:21 [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
` (3 preceding siblings ...)
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 4/5] x86/mm/64: convert p*d_populate{,_init} to _kernel variants Harry Yoo
@ 2025-07-25 1:21 ` Harry Yoo
2025-07-25 23:51 ` [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Andrew Morton
5 siblings, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2025-07-25 1:21 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, Andy Lutomirski, Peter Zijlstra, H . Peter Anvin
Cc: Andrey Ryabinin, Arnd Bergmann, Dennis Zhou, Tejun Heo,
Christoph Lameter, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
Alistair Popple, Mike Rapoport, David Hildenbrand,
Gwan-gyeong Mun, Aneesh Kumar K . V, Uladzislau Rezki,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Qi Zheng, Ard Biesheuvel, Thomas Huth, John Hubbard,
Ryan Roberts, Peter Xu, Dev Jain, Bibo Mao, Anshuman Khandual,
Joerg Roedel, x86, linux-kernel, linux-arch, linux-mm, Harry Yoo
Now that p*d_populate_kernel{,_init}() handle page table synchronization,
calling sync_global_pgds() directly is no longer necessary. Remove those
redundant calls.
Additionally, since arch_sync_kernel_mappings() is now the only remaining
caller of sync_global_pgds(), fold the latter into its sole user.
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
arch/x86/mm/init_64.c | 17 ++---------------
1 file changed, 2 insertions(+), 15 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index e4922b9c8403..f1507de3b7a3 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -228,7 +228,7 @@ static void sync_global_pgds_l4(unsigned long start, unsigned long end)
* When memory was added make sure all the processes MM have
* suitable PGD entries in the local PGD level page.
*/
-static void sync_global_pgds(unsigned long start, unsigned long end)
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
{
if (pgtable_l5_enabled())
sync_global_pgds_l5(start, end);
@@ -236,11 +236,6 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
sync_global_pgds_l4(start, end);
}
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
-{
- sync_global_pgds(start, end);
-}
-
/*
* NOTE: This function is marked __ref because it calls __init function
* (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
@@ -746,13 +741,11 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
unsigned long page_size_mask,
pgprot_t prot, bool init)
{
- bool pgd_changed = false;
- unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+ unsigned long vaddr, vaddr_end, vaddr_next, paddr_last;
paddr_last = paddr_end;
vaddr = (unsigned long)__va(paddr_start);
vaddr_end = (unsigned long)__va(paddr_end);
- vaddr_start = vaddr;
for (; vaddr < vaddr_end; vaddr = vaddr_next) {
pgd_t *pgd = pgd_offset_k(vaddr);
@@ -781,12 +774,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
(pud_t *) p4d, init);
spin_unlock(&init_mm.page_table_lock);
- pgd_changed = true;
}
- if (pgd_changed)
- sync_global_pgds(vaddr_start, vaddr_end - 1);
-
return paddr_last;
}
@@ -1580,8 +1569,6 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
err = -ENOMEM;
} else
err = vmemmap_populate_basepages(start, end, node, NULL);
- if (!err)
- sync_global_pgds(start, end - 1);
return err;
}
--
2.43.0
* Re: [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables
2025-07-25 1:21 [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
` (4 preceding siblings ...)
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 5/5] x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its sole user Harry Yoo
@ 2025-07-25 23:51 ` Andrew Morton
2025-07-26 0:56 ` Harry Yoo
5 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2025-07-25 23:51 UTC (permalink / raw)
To: Harry Yoo
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Andy Lutomirski, Peter Zijlstra, H . Peter Anvin, Andrey Ryabinin,
Arnd Bergmann, Dennis Zhou, Tejun Heo, Christoph Lameter,
Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Oscar Salvador,
Joao Martins, Lorenzo Stoakes, Jane Chu, Alistair Popple,
Mike Rapoport, David Hildenbrand, Gwan-gyeong Mun,
Aneesh Kumar K . V, Uladzislau Rezki, Liam R . Howlett,
Vlastimil Babka, Suren Baghdasaryan, Michal Hocko, Qi Zheng,
Ard Biesheuvel, Thomas Huth, John Hubbard, Ryan Roberts, Peter Xu,
Dev Jain, Bibo Mao, Anshuman Khandual, Joerg Roedel, x86,
linux-kernel, linux-arch, linux-mm
On Fri, 25 Jul 2025 10:21:01 +0900 Harry Yoo <harry.yoo@oracle.com> wrote:
> During our internal testing, we started observing intermittent boot
> failures when the machine uses 4-level paging and has a large amount
> of persistent memory:
>
> BUG: unable to handle page fault for address: ffffe70000000034
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 0 P4D 0
> Oops: 0002 [#1] SMP NOPTI
> RIP: 0010:__init_single_page+0x9/0x6d
> Call Trace:
> <TASK>
> __init_zone_device_page+0x17/0x5d
> memmap_init_zone_device+0x154/0x1bb
> pagemap_range+0x2e0/0x40f
> memremap_pages+0x10b/0x2f0
> devm_memremap_pages+0x1e/0x60
> dev_dax_probe+0xce/0x2ec [device_dax]
> dax_bus_probe+0x6d/0xc9
> [... snip ...]
> </TASK>
>
> ...
>
> arch/x86/include/asm/pgalloc.h | 20 +++++++++++++
> arch/x86/include/asm/pgtable_64_types.h | 3 ++
> arch/x86/mm/init_64.c | 37 ++++++++++++++-----------
> arch/x86/mm/kasan_init_64.c | 8 +++---
> include/asm-generic/pgalloc.h | 16 +++++++++++
> include/linux/pgtable.h | 17 ++++++++++++
> include/linux/vmalloc.h | 16 -----------
> mm/kasan/init.c | 10 +++----
> mm/percpu.c | 4 +--
> mm/sparse-vmemmap.c | 4 +--
> 10 files changed, 90 insertions(+), 45 deletions(-)
Are any other architectures likely to be affected by this flaw?
It's late for 6.16. I'd propose that this series target 6.17 and once
merged, the cc:stable tags will take care of 6.16.x and earlier.
It's regrettable that the series contains some patches which are
cc:stable and some which are not. Because 6.16.x and earlier will end
up getting only some of these patches, we're backporting an untested
patch combination. It would be better to prepare all this as two
series: one for backporting and the other not.
It's awkward that some of the cc:stable patches have a Fixes: and
others do not. Exactly which kernel version(s) are we asking the
-stable maintainers to merge these patches into?
This looks somewhat more like an x86 series than an MM one. I can take
it via mm.git with suitable x86 acks. Or drop it from mm.git if it
goes into the x86 tree. We can discuss that.
For now, I'll add this to mm.git's mm-new branch. There it will get a
bit of exposure but it will be withheld from linux-next. Once 6.17-rc1
is released I can move this into mm.git's mm-unstable branch to expose
it to linux-next testers.
Thanks. I'll suppress the usual added-to-mm emails, save a few electrons.
* Re: [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables
2025-07-25 23:51 ` [PATCH v3 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Andrew Morton
@ 2025-07-26 0:56 ` Harry Yoo
0 siblings, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2025-07-26 0:56 UTC (permalink / raw)
To: Andrew Morton
Cc: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
Andy Lutomirski, Peter Zijlstra, H . Peter Anvin, Andrey Ryabinin,
Arnd Bergmann, Dennis Zhou, Tejun Heo, Christoph Lameter,
Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Oscar Salvador,
Joao Martins, Lorenzo Stoakes, Jane Chu, Alistair Popple,
Mike Rapoport, David Hildenbrand, Gwan-gyeong Mun,
Aneesh Kumar K . V, Uladzislau Rezki, Liam R . Howlett,
Vlastimil Babka, Suren Baghdasaryan, Michal Hocko, Qi Zheng,
Ard Biesheuvel, Thomas Huth, John Hubbard, Ryan Roberts, Peter Xu,
Dev Jain, Bibo Mao, Anshuman Khandual, Joerg Roedel, x86,
linux-kernel, linux-arch, linux-mm
On Fri, Jul 25, 2025 at 04:51:01PM -0700, Andrew Morton wrote:
> On Fri, 25 Jul 2025 10:21:01 +0900 Harry Yoo <harry.yoo@oracle.com> wrote:
>
> > During our internal testing, we started observing intermittent boot
> > failures when the machine uses 4-level paging and has a large amount
> > of persistent memory:
> >
> > BUG: unable to handle page fault for address: ffffe70000000034
> > #PF: supervisor write access in kernel mode
> > #PF: error_code(0x0002) - not-present page
> > PGD 0 P4D 0
> > Oops: 0002 [#1] SMP NOPTI
> > RIP: 0010:__init_single_page+0x9/0x6d
> > Call Trace:
> > <TASK>
> > __init_zone_device_page+0x17/0x5d
> > memmap_init_zone_device+0x154/0x1bb
> > pagemap_range+0x2e0/0x40f
> > memremap_pages+0x10b/0x2f0
> > devm_memremap_pages+0x1e/0x60
> > dev_dax_probe+0xce/0x2ec [device_dax]
> > dax_bus_probe+0x6d/0xc9
> > [... snip ...]
> > </TASK>
> >
> > ...
> >
> > arch/x86/include/asm/pgalloc.h | 20 +++++++++++++
> > arch/x86/include/asm/pgtable_64_types.h | 3 ++
> > arch/x86/mm/init_64.c | 37 ++++++++++++++-----------
> > arch/x86/mm/kasan_init_64.c | 8 +++---
> > include/asm-generic/pgalloc.h | 16 +++++++++++
> > include/linux/pgtable.h | 17 ++++++++++++
> > include/linux/vmalloc.h | 16 -----------
> > mm/kasan/init.c | 10 +++----
> > mm/percpu.c | 4 +--
> > mm/sparse-vmemmap.c | 4 +--
> > 10 files changed, 90 insertions(+), 45 deletions(-)
>
> Are any other architectures likely to be affected by this flaw?
In theory, any architecture that does not share the kernel page table
between tasks can be affected if it forgets to sync page tables properly.
e.g., arm64 uses a single page table for the kernel address space that is
shared between tasks, so it should not be affected.
But I'm not aware of any other architectures that are _actually_ known to
have this flaw. Even on x86, it was quite hard to trigger without
hot-plugging a large amount of memory. But if it turns out other
architectures are affected, they can be fixed later in the same way as
x86-64.
> It's late for 6.16. I'd propose that this series target 6.17 and once
> merged, the cc:stable tags will take care of 6.16.x and earlier.
Yes. It's quite late and that makes sense.
> It's regrettable that the series contains some patches which are
> cc:stable and some which are not. Because 6.16.x and earlier will end
> up getting only some of these patches, so we're backporting an untested
> patch combination. It would be better to prepare all this as two
> series: one for backporting and the other not.
Yes, that makes sense, and I'll post it as two series (one for
backporting, and the other as a follow-up not intended for backporting)
unless someone speaks up and argues that it should be backported as a
whole.
> It's awkward that some of the cc:stable patches have a Fixes: and
> others do not. Exactly which kernel version(s) are we asking the
> -stable maintainers to merge these patches into?
I thought that technically patches 1 and 2 are not fixing any bugs, but
they are prerequisites of patch 3. But I think you're right that it only
confuses -stable maintainers. I'll add Fixes: tags (the same one as
patch 3) to patches 1 and 2 in future revisions.
> This looks somewhat more like an x86 series than an MM one. I can take
> it via mm.git with suitable x86 acks. Or drop it from mm.git if it
> goes into the x86 tree. We can discuss that.
It touches both x86/mm and general mm code so I was unsure which tree
is the right one :) I don't have a strong opinion and I'm fine with both.
Let's wait to hear opinions from the x86/mm maintainers.
> For now, I'll add this to mm.git's mm-new branch. There it will get a
> bit of exposure but it will be withheld from linux-next. Once 6.17-rc1
> is released I can move this into mm.git's mm-unstable branch to expose
> it to linux-next testers.
>
> Thanks. I'll suppress the usual added-to-mm emails, save a few electrons.
Yeah, the Cc list got quite long since it touches many files..
Thanks a lot, Andrew!
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v3 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-07-25 1:21 ` [PATCH v3 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
@ 2025-07-29 7:59 ` Harry Yoo
0 siblings, 0 replies; 9+ messages in thread
From: Harry Yoo @ 2025-07-29 7:59 UTC (permalink / raw)
To: Andrew Morton, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
Dave Hansen, Andy Lutomirski, Peter Zijlstra, H . Peter Anvin
Cc: Andrey Ryabinin, Arnd Bergmann, Dennis Zhou, Tejun Heo,
Christoph Lameter, Alexander Potapenko, Andrey Konovalov,
Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
Alistair Popple, Mike Rapoport, David Hildenbrand,
Gwan-gyeong Mun, Aneesh Kumar K . V, Uladzislau Rezki,
Liam R . Howlett, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Qi Zheng, Ard Biesheuvel, Thomas Huth, John Hubbard,
Ryan Roberts, Peter Xu, Dev Jain, Bibo Mao, Anshuman Khandual,
Joerg Roedel, x86, linux-kernel, linux-arch, linux-mm, stable
Adding a comment after looking at a kernel test robot report [1]
that seems to have been rejected by linux-mm.
[1] https://lore.kernel.org/oe-kbuild-all/202507290917.T24WIcvt-lkp@intel.com
I will post the next version with this fixed, including only the first
three patches, which will be backported to -stable (and post the last 2
patches as a follow-up after that).
On Fri, Jul 25, 2025 at 10:21:03AM +0900, Harry Yoo wrote:
> Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> populating PGD and P4D entries for the kernel address space.
> These helpers ensure proper synchronization of page tables when
> updating the kernel portion of top-level page tables.
>
> Until now, the kernel has relied on each architecture to handle
> synchronization of top-level page tables in an ad-hoc manner.
> For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> direct mapping and vmemmap mapping changes").
>
> However, this approach has proven fragile for the following reasons:
>
> 1) It is easy to forget to perform the necessary page table
> synchronization when introducing new changes.
> For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> savings for compound devmaps") overlooked the need to synchronize
> page tables for the vmemmap area.
>
> 2) It is also easy to overlook that the vmemmap and direct mapping areas
> must not be accessed before explicit page table synchronization.
> For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> sub-pmd ranges") caused crashes by accessing the vmemmap area
> before calling sync_global_pgds().
>
> To address this, as suggested by Dave Hansen, introduce _kernel() variants
> of the page table population helpers, which invoke architecture-specific
> hooks to properly synchronize page tables.
>
> They reuse existing infrastructure for vmalloc and ioremap.
> Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> and the actual synchronization is performed by arch_sync_kernel_mappings().
>
> This change currently targets only x86_64, so only PGD and P4D level
> helpers are introduced. In theory, PUD and PMD level helpers can be added
> later if needed by other architectures.
>
> Currently this is a no-op, since no architecture sets
> PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
>
> Cc: stable@vger.kernel.org
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> ---
> include/asm-generic/pgalloc.h | 16 ++++++++++++++++
> include/linux/pgtable.h | 4 ++--
> mm/kasan/init.c | 10 +++++-----
> mm/percpu.c | 4 ++--
> mm/sparse-vmemmap.c | 4 ++--
> 5 files changed, 27 insertions(+), 11 deletions(-)
>
> diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
> index 3c8ec3bfea44..fc0ab8eed5a6 100644
> --- a/include/asm-generic/pgalloc.h
> +++ b/include/asm-generic/pgalloc.h
> @@ -4,6 +4,8 @@
>
> #ifdef CONFIG_MMU
>
> +#include <linux/pgtable.h>
> +
> #define GFP_PGTABLE_KERNEL (GFP_KERNEL | __GFP_ZERO)
> #define GFP_PGTABLE_USER (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT)
>
> @@ -296,6 +298,20 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
> }
> #endif
>
> +#define pgd_populate_kernel(addr, pgd, p4d) \
> +do { \
> + pgd_populate(&init_mm, pgd, p4d); \
> + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED) \
> + arch_sync_kernel_mappings(addr, addr); \
> +} while (0)
> +
> +#define p4d_populate_kernel(addr, p4d, pud) \
> +do { \
> + p4d_populate(&init_mm, p4d, pud); \
> + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED) \
> + arch_sync_kernel_mappings(addr, addr); \
> +} while (0)
> +
> #endif /* CONFIG_MMU */
The report [1] complains that p*d_populate_kernel() is not defined:
mm/percpu.c: In function 'pcpu_populate_pte':
>> mm/percpu.c:3137:17: error: implicit declaration of function 'pgd_populate_kernel'; did you mean 'pmd_populate_kernel'? [-Wimplicit-function-declaration]
3137 | pgd_populate_kernel(addr, pgd, p4d);
| ^~~~~~~~~~~~~~~~~~~
| pmd_populate_kernel
>> mm/percpu.c:3143:17: error: implicit declaration of function 'p4d_populate_kernel'; did you mean 'pmd_populate_kernel'? [-Wimplicit-function-declaration]
3143 | p4d_populate_kernel(addr, p4d, pud);
| ^~~~~~~~~~~~~~~~~~~
| pmd_populate_kernel
--
mm/sparse-vmemmap.c: In function 'vmemmap_p4d_populate':
>> mm/sparse-vmemmap.c:232:17: error: implicit declaration of function 'p4d_populate_kernel'; did you mean 'pmd_populate_kernel'? [-Wimplicit-function-declaration]
232 | p4d_populate_kernel(addr, p4d, p);
| ^~~~~~~~~~~~~~~~~~~
| pmd_populate_kernel
mm/sparse-vmemmap.c: In function 'vmemmap_pgd_populate':
>> mm/sparse-vmemmap.c:244:17: error: implicit declaration of function 'pgd_populate_kernel'; did you mean 'pmd_populate_kernel'? [-Wimplicit-function-declaration]
244 | pgd_populate_kernel(addr, pgd, p);
| ^~~~~~~~~~~~~~~~~~~
| pmd_populate_kernel
I had incorrectly assumed that asm/pgalloc.h on all architectures would
include asm-generic/pgalloc.h. That's true for most architectures,
but a few (sparc, powerpc, s390) don't do that.
Since the assumption isn't valid on all arches, I think the right thing
to do now is to introduce include/linux/pgalloc.h, put these helpers
there, and include it from common code.
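A minimal sketch of what that could look like, assuming the helpers are
simply moved there unchanged (hypothetical, not the final version):

/* include/linux/pgalloc.h -- hypothetical sketch */
#ifndef _LINUX_PGALLOC_H
#define _LINUX_PGALLOC_H

#include <linux/pgtable.h>
#include <asm/pgalloc.h>

#define pgd_populate_kernel(addr, pgd, p4d)				\
do {									\
	pgd_populate(&init_mm, pgd, p4d);				\
	if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)		\
		arch_sync_kernel_mappings(addr, addr);			\
} while (0)

#define p4d_populate_kernel(addr, p4d, pud)				\
do {									\
	p4d_populate(&init_mm, p4d, pud);				\
	if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)		\
		arch_sync_kernel_mappings(addr, addr);			\
} while (0)

#endif /* _LINUX_PGALLOC_H */

Common code (mm/percpu.c, mm/sparse-vmemmap.c, mm/kasan/init.c) would then
include <linux/pgalloc.h> directly instead of relying on asm/pgalloc.h to
pull in the generic header.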
--
Cheers,
Harry / Hyeonggon