* [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss
@ 2025-08-11 5:34 Harry Yoo
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
` (3 more replies)
0 siblings, 4 replies; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 5:34 UTC (permalink / raw)
To: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand
Cc: Andrey Konovalov, Vincenzo Frascino, H. Peter Anvin, kasan-dev,
Mike Rapoport, Ard Biesheuvel, linux-kernel, Dmitry Vyukov,
Alexander Potapenko, Vlastimil Babka, Suren Baghdasaryan,
Harry Yoo, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch
v3: https://lore.kernel.org/linux-mm/aIQnvFTkQGieHfEh@hyeyoo/
To x86 folks:
It's not clear whether this should go through the MM tree or the x86
tree as it changes both. We could send it to the MM tree with Acks
from the x86 folks, or we could send it through the x86 tree instead.
What do you think?
To MM maintainers:
I'll add include/linux/pgalloc.h to "MEMORY MANAGEMENT - CORE"
in the follow-up series, if there's no objection.
v3 -> v4:
- Updated the subject line to emphasize that this is a bug fix rather
than just an improvement. (was: a more robust approach to sync
top level kernel page tables)
- Added include/linux/pgalloc.h and moved p*d_populate_kernel()
to the file (fixed sparc64 build error).
- Added Fixes: tags to patch 1 and 2 to clarify which -stable versions
they should be backported to (Andrew).
- Dropped patch 4 and 5 because they don't fix bugs but are
improvements. They are planned as follow-ups (Andrew).
- Rebased onto the latest mm-hotfixes-unstable (f1f0068165a4), but also
applies to the latest mm-unstable (c2144e09b922)
This patch series includes only minimal changes necessary for
backporting the fix to -stable. Planned follow-up patches:
- treewide: include linux/pgalloc.h instead of asm/pgalloc.h
in common code
- MAINTAINERS: add include/linux/pgalloc.h to MM CORE
- x86/mm/64: convert p*d_populate{,_init} to _kernel variants
- x86/mm/64: drop unnecessary calls to sync_global_pgds() and
fold it into its sole user
# The problem: It is easy to miss/overlook page table synchronization
Hi all,
During our internal testing, we started observing intermittent boot
failures when the machine uses 4-level paging and has a large amount
of persistent memory:
BUG: unable to handle page fault for address: ffffe70000000034
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP NOPTI
RIP: 0010:__init_single_page+0x9/0x6d
Call Trace:
<TASK>
__init_zone_device_page+0x17/0x5d
memmap_init_zone_device+0x154/0x1bb
pagemap_range+0x2e0/0x40f
memremap_pages+0x10b/0x2f0
devm_memremap_pages+0x1e/0x60
dev_dax_probe+0xce/0x2ec [device_dax]
dax_bus_probe+0x6d/0xc9
[... snip ...]
</TASK>
It turns out that the kernel panics while initializing vmemmap
(struct page array) when the vmemmap region spans two PGD entries,
because the new PGD entry is only installed in init_mm.pgd,
but not in the page tables of other tasks.
And looking at __populate_section_memmap():
if (vmemmap_can_optimize(altmap, pgmap))
// does not sync top level page tables
r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
else
// sync top level page tables in x86
r = vmemmap_populate(start, end, nid, altmap);
In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c
synchronizes the top level page table (See commit 9b861528a801
("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping
changes")) so that all tasks in the system can see the new vmemmap area.
However, when vmemmap_can_optimize() returns true, the optimized path
skips synchronization of top-level page tables. This is because
vmemmap_populate_compound_pages() is implemented in core MM code, which
does not handle synchronization of the top-level page tables. Instead,
the core MM has historically relied on each architecture to perform this
synchronization manually.
We're not the first party to encounter a crash caused by unsynchronized
top-level page tables: earlier this year, Gwan-gyeong Mun attempted to
address the issue [1] [2] after hitting a kernel panic when x86 code
accessed the vmemmap area before the corresponding top-level entries
were synced. At the time, the issue was believed to trigger only when
struct page was enlarged for debugging purposes, and the patch did not
receive further updates.
It turns out that the current approach of relying on each arch to handle
the page table sync manually is fragile because 1) it's easy to forget
to sync the top-level page table, and 2) it's also easy to overlook that
the kernel must not access the vmemmap and direct mapping areas before
the sync.
# The solution: Make page table sync code more robust and harder to miss
To address this, Dave Hansen suggested [3] [4] introducing
{pgd,p4d}_populate_kernel() for updating the kernel portion of the page
tables, allowing each architecture to explicitly perform synchronization
when installing top-level entries. With this approach, we no longer need
to worry about missing the sync step, reducing the risk of future
regressions.
The new interface reuses existing ARCH_PAGE_TABLE_SYNC_MASK,
PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
vmalloc and ioremap to synchronize page tables.
pgd_populate_kernel() looks like this:
static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
p4d_t *p4d)
{
pgd_populate(&init_mm, pgd, p4d);
if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
arch_sync_kernel_mappings(addr, addr);
}
It is worth noting that vmalloc() and apply_to_page_range() carefully
synchronize page tables by calling p*d_alloc_track() and
arch_sync_kernel_mappings(), and thus they are not affected by this
patch series.
This patch series was hugely inspired by Dave Hansen's suggestion and
hence added Suggested-by: Dave Hansen.
Cc stable because lack of this series opens the door to intermittent
boot failures.
[1] https://lore.kernel.org/linux-mm/20250220064105.808339-1-gwan-gyeong.mun@intel.com
[2] https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com
[3] https://lore.kernel.org/linux-mm/d1da214c-53d3-45ac-a8b6-51821c5416e4@intel.com
[4] https://lore.kernel.org/linux-mm/4d800744-7b88-41aa-9979-b245e8bf794b@intel.com
Harry Yoo (3):
mm: move page table sync declarations to linux/pgtable.h
mm: introduce and use {pgd,p4d}_populate_kernel()
x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and
arch_sync_kernel_mappings()
arch/x86/include/asm/pgtable_64_types.h | 3 +++
arch/x86/mm/init_64.c | 5 +++++
include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
include/linux/pgtable.h | 16 ++++++++++++++++
include/linux/vmalloc.h | 16 ----------------
mm/kasan/init.c | 12 ++++++------
mm/percpu.c | 6 +++---
mm/sparse-vmemmap.c | 6 +++---
8 files changed, 60 insertions(+), 28 deletions(-)
create mode 100644 include/linux/pgalloc.h
--
2.43.0
^ permalink raw reply [flat|nested] 26+ messages in thread
* [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h
2025-08-11 5:34 [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Harry Yoo
@ 2025-08-11 5:34 ` Harry Yoo
2025-08-11 8:05 ` Mike Rapoport
2025-08-11 11:21 ` Lorenzo Stoakes
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
` (2 subsequent siblings)
3 siblings, 2 replies; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 5:34 UTC (permalink / raw)
To: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand
Cc: Andrey Konovalov, Vincenzo Frascino, H. Peter Anvin, kasan-dev,
Mike Rapoport, Ard Biesheuvel, linux-kernel, Dmitry Vyukov,
Alexander Potapenko, Vlastimil Babka, Suren Baghdasaryan,
Harry Yoo, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
linux/pgtable.h so that they can be used outside of vmalloc and ioremap.
Cc: <stable@vger.kernel.org>
Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
include/linux/pgtable.h | 16 ++++++++++++++++
include/linux/vmalloc.h | 16 ----------------
2 files changed, 16 insertions(+), 16 deletions(-)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 4c035637eeb7..ba699df6ef69 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1467,6 +1467,22 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
}
#endif
+/*
+ * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
+ * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
+ * needs to be called.
+ */
+#ifndef ARCH_PAGE_TABLE_SYNC_MASK
+#define ARCH_PAGE_TABLE_SYNC_MASK 0
+#endif
+
+/*
+ * There is no default implementation for arch_sync_kernel_mappings(). It is
+ * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
+ * is 0.
+ */
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
+
#endif /* CONFIG_MMU */
/*
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index fdc9aeb74a44..2759dac6be44 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -219,22 +219,6 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
struct page **pages, unsigned int page_shift);
-/*
- * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
- */
-#ifndef ARCH_PAGE_TABLE_SYNC_MASK
-#define ARCH_PAGE_TABLE_SYNC_MASK 0
-#endif
-
-/*
- * There is no default implementation for arch_sync_kernel_mappings(). It is
- * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
- * is 0.
- */
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
-
/*
* Lowlevel-APIs (not for driver use!)
*/
--
2.43.0
* [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 5:34 [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Harry Yoo
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
@ 2025-08-11 5:34 ` Harry Yoo
2025-08-11 8:10 ` Mike Rapoport
` (3 more replies)
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
2025-08-11 6:46 ` [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Kiryl Shutsemau
3 siblings, 4 replies; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 5:34 UTC (permalink / raw)
To: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand
Cc: Andrey Konovalov, Vincenzo Frascino, H. Peter Anvin, kasan-dev,
Mike Rapoport, Ard Biesheuvel, linux-kernel, Dmitry Vyukov,
Alexander Potapenko, Vlastimil Babka, Suren Baghdasaryan,
Harry Yoo, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
populating PGD and P4D entries for the kernel address space.
These helpers ensure proper synchronization of page tables when
updating the kernel portion of top-level page tables.
Until now, the kernel has relied on each architecture to handle
synchronization of top-level page tables in an ad-hoc manner.
For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
direct mapping and vmemmap mapping changes").
However, this approach has proven fragile for the following reasons:
1) It is easy to forget to perform the necessary page table
synchronization when introducing new changes.
For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
savings for compound devmaps") overlooked the need to synchronize
page tables for the vmemmap area.
2) It is also easy to overlook that the vmemmap and direct mapping areas
must not be accessed before explicit page table synchronization.
For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
sub-pmd ranges") caused crashes by accessing the vmemmap area
before calling sync_global_pgds().
To address this, as suggested by Dave Hansen, introduce _kernel() variants
of the page table population helpers, which invoke architecture-specific
hooks to properly synchronize page tables. These are introduced in a new
header file, include/linux/pgalloc.h, so they can be called from common code.
They reuse existing infrastructure for vmalloc and ioremap.
Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
and the actual synchronization is performed by arch_sync_kernel_mappings().
This change currently targets only x86_64, so only PGD and P4D level
helpers are introduced. In theory, PUD and PMD level helpers can be added
later if needed by other architectures.
Currently this is a no-op, since no architecture sets
PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
Cc: <stable@vger.kernel.org>
Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
include/linux/pgtable.h | 4 ++--
mm/kasan/init.c | 12 ++++++------
mm/percpu.c | 6 +++---
mm/sparse-vmemmap.c | 6 +++---
5 files changed, 38 insertions(+), 14 deletions(-)
create mode 100644 include/linux/pgalloc.h
diff --git a/include/linux/pgalloc.h b/include/linux/pgalloc.h
new file mode 100644
index 000000000000..290ab864320f
--- /dev/null
+++ b/include/linux/pgalloc.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_PGALLOC_H
+#define _LINUX_PGALLOC_H
+
+#include <linux/pgtable.h>
+#include <asm/pgalloc.h>
+
+static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
+ p4d_t *p4d)
+{
+ pgd_populate(&init_mm, pgd, p4d);
+ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
+ arch_sync_kernel_mappings(addr, addr);
+}
+
+static inline void p4d_populate_kernel(unsigned long addr, p4d_t *p4d,
+ pud_t *pud)
+{
+ p4d_populate(&init_mm, p4d, pud);
+ if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
+ arch_sync_kernel_mappings(addr, addr);
+}
+
+#endif /* _LINUX_PGALLOC_H */
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index ba699df6ef69..0cf5c6c3e483 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1469,8 +1469,8 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
/*
* Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
+ * and let generic vmalloc, ioremap and page table update code know when
+ * arch_sync_kernel_mappings() needs to be called.
*/
#ifndef ARCH_PAGE_TABLE_SYNC_MASK
#define ARCH_PAGE_TABLE_SYNC_MASK 0
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ced6b29fcf76..8fce3370c84e 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -13,9 +13,9 @@
#include <linux/mm.h>
#include <linux/pfn.h>
#include <linux/slab.h>
+#include <linux/pgalloc.h>
#include <asm/page.h>
-#include <asm/pgalloc.h>
#include "kasan.h"
@@ -191,7 +191,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
pud_t *pud;
pmd_t *pmd;
- p4d_populate(&init_mm, p4d,
+ p4d_populate_kernel(addr, p4d,
lm_alias(kasan_early_shadow_pud));
pud = pud_offset(p4d, addr);
pud_populate(&init_mm, pud,
@@ -212,7 +212,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
} else {
p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
pud_init(p);
- p4d_populate(&init_mm, p4d, p);
+ p4d_populate_kernel(addr, p4d, p);
}
}
zero_pud_populate(p4d, addr, next);
@@ -251,10 +251,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
* puds,pmds, so pgd_populate(), pud_populate()
* is noops.
*/
- pgd_populate(&init_mm, pgd,
+ pgd_populate_kernel(addr, pgd,
lm_alias(kasan_early_shadow_p4d));
p4d = p4d_offset(pgd, addr);
- p4d_populate(&init_mm, p4d,
+ p4d_populate_kernel(addr, p4d,
lm_alias(kasan_early_shadow_pud));
pud = pud_offset(p4d, addr);
pud_populate(&init_mm, pud,
@@ -273,7 +273,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
if (!p)
return -ENOMEM;
} else {
- pgd_populate(&init_mm, pgd,
+ pgd_populate_kernel(addr, pgd,
early_alloc(PAGE_SIZE, NUMA_NO_NODE));
}
}
diff --git a/mm/percpu.c b/mm/percpu.c
index d9cbaee92b60..a56f35dcc417 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3108,7 +3108,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
#endif /* BUILD_EMBED_FIRST_CHUNK */
#ifdef BUILD_PAGE_FIRST_CHUNK
-#include <asm/pgalloc.h>
+#include <linux/pgalloc.h>
#ifndef P4D_TABLE_SIZE
#define P4D_TABLE_SIZE PAGE_SIZE
@@ -3134,13 +3134,13 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
if (pgd_none(*pgd)) {
p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
- pgd_populate(&init_mm, pgd, p4d);
+ pgd_populate_kernel(addr, pgd, p4d);
}
p4d = p4d_offset(pgd, addr);
if (p4d_none(*p4d)) {
pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
- p4d_populate(&init_mm, p4d, pud);
+ p4d_populate_kernel(addr, p4d, pud);
}
pud = pud_offset(p4d, addr);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 41aa0493eb03..dbd8daccade2 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,9 +27,9 @@
#include <linux/spinlock.h>
#include <linux/vmalloc.h>
#include <linux/sched.h>
+#include <linux/pgalloc.h>
#include <asm/dma.h>
-#include <asm/pgalloc.h>
#include <asm/tlbflush.h>
#include "hugetlb_vmemmap.h"
@@ -229,7 +229,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
if (!p)
return NULL;
pud_init(p);
- p4d_populate(&init_mm, p4d, p);
+ p4d_populate_kernel(addr, p4d, p);
}
return p4d;
}
@@ -241,7 +241,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
if (!p)
return NULL;
- pgd_populate(&init_mm, pgd, p);
+ pgd_populate_kernel(addr, pgd, p);
}
return pgd;
}
--
2.43.0
* [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
2025-08-11 5:34 [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Harry Yoo
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
@ 2025-08-11 5:34 ` Harry Yoo
2025-08-11 8:13 ` Mike Rapoport
2025-08-11 11:46 ` Lorenzo Stoakes
2025-08-11 6:46 ` [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Kiryl Shutsemau
3 siblings, 2 replies; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 5:34 UTC (permalink / raw)
To: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand
Cc: Andrey Konovalov, Vincenzo Frascino, H. Peter Anvin, kasan-dev,
Mike Rapoport, Ard Biesheuvel, linux-kernel, Dmitry Vyukov,
Alexander Potapenko, Vlastimil Babka, Suren Baghdasaryan,
Harry Yoo, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to ensure
page tables are properly synchronized when calling p*d_populate_kernel().
The mask is intended to trigger synchronization via pgd_populate_kernel()
when 5-level paging is in use, and via p4d_populate_kernel() when 4-level
paging is used.
This fixes intermittent boot failures on systems using 4-level paging
and a large amount of persistent memory:
BUG: unable to handle page fault for address: ffffe70000000034
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: 0002 [#1] SMP NOPTI
RIP: 0010:__init_single_page+0x9/0x6d
Call Trace:
<TASK>
__init_zone_device_page+0x17/0x5d
memmap_init_zone_device+0x154/0x1bb
pagemap_range+0x2e0/0x40f
memremap_pages+0x10b/0x2f0
devm_memremap_pages+0x1e/0x60
dev_dax_probe+0xce/0x2ec [device_dax]
dax_bus_probe+0x6d/0xc9
[... snip ...]
</TASK>
It also fixes a crash in vmemmap_set_pmd() caused by accessing vmemmap
before sync_global_pgds() [1]:
BUG: unable to handle page fault for address: ffffeb3ff1200000
#PF: supervisor write access in kernel mode
#PF: error_code(0x0002) - not-present page
PGD 0 P4D 0
Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
Tainted: [W]=WARN
RIP: 0010:vmemmap_set_pmd+0xff/0x230
<TASK>
vmemmap_populate_hugepages+0x176/0x180
vmemmap_populate+0x34/0x80
__populate_section_memmap+0x41/0x90
sparse_add_section+0x121/0x3e0
__add_pages+0xba/0x150
add_pages+0x1d/0x70
memremap_pages+0x3dc/0x810
devm_memremap_pages+0x1c/0x60
xe_devm_add+0x8b/0x100 [xe]
xe_tile_init_noalloc+0x6a/0x70 [xe]
xe_device_probe+0x48c/0x740 [xe]
[... snip ...]
Cc: <stable@vger.kernel.org>
Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Closes: https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com [1]
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
arch/x86/include/asm/pgtable_64_types.h | 3 +++
arch/x86/mm/init_64.c | 5 +++++
2 files changed, 8 insertions(+)
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 4604f924d8b8..7eb61ef6a185 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -36,6 +36,9 @@ static inline bool pgtable_l5_enabled(void)
#define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
#endif /* USE_EARLY_PGTABLE_L5 */
+#define ARCH_PAGE_TABLE_SYNC_MASK \
+ (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
+
extern unsigned int pgdir_shift;
extern unsigned int ptrs_per_p4d;
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 76e33bd7c556..a78b498c0dc3 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -223,6 +223,11 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
sync_global_pgds_l4(start, end);
}
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+ sync_global_pgds(start, end);
+}
+
/*
* NOTE: This function is marked __ref because it calls __init function
* (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
--
2.43.0
* Re: [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss
2025-08-11 5:34 [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Harry Yoo
` (2 preceding siblings ...)
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
@ 2025-08-11 6:46 ` Kiryl Shutsemau
2025-08-11 8:09 ` Harry Yoo
3 siblings, 1 reply; 26+ messages in thread
From: Kiryl Shutsemau @ 2025-08-11 6:46 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Oscar Salvador, Jane Chu,
Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch
On Mon, Aug 11, 2025 at 02:34:17PM +0900, Harry Yoo wrote:
> # The solution: Make page table sync code more robust and harder to miss
>
> To address this, Dave Hansen suggested [3] [4] introducing
> {pgd,p4d}_populate_kernel() for updating kernel portion
> of the page tables and allow each architecture to explicitly perform
> synchronization when installing top-level entries. With this approach,
> we no longer need to worry about missing the sync step, reducing the risk
> of future regressions.
Looks sane:
Acked-by: Kiryl Shutsemau <kas@kernel.org>
> The new interface reuses existing ARCH_PAGE_TABLE_SYNC_MASK,
> PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
> vmalloc and ioremap to synchronize page tables.
>
> pgd_populate_kernel() looks like this:
> static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
> p4d_t *p4d)
> {
> pgd_populate(&init_mm, pgd, p4d);
> if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
> arch_sync_kernel_mappings(addr, addr);
> }
>
> It is worth noting that vmalloc() and apply_to_page_range() carefully
> synchronize page tables by calling p*d_alloc_track() and
> arch_sync_kernel_mappings(), and thus they are not affected by this
> patch series.
Well, except ARCH_PAGE_TABLE_SYNC_MASK is not defined on x86-64 until
now. So I think it is affected.
--
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
@ 2025-08-11 8:05 ` Mike Rapoport
2025-08-11 8:36 ` Harry Yoo
2025-08-11 9:19 ` Uladzislau Rezki
2025-08-11 11:21 ` Lorenzo Stoakes
1 sibling, 2 replies; 26+ messages in thread
From: Mike Rapoport @ 2025-08-11 8:05 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Ard Biesheuvel, linux-kernel,
Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 02:34:18PM +0900, Harry Yoo wrote:
> Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
> linux/pgtable.h so that they can be used outside of vmalloc and ioremap.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> ---
> include/linux/pgtable.h | 16 ++++++++++++++++
> include/linux/vmalloc.h | 16 ----------------
> 2 files changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 4c035637eeb7..ba699df6ef69 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1467,6 +1467,22 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
> }
> #endif
>
> +/*
> + * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> + * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
If ARCH_PAGE_TABLE_SYNC_MASK can be used outside vmalloc(), the comment
needs an update, maybe
... and let the generic code that modifies kernel page tables
Other than that
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> + * needs to be called.
> + */
> +#ifndef ARCH_PAGE_TABLE_SYNC_MASK
> +#define ARCH_PAGE_TABLE_SYNC_MASK 0
> +#endif
> +
> +/*
> + * There is no default implementation for arch_sync_kernel_mappings(). It is
> + * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> + * is 0.
> + */
> +void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
> +
> #endif /* CONFIG_MMU */
>
> /*
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index fdc9aeb74a44..2759dac6be44 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -219,22 +219,6 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
> int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
> struct page **pages, unsigned int page_shift);
>
> -/*
> - * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> - * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> - * needs to be called.
> - */
> -#ifndef ARCH_PAGE_TABLE_SYNC_MASK
> -#define ARCH_PAGE_TABLE_SYNC_MASK 0
> -#endif
> -
> -/*
> - * There is no default implementation for arch_sync_kernel_mappings(). It is
> - * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> - * is 0.
> - */
> -void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
> -
> /*
> * Lowlevel-APIs (not for driver use!)
> */
> --
> 2.43.0
>
--
Sincerely yours,
Mike.
* Re: [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss
2025-08-11 6:46 ` [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Kiryl Shutsemau
@ 2025-08-11 8:09 ` Harry Yoo
0 siblings, 0 replies; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 8:09 UTC (permalink / raw)
To: Kiryl Shutsemau
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Oscar Salvador, Jane Chu,
Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch
On Mon, Aug 11, 2025 at 07:46:13AM +0100, Kiryl Shutsemau wrote:
> On Mon, Aug 11, 2025 at 02:34:17PM +0900, Harry Yoo wrote:
> > # The solution: Make page table sync code more robust and harder to miss
> >
> > To address this, Dave Hansen suggested [3] [4] introducing
> > {pgd,p4d}_populate_kernel() for updating the kernel portion
> > of the page tables and allowing each architecture to explicitly perform
> > synchronization when installing top-level entries. With this approach,
> > we no longer need to worry about missing the sync step, reducing the risk
> > of future regressions.
>
> Looks sane:
>
> Acked-by: Kiryl Shutsemau <kas@kernel.org>
Thanks a lot, Kiryl!
> > The new interface reuses existing ARCH_PAGE_TABLE_SYNC_MASK,
> > PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
> > vmalloc and ioremap to synchronize page tables.
> >
> > pgd_populate_kernel() looks like this:
> > static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
> > p4d_t *p4d)
> > {
> > pgd_populate(&init_mm, pgd, p4d);
> > if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
> > arch_sync_kernel_mappings(addr, addr);
> > }
> >
> > It is worth noting that vmalloc() and apply_to_page_range() carefully
> > synchronize page tables by calling p*d_alloc_track() and
> > arch_sync_kernel_mappings(), and thus they are not affected by
> > this patch series.
> Well, except ARCH_PAGE_TABLE_SYNC_MASK is not defined on x86-64 until
> now. So I think it is affected.
Oh, you are right. Although they don't use p*d_populate_kernel() API,
changing ARCH_PAGE_TABLE_SYNC_MASK affects their behavior.
PGD entries for vmalloc are always pre-populated, so it shouldn't be
affected much. But apply_to_page_range() is. That said, I'm not aware of
any bugs caused by it spanning multiple PGD ranges and missing the page table sync.
By the way, I think it may be better in the future to unify them
under the same logic for synchronizing kernel mappings.
With this series, there are two ways:
1. p*d_populate_kernel()
2. p*d_alloc_track() + arch_sync_kernel_mappings()
--
Cheers,
Harry / Hyeonggon
> --
> Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
@ 2025-08-11 8:10 ` Mike Rapoport
2025-08-11 9:10 ` Lorenzo Stoakes
` (2 subsequent siblings)
3 siblings, 0 replies; 26+ messages in thread
From: Mike Rapoport @ 2025-08-11 8:10 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Ard Biesheuvel, linux-kernel,
Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 02:34:19PM +0900, Harry Yoo wrote:
> Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> populating PGD and P4D entries for the kernel address space.
> These helpers ensure proper synchronization of page tables when
> updating the kernel portion of top-level page tables.
>
> Until now, the kernel has relied on each architecture to handle
> synchronization of top-level page tables in an ad-hoc manner.
> For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> direct mapping and vmemmap mapping changes").
>
> However, this approach has proven fragile for the following reasons:
>
> 1) It is easy to forget to perform the necessary page table
> synchronization when introducing new changes.
> For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> savings for compound devmaps") overlooked the need to synchronize
> page tables for the vmemmap area.
>
> 2) It is also easy to overlook that the vmemmap and direct mapping areas
> must not be accessed before explicit page table synchronization.
> For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> sub-pmd ranges") caused crashes by accessing the vmemmap area
> before calling sync_global_pgds().
>
> To address this, as suggested by Dave Hansen, introduce _kernel() variants
> of the page table population helpers, which invoke architecture-specific
> hooks to properly synchronize page tables. These are introduced in a new
> header file, include/linux/pgalloc.h, so they can be called from common code.
>
> They reuse existing infrastructure for vmalloc and ioremap.
> Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> and the actual synchronization is performed by arch_sync_kernel_mappings().
>
> This change currently targets only x86_64, so only PGD and P4D level
> helpers are introduced. In theory, PUD and PMD level helpers can be added
> later if needed by other architectures.
>
> Currently this is a no-op, since no architecture sets
> PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
> include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
> include/linux/pgtable.h | 4 ++--
> mm/kasan/init.c | 12 ++++++------
> mm/percpu.c | 6 +++---
> mm/sparse-vmemmap.c | 6 +++---
> 5 files changed, 38 insertions(+), 14 deletions(-)
> create mode 100644 include/linux/pgalloc.h
>
> diff --git a/include/linux/pgalloc.h b/include/linux/pgalloc.h
> new file mode 100644
> index 000000000000..290ab864320f
> --- /dev/null
> +++ b/include/linux/pgalloc.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_PGALLOC_H
> +#define _LINUX_PGALLOC_H
> +
> +#include <linux/pgtable.h>
> +#include <asm/pgalloc.h>
> +
> +static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
> + p4d_t *p4d)
> +{
> + pgd_populate(&init_mm, pgd, p4d);
> + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
> + arch_sync_kernel_mappings(addr, addr);
> +}
> +
> +static inline void p4d_populate_kernel(unsigned long addr, p4d_t *p4d,
> + pud_t *pud)
> +{
> + p4d_populate(&init_mm, p4d, pud);
> + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
> + arch_sync_kernel_mappings(addr, addr);
> +}
> +
> +#endif /* _LINUX_PGALLOC_H */
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index ba699df6ef69..0cf5c6c3e483 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1469,8 +1469,8 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
>
> /*
> * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> - * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> - * needs to be called.
> + * and let generic vmalloc, ioremap and page table update code know when
> + * arch_sync_kernel_mappings() needs to be called.
> */
> #ifndef ARCH_PAGE_TABLE_SYNC_MASK
> #define ARCH_PAGE_TABLE_SYNC_MASK 0
> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
> index ced6b29fcf76..8fce3370c84e 100644
> --- a/mm/kasan/init.c
> +++ b/mm/kasan/init.c
> @@ -13,9 +13,9 @@
> #include <linux/mm.h>
> #include <linux/pfn.h>
> #include <linux/slab.h>
> +#include <linux/pgalloc.h>
>
> #include <asm/page.h>
> -#include <asm/pgalloc.h>
>
> #include "kasan.h"
>
> @@ -191,7 +191,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
> pud_t *pud;
> pmd_t *pmd;
>
> - p4d_populate(&init_mm, p4d,
> + p4d_populate_kernel(addr, p4d,
> lm_alias(kasan_early_shadow_pud));
> pud = pud_offset(p4d, addr);
> pud_populate(&init_mm, pud,
> @@ -212,7 +212,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
> } else {
> p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
> pud_init(p);
> - p4d_populate(&init_mm, p4d, p);
> + p4d_populate_kernel(addr, p4d, p);
> }
> }
> zero_pud_populate(p4d, addr, next);
> @@ -251,10 +251,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
> * puds,pmds, so pgd_populate(), pud_populate()
> * is noops.
> */
> - pgd_populate(&init_mm, pgd,
> + pgd_populate_kernel(addr, pgd,
> lm_alias(kasan_early_shadow_p4d));
> p4d = p4d_offset(pgd, addr);
> - p4d_populate(&init_mm, p4d,
> + p4d_populate_kernel(addr, p4d,
> lm_alias(kasan_early_shadow_pud));
> pud = pud_offset(p4d, addr);
> pud_populate(&init_mm, pud,
> @@ -273,7 +273,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
> if (!p)
> return -ENOMEM;
> } else {
> - pgd_populate(&init_mm, pgd,
> + pgd_populate_kernel(addr, pgd,
> early_alloc(PAGE_SIZE, NUMA_NO_NODE));
> }
> }
> diff --git a/mm/percpu.c b/mm/percpu.c
> index d9cbaee92b60..a56f35dcc417 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -3108,7 +3108,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
> #endif /* BUILD_EMBED_FIRST_CHUNK */
>
> #ifdef BUILD_PAGE_FIRST_CHUNK
> -#include <asm/pgalloc.h>
> +#include <linux/pgalloc.h>
>
> #ifndef P4D_TABLE_SIZE
> #define P4D_TABLE_SIZE PAGE_SIZE
> @@ -3134,13 +3134,13 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
>
> if (pgd_none(*pgd)) {
> p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
> - pgd_populate(&init_mm, pgd, p4d);
> + pgd_populate_kernel(addr, pgd, p4d);
> }
>
> p4d = p4d_offset(pgd, addr);
> if (p4d_none(*p4d)) {
> pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
> - p4d_populate(&init_mm, p4d, pud);
> + p4d_populate_kernel(addr, p4d, pud);
> }
>
> pud = pud_offset(p4d, addr);
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 41aa0493eb03..dbd8daccade2 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -27,9 +27,9 @@
> #include <linux/spinlock.h>
> #include <linux/vmalloc.h>
> #include <linux/sched.h>
> +#include <linux/pgalloc.h>
>
> #include <asm/dma.h>
> -#include <asm/pgalloc.h>
> #include <asm/tlbflush.h>
>
> #include "hugetlb_vmemmap.h"
> @@ -229,7 +229,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
> if (!p)
> return NULL;
> pud_init(p);
> - p4d_populate(&init_mm, p4d, p);
> + p4d_populate_kernel(addr, p4d, p);
> }
> return p4d;
> }
> @@ -241,7 +241,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
> void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
> if (!p)
> return NULL;
> - pgd_populate(&init_mm, pgd, p);
> + pgd_populate_kernel(addr, pgd, p);
> }
> return pgd;
> }
> --
> 2.43.0
>
--
Sincerely yours,
Mike.
* Re: [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
@ 2025-08-11 8:13 ` Mike Rapoport
2025-08-11 11:46 ` Lorenzo Stoakes
1 sibling, 0 replies; 26+ messages in thread
From: Mike Rapoport @ 2025-08-11 8:13 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Ard Biesheuvel, linux-kernel,
Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 02:34:20PM +0900, Harry Yoo wrote:
> Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to ensure
> page tables are properly synchronized when calling p*d_populate_kernel().
> It is intended to synchronize page tables via pgd_populate_kernel() when
> 5-level paging is in use and via p4d_populate_kernel() when 4-level paging
> is used.
>
> This fixes intermittent boot failures on systems using 4-level paging
> and a large amount of persistent memory:
>
> BUG: unable to handle page fault for address: ffffe70000000034
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 0 P4D 0
> Oops: 0002 [#1] SMP NOPTI
> RIP: 0010:__init_single_page+0x9/0x6d
> Call Trace:
> <TASK>
> __init_zone_device_page+0x17/0x5d
> memmap_init_zone_device+0x154/0x1bb
> pagemap_range+0x2e0/0x40f
> memremap_pages+0x10b/0x2f0
> devm_memremap_pages+0x1e/0x60
> dev_dax_probe+0xce/0x2ec [device_dax]
> dax_bus_probe+0x6d/0xc9
> [... snip ...]
> </TASK>
>
> It also fixes a crash in vmemmap_set_pmd() caused by accessing vmemmap
> before sync_global_pgds() [1]:
>
> BUG: unable to handle page fault for address: ffffeb3ff1200000
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 0 P4D 0
> Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
> Tainted: [W]=WARN
> RIP: 0010:vmemmap_set_pmd+0xff/0x230
> <TASK>
> vmemmap_populate_hugepages+0x176/0x180
> vmemmap_populate+0x34/0x80
> __populate_section_memmap+0x41/0x90
> sparse_add_section+0x121/0x3e0
> __add_pages+0xba/0x150
> add_pages+0x1d/0x70
> memremap_pages+0x3dc/0x810
> devm_memremap_pages+0x1c/0x60
> xe_devm_add+0x8b/0x100 [xe]
> xe_tile_init_noalloc+0x6a/0x70 [xe]
> xe_device_probe+0x48c/0x740 [xe]
> [... snip ...]
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> Closes: https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com [1]
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
> arch/x86/include/asm/pgtable_64_types.h | 3 +++
> arch/x86/mm/init_64.c | 5 +++++
> 2 files changed, 8 insertions(+)
>
> diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> index 4604f924d8b8..7eb61ef6a185 100644
> --- a/arch/x86/include/asm/pgtable_64_types.h
> +++ b/arch/x86/include/asm/pgtable_64_types.h
> @@ -36,6 +36,9 @@ static inline bool pgtable_l5_enabled(void)
> #define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
> #endif /* USE_EARLY_PGTABLE_L5 */
>
> +#define ARCH_PAGE_TABLE_SYNC_MASK \
> + (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
> +
> extern unsigned int pgdir_shift;
> extern unsigned int ptrs_per_p4d;
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 76e33bd7c556..a78b498c0dc3 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -223,6 +223,11 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
> sync_global_pgds_l4(start, end);
> }
>
> +void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> +{
> + sync_global_pgds(start, end);
> +}
> +
> /*
> * NOTE: This function is marked __ref because it calls __init function
> * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
> --
> 2.43.0
>
--
Sincerely yours,
Mike.
* Re: [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h
2025-08-11 8:05 ` Mike Rapoport
@ 2025-08-11 8:36 ` Harry Yoo
2025-08-11 8:52 ` Mike Rapoport
2025-08-11 9:19 ` Uladzislau Rezki
1 sibling, 1 reply; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 8:36 UTC (permalink / raw)
To: Mike Rapoport
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Ard Biesheuvel, linux-kernel,
Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 11:05:51AM +0300, Mike Rapoport wrote:
> On Mon, Aug 11, 2025 at 02:34:18PM +0900, Harry Yoo wrote:
> > Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
> > linux/pgtable.h so that they can be used outside of vmalloc and ioremap.
> >
> > Cc: <stable@vger.kernel.org>
> > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> > ---
> > include/linux/pgtable.h | 16 ++++++++++++++++
> > include/linux/vmalloc.h | 16 ----------------
> > 2 files changed, 16 insertions(+), 16 deletions(-)
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 4c035637eeb7..ba699df6ef69 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1467,6 +1467,22 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
> > }
> > #endif
> >
> > +/*
> > + * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> > + * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
>
> If ARCH_PAGE_TABLE_SYNC_MASK can be used outside vmalloc(), the comment
> needs an update, maybe
>
> ... and let the generic code that modifies kernel page tables
Right, and patch 2 updates the comment as it uses it outside vmalloc():
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index ba699df6ef69..0cf5c6c3e483 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1469,8 +1469,8 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
/*
* Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
+ * and let generic vmalloc, ioremap and page table update code know when
+ * arch_sync_kernel_mappings() needs to be called.
*/
#ifndef ARCH_PAGE_TABLE_SYNC_MASK
#define ARCH_PAGE_TABLE_SYNC_MASK 0
Or if you think "page table update code" is unclear, please let me know.
> Other than that
>
> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Thanks a lot for all the reviews, Mike!
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h
2025-08-11 8:36 ` Harry Yoo
@ 2025-08-11 8:52 ` Mike Rapoport
0 siblings, 0 replies; 26+ messages in thread
From: Mike Rapoport @ 2025-08-11 8:52 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Ard Biesheuvel, linux-kernel,
Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 05:36:53PM +0900, Harry Yoo wrote:
> On Mon, Aug 11, 2025 at 11:05:51AM +0300, Mike Rapoport wrote:
> > On Mon, Aug 11, 2025 at 02:34:18PM +0900, Harry Yoo wrote:
> > > Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
> > > linux/pgtable.h so that they can be used outside of vmalloc and ioremap.
> > >
> > > Cc: <stable@vger.kernel.org>
> > > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> > > ---
> > > include/linux/pgtable.h | 16 ++++++++++++++++
> > > include/linux/vmalloc.h | 16 ----------------
> > > 2 files changed, 16 insertions(+), 16 deletions(-)
> > >
> > > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > > index 4c035637eeb7..ba699df6ef69 100644
> > > --- a/include/linux/pgtable.h
> > > +++ b/include/linux/pgtable.h
> > > @@ -1467,6 +1467,22 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
> > > }
> > > #endif
> > >
> > > +/*
> > > + * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> > > + * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> >
> > If ARCH_PAGE_TABLE_SYNC_MASK can be used outside vmalloc(), the comment
> > needs an update, maybe
> >
> > ... and let the generic code that modifies kernel page tables
>
> Right, and patch 2 updates the comment as it uses it outside vmalloc():
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index ba699df6ef69..0cf5c6c3e483 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1469,8 +1469,8 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
>
> /*
> * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> - * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> - * needs to be called.
> + * and let generic vmalloc, ioremap and page table update code know when
> + * arch_sync_kernel_mappings() needs to be called.
> */
> #ifndef ARCH_PAGE_TABLE_SYNC_MASK
> #define ARCH_PAGE_TABLE_SYNC_MASK 0
>
> Or if you think "page table update code" is unclear, please let me know.
It's fine :)
> > Other than that
> >
> > Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>
> Thanks a lot for all the reviews, Mike!
>
> --
> Cheers,
> Harry / Hyeonggon
--
Sincerely yours,
Mike.
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
2025-08-11 8:10 ` Mike Rapoport
@ 2025-08-11 9:10 ` Lorenzo Stoakes
2025-08-11 10:36 ` Harry Yoo
2025-08-11 11:38 ` Lorenzo Stoakes
2025-08-25 11:27 ` Christophe Leroy
3 siblings, 1 reply; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-11 9:10 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 02:34:19PM +0900, Harry Yoo wrote:
> Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> populating PGD and P4D entries for the kernel address space.
> These helpers ensure proper synchronization of page tables when
> updating the kernel portion of top-level page tables.
>
> Until now, the kernel has relied on each architecture to handle
> synchronization of top-level page tables in an ad-hoc manner.
> For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> direct mapping and vmemmap mapping changes").
>
> However, this approach has proven fragile for the following reasons:
>
> 1) It is easy to forget to perform the necessary page table
> synchronization when introducing new changes.
> For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> savings for compound devmaps") overlooked the need to synchronize
> page tables for the vmemmap area.
>
> 2) It is also easy to overlook that the vmemmap and direct mapping areas
> must not be accessed before explicit page table synchronization.
> For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> sub-pmd ranges") caused crashes by accessing the vmemmap area
> before calling sync_global_pgds().
>
> To address this, as suggested by Dave Hansen, introduce _kernel() variants
> of the page table population helpers, which invoke architecture-specific
> hooks to properly synchronize page tables. These are introduced in a new
> header file, include/linux/pgalloc.h, so they can be called from common code.
>
> They reuse existing infrastructure for vmalloc and ioremap.
> Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> and the actual synchronization is performed by arch_sync_kernel_mappings().
>
> This change currently targets only x86_64, so only PGD and P4D level
> helpers are introduced. In theory, PUD and PMD level helpers can be added
> later if needed by other architectures.
>
> Currently this is a no-op, since no architecture sets
> PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> ---
> include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
Could we put this in the correct place in MAINTAINERS please? I think
MEMORY MANAGEMENT - CORE is correct, given the below file is there.
> include/linux/pgtable.h | 4 ++--
Thanks!
* Re: [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h
2025-08-11 8:05 ` Mike Rapoport
2025-08-11 8:36 ` Harry Yoo
@ 2025-08-11 9:19 ` Uladzislau Rezki
1 sibling, 0 replies; 26+ messages in thread
From: Uladzislau Rezki @ 2025-08-11 9:19 UTC (permalink / raw)
To: Mike Rapoport, Harry Yoo
Cc: Harry Yoo, Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86,
Borislav Petkov, Peter Zijlstra, Andy Lutomirski, Thomas Gleixner,
Ingo Molnar, Tejun Heo, Uladzislau Rezki, Dave Hansen,
Christoph Lameter, David Hildenbrand, Andrey Konovalov,
Vincenzo Frascino, H. Peter Anvin, kasan-dev, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 11:05:51AM +0300, Mike Rapoport wrote:
> On Mon, Aug 11, 2025 at 02:34:18PM +0900, Harry Yoo wrote:
> > Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
> > linux/pgtable.h so that they can be used outside of vmalloc and ioremap.
> >
> > Cc: <stable@vger.kernel.org>
> > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> > ---
> > include/linux/pgtable.h | 16 ++++++++++++++++
> > include/linux/vmalloc.h | 16 ----------------
> > 2 files changed, 16 insertions(+), 16 deletions(-)
> >
> > diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> > index 4c035637eeb7..ba699df6ef69 100644
> > --- a/include/linux/pgtable.h
> > +++ b/include/linux/pgtable.h
> > @@ -1467,6 +1467,22 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
> > }
> > #endif
> >
> > +/*
> > + * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> > + * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
>
> If ARCH_PAGE_TABLE_SYNC_MASK can be used outside vmalloc(), the comment
> needs an update, maybe
>
> ... and let the generic code that modifies kernel page tables
>
> Other than that
>
> Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
>
> > + * needs to be called.
> > + */
> > +#ifndef ARCH_PAGE_TABLE_SYNC_MASK
> > +#define ARCH_PAGE_TABLE_SYNC_MASK 0
> > +#endif
> > +
> > +/*
> > + * There is no default implementation for arch_sync_kernel_mappings(). It is
> > + * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> > + * is 0.
> > + */
> > +void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
> > +
> > #endif /* CONFIG_MMU */
> >
> > /*
> > diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> > index fdc9aeb74a44..2759dac6be44 100644
> > --- a/include/linux/vmalloc.h
> > +++ b/include/linux/vmalloc.h
> > @@ -219,22 +219,6 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
> > int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
> > struct page **pages, unsigned int page_shift);
> >
> > -/*
> > - * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> > - * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> > - * needs to be called.
> > - */
> > -#ifndef ARCH_PAGE_TABLE_SYNC_MASK
> > -#define ARCH_PAGE_TABLE_SYNC_MASK 0
> > -#endif
> > -
> > -/*
> > - * There is no default implementation for arch_sync_kernel_mappings(). It is
> > - * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> > - * is 0.
> > - */
> > -void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
> > -
> > /*
> > * Lowlevel-APIs (not for driver use!)
> > */
> > --
> > 2.43.0
> >
>
LGTM,
Reviewed-by: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
--
Uladzislau Rezki
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 9:10 ` Lorenzo Stoakes
@ 2025-08-11 10:36 ` Harry Yoo
2025-08-11 11:18 ` Lorenzo Stoakes
0 siblings, 1 reply; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 10:36 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 10:10:58AM +0100, Lorenzo Stoakes wrote:
> On Mon, Aug 11, 2025 at 02:34:19PM +0900, Harry Yoo wrote:
> > Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> > populating PGD and P4D entries for the kernel address space.
> > These helpers ensure proper synchronization of page tables when
> > updating the kernel portion of top-level page tables.
> >
> > Until now, the kernel has relied on each architecture to handle
> > synchronization of top-level page tables in an ad-hoc manner.
> > For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> > direct mapping and vmemmap mapping changes").
> >
> > However, this approach has proven fragile for the following reasons:
> >
> > 1) It is easy to forget to perform the necessary page table
> > synchronization when introducing new changes.
> > For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> > savings for compound devmaps") overlooked the need to synchronize
> > page tables for the vmemmap area.
> >
> > 2) It is also easy to overlook that the vmemmap and direct mapping areas
> > must not be accessed before explicit page table synchronization.
> > For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> > sub-pmd ranges") caused crashes by accessing the vmemmap area
> > before calling sync_global_pgds().
> >
> > To address this, as suggested by Dave Hansen, introduce _kernel() variants
> > of the page table population helpers, which invoke architecture-specific
> > hooks to properly synchronize page tables. These are introduced in a new
> > header file, include/linux/pgalloc.h, so they can be called from common code.
> >
> > They reuse existing infrastructure for vmalloc and ioremap.
> > Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> > and the actual synchronization is performed by arch_sync_kernel_mappings().
> >
> > This change currently targets only x86_64, so only PGD and P4D level
> > helpers are introduced. In theory, PUD and PMD level helpers can be added
> > later if needed by other architectures.
> >
> > Currently this is a no-op, since no architecture sets
> > PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
> >
> > Cc: <stable@vger.kernel.org>
> > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> > ---
> > include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
>
> Could we put this in the correct place in MAINTAINERS please?
Definitely yes!
Since this series will be backported to about five -stable kernels
(v5.13.x and later), I will add that as part of a follow-up series
that is not intended for backporting.
Does that sound okay?
> I think MEMORY MANAGEMENT - CORE is correct, given the below file is there.
Thanks for confirming that!
--
Cheers,
Harry / Hyeonggon
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 10:36 ` Harry Yoo
@ 2025-08-11 11:18 ` Lorenzo Stoakes
0 siblings, 0 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-11 11:18 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 07:36:46PM +0900, Harry Yoo wrote:
> > > include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
> >
> > Could we put this in the correct place in MAINTAINERS please?
>
> Definitely yes!
>
> Since this series will be backported to about five -stable kernels
> (v5.13.x and later), I will add that as part of a follow-up series
> that is not intended for backporting.
>
> Does that sound okay?
Yes that's fine thanks!
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
2025-08-11 8:05 ` Mike Rapoport
@ 2025-08-11 11:21 ` Lorenzo Stoakes
1 sibling, 0 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-11 11:21 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 02:34:18PM +0900, Harry Yoo wrote:
> Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
> linux/pgtable.h so that they can be used outside of vmalloc and ioremap.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
LGTM, obviously assuming you address Mike's comments about... comments :)
So:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/pgtable.h | 16 ++++++++++++++++
> include/linux/vmalloc.h | 16 ----------------
> 2 files changed, 16 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index 4c035637eeb7..ba699df6ef69 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1467,6 +1467,22 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
> }
> #endif
>
> +/*
> + * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> + * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> + * needs to be called.
> + */
> +#ifndef ARCH_PAGE_TABLE_SYNC_MASK
> +#define ARCH_PAGE_TABLE_SYNC_MASK 0
> +#endif
> +
> +/*
> + * There is no default implementation for arch_sync_kernel_mappings(). It is
> + * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> + * is 0.
> + */
> +void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
> +
> #endif /* CONFIG_MMU */
>
> /*
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index fdc9aeb74a44..2759dac6be44 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -219,22 +219,6 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
> int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
> struct page **pages, unsigned int page_shift);
>
> -/*
> - * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> - * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> - * needs to be called.
> - */
> -#ifndef ARCH_PAGE_TABLE_SYNC_MASK
> -#define ARCH_PAGE_TABLE_SYNC_MASK 0
> -#endif
> -
> -/*
> - * There is no default implementation for arch_sync_kernel_mappings(). It is
> - * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> - * is 0.
> - */
> -void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
> -
> /*
> * Lowlevel-APIs (not for driver use!)
> */
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
2025-08-11 8:10 ` Mike Rapoport
2025-08-11 9:10 ` Lorenzo Stoakes
@ 2025-08-11 11:38 ` Lorenzo Stoakes
2025-08-11 12:12 ` Harry Yoo
2025-08-25 11:27 ` Christophe Leroy
3 siblings, 1 reply; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-11 11:38 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 02:34:19PM +0900, Harry Yoo wrote:
> Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> populating PGD and P4D entries for the kernel address space.
> These helpers ensure proper synchronization of page tables when
> updating the kernel portion of top-level page tables.
>
> Until now, the kernel has relied on each architecture to handle
> synchronization of top-level page tables in an ad-hoc manner.
> For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> direct mapping and vmemmap mapping changes").
>
> However, this approach has proven fragile for the following reasons:
>
> 1) It is easy to forget to perform the necessary page table
> synchronization when introducing new changes.
> For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> savings for compound devmaps") overlooked the need to synchronize
> page tables for the vmemmap area.
>
> 2) It is also easy to overlook that the vmemmap and direct mapping areas
> must not be accessed before explicit page table synchronization.
> For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> sub-pmd ranges") caused crashes by accessing the vmemmap area
> before calling sync_global_pgds().
>
> To address this, as suggested by Dave Hansen, introduce _kernel() variants
> of the page table population helpers, which invoke architecture-specific
> hooks to properly synchronize page tables. These are introduced in a new
> header file, include/linux/pgalloc.h, so they can be called from common code.
>
> They reuse existing infrastructure for vmalloc and ioremap.
> Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> and the actual synchronization is performed by arch_sync_kernel_mappings().
>
> This change currently targets only x86_64, so only PGD and P4D level
Well, arm defines ARCH_PAGE_TABLE_SYNC_MASK in arch/arm/include/asm/page.h. But
it aliases this to PGTBL_PMD_MODIFIED so will remain unaffected :)
> helpers are introduced. In theory, PUD and PMD level helpers can be added
> later if needed by other architectures.
>
> Currently this is a no-op, since no architecture sets
> PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> ---
> include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
> include/linux/pgtable.h | 4 ++--
> mm/kasan/init.c | 12 ++++++------
> mm/percpu.c | 6 +++---
> mm/sparse-vmemmap.c | 6 +++---
> 5 files changed, 38 insertions(+), 14 deletions(-)
> create mode 100644 include/linux/pgalloc.h
>
> diff --git a/include/linux/pgalloc.h b/include/linux/pgalloc.h
> new file mode 100644
> index 000000000000..290ab864320f
> --- /dev/null
> +++ b/include/linux/pgalloc.h
> @@ -0,0 +1,24 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _LINUX_PGALLOC_H
> +#define _LINUX_PGALLOC_H
> +
> +#include <linux/pgtable.h>
> +#include <asm/pgalloc.h>
> +
> +static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
> + p4d_t *p4d)
> +{
> + pgd_populate(&init_mm, pgd, p4d);
> + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
Hm, ARCH_PAGE_TABLE_SYNC_MASK is only defined for x86 with 2- and 3-level page tables, and for arm. I see:
#ifndef ARCH_PAGE_TABLE_SYNC_MASK
#define ARCH_PAGE_TABLE_SYNC_MASK 0
#endif
In linux/vmalloc.h, but you're not importing that?
It sucks that it lives there, but maybe you need to #include
<linux/vmalloc.h> for this, otherwise this could be broken on other arches?
You may be getting lucky with nested header includes that cause this to be
picked up somewhere for you, or having it only declared for arches that define
it, but we should probably make this explicit.
Also arch_sync_kernel_mappings() is defined in linux/vmalloc.h so seems
sensible.
> + arch_sync_kernel_mappings(addr, addr);
> +}
> +
> +static inline void p4d_populate_kernel(unsigned long addr, p4d_t *p4d,
> + pud_t *pud)
> +{
> + p4d_populate(&init_mm, p4d, pud);
> + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
> + arch_sync_kernel_mappings(addr, addr);
It's kind of weird we don't have this defined as a function for many arches
(weird as well that we declare it in... vmalloc.h, but I guess that's one for
follow-up cleanups).
But I see from the comment:
/*
* There is no default implementation for arch_sync_kernel_mappings(). It is
* relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
* is 0.
*/
So this seems intended... :)
The rest of this seems sensible, nice cleanup!
> +}
> +
> +#endif /* _LINUX_PGALLOC_H */
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index ba699df6ef69..0cf5c6c3e483 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -1469,8 +1469,8 @@ static inline void modify_prot_commit_ptes(struct vm_area_struct *vma, unsigned
>
> /*
> * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
> - * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
> - * needs to be called.
> + * and let generic vmalloc, ioremap and page table update code know when
> + * arch_sync_kernel_mappings() needs to be called.
> */
> #ifndef ARCH_PAGE_TABLE_SYNC_MASK
> #define ARCH_PAGE_TABLE_SYNC_MASK 0
> diff --git a/mm/kasan/init.c b/mm/kasan/init.c
> index ced6b29fcf76..8fce3370c84e 100644
> --- a/mm/kasan/init.c
> +++ b/mm/kasan/init.c
> @@ -13,9 +13,9 @@
> #include <linux/mm.h>
> #include <linux/pfn.h>
> #include <linux/slab.h>
> +#include <linux/pgalloc.h>
>
> #include <asm/page.h>
> -#include <asm/pgalloc.h>
>
> #include "kasan.h"
>
> @@ -191,7 +191,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
> pud_t *pud;
> pmd_t *pmd;
>
> - p4d_populate(&init_mm, p4d,
> + p4d_populate_kernel(addr, p4d,
> lm_alias(kasan_early_shadow_pud));
> pud = pud_offset(p4d, addr);
> pud_populate(&init_mm, pud,
> @@ -212,7 +212,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
> } else {
> p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
> pud_init(p);
> - p4d_populate(&init_mm, p4d, p);
> + p4d_populate_kernel(addr, p4d, p);
> }
> }
> zero_pud_populate(p4d, addr, next);
> @@ -251,10 +251,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
> * puds,pmds, so pgd_populate(), pud_populate()
> * is noops.
> */
> - pgd_populate(&init_mm, pgd,
> + pgd_populate_kernel(addr, pgd,
> lm_alias(kasan_early_shadow_p4d));
> p4d = p4d_offset(pgd, addr);
> - p4d_populate(&init_mm, p4d,
> + p4d_populate_kernel(addr, p4d,
> lm_alias(kasan_early_shadow_pud));
> pud = pud_offset(p4d, addr);
> pud_populate(&init_mm, pud,
> @@ -273,7 +273,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
> if (!p)
> return -ENOMEM;
> } else {
> - pgd_populate(&init_mm, pgd,
> + pgd_populate_kernel(addr, pgd,
> early_alloc(PAGE_SIZE, NUMA_NO_NODE));
> }
> }
> diff --git a/mm/percpu.c b/mm/percpu.c
> index d9cbaee92b60..a56f35dcc417 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -3108,7 +3108,7 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
> #endif /* BUILD_EMBED_FIRST_CHUNK */
>
> #ifdef BUILD_PAGE_FIRST_CHUNK
> -#include <asm/pgalloc.h>
> +#include <linux/pgalloc.h>
>
> #ifndef P4D_TABLE_SIZE
> #define P4D_TABLE_SIZE PAGE_SIZE
> @@ -3134,13 +3134,13 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
>
> if (pgd_none(*pgd)) {
> p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
> - pgd_populate(&init_mm, pgd, p4d);
> + pgd_populate_kernel(addr, pgd, p4d);
> }
>
> p4d = p4d_offset(pgd, addr);
> if (p4d_none(*p4d)) {
> pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
> - p4d_populate(&init_mm, p4d, pud);
> + p4d_populate_kernel(addr, p4d, pud);
> }
>
> pud = pud_offset(p4d, addr);
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 41aa0493eb03..dbd8daccade2 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -27,9 +27,9 @@
> #include <linux/spinlock.h>
> #include <linux/vmalloc.h>
> #include <linux/sched.h>
> +#include <linux/pgalloc.h>
>
> #include <asm/dma.h>
> -#include <asm/pgalloc.h>
> #include <asm/tlbflush.h>
>
> #include "hugetlb_vmemmap.h"
> @@ -229,7 +229,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
> if (!p)
> return NULL;
> pud_init(p);
> - p4d_populate(&init_mm, p4d, p);
> + p4d_populate_kernel(addr, p4d, p);
> }
> return p4d;
> }
> @@ -241,7 +241,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
> void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
> if (!p)
> return NULL;
> - pgd_populate(&init_mm, pgd, p);
> + pgd_populate_kernel(addr, pgd, p);
> }
> return pgd;
> }
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
2025-08-11 8:13 ` Mike Rapoport
@ 2025-08-11 11:46 ` Lorenzo Stoakes
2025-08-12 8:59 ` Harry Yoo
1 sibling, 1 reply; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-11 11:46 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 02:34:20PM +0900, Harry Yoo wrote:
> Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to ensure
> page tables are properly synchronized when calling p*d_populate_kernel().
> It is intended to synchronize page tables via pgd_populate_kernel() when
> 5-level paging is in use and via p4d_populate_kernel() when 4-level paging
> is used.
>
I think it's worth mentioning here that pgd_populate() is a no-op in 4-level
systems, so the synchronisation must occur at the P4D level, just to make this
clear.
> This fixes intermittent boot failures on systems using 4-level paging
> and a large amount of persistent memory:
>
> BUG: unable to handle page fault for address: ffffe70000000034
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 0 P4D 0
> Oops: 0002 [#1] SMP NOPTI
> RIP: 0010:__init_single_page+0x9/0x6d
> Call Trace:
> <TASK>
> __init_zone_device_page+0x17/0x5d
> memmap_init_zone_device+0x154/0x1bb
> pagemap_range+0x2e0/0x40f
> memremap_pages+0x10b/0x2f0
> devm_memremap_pages+0x1e/0x60
> dev_dax_probe+0xce/0x2ec [device_dax]
> dax_bus_probe+0x6d/0xc9
> [... snip ...]
> </TASK>
>
> It also fixes a crash in vmemmap_set_pmd() caused by accessing vmemmap
> before sync_global_pgds() [1]:
>
> BUG: unable to handle page fault for address: ffffeb3ff1200000
> #PF: supervisor write access in kernel mode
> #PF: error_code(0x0002) - not-present page
> PGD 0 P4D 0
> Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
> Tainted: [W]=WARN
> RIP: 0010:vmemmap_set_pmd+0xff/0x230
> <TASK>
> vmemmap_populate_hugepages+0x176/0x180
> vmemmap_populate+0x34/0x80
> __populate_section_memmap+0x41/0x90
> sparse_add_section+0x121/0x3e0
> __add_pages+0xba/0x150
> add_pages+0x1d/0x70
> memremap_pages+0x3dc/0x810
> devm_memremap_pages+0x1c/0x60
> xe_devm_add+0x8b/0x100 [xe]
> xe_tile_init_noalloc+0x6a/0x70 [xe]
> xe_device_probe+0x48c/0x740 [xe]
> [... snip ...]
>
> Cc: <stable@vger.kernel.org>
> Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> Closes: https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com [1]
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Other than nitty comments, this looks good to me, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> arch/x86/include/asm/pgtable_64_types.h | 3 +++
> arch/x86/mm/init_64.c | 5 +++++
> 2 files changed, 8 insertions(+)
>
> diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> index 4604f924d8b8..7eb61ef6a185 100644
> --- a/arch/x86/include/asm/pgtable_64_types.h
> +++ b/arch/x86/include/asm/pgtable_64_types.h
> @@ -36,6 +36,9 @@ static inline bool pgtable_l5_enabled(void)
> #define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
> #endif /* USE_EARLY_PGTABLE_L5 */
>
> +#define ARCH_PAGE_TABLE_SYNC_MASK \
> + (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
> +
> extern unsigned int pgdir_shift;
> extern unsigned int ptrs_per_p4d;
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 76e33bd7c556..a78b498c0dc3 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -223,6 +223,11 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
> sync_global_pgds_l4(start, end);
> }
>
Worth a comment to say 'if 4-level, then we synchronise at P4D level by
convention, however the same sync_global_pgds() applies'?
> +void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> +{
> + sync_global_pgds(start, end);
> +}
> +
> /*
> * NOTE: This function is marked __ref because it calls __init function
> * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
> --
> 2.43.0
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 11:38 ` Lorenzo Stoakes
@ 2025-08-11 12:12 ` Harry Yoo
2025-08-11 12:18 ` Lorenzo Stoakes
0 siblings, 1 reply; 26+ messages in thread
From: Harry Yoo @ 2025-08-11 12:12 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 12:38:37PM +0100, Lorenzo Stoakes wrote:
> On Mon, Aug 11, 2025 at 02:34:19PM +0900, Harry Yoo wrote:
> > Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> > populating PGD and P4D entries for the kernel address space.
> > These helpers ensure proper synchronization of page tables when
> > updating the kernel portion of top-level page tables.
> >
> > Until now, the kernel has relied on each architecture to handle
> > synchronization of top-level page tables in an ad-hoc manner.
> > For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> > direct mapping and vmemmap mapping changes").
> >
> > However, this approach has proven fragile for the following reasons:
> >
> > 1) It is easy to forget to perform the necessary page table
> > synchronization when introducing new changes.
> > For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> > savings for compound devmaps") overlooked the need to synchronize
> > page tables for the vmemmap area.
> >
> > 2) It is also easy to overlook that the vmemmap and direct mapping areas
> > must not be accessed before explicit page table synchronization.
> > For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> > sub-pmd ranges") caused crashes by accessing the vmemmap area
> > before calling sync_global_pgds().
> >
> > To address this, as suggested by Dave Hansen, introduce _kernel() variants
> > of the page table population helpers, which invoke architecture-specific
> > hooks to properly synchronize page tables. These are introduced in a new
> > header file, include/linux/pgalloc.h, so they can be called from common code.
> >
> > They reuse existing infrastructure for vmalloc and ioremap.
> > Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> > and the actual synchronization is performed by arch_sync_kernel_mappings().
> >
> > This change currently targets only x86_64, so only PGD and P4D level
Hi Lorenzo, thanks for looking at this!
> Well, arm defines ARCH_PAGE_TABLE_SYNC_MASK in arch/arm/include/asm/page.h. But
> it aliases this to PGTBL_PMD_MODIFIED so will remain unaffected :)
Oh, here I just intended to explain why I didn't implement
{pud,pmd}_populate_kernel().
> > helpers are introduced. In theory, PUD and PMD level helpers can be added
> > later if needed by other architectures.
> >
> > Currently this is a no-op, since no architecture sets
> > PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
> >
> > Cc: <stable@vger.kernel.org>
> > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> > ---
> > include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
> > include/linux/pgtable.h | 4 ++--
> > mm/kasan/init.c | 12 ++++++------
> > mm/percpu.c | 6 +++---
> > mm/sparse-vmemmap.c | 6 +++---
> > 5 files changed, 38 insertions(+), 14 deletions(-)
> > create mode 100644 include/linux/pgalloc.h
> >
> > diff --git a/include/linux/pgalloc.h b/include/linux/pgalloc.h
> > new file mode 100644
> > index 000000000000..290ab864320f
> > --- /dev/null
> > +++ b/include/linux/pgalloc.h
> > @@ -0,0 +1,24 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +#ifndef _LINUX_PGALLOC_H
> > +#define _LINUX_PGALLOC_H
> > +
> > +#include <linux/pgtable.h>
> > +#include <asm/pgalloc.h>
> > +
> > +static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
> > + p4d_t *p4d)
> > +{
> > + pgd_populate(&init_mm, pgd, p4d);
> > + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
>
> > Hm, ARCH_PAGE_TABLE_SYNC_MASK is only defined for x86 with 2- and 3-level page tables, and for arm. I see:
>
> #ifndef ARCH_PAGE_TABLE_SYNC_MASK
> #define ARCH_PAGE_TABLE_SYNC_MASK 0
> #endif
>
> In linux/vmalloc.h, but you're not importing that?
Patch 1 moves it from linux/vmalloc.h to linux/pgtable.h,
and linux/pgalloc.h includes linux/pgtable.h.
> It sucks that it lives there, but maybe you need to #include
> <linux/vmalloc.h> for this, otherwise this could be broken on other arches?
>
> You may be getting lucky with nested header includes that cause this to be
> picked up somewhere for you, or having it only declared for arches that define
> it, but we should probably make this explicit.
...so I don't think I'm missing necessary header includes even on
other architectures?
> Also arch_sync_kernel_mappings() is defined in linux/vmalloc.h so seems
> sensible.
Also moved to linux/pgtable.h.
> > + arch_sync_kernel_mappings(addr, addr);
> > +}
> > +
> > +static inline void p4d_populate_kernel(unsigned long addr, p4d_t *p4d,
> > + pud_t *pud)
> > +{
> > + p4d_populate(&init_mm, p4d, pud);
> > + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
> > + arch_sync_kernel_mappings(addr, addr);
>
> It's kind of weird we don't have this defined as a function for many arches,
That's really a mystery :)
I have no idea why other architectures don't handle this.
(At least on 64-bit arches) In theory I think only a few architectures
(like arm64, where the kernel page table is shared between tasks) don't have
to implement this.
Probably because it's a bit of a niche bug to hit?
(vmemmap, direct mapping, vmalloc/vmap areas can span multiple PGD ranges)
AND (populating some PGD entries is done after the boot process (e.g. memory
hot-plug or vmalloc())).
> (weird as well that we declare it in... vmalloc.h, but I guess that's one for
> follow-up cleanups).
>
> But I see from the comment:
>
> /*
> * There is no default implementation for arch_sync_kernel_mappings(). It is
> * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> * is 0.
> */
>
> So this seems intended... :)
> The rest of this seems sensible, nice cleanup!
Thanks for looking at it!
--
Cheers,
Harry / Hyeonggon
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 12:12 ` Harry Yoo
@ 2025-08-11 12:18 ` Lorenzo Stoakes
2025-08-12 9:53 ` Harry Yoo
0 siblings, 1 reply; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-11 12:18 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 09:12:08PM +0900, Harry Yoo wrote:
> On Mon, Aug 11, 2025 at 12:38:37PM +0100, Lorenzo Stoakes wrote:
> > On Mon, Aug 11, 2025 at 02:34:19PM +0900, Harry Yoo wrote:
> > > Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> > > populating PGD and P4D entries for the kernel address space.
> > > These helpers ensure proper synchronization of page tables when
> > > updating the kernel portion of top-level page tables.
> > >
> > > Until now, the kernel has relied on each architecture to handle
> > > synchronization of top-level page tables in an ad-hoc manner.
> > > For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> > > direct mapping and vmemmap mapping changes").
> > >
> > > However, this approach has proven fragile for the following reasons:
> > >
> > > 1) It is easy to forget to perform the necessary page table
> > > synchronization when introducing new changes.
> > > For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> > > savings for compound devmaps") overlooked the need to synchronize
> > > page tables for the vmemmap area.
> > >
> > > 2) It is also easy to overlook that the vmemmap and direct mapping areas
> > > must not be accessed before explicit page table synchronization.
> > > For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> > > sub-pmd ranges") caused crashes by accessing the vmemmap area
> > > before calling sync_global_pgds().
> > >
> > > To address this, as suggested by Dave Hansen, introduce _kernel() variants
> > > of the page table population helpers, which invoke architecture-specific
> > > hooks to properly synchronize page tables. These are introduced in a new
> > > header file, include/linux/pgalloc.h, so they can be called from common code.
> > >
> > > They reuse existing infrastructure for vmalloc and ioremap.
> > > Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> > > and the actual synchronization is performed by arch_sync_kernel_mappings().
> > >
> > > This change currently targets only x86_64, so only PGD and P4D level
>
> Hi Lorenzo, thanks for looking at this!
>
> > Well, arm defines ARCH_PAGE_TABLE_SYNC_MASK in arch/arm/include/asm/page.h. But
> > it aliases this to PGTBL_PMD_MODIFIED so will remain unaffected :)
>
> Oh, here I just intended to explain why I didn't implement
> {pud,pmd}_populate_kernel().
I'd add that arm handles PGTBL_PMD_MODIFIED and therefore remains unaffected
just to be super clear.
>
> > > helpers are introduced. In theory, PUD and PMD level helpers can be added
> > > later if needed by other architectures.
> > >
> > > Currently this is a no-op, since no architecture sets
> > > PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
> > >
> > > Cc: <stable@vger.kernel.org>
> > > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Given that I missed you fixed the vmalloc.h thing, this LGTM so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > > ---
> > > include/linux/pgalloc.h | 24 ++++++++++++++++++++++++
> > > include/linux/pgtable.h | 4 ++--
> > > mm/kasan/init.c | 12 ++++++------
> > > mm/percpu.c | 6 +++---
> > > mm/sparse-vmemmap.c | 6 +++---
> > > 5 files changed, 38 insertions(+), 14 deletions(-)
> > > create mode 100644 include/linux/pgalloc.h
> > >
> > > diff --git a/include/linux/pgalloc.h b/include/linux/pgalloc.h
> > > new file mode 100644
> > > index 000000000000..290ab864320f
> > > --- /dev/null
> > > +++ b/include/linux/pgalloc.h
> > > @@ -0,0 +1,24 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +#ifndef _LINUX_PGALLOC_H
> > > +#define _LINUX_PGALLOC_H
> > > +
> > > +#include <linux/pgtable.h>
> > > +#include <asm/pgalloc.h>
> > > +
> > > +static inline void pgd_populate_kernel(unsigned long addr, pgd_t *pgd,
> > > + p4d_t *p4d)
> > > +{
> > > + pgd_populate(&init_mm, pgd, p4d);
> > > + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
> >
> > Hm, ARCH_PAGE_TABLE_SYNC_MASK is only defined for x86 (2- and 3-level paging) and arm. I see:
> >
> > #ifndef ARCH_PAGE_TABLE_SYNC_MASK
> > #define ARCH_PAGE_TABLE_SYNC_MASK 0
> > #endif
> >
> > In linux/vmalloc.h, but you're not importing that?
>
> Patch 1 moves it from linux/vmalloc.h to linux/pgtable.h,
> and linux/pgalloc.h includes linux/pgtable.h.
>
> > It sucks that that is there, but maybe you need to #include
> > <linux/vmalloc.h> for this otherwise this could be broken on other arches?
> >
> > You may be getting lucky with nested header includes that causes this to be
> > picked up somewhere for you, or having it only declared for arches that define
> > it, but we should probably make this explicit.
>
> ...so I don't think I'm missing necessary header includes even on
> other architectures?
>
> > Also arch_sync_kernel_mappings() is defined in linux/vmalloc.h so seems
> > sensible.
>
> Also moved to linux/pgtable.h.
Ah yeah damn, I missed that you do that there, ok well that's fine then :)
>
> > > + arch_sync_kernel_mappings(addr, addr);
> > > +}
> > > +
> > > +static inline void p4d_populate_kernel(unsigned long addr, p4d_t *p4d,
> > > + pud_t *pud)
> > > +{
> > > + p4d_populate(&init_mm, p4d, pud);
> > > + if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
> > > + arch_sync_kernel_mappings(addr, addr);
> >
> > It's kind of weird we don't have this defined as a function for many arches,
>
> That's really a mystery :)
>
> I have no idea why other architectures don't handle this.
>
> In theory (at least on 64-bit arches) I think only a few architectures
> (like arm64, where the kernel page table is shared between tasks) don't
> have to implement this.
>
> Probably because it's a bit of a niche bug to hit?
> (The vmemmap, direct mapping, or vmalloc/vmap areas span multiple PGD
> ranges) AND (some PGD entries are populated after the boot process,
> e.g. by memory hot-plug or vmalloc()).
No, my comment is more about why we don't just do a standard:
#ifndef xxx
#define xxx (0)
#endif
Or something. Just odd.
>
> > (weird as well that we declare it in... vmalloc.h but I guess one for follow up
> > cleanups that).
> >
> > But I see from the comment:
> >
> > /*
> > * There is no default implementation for arch_sync_kernel_mappings(). It is
> > * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
> > * is 0.
> > */
> >
> > So this seems intended... :)
>
> > The rest of this seems sensible, nice cleanup!
>
> Thanks for looking at this!
>
> --
> Cheers,
> Harry / Hyeonggon
>
^ permalink raw reply [flat|nested] 26+ messages in thread
* Re: [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
2025-08-11 11:46 ` Lorenzo Stoakes
@ 2025-08-12 8:59 ` Harry Yoo
2025-08-12 16:36 ` Lorenzo Stoakes
0 siblings, 1 reply; 26+ messages in thread
From: Harry Yoo @ 2025-08-12 8:59 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 12:46:32PM +0100, Lorenzo Stoakes wrote:
> On Mon, Aug 11, 2025 at 02:34:20PM +0900, Harry Yoo wrote:
> > Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to ensure
> > page tables are properly synchronized when calling p*d_populate_kernel().
> > It is intended to synchronize page tables via pgd_populate_kernel() when
> > 5-level paging is in use and via p4d_populate_kernel() when 4-level paging
> > is used.
> >
>
> I think it's worth mentioning here that pgd_populate() is a no-op in 4-level
> systems, so the sychronisation must occur at the P4D level, just to make this
> clear.
Yeah, that's indeed confusing and agree that it's worth mentioning.
Will do. The new one:
Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
ensure page tables are properly synchronized when calling
p*d_populate_kernel().
For 5-level paging, synchronization is performed via pgd_populate_kernel().
In 4-level paging, pgd_populate() is a no-op, so synchronization is instead
performed at the P4D level via p4d_populate_kernel().
> > This fixes intermittent boot failures on systems using 4-level paging
> > and a large amount of persistent memory:
> >
> > BUG: unable to handle page fault for address: ffffe70000000034
> > #PF: supervisor write access in kernel mode
> > #PF: error_code(0x0002) - not-present page
> > PGD 0 P4D 0
> > Oops: 0002 [#1] SMP NOPTI
> > RIP: 0010:__init_single_page+0x9/0x6d
> > Call Trace:
> > <TASK>
> > __init_zone_device_page+0x17/0x5d
> > memmap_init_zone_device+0x154/0x1bb
> > pagemap_range+0x2e0/0x40f
> > memremap_pages+0x10b/0x2f0
> > devm_memremap_pages+0x1e/0x60
> > dev_dax_probe+0xce/0x2ec [device_dax]
> > dax_bus_probe+0x6d/0xc9
> > [... snip ...]
> > </TASK>
> >
> > It also fixes a crash in vmemmap_set_pmd() caused by accessing vmemmap
> > before sync_global_pgds() [1]:
> >
> > BUG: unable to handle page fault for address: ffffeb3ff1200000
> > #PF: supervisor write access in kernel mode
> > #PF: error_code(0x0002) - not-present page
> > PGD 0 P4D 0
> > Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
> > Tainted: [W]=WARN
> > RIP: 0010:vmemmap_set_pmd+0xff/0x230
> > <TASK>
> > vmemmap_populate_hugepages+0x176/0x180
> > vmemmap_populate+0x34/0x80
> > __populate_section_memmap+0x41/0x90
> > sparse_add_section+0x121/0x3e0
> > __add_pages+0xba/0x150
> > add_pages+0x1d/0x70
> > memremap_pages+0x3dc/0x810
> > devm_memremap_pages+0x1c/0x60
> > xe_devm_add+0x8b/0x100 [xe]
> > xe_tile_init_noalloc+0x6a/0x70 [xe]
> > xe_device_probe+0x48c/0x740 [xe]
> > [... snip ...]
> >
> > Cc: <stable@vger.kernel.org>
> > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > Closes: https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com [1]
> > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
>
> Other than nitty comments, this looks good to me, so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Thanks!
> > ---
> > arch/x86/include/asm/pgtable_64_types.h | 3 +++
> > arch/x86/mm/init_64.c | 5 +++++
> > 2 files changed, 8 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> > index 4604f924d8b8..7eb61ef6a185 100644
> > --- a/arch/x86/include/asm/pgtable_64_types.h
> > +++ b/arch/x86/include/asm/pgtable_64_types.h
> > @@ -36,6 +36,9 @@ static inline bool pgtable_l5_enabled(void)
> > #define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
> > #endif /* USE_EARLY_PGTABLE_L5 */
> >
> > +#define ARCH_PAGE_TABLE_SYNC_MASK \
> > + (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
> > +
> > extern unsigned int pgdir_shift;
> > extern unsigned int ptrs_per_p4d;
> >
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 76e33bd7c556..a78b498c0dc3 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -223,6 +223,11 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
> > sync_global_pgds_l4(start, end);
> > }
> >
>
> Worth a comment to say 'if 4-level, then we synchronise at P4D level by
> convention, however the same sync_global_pgds() applies'?
Maybe:
/*
* Make kernel mappings visible in all page tables in the system.
* This is necessary except when the init task populates kernel mappings
* during the boot process. In that case, all processes originating from
* the init task copy the kernel mappings, so there is no issue.
* Otherwise, missing synchronization could lead to kernel crashes due
* to missing page table entries for certain kernel mappings.
*
* Synchronization is performed at the top level, which is the PGD in
* 5-level paging systems. In 4-level paging systems, however,
* pgd_populate() is a no-op, so synchronization is done at P4D level instead.
* sync_global_pgds() handles this difference between paging levels.
*/
--
Cheers,
Harry / Hyeonggon
> > +void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> > +{
> > + sync_global_pgds(start, end);
> > +}
> > +
> > /*
> > * NOTE: This function is marked __ref because it calls __init function
> > * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
> > --
> > 2.43.0
> >
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 12:18 ` Lorenzo Stoakes
@ 2025-08-12 9:53 ` Harry Yoo
2025-08-12 16:08 ` Lorenzo Stoakes
0 siblings, 1 reply; 26+ messages in thread
From: Harry Yoo @ 2025-08-12 9:53 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 11, 2025 at 01:18:12PM +0100, Lorenzo Stoakes wrote:
> On Mon, Aug 11, 2025 at 09:12:08PM +0900, Harry Yoo wrote:
> > On Mon, Aug 11, 2025 at 12:38:37PM +0100, Lorenzo Stoakes wrote:
> > > On Mon, Aug 11, 2025 at 02:34:19PM +0900, Harry Yoo wrote:
> > > > Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> > > > populating PGD and P4D entries for the kernel address space.
> > > > These helpers ensure proper synchronization of page tables when
> > > > updating the kernel portion of top-level page tables.
> > > >
> > > > Until now, the kernel has relied on each architecture to handle
> > > > synchronization of top-level page tables in an ad-hoc manner.
> > > > For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> > > > direct mapping and vmemmap mapping changes").
> > > >
> > > > However, this approach has proven fragile for the following reasons:
> > > >
> > > > 1) It is easy to forget to perform the necessary page table
> > > > synchronization when introducing new changes.
> > > > For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> > > > savings for compound devmaps") overlooked the need to synchronize
> > > > page tables for the vmemmap area.
> > > >
> > > > 2) It is also easy to overlook that the vmemmap and direct mapping areas
> > > > must not be accessed before explicit page table synchronization.
> > > > For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> > > > sub-pmd ranges") caused crashes by accessing the vmemmap area
> > > > before calling sync_global_pgds().
> > > >
> > > > To address this, as suggested by Dave Hansen, introduce _kernel() variants
> > > > of the page table population helpers, which invoke architecture-specific
> > > > hooks to properly synchronize page tables. These are introduced in a new
> > > > header file, include/linux/pgalloc.h, so they can be called from common code.
> > > >
> > > > They reuse existing infrastructure for vmalloc and ioremap.
> > > > Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> > > > and the actual synchronization is performed by arch_sync_kernel_mappings().
> > > >
> > > > This change currently targets only x86_64, so only PGD and P4D level
> >
> > Hi Lorenzo, thanks for looking at this!
> >
> > > Well, arm defines ARCH_PAGE_TABLE_SYNC_MASK in arch/arm/include/asm/page.h. But
> > > it aliases this to PGTBL_PMD_MODIFIED so will remain unaffected :)
> >
> > Oh, here I just intended to explain why I didn't implement
> > {pud,pmd}_populate_kernel().
>
> I'd add that arm handles PGTBL_PMD_MODIFIED and therefore remains unaffected
> just to be super clear.
Will do:
This change currently targets only x86_64, so only PGD and P4D level
helpers are introduced. Currently, these helpers are no-ops since no
architecture sets PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
In theory, PUD and PMD level helpers can be added later if needed by
other architectures. For now, 32-bit architectures (x86-32 and arm)
only handle PGTBL_PMD_MODIFIED, so p*d_populate_kernel() will never
affect them unless we introduce a PMD level helper.
> > > > helpers are introduced. In theory, PUD and PMD level helpers can be added
> > > > later if needed by other architectures.
> > > >
> > > > Currently this is a no-op, since no architecture sets
> > > > PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
> > > >
> > > > Cc: <stable@vger.kernel.org>
> > > > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > > > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > > > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
>
> Given that I missed you fixed the vmalloc.h thing, this LGTM so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Thanks!
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-12 9:53 ` Harry Yoo
@ 2025-08-12 16:08 ` Lorenzo Stoakes
0 siblings, 0 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-12 16:08 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Tue, Aug 12, 2025 at 06:53:49PM +0900, Harry Yoo wrote:
> > I'd add that arm handles PGTBL_PMD_MODIFIED and therefore remains unaffected
> > just to be super clear.
>
> Will do:
>
> This change currently targets only x86_64, so only PGD and P4D level
> helpers are introduced. Currently, these helpers are no-ops since no
> architecture sets PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.
>
> In theory, PUD and PMD level helpers can be added later if needed by
> other architectures. For now, 32-bit architectures (x86-32 and arm)
> only handle PGTBL_PMD_MODIFIED, so p*d_populate_kernel() will never
> affect them unless we introduce a PMD level helper.
Sounds good!
Cheers, Lorenzo
* Re: [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
2025-08-12 8:59 ` Harry Yoo
@ 2025-08-12 16:36 ` Lorenzo Stoakes
0 siblings, 0 replies; 26+ messages in thread
From: Lorenzo Stoakes @ 2025-08-12 16:36 UTC (permalink / raw)
To: Harry Yoo
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
On Tue, Aug 12, 2025 at 05:59:02PM +0900, Harry Yoo wrote:
> On Mon, Aug 11, 2025 at 12:46:32PM +0100, Lorenzo Stoakes wrote:
> > On Mon, Aug 11, 2025 at 02:34:20PM +0900, Harry Yoo wrote:
> > > Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to ensure
> > > page tables are properly synchronized when calling p*d_populate_kernel().
> > > It is intended to synchronize page tables via pgd_populate_kernel() when
> > > 5-level paging is in use and via p4d_populate_kernel() when 4-level paging
> > > is used.
> > >
> >
> > I think it's worth mentioning here that pgd_populate() is a no-op in 4-level
> > systems, so the synchronisation must occur at the P4D level, just to make this
> > clear.
>
> Yeah, that's indeed confusing and agree that it's worth mentioning.
> Will do. The new one:
>
> Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
> ensure page tables are properly synchronized when calling
> p*d_populate_kernel().
>
> For 5-level paging, synchronization is performed via pgd_populate_kernel().
> In 4-level paging, pgd_populate() is a no-op, so synchronization is instead
> performed at the P4D level via p4d_populate_kernel().
That's great thanks!
>
> > > This fixes intermittent boot failures on systems using 4-level paging
> > > and a large amount of persistent memory:
> > >
> > > BUG: unable to handle page fault for address: ffffe70000000034
> > > #PF: supervisor write access in kernel mode
> > > #PF: error_code(0x0002) - not-present page
> > > PGD 0 P4D 0
> > > Oops: 0002 [#1] SMP NOPTI
> > > RIP: 0010:__init_single_page+0x9/0x6d
> > > Call Trace:
> > > <TASK>
> > > __init_zone_device_page+0x17/0x5d
> > > memmap_init_zone_device+0x154/0x1bb
> > > pagemap_range+0x2e0/0x40f
> > > memremap_pages+0x10b/0x2f0
> > > devm_memremap_pages+0x1e/0x60
> > > dev_dax_probe+0xce/0x2ec [device_dax]
> > > dax_bus_probe+0x6d/0xc9
> > > [... snip ...]
> > > </TASK>
> > >
> > > It also fixes a crash in vmemmap_set_pmd() caused by accessing vmemmap
> > > before sync_global_pgds() [1]:
> > >
> > > BUG: unable to handle page fault for address: ffffeb3ff1200000
> > > #PF: supervisor write access in kernel mode
> > > #PF: error_code(0x0002) - not-present page
> > > PGD 0 P4D 0
> > > Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
> > > Tainted: [W]=WARN
> > > RIP: 0010:vmemmap_set_pmd+0xff/0x230
> > > <TASK>
> > > vmemmap_populate_hugepages+0x176/0x180
> > > vmemmap_populate+0x34/0x80
> > > __populate_section_memmap+0x41/0x90
> > > sparse_add_section+0x121/0x3e0
> > > __add_pages+0xba/0x150
> > > add_pages+0x1d/0x70
> > > memremap_pages+0x3dc/0x810
> > > devm_memremap_pages+0x1c/0x60
> > > xe_devm_add+0x8b/0x100 [xe]
> > > xe_tile_init_noalloc+0x6a/0x70 [xe]
> > > xe_device_probe+0x48c/0x740 [xe]
> > > [... snip ...]
> > >
> > > Cc: <stable@vger.kernel.org>
> > > Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
> > > Closes: https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com [1]
> > > Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> > > Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
> >
> > Other than nitty comments, this looks good to me, so:
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> Thanks!
>
> > > ---
> > > arch/x86/include/asm/pgtable_64_types.h | 3 +++
> > > arch/x86/mm/init_64.c | 5 +++++
> > > 2 files changed, 8 insertions(+)
> > >
> > > diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> > > index 4604f924d8b8..7eb61ef6a185 100644
> > > --- a/arch/x86/include/asm/pgtable_64_types.h
> > > +++ b/arch/x86/include/asm/pgtable_64_types.h
> > > @@ -36,6 +36,9 @@ static inline bool pgtable_l5_enabled(void)
> > > #define pgtable_l5_enabled() cpu_feature_enabled(X86_FEATURE_LA57)
> > > #endif /* USE_EARLY_PGTABLE_L5 */
> > >
> > > +#define ARCH_PAGE_TABLE_SYNC_MASK \
> > > + (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
> > > +
> > > extern unsigned int pgdir_shift;
> > > extern unsigned int ptrs_per_p4d;
> > >
> > > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > > index 76e33bd7c556..a78b498c0dc3 100644
> > > --- a/arch/x86/mm/init_64.c
> > > +++ b/arch/x86/mm/init_64.c
> > > @@ -223,6 +223,11 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
> > > sync_global_pgds_l4(start, end);
> > > }
> > >
> >
> > Worth a comment to say 'if 4-level, then we synchronise at P4D level by
> > convention, however the same sync_global_pgds() applies'?
>
> Maybe:
>
> /*
> * Make kernel mappings visible in all page tables in the system.
> * This is necessary except when the init task populates kernel mappings
> * during the boot process. In that case, all processes originating from
> * the init task copy the kernel mappings, so there is no issue.
> * Otherwise, missing synchronization could lead to kernel crashes due
> * to missing page table entries for certain kernel mappings.
> *
> * Synchronization is performed at the top level, which is the PGD in
> * 5-level paging systems. In 4-level paging systems, however,
> * pgd_populate() is a no-op, so synchronization is done at P4D level instead.
> * sync_global_pgds() handles this difference between paging levels.
> */
>
That's great also, thanks!
> --
> Cheers,
> Harry / Hyeonggon
>
> > > +void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> > > +{
> > > + sync_global_pgds(start, end);
> > > +}
> > > +
> > > /*
> > > * NOTE: This function is marked __ref because it calls __init function
> > > * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
> > > --
> > > 2.43.0
> > >
>
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
` (2 preceding siblings ...)
2025-08-11 11:38 ` Lorenzo Stoakes
@ 2025-08-25 11:27 ` Christophe Leroy
2025-08-25 16:02 ` Harry Yoo
3 siblings, 1 reply; 26+ messages in thread
From: Christophe Leroy @ 2025-08-25 11:27 UTC (permalink / raw)
To: Harry Yoo, Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86,
Borislav Petkov, Peter Zijlstra, Andy Lutomirski, Thomas Gleixner,
Ingo Molnar, Tejun Heo, Uladzislau Rezki, Dave Hansen,
Christoph Lameter, David Hildenbrand
Cc: Andrey Konovalov, Vincenzo Frascino, H. Peter Anvin, kasan-dev,
Mike Rapoport, Ard Biesheuvel, linux-kernel, Dmitry Vyukov,
Alexander Potapenko, Vlastimil Babka, Suren Baghdasaryan,
Thomas Huth, John Hubbard, Lorenzo Stoakes, Michal Hocko,
Liam R. Howlett, linux-mm, Kirill A. Shutemov, Oscar Salvador,
Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V, Joerg Roedel,
Alistair Popple, Joao Martins, linux-arch, stable
Le 11/08/2025 à 07:34, Harry Yoo a écrit :
> Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> populating PGD and P4D entries for the kernel address space.
> These helpers ensure proper synchronization of page tables when
> updating the kernel portion of top-level page tables.
>
> Until now, the kernel has relied on each architecture to handle
> synchronization of top-level page tables in an ad-hoc manner.
> For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> direct mapping and vmemmap mapping changes").
>
> However, this approach has proven fragile for the following reasons:
>
> 1) It is easy to forget to perform the necessary page table
> synchronization when introducing new changes.
> For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> savings for compound devmaps") overlooked the need to synchronize
> page tables for the vmemmap area.
>
> 2) It is also easy to overlook that the vmemmap and direct mapping areas
> must not be accessed before explicit page table synchronization.
> For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> sub-pmd ranges") caused crashes by accessing the vmemmap area
> before calling sync_global_pgds().
>
> To address this, as suggested by Dave Hansen, introduce _kernel() variants
> of the page table population helpers, which invoke architecture-specific
> hooks to properly synchronize page tables. These are introduced in a new
> header file, include/linux/pgalloc.h, so they can be called from common code.
>
> They reuse existing infrastructure for vmalloc and ioremap.
> Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> and the actual synchronization is performed by arch_sync_kernel_mappings().
>
> This change currently targets only x86_64, so only PGD and P4D level
> helpers are introduced. In theory, PUD and PMD level helpers can be added
> later if needed by other architectures.
AFAIK pmd_populate_kernel() already exists on all architectures, and I'm
not sure it does what you expect. Or am I missing something?
Christophe
* Re: [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel()
2025-08-25 11:27 ` Christophe Leroy
@ 2025-08-25 16:02 ` Harry Yoo
0 siblings, 0 replies; 26+ messages in thread
From: Harry Yoo @ 2025-08-25 16:02 UTC (permalink / raw)
To: Christophe Leroy
Cc: Dennis Zhou, Andrew Morton, Andrey Ryabinin, x86, Borislav Petkov,
Peter Zijlstra, Andy Lutomirski, Thomas Gleixner, Ingo Molnar,
Tejun Heo, Uladzislau Rezki, Dave Hansen, Christoph Lameter,
David Hildenbrand, Andrey Konovalov, Vincenzo Frascino,
H. Peter Anvin, kasan-dev, Mike Rapoport, Ard Biesheuvel,
linux-kernel, Dmitry Vyukov, Alexander Potapenko, Vlastimil Babka,
Suren Baghdasaryan, Thomas Huth, John Hubbard, Lorenzo Stoakes,
Michal Hocko, Liam R. Howlett, linux-mm, Kirill A. Shutemov,
Oscar Salvador, Jane Chu, Gwan-gyeong Mun, Aneesh Kumar K . V,
Joerg Roedel, Alistair Popple, Joao Martins, linux-arch, stable
On Mon, Aug 25, 2025 at 01:27:20PM +0200, Christophe Leroy wrote:
>
>
> Le 11/08/2025 à 07:34, Harry Yoo a écrit :
> > Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
> > populating PGD and P4D entries for the kernel address space.
> > These helpers ensure proper synchronization of page tables when
> > updating the kernel portion of top-level page tables.
> >
> > Until now, the kernel has relied on each architecture to handle
> > synchronization of top-level page tables in an ad-hoc manner.
> > For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
> > direct mapping and vmemmap mapping changes").
> >
> > However, this approach has proven fragile for the following reasons:
> >
> > 1) It is easy to forget to perform the necessary page table
> > synchronization when introducing new changes.
> > For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
> > savings for compound devmaps") overlooked the need to synchronize
> > page tables for the vmemmap area.
> >
> > 2) It is also easy to overlook that the vmemmap and direct mapping areas
> > must not be accessed before explicit page table synchronization.
> > For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
> > sub-pmd ranges") caused crashes by accessing the vmemmap area
> > before calling sync_global_pgds().
> >
> > To address this, as suggested by Dave Hansen, introduce _kernel() variants
> > of the page table population helpers, which invoke architecture-specific
> > hooks to properly synchronize page tables. These are introduced in a new
> > header file, include/linux/pgalloc.h, so they can be called from common code.
> >
> > They reuse existing infrastructure for vmalloc and ioremap.
> > Synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
> > and the actual synchronization is performed by arch_sync_kernel_mappings().
> >
> > This change currently targets only x86_64, so only PGD and P4D level
> > helpers are introduced. In theory, PUD and PMD level helpers can be added
> > later if needed by other architectures.
>
> AFAIK pmd_populate_kernel() already exists on all architectures, and I'm not
> sure it does what you expect. Or am I missing something?
It does not do what I expect.
Yes, if someone is going to introduce a PMD level helper, existing
pmd_populate_kernel() should be renamed or removed.
To be honest, I'm not really sure why we need both pmd_populate() and
pmd_populate_kernel(). They were introduced by the historical commit
3a0b82c08a0e8668 ("adds simple support for atomically-mapped PTEs.
On highmem systems this enables the allocation of the pagetables in
highmem.") [1], but as there's no explanation or comment, I can only
speculate.
The key differences I recognize are: 1) the type of the last parameter is
pgtable_t (which can be either struct page * or pte_t * depending on the
architecture) in pmd_populate() but pte_t * in pmd_populate_kernel(),
and 2) some architectures treat user and kernel page tables differently.
Regarding 1), I think a reasonable explanation is that pmd_populate()
must take struct page * on some architectures because with
CONFIG_HIGHPTE=y the pte_t * might not be directly accessible, but
kernel page tables are never allocated from highmem even with
CONFIG_HIGHPTE=y, so pmd_populate_kernel() can take pte_t *, which
saves a few instructions.
And some architectures (that do not support HIGHPTE?) define pgtable_t
as pte_t * to support sub-page page tables (commit 2f569afd9ced
("CONFIG_HIGHPTE vs. sub-page page tables.")).
Maybe things to clean up in the future:
1) Once CONFIG_HIGHPTE is completely dropped (is that ever going to
happen?), pte_t * can be used instead of struct page *.
2) Convert users of pmd_populate_kernel() to use pmd_populate().
But some architectures treat user and kernel page tables differently,
and that will need to be handled in pmd_populate() (depending on
whether mm == &init_mm).
[1] https://web.git.kernel.org/pub/scm/linux/kernel/git/tglx/history.git/commit/?id=3a0b82c08a0e86683783c30d7fec9d1b06c2fe20
--
Cheers,
Harry / Hyeonggon
^ permalink raw reply [flat|nested] 26+ messages in thread
end of thread
Thread overview: 26+ messages
2025-08-11 5:34 [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Harry Yoo
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 1/3] mm: move page table sync declarations to linux/pgtable.h Harry Yoo
2025-08-11 8:05 ` Mike Rapoport
2025-08-11 8:36 ` Harry Yoo
2025-08-11 8:52 ` Mike Rapoport
2025-08-11 9:19 ` Uladzislau Rezki
2025-08-11 11:21 ` Lorenzo Stoakes
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 2/3] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
2025-08-11 8:10 ` Mike Rapoport
2025-08-11 9:10 ` Lorenzo Stoakes
2025-08-11 10:36 ` Harry Yoo
2025-08-11 11:18 ` Lorenzo Stoakes
2025-08-11 11:38 ` Lorenzo Stoakes
2025-08-11 12:12 ` Harry Yoo
2025-08-11 12:18 ` Lorenzo Stoakes
2025-08-12 9:53 ` Harry Yoo
2025-08-12 16:08 ` Lorenzo Stoakes
2025-08-25 11:27 ` Christophe Leroy
2025-08-25 16:02 ` Harry Yoo
2025-08-11 5:34 ` [PATCH V4 mm-hotfixes 3/3] x86/mm/64: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
2025-08-11 8:13 ` Mike Rapoport
2025-08-11 11:46 ` Lorenzo Stoakes
2025-08-12 8:59 ` Harry Yoo
2025-08-12 16:36 ` Lorenzo Stoakes
2025-08-11 6:46 ` [PATCH V4 mm-hotfixes 0/3] mm, x86: fix crash due to missing page table sync and make it harder to miss Kiryl Shutsemau
2025-08-11 8:09 ` Harry Yoo