linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables
@ 2025-07-20 23:41 Harry Yoo
  2025-07-20 23:41 ` [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h Harry Yoo
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Harry Yoo @ 2025-07-20 23:41 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V, x86, linux-kernel,
	linux-arch, linux-mm, Harry Yoo

RFC v1: https://lore.kernel.org/linux-mm/20250709131657.5660-1-harry.yoo@oracle.com

RFC v1 -> v2:
- Dropped RFC tag.
- Exposed page table sync code to common code (Mike Rapoport).
- Used only one Fixes: tag in patch 3 instead of two,
  to avoid confusion (Andrew Morton).
- Reused the existing ARCH_PAGE_TABLE_SYNC_MASK and
  arch_sync_kernel_mappings() facility (currently used by vmalloc and
  ioremap) for page table sync instead of introducing a new interface.

A quick question: Technically, patches 4 and 5 don't necessarily need to be
backported. Does it make sense to backport only patches 1-3?

# The problem: It is easy to miss/overlook page table synchronization

Hi all,

During our internal testing, we started observing intermittent boot
failures when the machine uses 4-level paging and has a large amount
of persistent memory:

  BUG: unable to handle page fault for address: ffffe70000000034
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  PGD 0 P4D 0 
  Oops: 0002 [#1] SMP NOPTI
  RIP: 0010:__init_single_page+0x9/0x6d
  Call Trace:
   <TASK>
   __init_zone_device_page+0x17/0x5d
   memmap_init_zone_device+0x154/0x1bb
   pagemap_range+0x2e0/0x40f
   memremap_pages+0x10b/0x2f0
   devm_memremap_pages+0x1e/0x60
   dev_dax_probe+0xce/0x2ec [device_dax]
   dax_bus_probe+0x6d/0xc9
   [... snip ...]
   </TASK>

It turns out that the kernel panics while initializing vmemmap
(struct page array) when the vmemmap region spans two PGD entries,
because the new PGD entry is only installed in init_mm.pgd,
but not in the page tables of other tasks.

And looking at __populate_section_memmap():
  if (vmemmap_can_optimize(altmap, pgmap))
          // does not sync top level page tables
          r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
  else
          // sync top level page tables in x86
          r = vmemmap_populate(start, end, nid, altmap);

In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c
synchronizes the top level page table (See commit 9b861528a801
("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping
changes")) so that all tasks in the system can see the new vmemmap area.

However, when vmemmap_can_optimize() returns true, the optimized path
skips synchronization of top-level page tables. This is because
vmemmap_populate_compound_pages() is implemented in core MM code, which
does not handle synchronization of the top-level page tables. Instead,
the core MM has historically relied on each architecture to perform this
synchronization manually.

We're not the first party to encounter a crash caused by unsynchronized
top-level page tables: earlier this year, Gwan-gyeong Mun attempted to
address the issue [1] [2] after hitting a kernel panic when x86 code
accessed the vmemmap area before the corresponding top-level entries
were synced. At that time, the issue was believed to be triggered
only when struct page was enlarged for debugging purposes, and the patch
did not get further updates.

It turns out that the current approach of relying on each arch to handle
the page table sync manually is fragile because 1) it's easy to forget
to sync the top-level page table, and 2) it's also easy to overlook that
the kernel should not access the vmemmap and direct mapping areas before
the sync.

# The solution: Make page table sync code more robust

To address this, Dave Hansen suggested [3] [4] introducing
{pgd,p4d}_populate_kernel() for updating the kernel portion
of the page tables, allowing each architecture to explicitly perform
synchronization when installing top-level entries. With this approach,
we no longer need to worry about missing the sync step, reducing the risk
of future regressions.

The new interface reuses existing ARCH_PAGE_TABLE_SYNC_MASK,
PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
vmalloc and ioremap to synchronize page tables.

pgd_populate_kernel() looks like this:
  #define pgd_populate_kernel(addr, pgd, p4d)                    \
  do {                                                           \
         pgd_populate(&init_mm, pgd, p4d);                       \
         if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)     \
                 arch_sync_kernel_mappings(addr, addr);          \
  } while (0)

It is worth noting that vmalloc() and apply_to_page_range() carefully
synchronize page tables by calling p*d_alloc_track() and
arch_sync_kernel_mappings(), and thus they are not affected by
this patch series.
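
For reference, this is roughly how those helpers record top-level
modifications (simplified from p4d_alloc_track() in include/linux/mm.h):

  static inline p4d_t *p4d_alloc_track(struct mm_struct *mm, pgd_t *pgd,
                                       unsigned long address,
                                       pgtbl_mod_mask *mod_mask)
  {
          if (unlikely(pgd_none(*pgd))) {
                  if (__p4d_alloc(mm, pgd, address))
                          return NULL;
                  /* record that a PGD entry was installed */
                  *mod_mask |= PGTBL_PGD_MODIFIED;
          }
          return p4d_offset(pgd, address);
  }

The caller accumulates the mask over the whole range, checks it against
ARCH_PAGE_TABLE_SYNC_MASK, and calls arch_sync_kernel_mappings() once at
the end.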

This patch series was hugely inspired by Dave Hansen's suggestion and
hence added Suggested-by: Dave Hansen.

Cc stable because the lack of this series opens the door to intermittent
boot failures.

[1] https://lore.kernel.org/linux-mm/20250220064105.808339-1-gwan-gyeong.mun@intel.com
[2] https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com
[3] https://lore.kernel.org/linux-mm/d1da214c-53d3-45ac-a8b6-51821c5416e4@intel.com
[4] https://lore.kernel.org/linux-mm/4d800744-7b88-41aa-9979-b245e8bf794b@intel.com 

Harry Yoo (5):
  mm: move page table sync declarations to asm/pgalloc.h
  mm: introduce and use {pgd,p4d}_populate_kernel()
  x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and
    arch_sync_kernel_mappings()
  x86/mm: convert p*d_populate{,_init} to _kernel variants
  x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its
    sole user

 arch/x86/include/asm/pgalloc.h | 22 ++++++++++++++++++++
 arch/x86/mm/init_64.c          | 37 +++++++++++++++++++---------------
 arch/x86/mm/kasan_init_64.c    |  8 ++++----
 include/asm-generic/pgalloc.h  | 30 +++++++++++++++++++++++++++
 include/linux/vmalloc.h        | 16 ---------------
 mm/kasan/init.c                | 10 ++++-----
 mm/percpu.c                    |  4 ++--
 mm/sparse-vmemmap.c            |  4 ++--
 mm/vmalloc.c                   |  1 +
 9 files changed, 87 insertions(+), 45 deletions(-)

-- 
2.43.0



^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
  2025-07-20 23:41 [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
@ 2025-07-20 23:41 ` Harry Yoo
  2025-07-21  2:56   ` kernel test robot
  2025-07-21  3:40   ` kernel test robot
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 14+ messages in thread
From: Harry Yoo @ 2025-07-20 23:41 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V, x86, linux-kernel,
	linux-arch, linux-mm, Harry Yoo, stable

Move ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to
asm/pgalloc.h so that they can be used outside of vmalloc and ioremap.

Cc: stable@vger.kernel.org
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
 include/asm-generic/pgalloc.h | 16 ++++++++++++++++
 include/linux/vmalloc.h       | 16 ----------------
 mm/vmalloc.c                  |  1 +
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 3c8ec3bfea44..7ff5d7ca4cd6 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -296,6 +296,22 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 }
 #endif
 
+/*
+ * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
+ * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
+ * needs to be called.
+ */
+#ifndef ARCH_PAGE_TABLE_SYNC_MASK
+#define ARCH_PAGE_TABLE_SYNC_MASK 0
+#endif
+
+/*
+ * There is no default implementation for arch_sync_kernel_mappings(). It is
+ * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
+ * is 0.
+ */
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
+
 #endif /* CONFIG_MMU */
 
 #endif /* __ASM_GENERIC_PGALLOC_H */
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index fdc9aeb74a44..2759dac6be44 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -219,22 +219,6 @@ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 int vmap_pages_range(unsigned long addr, unsigned long end, pgprot_t prot,
 		     struct page **pages, unsigned int page_shift);
 
-/*
- * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
- */
-#ifndef ARCH_PAGE_TABLE_SYNC_MASK
-#define ARCH_PAGE_TABLE_SYNC_MASK 0
-#endif
-
-/*
- * There is no default implementation for arch_sync_kernel_mappings(). It is
- * relied upon the compiler to optimize calls out if ARCH_PAGE_TABLE_SYNC_MASK
- * is 0.
- */
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
-
 /*
  *	Lowlevel-APIs (not for driver use!)
  */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6dbcdceecae1..37d4a2783246 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -42,6 +42,7 @@
 #include <linux/sched/mm.h>
 #include <asm/tlbflush.h>
 #include <asm/shmparam.h>
+#include <asm/pgalloc.h>
 #include <linux/page_owner.h>
 
 #define CREATE_TRACE_POINTS
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel()
  2025-07-20 23:41 [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
  2025-07-20 23:41 ` [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h Harry Yoo
@ 2025-07-20 23:42 ` Harry Yoo
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 3/5] x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Harry Yoo @ 2025-07-20 23:42 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V, x86, linux-kernel,
	linux-arch, linux-mm, Harry Yoo, stable

Introduce and use {pgd,p4d}_populate_kernel() in core MM code when
populating PGD and P4D entries for the kernel address space.
These helpers ensure proper synchronization of page tables when
updating the kernel portion of top-level page tables.

Until now, the kernel has relied on each architecture to handle
synchronization of top-level page tables in an ad-hoc manner.
For example, see commit 9b861528a801 ("x86-64, mem: Update all PGDs for
direct mapping and vmemmap mapping changes").

However, this approach has proven fragile for the following reasons:

  1) It is easy to forget to perform the necessary page table
     synchronization when introducing new changes.
     For instance, commit 4917f55b4ef9 ("mm/sparse-vmemmap: improve memory
     savings for compound devmaps") overlooked the need to synchronize
     page tables for the vmemmap area.

  2) It is also easy to overlook that the vmemmap and direct mapping areas
     must not be accessed before explicit page table synchronization.
     For example, commit 8d400913c231 ("x86/vmemmap: handle unpopulated
     sub-pmd ranges")) caused crashes by accessing the vmemmap area
     before calling sync_global_pgds().

To address this, as suggested by Dave Hansen, introduce _kernel() variants
of the page table population helpers, which invoke architecture-specific
hooks to properly synchronize page tables.

They reuse the existing infrastructure used by vmalloc and ioremap:
synchronization requirements are determined by ARCH_PAGE_TABLE_SYNC_MASK,
and the actual synchronization is performed by arch_sync_kernel_mappings().

This change currently targets only x86_64, so only PGD and P4D level
helpers are introduced. In theory, PUD and PMD level helpers can be added
later if needed by other architectures.
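
For illustration, a PUD-level helper (hypothetical, not part of this
series) would presumably mirror the same pattern:

  #define pud_populate_kernel(addr, pud, pmd)                    \
  do {                                                           \
          pud_populate(&init_mm, pud, pmd);                      \
          if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PUD_MODIFIED)    \
                  arch_sync_kernel_mappings(addr, addr);         \
  } while (0)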

Currently this is a no-op, since no architecture sets
PGTBL_{PGD,P4D}_MODIFIED in ARCH_PAGE_TABLE_SYNC_MASK.

Cc: stable@vger.kernel.org
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
 include/asm-generic/pgalloc.h | 18 ++++++++++++++++--
 mm/kasan/init.c               | 10 +++++-----
 mm/percpu.c                   |  4 ++--
 mm/sparse-vmemmap.c           |  4 ++--
 4 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h
index 7ff5d7ca4cd6..c05fea06b3fd 100644
--- a/include/asm-generic/pgalloc.h
+++ b/include/asm-generic/pgalloc.h
@@ -298,8 +298,8 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 /*
  * Architectures can set this mask to a combination of PGTBL_P?D_MODIFIED values
- * and let generic vmalloc and ioremap code know when arch_sync_kernel_mappings()
- * needs to be called.
+ * and let generic vmalloc, ioremap and page table update code know when
+ * arch_sync_kernel_mappings() needs to be called.
  */
 #ifndef ARCH_PAGE_TABLE_SYNC_MASK
 #define ARCH_PAGE_TABLE_SYNC_MASK 0
@@ -312,6 +312,20 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
  */
 void arch_sync_kernel_mappings(unsigned long start, unsigned long end);
 
+#define pgd_populate_kernel(addr, pgd, p4d)			\
+do {								\
+	pgd_populate(&init_mm, pgd, p4d);			\
+	if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)	\
+		arch_sync_kernel_mappings(addr, addr);		\
+} while (0)
+
+#define p4d_populate_kernel(addr, p4d, pud)			\
+do {								\
+	p4d_populate(&init_mm, p4d, pud);			\
+	if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)	\
+		arch_sync_kernel_mappings(addr, addr);		\
+} while (0)
+
 #endif /* CONFIG_MMU */
 
 #endif /* __ASM_GENERIC_PGALLOC_H */
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index ced6b29fcf76..43de820ee282 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -191,7 +191,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			pud_t *pud;
 			pmd_t *pmd;
 
-			p4d_populate(&init_mm, p4d,
+			p4d_populate_kernel(addr, p4d,
 					lm_alias(kasan_early_shadow_pud));
 			pud = pud_offset(p4d, addr);
 			pud_populate(&init_mm, pud,
@@ -212,7 +212,7 @@ static int __ref zero_p4d_populate(pgd_t *pgd, unsigned long addr,
 			} else {
 				p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
 				pud_init(p);
-				p4d_populate(&init_mm, p4d, p);
+				p4d_populate_kernel(addr, p4d, p);
 			}
 		}
 		zero_pud_populate(p4d, addr, next);
@@ -251,10 +251,10 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 			 * puds,pmds, so pgd_populate(), pud_populate()
 			 * is noops.
 			 */
-			pgd_populate(&init_mm, pgd,
+			pgd_populate_kernel(addr, pgd,
 					lm_alias(kasan_early_shadow_p4d));
 			p4d = p4d_offset(pgd, addr);
-			p4d_populate(&init_mm, p4d,
+			p4d_populate_kernel(addr, p4d,
 					lm_alias(kasan_early_shadow_pud));
 			pud = pud_offset(p4d, addr);
 			pud_populate(&init_mm, pud,
@@ -273,7 +273,7 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
 				if (!p)
 					return -ENOMEM;
 			} else {
-				pgd_populate(&init_mm, pgd,
+				pgd_populate_kernel(addr, pgd,
 					early_alloc(PAGE_SIZE, NUMA_NO_NODE));
 			}
 		}
diff --git a/mm/percpu.c b/mm/percpu.c
index 782cc148b39c..57450a03c432 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -3134,13 +3134,13 @@ void __init __weak pcpu_populate_pte(unsigned long addr)
 
 	if (pgd_none(*pgd)) {
 		p4d = memblock_alloc_or_panic(P4D_TABLE_SIZE, P4D_TABLE_SIZE);
-		pgd_populate(&init_mm, pgd, p4d);
+		pgd_populate_kernel(addr, pgd, p4d);
 	}
 
 	p4d = p4d_offset(pgd, addr);
 	if (p4d_none(*p4d)) {
 		pud = memblock_alloc_or_panic(PUD_TABLE_SIZE, PUD_TABLE_SIZE);
-		p4d_populate(&init_mm, p4d, pud);
+		p4d_populate_kernel(addr, p4d, pud);
 	}
 
 	pud = pud_offset(p4d, addr);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index fd2ab5118e13..e275310ac708 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -229,7 +229,7 @@ p4d_t * __meminit vmemmap_p4d_populate(pgd_t *pgd, unsigned long addr, int node)
 		if (!p)
 			return NULL;
 		pud_init(p);
-		p4d_populate(&init_mm, p4d, p);
+		p4d_populate_kernel(addr, p4d, p);
 	}
 	return p4d;
 }
@@ -241,7 +241,7 @@ pgd_t * __meminit vmemmap_pgd_populate(unsigned long addr, int node)
 		void *p = vmemmap_alloc_block_zero(PAGE_SIZE, node);
 		if (!p)
 			return NULL;
-		pgd_populate(&init_mm, pgd, p);
+		pgd_populate_kernel(addr, pgd, p);
 	}
 	return pgd;
 }
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 mm-hotfixes 3/5] x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
  2025-07-20 23:41 [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
  2025-07-20 23:41 ` [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h Harry Yoo
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
@ 2025-07-20 23:42 ` Harry Yoo
  2025-07-21  7:06   ` kernel test robot
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 4/5] x86/mm: convert p*d_populate{,_init} to _kernel variants Harry Yoo
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Harry Yoo @ 2025-07-20 23:42 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V, x86, linux-kernel,
	linux-arch, linux-mm, Harry Yoo, stable

Define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() to ensure
page tables are properly synchronized when calling p*d_populate_kernel().
It is intended to synchronize page tables via pgd_populate_kernel() when
5-level paging is in use and via p4d_populate_kernel() when 4-level paging
is used.

This fixes intermittent boot failures on systems using 4-level paging
and a large amount of persistent memory:

  BUG: unable to handle page fault for address: ffffe70000000034
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  PGD 0 P4D 0
  Oops: 0002 [#1] SMP NOPTI
  RIP: 0010:__init_single_page+0x9/0x6d
  Call Trace:
   <TASK>
   __init_zone_device_page+0x17/0x5d
   memmap_init_zone_device+0x154/0x1bb
   pagemap_range+0x2e0/0x40f
   memremap_pages+0x10b/0x2f0
   devm_memremap_pages+0x1e/0x60
   dev_dax_probe+0xce/0x2ec [device_dax]
   dax_bus_probe+0x6d/0xc9
   [... snip ...]
   </TASK>

It also fixes a crash in vmemmap_set_pmd() caused by accessing the vmemmap
area before sync_global_pgds() is called [1]:

  BUG: unable to handle page fault for address: ffffeb3ff1200000
  #PF: supervisor write access in kernel mode
  #PF: error_code(0x0002) - not-present page
  PGD 0 P4D 0
  Oops: Oops: 0002 [#1] PREEMPT SMP NOPTI
  Tainted: [W]=WARN
  RIP: 0010:vmemmap_set_pmd+0xff/0x230
   <TASK>
   vmemmap_populate_hugepages+0x176/0x180
   vmemmap_populate+0x34/0x80
   __populate_section_memmap+0x41/0x90
   sparse_add_section+0x121/0x3e0
   __add_pages+0xba/0x150
   add_pages+0x1d/0x70
   memremap_pages+0x3dc/0x810
   devm_memremap_pages+0x1c/0x60
   xe_devm_add+0x8b/0x100 [xe]
   xe_tile_init_noalloc+0x6a/0x70 [xe]
   xe_device_probe+0x48c/0x740 [xe]
   [... snip ...]

Cc: stable@vger.kernel.org
Fixes: 8d400913c231 ("x86/vmemmap: handle unpopulated sub-pmd ranges")
Closes: https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com [1]
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
 arch/x86/include/asm/pgalloc.h | 2 ++
 arch/x86/mm/init_64.c          | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index c88691b15f3c..ead834e8141a 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -10,6 +10,8 @@
 
 #define __HAVE_ARCH_PTE_ALLOC_ONE
 #define __HAVE_ARCH_PGD_FREE
+#define ARCH_PAGE_TABLE_SYNC_MASK \
+	(pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
 #include <asm-generic/pgalloc.h>
 
 static inline int  __paravirt_pgd_alloc(struct mm_struct *mm) { return 0; }
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index fdb6cab524f0..3800479022e4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -223,6 +223,11 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
 		sync_global_pgds_l4(start, end);
 }
 
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+	sync_global_pgds(start, end);
+}
+
 /*
  * NOTE: This function is marked __ref because it calls __init function
  * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 mm-hotfixes 4/5] x86/mm: convert p*d_populate{,_init} to _kernel variants
  2025-07-20 23:41 [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
                   ` (2 preceding siblings ...)
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 3/5] x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
@ 2025-07-20 23:42 ` Harry Yoo
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 5/5] x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its sole user Harry Yoo
  2025-07-20 23:57 ` [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
  5 siblings, 0 replies; 14+ messages in thread
From: Harry Yoo @ 2025-07-20 23:42 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V, x86, linux-kernel,
	linux-arch, linux-mm, Harry Yoo, stable

Introduce p*d_populate_kernel_safe() and convert p*d_populate{,_init}()
to p*d_populate_kernel{,_init}() to ensure synchronization of
kernel mappings when populating PGD and P4D entries.

By converting them, we eliminate the risk of forgetting to synchronize
top-level page tables after populating PGD entries.

Cc: stable@vger.kernel.org
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
 arch/x86/include/asm/pgalloc.h | 20 ++++++++++++++++++++
 arch/x86/mm/init_64.c          | 25 +++++++++++++++++++------
 arch/x86/mm/kasan_init_64.c    |  8 ++++----
 3 files changed, 43 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/pgalloc.h b/arch/x86/include/asm/pgalloc.h
index ead834e8141a..aea3b16e7a35 100644
--- a/arch/x86/include/asm/pgalloc.h
+++ b/arch/x86/include/asm/pgalloc.h
@@ -122,6 +122,15 @@ static inline void p4d_populate_safe(struct mm_struct *mm, p4d_t *p4d, pud_t *pu
 	set_p4d_safe(p4d, __p4d(_PAGE_TABLE | __pa(pud)));
 }
 
+static inline void p4d_populate_kernel_safe(unsigned long addr,
+					    p4d_t *p4d, pud_t *pud)
+{
+	paravirt_alloc_pud(&init_mm, __pa(pud) >> PAGE_SHIFT);
+	set_p4d_safe(p4d, __p4d(_PAGE_TABLE | __pa(pud)));
+	if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_P4D_MODIFIED)
+		arch_sync_kernel_mappings(addr, addr);
+}
+
 extern void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud);
 
 static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
@@ -147,6 +156,17 @@ static inline void pgd_populate_safe(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4
 	set_pgd_safe(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
 }
 
+static inline void pgd_populate_kernel_safe(unsigned long addr,
+				       pgd_t *pgd, p4d_t *p4d)
+{
+	if (!pgtable_l5_enabled())
+		return;
+	paravirt_alloc_p4d(&init_mm, __pa(p4d) >> PAGE_SHIFT);
+	set_pgd_safe(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
+	if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
+		arch_sync_kernel_mappings(addr, addr);
+}
+
 extern void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d);
 
 static inline void __p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d,
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 3800479022e4..e4922b9c8403 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -75,6 +75,19 @@ DEFINE_POPULATE(pgd_populate, pgd, p4d, init)
 DEFINE_POPULATE(pud_populate, pud, pmd, init)
 DEFINE_POPULATE(pmd_populate_kernel, pmd, pte, init)
 
+#define DEFINE_POPULATE_KERNEL(fname, type1, type2, init)	\
+static inline void fname##_init(unsigned long addr,		\
+		type1##_t *arg1, type2##_t *arg2, bool init)	\
+{								\
+	if (init)						\
+		fname##_safe(addr, arg1, arg2);			\
+	else							\
+		fname(addr, arg1, arg2);			\
+}
+
+DEFINE_POPULATE_KERNEL(pgd_populate_kernel, pgd, p4d, init)
+DEFINE_POPULATE_KERNEL(p4d_populate_kernel, p4d, pud, init)
+
 #define DEFINE_ENTRY(type1, type2, init)			\
 static inline void set_##type1##_init(type1##_t *arg1,		\
 			type2##_t arg2, bool init)		\
@@ -255,7 +268,7 @@ static p4d_t *fill_p4d(pgd_t *pgd, unsigned long vaddr)
 {
 	if (pgd_none(*pgd)) {
 		p4d_t *p4d = (p4d_t *)spp_getpage();
-		pgd_populate(&init_mm, pgd, p4d);
+		pgd_populate_kernel(vaddr, pgd, p4d);
 		if (p4d != p4d_offset(pgd, 0))
 			printk(KERN_ERR "PAGETABLE BUG #00! %p <-> %p\n",
 			       p4d, p4d_offset(pgd, 0));
@@ -267,7 +280,7 @@ static pud_t *fill_pud(p4d_t *p4d, unsigned long vaddr)
 {
 	if (p4d_none(*p4d)) {
 		pud_t *pud = (pud_t *)spp_getpage();
-		p4d_populate(&init_mm, p4d, pud);
+		p4d_populate_kernel(vaddr, p4d, pud);
 		if (pud != pud_offset(p4d, 0))
 			printk(KERN_ERR "PAGETABLE BUG #01! %p <-> %p\n",
 			       pud, pud_offset(p4d, 0));
@@ -720,7 +733,7 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
 					   page_size_mask, prot, init);
 
 		spin_lock(&init_mm.page_table_lock);
-		p4d_populate_init(&init_mm, p4d, pud, init);
+		p4d_populate_kernel_init(vaddr, p4d, pud, init);
 		spin_unlock(&init_mm.page_table_lock);
 	}
 
@@ -762,10 +775,10 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 
 		spin_lock(&init_mm.page_table_lock);
 		if (pgtable_l5_enabled())
-			pgd_populate_init(&init_mm, pgd, p4d, init);
+			pgd_populate_kernel_init(vaddr, pgd, p4d, init);
 		else
-			p4d_populate_init(&init_mm, p4d_offset(pgd, vaddr),
-					  (pud_t *) p4d, init);
+			p4d_populate_kernel_init(vaddr, p4d_offset(pgd, vaddr),
+						 (pud_t *) p4d, init);
 
 		spin_unlock(&init_mm.page_table_lock);
 		pgd_changed = true;
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0539efd0d216..e825952d25b2 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -108,7 +108,7 @@ static void __init kasan_populate_p4d(p4d_t *p4d, unsigned long addr,
 	if (p4d_none(*p4d)) {
 		void *p = early_alloc(PAGE_SIZE, nid, true);
 
-		p4d_populate(&init_mm, p4d, p);
+		p4d_populate_kernel(addr, p4d, p);
 	}
 
 	pud = pud_offset(p4d, addr);
@@ -128,7 +128,7 @@ static void __init kasan_populate_pgd(pgd_t *pgd, unsigned long addr,
 
 	if (pgd_none(*pgd)) {
 		p = early_alloc(PAGE_SIZE, nid, true);
-		pgd_populate(&init_mm, pgd, p);
+		pgd_populate_kernel(addr, pgd, p);
 	}
 
 	p4d = p4d_offset(pgd, addr);
@@ -255,7 +255,7 @@ static void __init kasan_shallow_populate_p4ds(pgd_t *pgd,
 
 		if (p4d_none(*p4d)) {
 			p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
-			p4d_populate(&init_mm, p4d, p);
+			p4d_populate_kernel(addr, p4d, p);
 		}
 	} while (p4d++, addr = next, addr != end);
 }
@@ -273,7 +273,7 @@ static void __init kasan_shallow_populate_pgds(void *start, void *end)
 
 		if (pgd_none(*pgd)) {
 			p = early_alloc(PAGE_SIZE, NUMA_NO_NODE, true);
-			pgd_populate(&init_mm, pgd, p);
+			pgd_populate_kernel(addr, pgd, p);
 		}
 
 		/*
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 mm-hotfixes 5/5] x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its sole user
  2025-07-20 23:41 [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
                   ` (3 preceding siblings ...)
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 4/5] x86/mm: convert p*d_populate{,_init} to _kernel variants Harry Yoo
@ 2025-07-20 23:42 ` Harry Yoo
  2025-07-20 23:57 ` [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
  5 siblings, 0 replies; 14+ messages in thread
From: Harry Yoo @ 2025-07-20 23:42 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V, x86, linux-kernel,
	linux-arch, linux-mm, Harry Yoo, stable

Now that p*d_populate_kernel{,_init}() handles page table synchronization,
calling sync_global_pgds() is no longer necessary. Remove those
redundant calls.

Additionally, since arch_sync_kernel_mappings() is now the only remaining
caller of sync_global_pgds(), fold the function into its sole user.

Cc: stable@vger.kernel.org
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
---
 arch/x86/mm/init_64.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index e4922b9c8403..f1507de3b7a3 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -228,7 +228,7 @@ static void sync_global_pgds_l4(unsigned long start, unsigned long end)
  * When memory was added make sure all the processes MM have
  * suitable PGD entries in the local PGD level page.
  */
-static void sync_global_pgds(unsigned long start, unsigned long end)
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
 	if (pgtable_l5_enabled())
 		sync_global_pgds_l5(start, end);
@@ -236,11 +236,6 @@ static void sync_global_pgds(unsigned long start, unsigned long end)
 		sync_global_pgds_l4(start, end);
 }
 
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
-{
-	sync_global_pgds(start, end);
-}
-
 /*
  * NOTE: This function is marked __ref because it calls __init function
  * (alloc_bootmem_pages). It's safe to do it ONLY when after_bootmem == 0.
@@ -746,13 +741,11 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 			       unsigned long page_size_mask,
 			       pgprot_t prot, bool init)
 {
-	bool pgd_changed = false;
-	unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+	unsigned long vaddr, vaddr_end, vaddr_next, paddr_last;
 
 	paddr_last = paddr_end;
 	vaddr = (unsigned long)__va(paddr_start);
 	vaddr_end = (unsigned long)__va(paddr_end);
-	vaddr_start = vaddr;
 
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
 		pgd_t *pgd = pgd_offset_k(vaddr);
@@ -781,12 +774,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
 						 (pud_t *) p4d, init);
 
 		spin_unlock(&init_mm.page_table_lock);
-		pgd_changed = true;
 	}
 
-	if (pgd_changed)
-		sync_global_pgds(vaddr_start, vaddr_end - 1);
-
 	return paddr_last;
 }
 
@@ -1580,8 +1569,6 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		err = -ENOMEM;
 	} else
 		err = vmemmap_populate_basepages(start, end, node, NULL);
-	if (!err)
-		sync_global_pgds(start, end - 1);
 	return err;
 }
 
-- 
2.43.0



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables
  2025-07-20 23:41 [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
                   ` (4 preceding siblings ...)
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 5/5] x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its sole user Harry Yoo
@ 2025-07-20 23:57 ` Harry Yoo
  2025-07-21 11:46   ` Harry Yoo
  5 siblings, 1 reply; 14+ messages in thread
From: Harry Yoo @ 2025-07-20 23:57 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Joerg Roedel, Uladzislau Rezki,
	Aneesh Kumar K . V, x86, linux-kernel, linux-arch, linux-mm

On Mon, Jul 21, 2025 at 08:41:58AM +0900, Harry Yoo wrote:
> RFC v1: https://lore.kernel.org/linux-mm/20250709131657.5660-1-harry.yoo@oracle.com
> 
> RFC v1 -> v2:
> - Dropped RFC tag.
> - Exposed page table sync code to common code (Mike Rapoport).
> - Used only one Fixes: tag in patch 3 instead of two,
>   to avoid confusion (Andrew Morton).
> - Reused the existing ARCH_PAGE_TABLE_SYNC_MASK and
>   arch_sync_kernel_mappings() facility (currently used by vmalloc and
>   ioremap) for page table sync instead of introducing a new interface.
> 
> A quick question: Technically, patches 4 and 5 don't necessarily need to be
> backported. Does it make sense to backport only patches 1-3?
>
> # The problem: It is easy to miss/overlook page table synchronization
> 
> Hi all,

Looks like I forgot to Cc: Uladzislau and Joerg.. adding them to Cc.

-- 
Cheers,
Harry / Hyeonggon
 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
  2025-07-20 23:41 ` [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h Harry Yoo
@ 2025-07-21  2:56   ` kernel test robot
  2025-07-21  3:40   ` kernel test robot
  1 sibling, 0 replies; 14+ messages in thread
From: kernel test robot @ 2025-07-21  2:56 UTC (permalink / raw)
  To: Harry Yoo, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin,
	Arnd Bergmann, Andrew Morton, Dennis Zhou, Tejun Heo,
	Christoph Lameter
  Cc: oe-kbuild-all, Linux Memory Management List, H . Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Muchun Song,
	Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
	Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V

Hi Harry,

kernel test robot noticed the following build errors:

[auto build test ERROR on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Harry-Yoo/mm-move-page-table-sync-declarations-to-asm-pgalloc-h/20250721-074448
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250720234203.9126-2-harry.yoo%40oracle.com
patch subject: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
config: sparc-randconfig-001-20250721 (https://download.01.org/0day-ci/archive/20250721/202507211059.kHMi8xEC-lkp@intel.com/config)
compiler: sparc64-linux-gcc (GCC) 15.1.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250721/202507211059.kHMi8xEC-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507211059.kHMi8xEC-lkp@intel.com/

All errors (new ones prefixed by >>):

   mm/memory.c: In function '__apply_to_page_range':
>> mm/memory.c:3155:20: error: 'ARCH_PAGE_TABLE_SYNC_MASK' undeclared (first use in this function)
    3155 |         if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
         |                    ^~~~~~~~~~~~~~~~~~~~~~~~~
   mm/memory.c:3155:20: note: each undeclared identifier is reported only once for each function it appears in
>> mm/memory.c:3156:17: error: implicit declaration of function 'arch_sync_kernel_mappings' [-Wimplicit-function-declaration]
    3156 |                 arch_sync_kernel_mappings(start, start + size);
         |                 ^~~~~~~~~~~~~~~~~~~~~~~~~
--
   mm/vmalloc.c: In function 'vmap_range_noflush':
>> mm/vmalloc.c:315:20: error: 'ARCH_PAGE_TABLE_SYNC_MASK' undeclared (first use in this function)
     315 |         if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
         |                    ^~~~~~~~~~~~~~~~~~~~~~~~~
   mm/vmalloc.c:315:20: note: each undeclared identifier is reported only once for each function it appears in
>> mm/vmalloc.c:316:17: error: implicit declaration of function 'arch_sync_kernel_mappings' [-Wimplicit-function-declaration]
     316 |                 arch_sync_kernel_mappings(start, end);
         |                 ^~~~~~~~~~~~~~~~~~~~~~~~~
   mm/vmalloc.c: In function '__vunmap_range_noflush':
   mm/vmalloc.c:488:20: error: 'ARCH_PAGE_TABLE_SYNC_MASK' undeclared (first use in this function)
     488 |         if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
         |                    ^~~~~~~~~~~~~~~~~~~~~~~~~
   mm/vmalloc.c: In function 'vmap_small_pages_range_noflush':
   mm/vmalloc.c:633:20: error: 'ARCH_PAGE_TABLE_SYNC_MASK' undeclared (first use in this function)
     633 |         if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
         |                    ^~~~~~~~~~~~~~~~~~~~~~~~~


vim +/ARCH_PAGE_TABLE_SYNC_MASK +3155 mm/memory.c

c2febafc67734a Kirill A. Shutemov  2017-03-09  3121  
be1db4753ee6a0 Daniel Axtens       2019-12-17  3122  static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
be1db4753ee6a0 Daniel Axtens       2019-12-17  3123  				 unsigned long size, pte_fn_t fn,
be1db4753ee6a0 Daniel Axtens       2019-12-17  3124  				 void *data, bool create)
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3125  {
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3126  	pgd_t *pgd;
e80d3909be42f7 Joerg Roedel        2020-09-04  3127  	unsigned long start = addr, next;
57250a5bf0f6ff Jeremy Fitzhardinge 2010-08-09  3128  	unsigned long end = addr + size;
e80d3909be42f7 Joerg Roedel        2020-09-04  3129  	pgtbl_mod_mask mask = 0;
be1db4753ee6a0 Daniel Axtens       2019-12-17  3130  	int err = 0;
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3131  
9cb65bc3b11140 Mika Penttilä       2016-03-15  3132  	if (WARN_ON(addr >= end))
9cb65bc3b11140 Mika Penttilä       2016-03-15  3133  		return -EINVAL;
9cb65bc3b11140 Mika Penttilä       2016-03-15  3134  
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3135  	pgd = pgd_offset(mm, addr);
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3136  	do {
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3137  		next = pgd_addr_end(addr, end);
0c95cba4925509 Nicholas Piggin     2021-04-29  3138  		if (pgd_none(*pgd) && !create)
be1db4753ee6a0 Daniel Axtens       2019-12-17  3139  			continue;
3685024edd270f Ryan Roberts        2025-02-26  3140  		if (WARN_ON_ONCE(pgd_leaf(*pgd))) {
3685024edd270f Ryan Roberts        2025-02-26  3141  			err = -EINVAL;
3685024edd270f Ryan Roberts        2025-02-26  3142  			break;
3685024edd270f Ryan Roberts        2025-02-26  3143  		}
0c95cba4925509 Nicholas Piggin     2021-04-29  3144  		if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
0c95cba4925509 Nicholas Piggin     2021-04-29  3145  			if (!create)
0c95cba4925509 Nicholas Piggin     2021-04-29  3146  				continue;
0c95cba4925509 Nicholas Piggin     2021-04-29  3147  			pgd_clear_bad(pgd);
0c95cba4925509 Nicholas Piggin     2021-04-29  3148  		}
0c95cba4925509 Nicholas Piggin     2021-04-29  3149  		err = apply_to_p4d_range(mm, pgd, addr, next,
0c95cba4925509 Nicholas Piggin     2021-04-29  3150  					 fn, data, create, &mask);
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3151  		if (err)
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3152  			break;
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3153  	} while (pgd++, addr = next, addr != end);
57250a5bf0f6ff Jeremy Fitzhardinge 2010-08-09  3154  
e80d3909be42f7 Joerg Roedel        2020-09-04 @3155  	if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
e80d3909be42f7 Joerg Roedel        2020-09-04 @3156  		arch_sync_kernel_mappings(start, start + size);
e80d3909be42f7 Joerg Roedel        2020-09-04  3157  
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3158  	return err;
aee16b3cee2746 Jeremy Fitzhardinge 2007-05-06  3159  }
be1db4753ee6a0 Daniel Axtens       2019-12-17  3160  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
  2025-07-20 23:41 ` [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h Harry Yoo
  2025-07-21  2:56   ` kernel test robot
@ 2025-07-21  3:40   ` kernel test robot
  2025-07-21 11:38     ` Lorenzo Stoakes
  1 sibling, 1 reply; 14+ messages in thread
From: kernel test robot @ 2025-07-21  3:40 UTC (permalink / raw)
  To: Harry Yoo, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Andy Lutomirski, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: oe-kbuild-all, Linux Memory Management List, H . Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Muchun Song,
	Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
	Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V

Hi Harry,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Harry-Yoo/mm-move-page-table-sync-declarations-to-asm-pgalloc-h/20250721-074448
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250720234203.9126-2-harry.yoo%40oracle.com
patch subject: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
config: i386-buildonly-randconfig-003-20250721 (https://download.01.org/0day-ci/archive/20250721/202507211129.Xbn2bAOg-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250721/202507211129.Xbn2bAOg-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507211129.Xbn2bAOg-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> arch/x86/mm/fault.c:265:6: warning: no previous prototype for 'arch_sync_kernel_mappings' [-Wmissing-prototypes]
     265 | void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
         |      ^~~~~~~~~~~~~~~~~~~~~~~~~


vim +/arch_sync_kernel_mappings +265 arch/x86/mm/fault.c

4819e15f740ec88 Joerg Roedel        2020-09-02  264  
1e15d374bb1cb95 Alexander Potapenko 2023-01-11 @265  void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
f2f13a8535174db Ingo Molnar         2009-02-20  266  {
86cf69f1d893d48 Joerg Roedel        2020-06-01  267  	unsigned long addr;
f2f13a8535174db Ingo Molnar         2009-02-20  268  
86cf69f1d893d48 Joerg Roedel        2020-06-01  269  	for (addr = start & PMD_MASK;
86cf69f1d893d48 Joerg Roedel        2020-06-01  270  	     addr >= TASK_SIZE_MAX && addr < VMALLOC_END;
86cf69f1d893d48 Joerg Roedel        2020-06-01  271  	     addr += PMD_SIZE) {
f2f13a8535174db Ingo Molnar         2009-02-20  272  		struct page *page;
f2f13a8535174db Ingo Molnar         2009-02-20  273  
a79e53d85683c6d Andrea Arcangeli    2011-02-16  274  		spin_lock(&pgd_lock);
f2f13a8535174db Ingo Molnar         2009-02-20  275  		list_for_each_entry(page, &pgd_list, lru) {
617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  276  			spinlock_t *pgt_lock;
617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  277  
a79e53d85683c6d Andrea Arcangeli    2011-02-16  278  			/* the pgt_lock only for Xen */
617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  279  			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  280  
617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  281  			spin_lock(pgt_lock);
86cf69f1d893d48 Joerg Roedel        2020-06-01  282  			vmalloc_sync_one(page_address(page), addr);
617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  283  			spin_unlock(pgt_lock);
f2f13a8535174db Ingo Molnar         2009-02-20  284  		}
a79e53d85683c6d Andrea Arcangeli    2011-02-16  285  		spin_unlock(&pgd_lock);
f2f13a8535174db Ingo Molnar         2009-02-20  286  	}
f2f13a8535174db Ingo Molnar         2009-02-20  287  }
f2f13a8535174db Ingo Molnar         2009-02-20  288  

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 3/5] x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
  2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 3/5] x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
@ 2025-07-21  7:06   ` kernel test robot
  0 siblings, 0 replies; 14+ messages in thread
From: kernel test robot @ 2025-07-21  7:06 UTC (permalink / raw)
  To: Harry Yoo, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin,
	Arnd Bergmann, Andrew Morton, Dennis Zhou, Tejun Heo,
	Christoph Lameter
  Cc: oe-kbuild-all, Linux Memory Management List, H . Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Muchun Song,
	Oscar Salvador, Joao Martins, Lorenzo Stoakes, Jane Chu,
	Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Aneesh Kumar K . V

Hi Harry,

kernel test robot noticed the following build warnings:

[auto build test WARNING on akpm-mm/mm-everything]

url:    https://github.com/intel-lab-lkp/linux/commits/Harry-Yoo/mm-move-page-table-sync-declarations-to-asm-pgalloc-h/20250721-074448
base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
patch link:    https://lore.kernel.org/r/20250720234203.9126-4-harry.yoo%40oracle.com
patch subject: [PATCH v2 mm-hotfixes 3/5] x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings()
config: i386-buildonly-randconfig-003-20250721 (https://download.01.org/0day-ci/archive/20250721/202507211433.J7CqBp8O-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250721/202507211433.J7CqBp8O-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202507211433.J7CqBp8O-lkp@intel.com/

All warnings (new ones prefixed by >>):

   In file included from mm/mremap.c:31:
>> arch/x86/include/asm/pgalloc.h:13: warning: "ARCH_PAGE_TABLE_SYNC_MASK" redefined
      13 | #define ARCH_PAGE_TABLE_SYNC_MASK \
         | 
   In file included from arch/x86/include/asm/pgtable_32_types.h:15,
                    from arch/x86/include/asm/pgtable_types.h:278,
                    from arch/x86/include/asm/paravirt_types.h:11,
                    from arch/x86/include/asm/ptrace.h:175,
                    from arch/x86/include/asm/math_emu.h:5,
                    from arch/x86/include/asm/processor.h:13,
                    from arch/x86/include/asm/cpufeature.h:5,
                    from arch/x86/include/asm/thread_info.h:59,
                    from include/linux/thread_info.h:60,
                    from include/linux/spinlock.h:60,
                    from include/linux/mmzone.h:8,
                    from include/linux/gfp.h:7,
                    from include/linux/mm.h:7,
                    from mm/mremap.c:11:
   arch/x86/include/asm/pgtable-2level_types.h:21: note: this is the location of the previous definition
      21 | #define ARCH_PAGE_TABLE_SYNC_MASK       PGTBL_PMD_MODIFIED
         | 


vim +/ARCH_PAGE_TABLE_SYNC_MASK +13 arch/x86/include/asm/pgalloc.h

    10	
    11	#define __HAVE_ARCH_PTE_ALLOC_ONE
    12	#define __HAVE_ARCH_PGD_FREE
  > 13	#define ARCH_PAGE_TABLE_SYNC_MASK \
    14		(pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)
    15	#include <asm-generic/pgalloc.h>
    16	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
  2025-07-21  3:40   ` kernel test robot
@ 2025-07-21 11:38     ` Lorenzo Stoakes
  2025-07-21 12:10       ` Harry Yoo
  0 siblings, 1 reply; 14+ messages in thread
From: Lorenzo Stoakes @ 2025-07-21 11:38 UTC (permalink / raw)
  To: kernel test robot
  Cc: Harry Yoo, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Andy Lutomirski, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
	oe-kbuild-all, Linux Memory Management List, H . Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Muchun Song,
	Oscar Salvador, Joao Martins, Jane Chu, Alistair Popple,
	Mike Rapoport, David Hildenbrand, Gwan-gyeong Mun,
	Aneesh Kumar K . V

On Mon, Jul 21, 2025 at 11:40:10AM +0800, kernel test robot wrote:
> Hi Harry,
>
> kernel test robot noticed the following build warnings:
>
> [auto build test WARNING on akpm-mm/mm-everything]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Harry-Yoo/mm-move-page-table-sync-declarations-to-asm-pgalloc-h/20250721-074448
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> patch link:    https://lore.kernel.org/r/20250720234203.9126-2-harry.yoo%40oracle.com
> patch subject: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
> config: i386-buildonly-randconfig-003-20250721 (https://download.01.org/0day-ci/archive/20250721/202507211129.Xbn2bAOg-lkp@intel.com/config)
> compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250721/202507211129.Xbn2bAOg-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202507211129.Xbn2bAOg-lkp@intel.com/
>
> All warnings (new ones prefixed by >>):
>
> >> arch/x86/mm/fault.c:265:6: warning: no previous prototype for 'arch_sync_kernel_mappings' [-Wmissing-prototypes]
>      265 | void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
>          |      ^~~~~~~~~~~~~~~~~~~~~~~~~
>

Looks like arch/x86/mm/fault.c, which includes linux/vmalloc.h (such an odd
place for this decl!), needs to:

#include <asm/pgalloc.h>

This seems to be a 32-bit build thing, as your series builds locally on my
x86-64 machine.

>
> vim +/arch_sync_kernel_mappings +265 arch/x86/mm/fault.c
>
> 4819e15f740ec88 Joerg Roedel        2020-09-02  264
> 1e15d374bb1cb95 Alexander Potapenko 2023-01-11 @265  void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> f2f13a8535174db Ingo Molnar         2009-02-20  266  {
> 86cf69f1d893d48 Joerg Roedel        2020-06-01  267  	unsigned long addr;
> f2f13a8535174db Ingo Molnar         2009-02-20  268
> 86cf69f1d893d48 Joerg Roedel        2020-06-01  269  	for (addr = start & PMD_MASK;
> 86cf69f1d893d48 Joerg Roedel        2020-06-01  270  	     addr >= TASK_SIZE_MAX && addr < VMALLOC_END;
> 86cf69f1d893d48 Joerg Roedel        2020-06-01  271  	     addr += PMD_SIZE) {
> f2f13a8535174db Ingo Molnar         2009-02-20  272  		struct page *page;
> f2f13a8535174db Ingo Molnar         2009-02-20  273
> a79e53d85683c6d Andrea Arcangeli    2011-02-16  274  		spin_lock(&pgd_lock);
> f2f13a8535174db Ingo Molnar         2009-02-20  275  		list_for_each_entry(page, &pgd_list, lru) {
> 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  276  			spinlock_t *pgt_lock;
> 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  277
> a79e53d85683c6d Andrea Arcangeli    2011-02-16  278  			/* the pgt_lock only for Xen */
> 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  279  			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
> 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  280
> 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  281  			spin_lock(pgt_lock);
> 86cf69f1d893d48 Joerg Roedel        2020-06-01  282  			vmalloc_sync_one(page_address(page), addr);
> 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  283  			spin_unlock(pgt_lock);
> f2f13a8535174db Ingo Molnar         2009-02-20  284  		}
> a79e53d85683c6d Andrea Arcangeli    2011-02-16  285  		spin_unlock(&pgd_lock);
> f2f13a8535174db Ingo Molnar         2009-02-20  286  	}
> f2f13a8535174db Ingo Molnar         2009-02-20  287  }
> f2f13a8535174db Ingo Molnar         2009-02-20  288
>
> --
> 0-DAY CI Kernel Test Service
> https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables
  2025-07-20 23:57 ` [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
@ 2025-07-21 11:46   ` Harry Yoo
  0 siblings, 0 replies; 14+ messages in thread
From: Harry Yoo @ 2025-07-21 11:46 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen,
	Andy Lutomirski, Peter Zijlstra, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter
  Cc: H . Peter Anvin, Alexander Potapenko, Andrey Konovalov,
	Dmitry Vyukov, Vincenzo Frascino, Juergen Gross, Kevin Brodsky,
	Muchun Song, Oscar Salvador, Joao Martins, Lorenzo Stoakes,
	Jane Chu, Alistair Popple, Mike Rapoport, David Hildenbrand,
	Gwan-gyeong Mun, Joerg Roedel, Uladzislau Rezki,
	Aneesh Kumar K . V, x86, linux-kernel, linux-arch, linux-mm

On Mon, Jul 21, 2025 at 08:57:19AM +0900, Harry Yoo wrote:
> On Mon, Jul 21, 2025 at 08:41:58AM +0900, Harry Yoo wrote:
> > RFC v1: https://lore.kernel.org/linux-mm/20250709131657.5660-1-harry.yoo@oracle.com
> > 
> > RFC v1 -> v2:
> > - Dropped RFC tag.
> > - Exposed page table sync code to common code (Mike Rapoport).
> > - Used only one Fixes: tag in patch 3 instead of two,
> >   to avoid confusion (Andrew Morton)
> > - Reused existing ARCH_PAGE_TABLE_SYNC_MASK and
> >   arch_sync_kernel_mappings() facility (currently used by vmalloc and
> >   ioremap) for page table sync instead of introducing a new interface.
> > 
> > A quick question: Technically, patches 4 and 5 don't necessarily need to be
> > backported. Does it make sense to backport only patches 1-3?
> >
> > # The problem: It is easy to miss/overlook page table synchronization
> > 
> > Hi all,
> 
> Looks like I forgot to Cc: Uladzislau and Joerg... adding them to Cc.

Apologies for the kernel bot reports! I should have tested it on
non-x86-64 architectures.

Looking at the kernel test robot reports, it seems:

- ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() should be
  moved to <linux/pgtable.h> instead of <asm/pgalloc.h>, because x86-32
  and arm expose ARCH_PAGE_TABLE_SYNC_MASK via <linux/pgtable.h>, which
  in turn includes <asm/pgtable.h>.

  I'd keep p*d_populate_kernel() in include/asm-generic/pgalloc.h, but
  move the others to <linux/pgtable.h> and include it in
  include/asm-generic/pgalloc.h.

- On x86-64, ARCH_PAGE_TABLE_SYNC_MASK should be defined in
  arch/x86/include/asm/pgtable_64_types.h to align with x86-32 (see the
  sketch below).
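
Something like this, as a rough sketch; the exact placement may still
change in v3, but the mask itself is just the one from patch 3:

  /* arch/x86/include/asm/pgtable_64_types.h (sketch) */
  #define ARCH_PAGE_TABLE_SYNC_MASK \
          (pgtable_l5_enabled() ? PGTBL_PGD_MODIFIED : PGTBL_P4D_MODIFIED)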

Will repost v3 with the changes mentioned above, hopefully in a few days.

If you have any further feedback, please let me know!

-- 
Cheers,
Harry / Hyeonggon

> > During our internal testing, we started observing intermittent boot
> > failures when the machine uses 4-level paging and has a large amount
> > of persistent memory:
> > 
> >   BUG: unable to handle page fault for address: ffffe70000000034
> >   #PF: supervisor write access in kernel mode
> >   #PF: error_code(0x0002) - not-present page
> >   PGD 0 P4D 0 
> >   Oops: 0002 [#1] SMP NOPTI
> >   RIP: 0010:__init_single_page+0x9/0x6d
> >   Call Trace:
> >    <TASK>
> >    __init_zone_device_page+0x17/0x5d
> >    memmap_init_zone_device+0x154/0x1bb
> >    pagemap_range+0x2e0/0x40f
> >    memremap_pages+0x10b/0x2f0
> >    devm_memremap_pages+0x1e/0x60
> >    dev_dax_probe+0xce/0x2ec [device_dax]
> >    dax_bus_probe+0x6d/0xc9
> >    [... snip ...]
> >    </TASK>
> > 
> > It turns out that the kernel panics while initializing vmemmap
> > (struct page array) when the vmemmap region spans two PGD entries,
> > because the new PGD entry is only installed in init_mm.pgd,
> > but not in the page tables of other tasks.
> > 
> > And looking at __populate_section_memmap():
> >   if (vmemmap_can_optimize(altmap, pgmap))                                
> >           // does not sync top level page tables
> >           r = vmemmap_populate_compound_pages(pfn, start, end, nid, pgmap);
> >   else                                                                    
> >           // sync top level page tables in x86
> >           r = vmemmap_populate(start, end, nid, altmap);
> > 
> > In the normal path, vmemmap_populate() in arch/x86/mm/init_64.c
> > synchronizes the top level page table (See commit 9b861528a801
> > ("x86-64, mem: Update all PGDs for direct mapping and vmemmap mapping
> > changes")) so that all tasks in the system can see the new vmemmap area.
> > 
> > However, when vmemmap_can_optimize() returns true, the optimized path
> > skips synchronization of top-level page tables. This is because
> > vmemmap_populate_compound_pages() is implemented in core MM code, which
> > does not handle synchronization of the top-level page tables. Instead,
> > the core MM has historically relied on each architecture to perform this
> > synchronization manually.
> > 
> > We're not the first party to encounter a crash caused by not-sync'd
> > top level page tables: earlier this year, Gwan-gyeong Mun attempted to
> > address the issue [1] [2] after hitting a kernel panic when x86 code
> > accessed the vmemmap area before the corresponding top-level entries
> > were synced. At that time, the issue was believed to be triggered
> > only when struct page was enlarged for debugging purposes, and the patch
> > did not get further updates.
> > 
> > It turns out that the current approach of relying on each arch to handle
> > the page table sync manually is fragile because 1) it's easy to forget
> > to sync the top level page table, and 2) it's also easy to overlook that
> > the kernel should not access the vmemmap and direct mapping areas before
> > the sync.
> > 
> > # The solution: Make page table sync code more robust
> > 
> > To address this, Dave Hansen suggested [3] [4] introducing
> > {pgd,p4d}_populate_kernel() for updating kernel portion
> > of the page tables and allow each architecture to explicitly perform
> > synchronization when installing top-level entries. With this approach,
> > we no longer need to worry about missing the sync step, reducing the risk
> > of future regressions.
> > 
> > The new interface reuses existing ARCH_PAGE_TABLE_SYNC_MASK,
> > PGTBL_P*D_MODIFIED and arch_sync_kernel_mappings() facility used by
> > vmalloc and ioremap to synchronize page tables.
> > 
> > pgd_populate_kernel() looks like this:
> >   #define pgd_populate_kernel(addr, pgd, p4d)                    \               
> >   do {                                                           \               
> >          pgd_populate(&init_mm, pgd, p4d);                       \               
> >          if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)     \               
> >                  arch_sync_kernel_mappings(addr, addr);          \               
> >   } while (0) 
> > 
> > It is worth noting that vmalloc() and apply_to_range() carefully
> > synchronize page tables by calling p*d_alloc_track() and
> > arch_sync_kernel_mappings(), and thus they are not affected by
> > this patch series.
> > 
> > This patch series was hugely inspired by Dave Hansen's suggestion and
> > hence added Suggested-by: Dave Hansen.
> > 
> > Cc stable because the lack of this series opens the door to intermittent
> > boot failures.
> > 
> > [1] https://lore.kernel.org/linux-mm/20250220064105.808339-1-gwan-gyeong.mun@intel.com
> > [2] https://lore.kernel.org/linux-mm/20250311114420.240341-1-gwan-gyeong.mun@intel.com
> > [3] https://lore.kernel.org/linux-mm/d1da214c-53d3-45ac-a8b6-51821c5416e4@intel.com
> > [4] https://lore.kernel.org/linux-mm/4d800744-7b88-41aa-9979-b245e8bf794b@intel.com 
> > 
> > Harry Yoo (5):
> >   mm: move page table sync declarations to asm/pgalloc.h
> >   mm: introduce and use {pgd,p4d}_populate_kernel()
> >   x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and
> >     arch_sync_kernel_mappings()
> >   x86/mm: convert p*d_populate{,_init} to _kernel variants
> >   x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its
> >     sole user
> > 
> >  arch/x86/include/asm/pgalloc.h | 22 ++++++++++++++++++++
> >  arch/x86/mm/init_64.c          | 37 +++++++++++++++++++---------------
> >  arch/x86/mm/kasan_init_64.c    |  8 ++++----
> >  include/asm-generic/pgalloc.h  | 30 +++++++++++++++++++++++++++
> >  include/linux/vmalloc.h        | 16 ---------------
> >  mm/kasan/init.c                | 10 ++++-----
> >  mm/percpu.c                    |  4 ++--
> >  mm/sparse-vmemmap.c            |  4 ++--
> >  mm/vmalloc.c                   |  1 +
> >  9 files changed, 87 insertions(+), 45 deletions(-)
> > 
> > -- 
> > 2.43.0
> >


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
  2025-07-21 11:38     ` Lorenzo Stoakes
@ 2025-07-21 12:10       ` Harry Yoo
  2025-07-21 12:15         ` Lorenzo Stoakes
  0 siblings, 1 reply; 14+ messages in thread
From: Harry Yoo @ 2025-07-21 12:10 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: kernel test robot, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Andy Lutomirski, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
	oe-kbuild-all, Linux Memory Management List, H . Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Muchun Song,
	Oscar Salvador, Joao Martins, Jane Chu, Alistair Popple,
	Mike Rapoport, David Hildenbrand, Gwan-gyeong Mun,
	Aneesh Kumar K . V

On Mon, Jul 21, 2025 at 12:38:27PM +0100, Lorenzo Stoakes wrote:
> On Mon, Jul 21, 2025 at 11:40:10AM +0800, kernel test robot wrote:
> > Hi Harry,
> >
> > kernel test robot noticed the following build warnings:
> >
> > [auto build test WARNING on akpm-mm/mm-everything]
> >
> > url:    https://github.com/intel-lab-lkp/linux/commits/Harry-Yoo/mm-move-page-table-sync-declarations-to-asm-pgalloc-h/20250721-074448
> > base:   https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-everything
> > patch link:    https://lore.kernel.org/r/20250720234203.9126-2-harry.yoo%40oracle.com
> > patch subject: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
> > config: i386-buildonly-randconfig-003-20250721 (https://download.01.org/0day-ci/archive/20250721/202507211129.Xbn2bAOg-lkp@intel.com/config)
> > compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
> > reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250721/202507211129.Xbn2bAOg-lkp@intel.com/reproduce)
> >
> > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > the same patch/commit), kindly add following tags
> > | Reported-by: kernel test robot <lkp@intel.com>
> > | Closes: https://lore.kernel.org/oe-kbuild-all/202507211129.Xbn2bAOg-lkp@intel.com/
> >
> > All warnings (new ones prefixed by >>):
> >
> > >> arch/x86/mm/fault.c:265:6: warning: no previous prototype for 'arch_sync_kernel_mappings' [-Wmissing-prototypes]
> >      265 | void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> >          |      ^~~~~~~~~~~~~~~~~~~~~~~~~
> >
> 
> Looks like arch/x86/mm/fault.c, which includes linux/vmalloc.h (such an odd
> place for this decl!), needs to:
> 
> #include <asm/pgalloc.h>

But on x86-32, ARCH_PAGE_TABLE_SYNC_MASK is defined in
arch/x86/include/asm/pgtable-{2,3}level_types.h, which can be included
via <asm/pgtable.h> or <linux/pgtable.h>.

I think it was a mistake to move the declarations to
<asm-generic/pgalloc.h> because if a file includes <asm/pgalloc.h> but
forgets to include <linux/vmalloc.h> or <linux/pgtable.h>, then
arch_sync_kernel_mappings() will be optimized out even when it's needed.
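
Concretely, any translation unit that only sees a zero definition of the
mask compiles the sync away entirely. A sketch of the failure mode, in
terms of the pgd_populate_kernel() helper from patch 2 (the zero define
stands in for whichever generic fallback happens to be in scope):

  /* All the compiler sees without the real per-arch definition: */
  #define ARCH_PAGE_TABLE_SYNC_MASK 0

  /* ...so the check inside pgd_populate_kernel() folds to if (0),
   * and the sync call is silently dropped: */
  if (ARCH_PAGE_TABLE_SYNC_MASK & PGTBL_PGD_MODIFIED)
          arch_sync_kernel_mappings(addr, addr);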

I'll move them to <linux/pgtable.h> and let architectures
override ARCH_PAGE_TABLE_SYNC_MASK by defining their own in <asm/pgtable.h>.
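
Roughly the same pattern <linux/vmalloc.h> uses today; a sketch, assuming
the generic mask stays zero and architectures that need syncing override
it from <asm/pgtable.h>:

  /* <linux/pgtable.h> (sketch) */
  #ifndef ARCH_PAGE_TABLE_SYNC_MASK
  #define ARCH_PAGE_TABLE_SYNC_MASK 0
  #endif

  void arch_sync_kernel_mappings(unsigned long start, unsigned long end);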

> This seems to be a 32-bit build thing, as your series builds locally on my
> x86-64 machine.

Yeah, I only tested it on x86-64 (with 4- and 5-level paging)...
I was unaware that I was breaking x86-32.

Thanks!

-- 
Cheers,
Harry / Hyeonggon

> > vim +/arch_sync_kernel_mappings +265 arch/x86/mm/fault.c
> >
> > 4819e15f740ec88 Joerg Roedel        2020-09-02  264
> > 1e15d374bb1cb95 Alexander Potapenko 2023-01-11 @265  void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
> > f2f13a8535174db Ingo Molnar         2009-02-20  266  {
> > 86cf69f1d893d48 Joerg Roedel        2020-06-01  267  	unsigned long addr;
> > f2f13a8535174db Ingo Molnar         2009-02-20  268
> > 86cf69f1d893d48 Joerg Roedel        2020-06-01  269  	for (addr = start & PMD_MASK;
> > 86cf69f1d893d48 Joerg Roedel        2020-06-01  270  	     addr >= TASK_SIZE_MAX && addr < VMALLOC_END;
> > 86cf69f1d893d48 Joerg Roedel        2020-06-01  271  	     addr += PMD_SIZE) {
> > f2f13a8535174db Ingo Molnar         2009-02-20  272  		struct page *page;
> > f2f13a8535174db Ingo Molnar         2009-02-20  273
> > a79e53d85683c6d Andrea Arcangeli    2011-02-16  274  		spin_lock(&pgd_lock);
> > f2f13a8535174db Ingo Molnar         2009-02-20  275  		list_for_each_entry(page, &pgd_list, lru) {
> > 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  276  			spinlock_t *pgt_lock;
> > 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  277
> > a79e53d85683c6d Andrea Arcangeli    2011-02-16  278  			/* the pgt_lock only for Xen */
> > 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  279  			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
> > 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  280
> > 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  281  			spin_lock(pgt_lock);
> > 86cf69f1d893d48 Joerg Roedel        2020-06-01  282  			vmalloc_sync_one(page_address(page), addr);
> > 617d34d9e5d8326 Jeremy Fitzhardinge 2010-09-21  283  			spin_unlock(pgt_lock);
> > f2f13a8535174db Ingo Molnar         2009-02-20  284  		}
> > a79e53d85683c6d Andrea Arcangeli    2011-02-16  285  		spin_unlock(&pgd_lock);
> > f2f13a8535174db Ingo Molnar         2009-02-20  286  	}
> > f2f13a8535174db Ingo Molnar         2009-02-20  287  }
> > f2f13a8535174db Ingo Molnar         2009-02-20  288
> >
> > --
> > 0-DAY CI Kernel Test Service
> > https://github.com/intel/lkp-tests/wiki


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h
  2025-07-21 12:10       ` Harry Yoo
@ 2025-07-21 12:15         ` Lorenzo Stoakes
  0 siblings, 0 replies; 14+ messages in thread
From: Lorenzo Stoakes @ 2025-07-21 12:15 UTC (permalink / raw)
  To: Harry Yoo
  Cc: kernel test robot, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, Andy Lutomirski, Andrey Ryabinin, Arnd Bergmann,
	Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
	oe-kbuild-all, Linux Memory Management List, H . Peter Anvin,
	Alexander Potapenko, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Juergen Gross, Kevin Brodsky, Muchun Song,
	Oscar Salvador, Joao Martins, Jane Chu, Alistair Popple,
	Mike Rapoport, David Hildenbrand, Gwan-gyeong Mun,
	Aneesh Kumar K . V

On Mon, Jul 21, 2025 at 09:10:36PM +0900, Harry Yoo wrote:
> Yeah, I only tested it on x86-64 (with 4- and 5-level paging)...
> I was unaware that I was breaking x86-32.

32-bit kernels need to die...


^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2025-07-21 12:15 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-07-20 23:41 [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
2025-07-20 23:41 ` [PATCH v2 mm-hotfixes 1/5] mm: move page table sync declarations to asm/pgalloc.h Harry Yoo
2025-07-21  2:56   ` kernel test robot
2025-07-21  3:40   ` kernel test robot
2025-07-21 11:38     ` Lorenzo Stoakes
2025-07-21 12:10       ` Harry Yoo
2025-07-21 12:15         ` Lorenzo Stoakes
2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 2/5] mm: introduce and use {pgd,p4d}_populate_kernel() Harry Yoo
2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 3/5] x86/mm: define ARCH_PAGE_TABLE_SYNC_MASK and arch_sync_kernel_mappings() Harry Yoo
2025-07-21  7:06   ` kernel test robot
2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 4/5] x86/mm: convert p*d_populate{,_init} to _kernel variants Harry Yoo
2025-07-20 23:42 ` [PATCH v2 mm-hotfixes 5/5] x86/mm: drop unnecessary calls to sync_global_pgds() and fold into its sole user Harry Yoo
2025-07-20 23:57 ` [PATCH v2 mm-hotfixes 0/5] mm, arch: a more robust approach to sync top level kernel page tables Harry Yoo
2025-07-21 11:46   ` Harry Yoo

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).