* [PATCH v5 0/4] x86/mm: Improve alloc handling of phys_*_init()
@ 2025-07-11 16:22 Em Sharnoff
2025-07-11 16:24 ` [PATCH v5 1/4] x86/mm: Update mapped addresses in phys_{pmd,pud}_init() Em Sharnoff
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Em Sharnoff @ 2025-07-11 16:22 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H. Peter Anvin, Edgecombe, Rick P,
Oleg Vasilev, Arthur Petukhovsky, Stefan Radig, Misha Sakhnov
Hi folks,
See changelog + more context below.
tl;dr:
* Currently alloc_low_page() uses GFP_ATOMIC after boot, which may fail.
* Those failures aren't currently handled by phys_pud_init() and similar
  functions (see the sketch below).
* Those failures can happen during memory hotplug.
So:
1. Add handling for those allocation failures (patches 1-3)
2. Use GFP_KERNEL instead of GFP_ATOMIC (patch 4)
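To make the failure mode concrete, the pre-patch pattern in
phys_pud_init() (arch/x86/mm/init_64.c) is roughly the following
simplified excerpt; the NULL return can only happen on the post-boot
GFP_ATOMIC path:

    pmd = alloc_low_page();      /* may return NULL after boot (GFP_ATOMIC) */
    phys_pmd_init(pmd, paddr, paddr_end,
                  page_size_mask, prot, init);  /* dereferences pmd */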
Previous version here:
https://lore.kernel.org/all/7d0d307d-71eb-4913-8023-bccc7a8a4a3d@neon.tech/
=== Changelog ===
v2:
- Switch from special-casing zero values to ERR_PTR()
- Add patch to move from GFP_ATOMIC -> GFP_KERNEL
- Move commentary out of the patch message and into this cover letter
v3:
- Fix -Wint-conversion issues
v4:
- new patch: move 'paddr_last' usage into phys_{pud,pmd}_init() so the
return from those functions is no longer needed.
- new patch: make phys_*_init() and their callers return int
v5:
- resend; bumped base commit.
I'm not sure if patch 2/4 ("Allow error returns ...") should be separate
from patch 3/4 ("Handle alloc failure ..."), but it's easy enough to
combine them if need be.
=== Background ===
We recently started observing these null pointer dereferences in
practice (albeit quite rarely), triggered by allocation failures during
virtio-mem hotplug.
We use virtio-mem quite heavily - adding/removing memory based on
resource usage of customer workloads across a fleet of VMs - so it's
somewhat expected that we have occasional allocation failures here, if
we run out of memory before hotplug takes place.
We started seeing this bug after upgrading from 6.6.64 to 6.12.26, but
there didn't appear to be relevant changes in the codepaths involved, so
we figured the upgrade was triggering a latent issue.
The possibility of this issue was also pointed out a while back:
> For alloc_low_pages(), I noticed the callers don’t check for allocation
> failure. I'm a little surprised that there haven't been reports of the
> allocation failing, because these operations could result in a lot more
> pages getting allocated way past boot, and failure causes a NULL
> pointer dereference.
https://lore.kernel.org/all/5aee7bcdf49b1c6b8ee902dd2abd9220169c694b.camel@intel.com/
For completeness, here's an example stack trace we saw (on 6.12.26):
BUG: kernel NULL pointer dereference, address: 0000000000000000
....
Call Trace:
<TASK>
phys_pud_init+0xa0/0x390
phys_p4d_init+0x93/0x330
__kernel_physical_mapping_init+0xa1/0x370
kernel_physical_mapping_init+0xf/0x20
init_memory_mapping+0x1fa/0x430
arch_add_memory+0x2b/0x50
add_memory_resource+0xe6/0x260
add_memory_driver_managed+0x78/0xc0
virtio_mem_add_memory+0x46/0xc0
virtio_mem_sbm_plug_and_add_mb+0xa3/0x160
virtio_mem_run_wq+0x1035/0x16c0
process_one_work+0x17a/0x3c0
worker_thread+0x2c5/0x3f0
? _raw_spin_unlock_irqrestore+0x9/0x30
? __pfx_worker_thread+0x10/0x10
kthread+0xdc/0x110
? __pfx_kthread+0x10/0x10
ret_from_fork+0x35/0x60
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
and the allocation failure preceding it:
kworker/0:2: page allocation failure: order:0, mode:0x920(GFP_ATOMIC|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0
...
Call Trace:
<TASK>
dump_stack_lvl+0x5b/0x70
dump_stack+0x10/0x20
warn_alloc+0x103/0x180
__alloc_pages_slowpath.constprop.0+0x738/0xf30
__alloc_pages_noprof+0x1e9/0x340
alloc_pages_mpol_noprof+0x47/0x100
alloc_pages_noprof+0x4b/0x80
get_free_pages_noprof+0xc/0x40
alloc_low_pages+0xc2/0x150
phys_pud_init+0x82/0x390
...
(everything from phys_pud_init and below was the same)
There's some additional context in a GitHub issue we opened on our side:
https://github.com/neondatabase/autoscaling/issues/1391
=== Reproducing / Testing ===
I was able to partially reproduce the original issue we saw by
modifying phys_pud_init() to simulate alloc_low_page() returning NULL
after boot, and then doing memory hotplug to trigger the "failure".
Something roughly like:
- pmd = alloc_low_page();
+ if (!after_bootmem)
+ pmd = alloc_low_page();
+ else
+ pmd = 0;
To test recovery, I also tried simulating just one alloc_low_page()
failure after boot. This change seemed to handle it at a basic level
(virtio-mem hotplug succeeded with the right amount of memory, after
retrying), but I didn't dig further.
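For reference, the one-shot failure simulation was roughly the
following (illustrative sketch only; 'failed_once' is a made-up name
for the test hack, not something in the series):

    +static bool failed_once;
     ...
    -    pmd = alloc_low_page();
    +    if (after_bootmem && !failed_once) {
    +        failed_once = true;
    +        pmd = NULL;
    +    } else {
    +        pmd = alloc_low_page();
    +    }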
We have also been running this in our production environment, where we
have observed that it fixes the issue.
Em Sharnoff (4):
x86/mm: Update mapped addresses in phys_{pmd,pud}_init()
x86/mm: Allow error returns from phys_*_init()
x86/mm: Handle alloc failure in phys_*_init()
x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot
arch/x86/include/asm/pgtable.h | 3 +-
arch/x86/mm/init.c | 29 ++++++---
arch/x86/mm/init_32.c | 6 +-
arch/x86/mm/init_64.c | 116 ++++++++++++++++++++++-----------
arch/x86/mm/mem_encrypt_amd.c | 8 ++-
arch/x86/mm/mm_internal.h | 13 ++--
6 files changed, 113 insertions(+), 62 deletions(-)
base-commit: e04c78d86a9699d136910cfc0bdcf01087e3267e
--
2.39.5
* [PATCH v5 1/4] x86/mm: Update mapped addresses in phys_{pmd,pud}_init()
2025-07-11 16:22 [PATCH v5 0/4] x86/mm: Improve alloc handling of phys_*_init() Em Sharnoff
@ 2025-07-11 16:24 ` Em Sharnoff
2025-07-11 16:25 ` [PATCH v5 2/4] x86/mm: Allow error returns from phys_*_init() Em Sharnoff
` (2 subsequent siblings)
3 siblings, 0 replies; 5+ messages in thread
From: Em Sharnoff @ 2025-07-11 16:24 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H. Peter Anvin, Edgecombe, Rick P,
Oleg Vasilev, Arthur Petukhovsky, Stefan Radig, Misha Sakhnov
Currently kernel_physical_mapping_init() and the phys_*_init() functions
it calls into return the last physical address mapped ('paddr_last').
This makes it harder to cleanly handle allocation errors in those
functions.
'paddr_last' is used to update 'pfn_mapped'/'max_pfn_mapped', so:
1. Introduce add_paddr_range_mapped() to do the update, translating from
physical addresses to pfns
2. Call add_paddr_range_mapped() in phys_pud_init() where 'paddr_last'
would otherwise be updated due to 1Gi pages.
- Note: this includes places where we set 'paddr_last = paddr_next',
as was added in 20167d3421a0 ("x86-64: Fix accounting in
kernel_physical_mapping_init()")
add_paddr_range_mapped() is probably too expensive to be called every
time a page is updated, so instead, phys_pte_init() continues to return
'paddr_last', and phys_pmd_init() calls add_paddr_range_mapped() only at
the end of its loop (which should mean it's called roughly once per 1Gi
mapped).
Signed-off-by: Em Sharnoff <sharnoff@neon.tech>
---
Changelog:
- v4: Add this patch
---
arch/x86/include/asm/pgtable.h | 3 +-
arch/x86/mm/init.c | 23 +++++----
arch/x86/mm/init_32.c | 6 ++-
arch/x86/mm/init_64.c | 88 +++++++++++++++++-----------------
arch/x86/mm/mm_internal.h | 13 +++--
5 files changed, 69 insertions(+), 64 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 97954c936c54..5d71cb192c57 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1224,8 +1224,7 @@ extern int direct_gbpages;
void init_mem_mapping(void);
void early_alloc_pgt_buf(void);
void __init poking_init(void);
-unsigned long init_memory_mapping(unsigned long start,
- unsigned long end, pgprot_t prot);
+void init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
#ifdef CONFIG_X86_64
extern pgd_t trampoline_pgd_entry;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 7456df985d96..e87466489c66 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -526,16 +526,24 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
return false;
}
+/*
+ * Update max_pfn_mapped and range_pfn_mapped with the range of physical
+ * addresses mapped. The range may overlap with previous calls to this function.
+ */
+void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr)
+{
+ add_pfn_range_mapped(start_paddr >> PAGE_SHIFT, end_paddr >> PAGE_SHIFT);
+}
+
/*
* Setup the direct mapping of the physical memory at PAGE_OFFSET.
* This runs before bootmem is initialized and gets pages directly from
* the physical memory. To access them they are temporarily mapped.
*/
-unsigned long __ref init_memory_mapping(unsigned long start,
- unsigned long end, pgprot_t prot)
+void __ref init_memory_mapping(unsigned long start,
+ unsigned long end, pgprot_t prot)
{
struct map_range mr[NR_RANGE_MR];
- unsigned long ret = 0;
int nr_range, i;
pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
@@ -545,13 +553,10 @@ unsigned long __ref init_memory_mapping(unsigned long start,
nr_range = split_mem_range(mr, 0, start, end);
for (i = 0; i < nr_range; i++)
- ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
- mr[i].page_size_mask,
- prot);
+ kernel_physical_mapping_init(mr[i].start, mr[i].end,
+ mr[i].page_size_mask, prot);
- add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
-
- return ret >> PAGE_SHIFT;
+ return;
}
/*
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 607d6a2e66e2..a9a16d3d0eb2 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -246,7 +246,7 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
* of max_low_pfn pages, by creating page tables starting from address
* PAGE_OFFSET:
*/
-unsigned long __init
+void __init
kernel_physical_mapping_init(unsigned long start,
unsigned long end,
unsigned long page_size_mask,
@@ -383,7 +383,9 @@ kernel_physical_mapping_init(unsigned long start,
mapping_iter = 2;
goto repeat;
}
- return last_map_addr;
+
+ add_paddr_range_mapped(start, last_map_addr);
+ return;
}
#ifdef CONFIG_HIGHMEM
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ee66fae9ebcc..f0dc4a0e8cde 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -503,13 +503,13 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
/*
* Create PMD level page table mapping for physical addresses. The virtual
* and physical address have to be aligned at this level.
- * It returns the last physical address mapped.
*/
-static unsigned long __meminit
+static void __meminit
phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t prot, bool init)
{
unsigned long pages = 0, paddr_next;
+ unsigned long paddr_first = paddr;
unsigned long paddr_last = paddr_end;
int i = pmd_index(paddr);
@@ -580,21 +580,25 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
spin_unlock(&init_mm.page_table_lock);
}
update_page_count(PG_LEVEL_2M, pages);
- return paddr_last;
+ /*
+ * In case of recovery from previous state, add_paddr_range_mapped() may
+ * be called with an overlapping range from previous operations.
+ * It is idempotent, so this is ok.
+ */
+ add_paddr_range_mapped(paddr_first, paddr_last);
+ return;
}
/*
* Create PUD level page table mapping for physical addresses. The virtual
* and physical address do not have to be aligned at this level. KASLR can
* randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
*/
-static unsigned long __meminit
+static void __meminit
phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t _prot, bool init)
{
unsigned long pages = 0, paddr_next;
- unsigned long paddr_last = paddr_end;
unsigned long vaddr = (unsigned long)__va(paddr);
int i = pud_index(vaddr);
@@ -620,10 +624,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
if (!pud_none(*pud)) {
if (!pud_leaf(*pud)) {
pmd = pmd_offset(pud, 0);
- paddr_last = phys_pmd_init(pmd, paddr,
- paddr_end,
- page_size_mask,
- prot, init);
+ phys_pmd_init(pmd, paddr, paddr_end,
+ page_size_mask, prot, init);
continue;
}
/*
@@ -641,7 +643,7 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
if (page_size_mask & (1 << PG_LEVEL_1G)) {
if (!after_bootmem)
pages++;
- paddr_last = paddr_next;
+ add_paddr_range_mapped(paddr, paddr_next);
continue;
}
prot = pte_pgprot(pte_clrhuge(*(pte_t *)pud));
@@ -654,13 +656,13 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
pfn_pud(paddr >> PAGE_SHIFT, prot_sethuge(prot)),
init);
spin_unlock(&init_mm.page_table_lock);
- paddr_last = paddr_next;
+ add_paddr_range_mapped(paddr, paddr_next);
continue;
}
pmd = alloc_low_page();
- paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
- page_size_mask, prot, init);
+ phys_pmd_init(pmd, paddr, paddr_end,
+ page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
pud_populate_init(&init_mm, pud, pmd, init);
@@ -669,22 +671,23 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
update_page_count(PG_LEVEL_1G, pages);
- return paddr_last;
+ return;
}
-static unsigned long __meminit
+static void __meminit
phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t prot, bool init)
{
- unsigned long vaddr, vaddr_end, vaddr_next, paddr_next, paddr_last;
+ unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
- paddr_last = paddr_end;
vaddr = (unsigned long)__va(paddr);
vaddr_end = (unsigned long)__va(paddr_end);
- if (!pgtable_l5_enabled())
- return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
- page_size_mask, prot, init);
+ if (!pgtable_l5_enabled()) {
+ phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+ page_size_mask, prot, init);
+ return;
+ }
for (; vaddr < vaddr_end; vaddr = vaddr_next) {
p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -706,33 +709,32 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
if (!p4d_none(*p4d)) {
pud = pud_offset(p4d, 0);
- paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
- page_size_mask, prot, init);
+ phys_pud_init(pud, paddr, __pa(vaddr_end),
+ page_size_mask, prot, init);
continue;
}
pud = alloc_low_page();
- paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
- page_size_mask, prot, init);
+ phys_pud_init(pud, paddr, __pa(vaddr_end),
+ page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
p4d_populate_init(&init_mm, p4d, pud, init);
spin_unlock(&init_mm.page_table_lock);
}
- return paddr_last;
+ return;
}
-static unsigned long __meminit
+static void __meminit
__kernel_physical_mapping_init(unsigned long paddr_start,
unsigned long paddr_end,
unsigned long page_size_mask,
pgprot_t prot, bool init)
{
bool pgd_changed = false;
- unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next, paddr_last;
+ unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
- paddr_last = paddr_end;
vaddr = (unsigned long)__va(paddr_start);
vaddr_end = (unsigned long)__va(paddr_end);
vaddr_start = vaddr;
@@ -745,16 +747,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
if (pgd_val(*pgd)) {
p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- paddr_last = phys_p4d_init(p4d, __pa(vaddr),
- __pa(vaddr_end),
- page_size_mask,
- prot, init);
+ phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+ page_size_mask, prot, init);
continue;
}
p4d = alloc_low_page();
- paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
- page_size_mask, prot, init);
+ phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+ page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
if (pgtable_l5_enabled())
@@ -770,7 +770,7 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
if (pgd_changed)
sync_global_pgds(vaddr_start, vaddr_end - 1);
- return paddr_last;
+ return;
}
@@ -778,15 +778,15 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
* Create page table mapping for the physical memory for specific physical
* addresses. Note that it can only be used to populate non-present entries.
* The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down.
*/
-unsigned long __meminit
+void __meminit
kernel_physical_mapping_init(unsigned long paddr_start,
unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t prot)
{
- return __kernel_physical_mapping_init(paddr_start, paddr_end,
- page_size_mask, prot, true);
+ __kernel_physical_mapping_init(paddr_start, paddr_end,
+ page_size_mask, prot, true);
}
/*
@@ -795,14 +795,14 @@ kernel_physical_mapping_init(unsigned long paddr_start,
* when updating the mapping. The caller is responsible to flush the TLBs after
* the function returns.
*/
-unsigned long __meminit
+void __meminit
kernel_physical_mapping_change(unsigned long paddr_start,
unsigned long paddr_end,
unsigned long page_size_mask)
{
- return __kernel_physical_mapping_init(paddr_start, paddr_end,
- page_size_mask, PAGE_KERNEL,
- false);
+ __kernel_physical_mapping_init(paddr_start, paddr_end,
+ page_size_mask, PAGE_KERNEL,
+ false);
}
#ifndef CONFIG_NUMA
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 097aadc250f7..5b873191c3c9 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -10,13 +10,12 @@ static inline void *alloc_low_page(void)
void early_ioremap_page_table_range_init(void);
-unsigned long kernel_physical_mapping_init(unsigned long start,
- unsigned long end,
- unsigned long page_size_mask,
- pgprot_t prot);
-unsigned long kernel_physical_mapping_change(unsigned long start,
- unsigned long end,
- unsigned long page_size_mask);
+void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr);
+
+void kernel_physical_mapping_init(unsigned long start, unsigned long end,
+ unsigned long page_size_mask, pgprot_t prot);
+void kernel_physical_mapping_change(unsigned long start, unsigned long end,
+ unsigned long page_size_mask);
void zone_sizes_init(void);
extern int after_bootmem;
--
2.39.5
* [PATCH v5 2/4] x86/mm: Allow error returns from phys_*_init()
2025-07-11 16:22 [PATCH v5 0/4] x86/mm: Improve alloc handling of phys_*_init() Em Sharnoff
2025-07-11 16:24 ` [PATCH v5 1/4] x86/mm: Update mapped addresses in phys_{pmd,pud}_init() Em Sharnoff
@ 2025-07-11 16:25 ` Em Sharnoff
2025-07-11 16:25 ` [PATCH v5 3/4] x86/mm: Handle alloc failure in phys_*_init() Em Sharnoff
2025-07-11 16:26 ` [PATCH v5 4/4] x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot Em Sharnoff
3 siblings, 0 replies; 5+ messages in thread
From: Em Sharnoff @ 2025-07-11 16:25 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H. Peter Anvin, Edgecombe, Rick P,
Oleg Vasilev, Arthur Petukhovsky, Stefan Radig, Misha Sakhnov
Preparation for returning errors when alloc_low_page() fails.
phys_pte_init() is excluded because it doesn't allocate and so can't
fail, and it's useful for it to return 'paddr_last' instead.
This patch depends on the previous patch ("x86/mm: Update mapped
addresses in phys_{pmd,pud}_init()").
Signed-off-by: Em Sharnoff <sharnoff@neon.tech>
---
Changelog:
- v2: Switch from special-casing zero value to using ERR_PTR()
- v3: Fix -Wint-conversion errors
- v4: Switch return type to int, split alloc handling into separate patch.
---
arch/x86/include/asm/pgtable.h | 2 +-
arch/x86/mm/init.c | 14 +++--
arch/x86/mm/init_32.c | 4 +-
arch/x86/mm/init_64.c | 100 ++++++++++++++++++++++-----------
arch/x86/mm/mem_encrypt_amd.c | 8 ++-
arch/x86/mm/mm_internal.h | 8 +--
6 files changed, 87 insertions(+), 49 deletions(-)
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 5d71cb192c57..f964f52327de 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1224,7 +1224,7 @@ extern int direct_gbpages;
void init_mem_mapping(void);
void early_alloc_pgt_buf(void);
void __init poking_init(void);
-void init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
+int init_memory_mapping(unsigned long start, unsigned long end, pgprot_t prot);
#ifdef CONFIG_X86_64
extern pgd_t trampoline_pgd_entry;
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e87466489c66..474a7294016c 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -540,11 +540,12 @@ void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr)
* This runs before bootmem is initialized and gets pages directly from
* the physical memory. To access them they are temporarily mapped.
*/
-void __ref init_memory_mapping(unsigned long start,
+int __ref init_memory_mapping(unsigned long start,
unsigned long end, pgprot_t prot)
{
struct map_range mr[NR_RANGE_MR];
int nr_range, i;
+ int ret;
pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
start, end - 1);
@@ -552,11 +553,14 @@ void __ref init_memory_mapping(unsigned long start,
memset(mr, 0, sizeof(mr));
nr_range = split_mem_range(mr, 0, start, end);
- for (i = 0; i < nr_range; i++)
- kernel_physical_mapping_init(mr[i].start, mr[i].end,
- mr[i].page_size_mask, prot);
+ for (i = 0; i < nr_range; i++) {
+ ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
+ mr[i].page_size_mask, prot);
+ if (ret)
+ return ret;
+ }
- return;
+ return 0;
}
/*
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index a9a16d3d0eb2..6e13685d7ced 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -246,7 +246,7 @@ static inline int is_x86_32_kernel_text(unsigned long addr)
* of max_low_pfn pages, by creating page tables starting from address
* PAGE_OFFSET:
*/
-void __init
+int __init
kernel_physical_mapping_init(unsigned long start,
unsigned long end,
unsigned long page_size_mask,
@@ -385,7 +385,7 @@ kernel_physical_mapping_init(unsigned long start,
}
add_paddr_range_mapped(start, last_map_addr);
- return;
+ return 0;
}
#ifdef CONFIG_HIGHMEM
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index f0dc4a0e8cde..ca71eaec1db5 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -504,7 +504,7 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
* Create PMD level page table mapping for physical addresses. The virtual
* and physical address have to be aligned at this level.
*/
-static void __meminit
+static int __meminit
phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t prot, bool init)
{
@@ -586,7 +586,7 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
* It is idempotent, so this is ok.
*/
add_paddr_range_mapped(paddr_first, paddr_last);
- return;
+ return 0;
}
/*
@@ -594,12 +594,14 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
* and physical address do not have to be aligned at this level. KASLR can
* randomize virtual addresses up to this level.
*/
-static void __meminit
+static int __meminit
phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t _prot, bool init)
{
unsigned long pages = 0, paddr_next;
unsigned long vaddr = (unsigned long)__va(paddr);
+ int ret;
+
int i = pud_index(vaddr);
for (; i < PTRS_PER_PUD; i++, paddr = paddr_next) {
@@ -624,8 +626,10 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
if (!pud_none(*pud)) {
if (!pud_leaf(*pud)) {
pmd = pmd_offset(pud, 0);
- phys_pmd_init(pmd, paddr, paddr_end,
- page_size_mask, prot, init);
+ ret = phys_pmd_init(pmd, paddr, paddr_end,
+ page_size_mask, prot, init);
+ if (ret)
+ return ret;
continue;
}
/*
@@ -661,33 +665,39 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
}
pmd = alloc_low_page();
- phys_pmd_init(pmd, paddr, paddr_end,
- page_size_mask, prot, init);
+ ret = phys_pmd_init(pmd, paddr, paddr_end,
+ page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
pud_populate_init(&init_mm, pud, pmd, init);
spin_unlock(&init_mm.page_table_lock);
+
+ /*
+ * Bail only after updating pud to keep progress from pmd across
+ * retries.
+ */
+ if (ret)
+ return ret;
}
update_page_count(PG_LEVEL_1G, pages);
- return;
+ return 0;
}
-static void __meminit
+static int __meminit
phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t prot, bool init)
{
unsigned long vaddr, vaddr_end, vaddr_next, paddr_next;
+ int ret;
vaddr = (unsigned long)__va(paddr);
vaddr_end = (unsigned long)__va(paddr_end);
- if (!pgtable_l5_enabled()) {
- phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
- page_size_mask, prot, init);
- return;
- }
+ if (!pgtable_l5_enabled())
+ return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end,
+ page_size_mask, prot, init);
for (; vaddr < vaddr_end; vaddr = vaddr_next) {
p4d_t *p4d = p4d_page + p4d_index(vaddr);
@@ -709,24 +719,33 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
if (!p4d_none(*p4d)) {
pud = pud_offset(p4d, 0);
- phys_pud_init(pud, paddr, __pa(vaddr_end),
- page_size_mask, prot, init);
+ ret = phys_pud_init(pud, paddr, __pa(vaddr_end),
+ page_size_mask, prot, init);
+ if (ret)
+ return ret;
continue;
}
pud = alloc_low_page();
- phys_pud_init(pud, paddr, __pa(vaddr_end),
- page_size_mask, prot, init);
+ ret = phys_pud_init(pud, paddr, __pa(vaddr_end),
+ page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
p4d_populate_init(&init_mm, p4d, pud, init);
spin_unlock(&init_mm.page_table_lock);
+
+ /*
+ * Bail only after updating p4d to keep progress from pud across
+ * retries.
+ */
+ if (ret)
+ return ret;
}
- return;
+ return 0;
}
-static void __meminit
+static int __meminit
__kernel_physical_mapping_init(unsigned long paddr_start,
unsigned long paddr_end,
unsigned long page_size_mask,
@@ -734,6 +753,7 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
{
bool pgd_changed = false;
unsigned long vaddr, vaddr_start, vaddr_end, vaddr_next;
+ int ret;
vaddr = (unsigned long)__va(paddr_start);
vaddr_end = (unsigned long)__va(paddr_end);
@@ -747,14 +767,16 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
if (pgd_val(*pgd)) {
p4d = (p4d_t *)pgd_page_vaddr(*pgd);
- phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
- page_size_mask, prot, init);
+ ret = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+ page_size_mask, prot, init);
+ if (ret)
+ return ret;
continue;
}
p4d = alloc_low_page();
- phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
- page_size_mask, prot, init);
+ ret = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
+ page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
if (pgtable_l5_enabled())
@@ -762,15 +784,22 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
else
p4d_populate_init(&init_mm, p4d_offset(pgd, vaddr),
(pud_t *) p4d, init);
-
spin_unlock(&init_mm.page_table_lock);
+
+ /*
+ * Bail only after updating pgd/p4d to keep progress from p4d
+ * across retries.
+ */
+ if (ret)
+ return ret;
+
pgd_changed = true;
}
if (pgd_changed)
sync_global_pgds(vaddr_start, vaddr_end - 1);
- return;
+ return 0;
}
@@ -780,13 +809,13 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
* The virtual and physical addresses have to be aligned on PMD level
* down.
*/
-void __meminit
+int __meminit
kernel_physical_mapping_init(unsigned long paddr_start,
unsigned long paddr_end,
unsigned long page_size_mask, pgprot_t prot)
{
- __kernel_physical_mapping_init(paddr_start, paddr_end,
- page_size_mask, prot, true);
+ return __kernel_physical_mapping_init(paddr_start, paddr_end,
+ page_size_mask, prot, true);
}
/*
@@ -795,14 +824,14 @@ kernel_physical_mapping_init(unsigned long paddr_start,
* when updating the mapping. The caller is responsible to flush the TLBs after
* the function returns.
*/
-void __meminit
+int __meminit
kernel_physical_mapping_change(unsigned long paddr_start,
unsigned long paddr_end,
unsigned long page_size_mask)
{
- __kernel_physical_mapping_init(paddr_start, paddr_end,
- page_size_mask, PAGE_KERNEL,
- false);
+ return __kernel_physical_mapping_init(paddr_start, paddr_end,
+ page_size_mask, PAGE_KERNEL,
+ false);
}
#ifndef CONFIG_NUMA
@@ -984,8 +1013,11 @@ int arch_add_memory(int nid, u64 start, u64 size,
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
+ int ret;
- init_memory_mapping(start, start + size, params->pgprot);
+ ret = init_memory_mapping(start, start + size, params->pgprot);
+ if (ret)
+ return ret;
return add_pages(nid, start_pfn, nr_pages, params);
}
diff --git a/arch/x86/mm/mem_encrypt_amd.c b/arch/x86/mm/mem_encrypt_amd.c
index faf3a13fb6ba..15174940d218 100644
--- a/arch/x86/mm/mem_encrypt_amd.c
+++ b/arch/x86/mm/mem_encrypt_amd.c
@@ -446,9 +446,11 @@ static int __init early_set_memory_enc_dec(unsigned long vaddr,
* kernel_physical_mapping_change() does not flush the TLBs, so
* a TLB flush is required after we exit from the for loop.
*/
- kernel_physical_mapping_change(__pa(vaddr & pmask),
- __pa((vaddr_end & pmask) + psize),
- split_page_size_mask);
+ ret = kernel_physical_mapping_change(__pa(vaddr & pmask),
+ __pa((vaddr_end & pmask) + psize),
+ split_page_size_mask);
+ if (ret)
+ return ret;
}
ret = 0;
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 5b873191c3c9..7f948d5377f0 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -12,10 +12,10 @@ void early_ioremap_page_table_range_init(void);
void add_paddr_range_mapped(unsigned long start_paddr, unsigned long end_paddr);
-void kernel_physical_mapping_init(unsigned long start, unsigned long end,
- unsigned long page_size_mask, pgprot_t prot);
-void kernel_physical_mapping_change(unsigned long start, unsigned long end,
- unsigned long page_size_mask);
+int kernel_physical_mapping_init(unsigned long start, unsigned long end,
+ unsigned long page_size_mask, pgprot_t prot);
+int kernel_physical_mapping_change(unsigned long start, unsigned long end,
+ unsigned long page_size_mask);
void zone_sizes_init(void);
extern int after_bootmem;
--
2.39.5
* [PATCH v5 3/4] x86/mm: Handle alloc failure in phys_*_init()
2025-07-11 16:22 [PATCH v5 0/4] x86/mm: Improve alloc handling of phys_*_init() Em Sharnoff
2025-07-11 16:24 ` [PATCH v5 1/4] x86/mm: Update mapped addresses in phys_{pmd,pud}_init() Em Sharnoff
2025-07-11 16:25 ` [PATCH v5 2/4] x86/mm: Allow error returns from phys_*_init() Em Sharnoff
@ 2025-07-11 16:25 ` Em Sharnoff
2025-07-11 16:26 ` [PATCH v5 4/4] x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot Em Sharnoff
3 siblings, 0 replies; 5+ messages in thread
From: Em Sharnoff @ 2025-07-11 16:25 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H. Peter Anvin, Edgecombe, Rick P,
Oleg Vasilev, Arthur Petukhovsky, Stefan Radig, Misha Sakhnov
During memory hotplug, allocation failures in phys_*_init() aren't
handled, which results in a null pointer dereference if they occur.
This patch depends on the previous patch ("x86/mm: Allow error returns
from phys_*_init()").
Signed-off-by: Em Sharnoff <sharnoff@neon.tech>
---
Changelog:
- v4: Split this patch out from the error handling changes
---
arch/x86/mm/init_64.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ca71eaec1db5..eced309a4015 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -573,6 +573,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
}
pte = alloc_low_page();
+ if (!pte)
+ return -ENOMEM;
paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
spin_lock(&init_mm.page_table_lock);
@@ -665,6 +667,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
}
pmd = alloc_low_page();
+ if (!pmd)
+ return -ENOMEM;
ret = phys_pmd_init(pmd, paddr, paddr_end,
page_size_mask, prot, init);
@@ -727,6 +731,8 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
}
pud = alloc_low_page();
+ if (!pud)
+ return -ENOMEM;
ret = phys_pud_init(pud, paddr, __pa(vaddr_end),
page_size_mask, prot, init);
@@ -775,6 +781,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
}
p4d = alloc_low_page();
+ if (!p4d)
+ return -ENOMEM;
ret = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
page_size_mask, prot, init);
--
2.39.5
* [PATCH v5 4/4] x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot
2025-07-11 16:22 [PATCH v5 0/4] x86/mm: Improve alloc handling of phys_*_init() Em Sharnoff
` (2 preceding siblings ...)
2025-07-11 16:25 ` [PATCH v5 3/4] x86/mm: Handle alloc failure in phys_*_init() Em Sharnoff
@ 2025-07-11 16:26 ` Em Sharnoff
3 siblings, 0 replies; 5+ messages in thread
From: Em Sharnoff @ 2025-07-11 16:26 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Thomas Gleixner,
Ingo Molnar, Borislav Petkov, H. Peter Anvin, Edgecombe, Rick P,
Oleg Vasilev, Arthur Petukhovsky, Stefan Radig, Misha Sakhnov
Currently it's GFP_ATOMIC. GFP_KERNEL seems more correct.
From Ingo M. [1]
> There's no real reason why it should be GFP_ATOMIC AFAICS, other than
> some historic inertia that nobody bothered to fix.
and previously Mike R. [2]
> The few callers that effectively use page allocator for the direct map
> updates are gart_iommu_init() and memory hotplug. Neither of them
> happen in an atomic context so there is no reason to use GFP_ATOMIC
> for these allocations.
>
> Replace GFP_ATOMIC with GFP_KERNEL to avoid using atomic reserves for
> allocations that do not require that.
[1]: https://lore.kernel.org/all/aEE6_S2a-1tk1dtI@gmail.com/
[2]: https://lore.kernel.org/all/20211111110241.25968-5-rppt@kernel.org/
Signed-off-by: Em Sharnoff <sharnoff@neon.tech>
---
Changelog:
- v2: Add this patch
- v3: No changes
- v4: No changes
---
arch/x86/mm/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 474a7294016c..b37ac1d546af 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -132,7 +132,7 @@ __ref void *alloc_low_pages(unsigned int num)
unsigned int order;
order = get_order((unsigned long)num << PAGE_SHIFT);
- return (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
+ return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
}
if ((pgt_buf_end + num) > pgt_buf_top || !can_use_brk_pgt) {
--
2.39.5