* [PATCH v2 0/2] x86/mm: Improve alloc handling of phys_*_init()
@ 2025-06-09 10:32 Em Sharnoff
2025-06-09 10:33 ` [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init() Em Sharnoff
2025-06-09 10:34 ` [PATCH v2 2/2] x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot Em Sharnoff
0 siblings, 2 replies; 6+ messages in thread
From: Em Sharnoff @ 2025-06-09 10:32 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Ingo Molnar, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
Edgecombe, Rick P, Oleg Vasilev, Arthur Petukhovsky, Stefan Radig,
Misha Sakhnov
Hi (again) folks,
See changelog + more context below.
tl;dr:
* Currently alloc_low_page() uses GFP_ATOMIC after boot, which may fail
* Those failures aren't handled by phys_pud_init() and similar
functions.
* Those failures can happen during memory hotplug
So:
1. Add handling for those allocation failures
2. Use GFP_KERNEL instead of GFP_ATOMIC
Previous version here, if you missed it:
https://lore.kernel.org/all/9f4c0972-a123-4cc3-89f2-ed3490371e65@neon.tech/
=== Changelog ===
v2:
- Switch from special-casing zero values to ERR_PTR()
- Add patch to move from GFP_ATOMIC -> GFP_KERNEL
- Move commentary out of the patch message and into this cover letter
=== Background ===
We recently started observing the resulting null pointer dereferences
in practice (albeit quite rarely), triggered by allocation failures
during virtio-mem hotplug.
We use virtio-mem quite heavily - adding/removing memory based on
resource usage of customer workloads across a fleet of VMs - so it's
somewhat expected that we have occasional allocation failures here, if
we run out of memory before hotplug takes place.
We started seeing this bug after upgrading from 6.6.64 to 6.12.26, but
there didn't appear to be relevant changes in the codepaths involved, so
we figured the upgrade was triggering a latent issue.
The possibility of this issue was also pointed out a while back:
> For alloc_low_pages(), I noticed the callers don’t check for allocation
> failure. I'm a little surprised that there haven't been reports of the
> allocation failing, because these operations could result in a lot more
> pages getting allocated way past boot, and failure causes a NULL
> pointer dereference.
https://lore.kernel.org/all/5aee7bcdf49b1c6b8ee902dd2abd9220169c694b.camel@intel.com/
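To make the failure mode concrete, the pre-patch pattern in
phys_pud_init() and friends looks roughly like this (a simplified
sketch of the existing code, not a verbatim excerpt):

	pmd = alloc_low_page();	/* can return NULL after boot */
	paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
				   page_size_mask, prot, init);

phys_pmd_init() immediately indexes into and reads the new page, so a
NULL return faults at (or near) address 0, which matches the trace
below.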
For completeness, here's an example stack trace we saw (on 6.12.26):
BUG: kernel NULL pointer dereference, address: 0000000000000000
....
Call Trace:
<TASK>
phys_pud_init+0xa0/0x390
phys_p4d_init+0x93/0x330
__kernel_physical_mapping_init+0xa1/0x370
kernel_physical_mapping_init+0xf/0x20
init_memory_mapping+0x1fa/0x430
arch_add_memory+0x2b/0x50
add_memory_resource+0xe6/0x260
add_memory_driver_managed+0x78/0xc0
virtio_mem_add_memory+0x46/0xc0
virtio_mem_sbm_plug_and_add_mb+0xa3/0x160
virtio_mem_run_wq+0x1035/0x16c0
process_one_work+0x17a/0x3c0
worker_thread+0x2c5/0x3f0
? _raw_spin_unlock_irqrestore+0x9/0x30
? __pfx_worker_thread+0x10/0x10
kthread+0xdc/0x110
? __pfx_kthread+0x10/0x10
ret_from_fork+0x35/0x60
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
and the allocation failure preceding it:
kworker/0:2: page allocation failure: order:0, mode:0x920(GFP_ATOMIC|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0
...
Call Trace:
<TASK>
dump_stack_lvl+0x5b/0x70
dump_stack+0x10/0x20
warn_alloc+0x103/0x180
__alloc_pages_slowpath.constprop.0+0x738/0xf30
__alloc_pages_noprof+0x1e9/0x340
alloc_pages_mpol_noprof+0x47/0x100
alloc_pages_noprof+0x4b/0x80
get_free_pages_noprof+0xc/0x40
alloc_low_pages+0xc2/0x150
phys_pud_init+0x82/0x390
...
(everything from phys_pud_init and below was the same)
There's some additional context in a GitHub issue we opened on our side:
https://github.com/neondatabase/autoscaling/issues/1391
=== Reproducing / Testing ===
I was able to partially reproduce the original issue we saw by
modifying phys_pud_init() to simulate alloc_low_page() returning NULL
after boot, and then doing memory hotplug to trigger the "failure".
Something roughly like:
- pmd = alloc_low_page();
+ if (!after_bootmem)
+ pmd = alloc_low_page();
+ else
+ pmd = NULL;
To test recovery, I also tried simulating just one alloc_low_page()
failure after boot. This change seemed to handle it at a basic level
(virtio-mem hotplug succeeded with the right amount, after retrying),
but I didn't dig further.
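For reference, the one-shot variant looked something like this
(hypothetical sketch; "failed_once" is a name invented here):

	static bool failed_once;

	if (after_bootmem && !failed_once) {
		failed_once = true;
		pmd = NULL;	/* simulate a single allocation failure */
	} else {
		pmd = alloc_low_page();
	}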
We also plan to test this in our production environment (where we should
see the difference after a few days); as of 2025-06-09, we haven't yet
rolled that out.
Em Sharnoff (2):
x86/mm: Handle alloc failure in phys_*_init()
x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot
arch/x86/mm/init.c | 8 +++++--
arch/x86/mm/init_64.c | 54 +++++++++++++++++++++++++++++++++++++++----
2 files changed, 56 insertions(+), 6 deletions(-)
base-commit: 82f2b0b97b36ee3fcddf0f0780a9a0825d52fec3
--
2.39.5
* [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init()
2025-06-09 10:32 [PATCH v2 0/2] x86/mm: Improve alloc handling of phys_*_init() Em Sharnoff
@ 2025-06-09 10:33 ` Em Sharnoff
2025-06-09 17:15 ` kernel test robot
2025-06-09 17:56 ` kernel test robot
2025-06-09 10:34 ` [PATCH v2 2/2] x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot Em Sharnoff
1 sibling, 2 replies; 6+ messages in thread
From: Em Sharnoff @ 2025-06-09 10:33 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Ingo Molnar, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
Edgecombe, Rick P, Oleg Vasilev, Arthur Petukhovsky, Stefan Radig,
Misha Sakhnov
During memory hotplug, allocation failures in phys_*_init() aren't
handled, which results in a null pointer dereference when they occur.
To handle that, change phys_pud_init() and similar functions to return
allocation errors via ERR_PTR() and check for that in arch_add_memory().
Signed-off-by: Em Sharnoff <sharnoff@neon.tech>
---
Changelog:
- v2: switch from special-casing zero value to using ERR_PTR()
---
arch/x86/mm/init.c | 6 ++++-
arch/x86/mm/init_64.c | 54 +++++++++++++++++++++++++++++++++++++++----
2 files changed, 55 insertions(+), 5 deletions(-)
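Note below the fold: the series threads errors through functions whose
return type is unsigned long, so ERR_PTR() values must be cast on both
ends. A minimal sketch of the convention, with illustrative call sites:

	/* encode side, in a function returning unsigned long: */
	if (!pte)
		return (unsigned long)ERR_PTR(-ENOMEM);

	/* decode side: IS_ERR()/PTR_ERR() take a pointer, so a caller
	 * holding the result in an unsigned long must cast first: */
	if (IS_ERR((void *)paddr_last))
		return (int)PTR_ERR((void *)paddr_last);

As the kernel test robot reports later in the thread show, omitting
that cast is exactly what breaks the build.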
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index bfa444a7dbb0..82dd5ce03dd6 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -533,6 +533,7 @@ bool pfn_range_is_mapped(unsigned long start_pfn, unsigned long end_pfn)
* Setup the direct mapping of the physical memory at PAGE_OFFSET.
* This runs before bootmem is initialized and gets pages directly from
* the physical memory. To access them they are temporarily mapped.
+ * Allocation errors are returned with ERR_PTR.
*/
unsigned long __ref init_memory_mapping(unsigned long start,
unsigned long end, pgprot_t prot)
@@ -547,10 +548,13 @@ unsigned long __ref init_memory_mapping(unsigned long start,
memset(mr, 0, sizeof(mr));
nr_range = split_mem_range(mr, 0, start, end);
- for (i = 0; i < nr_range; i++)
+ for (i = 0; i < nr_range; i++) {
ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
mr[i].page_size_mask,
prot);
+ if (IS_ERR(ret))
+ return ret;
+ }
add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 7c4f6f591f2b..3ab261aa8eff 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -502,7 +502,8 @@ phys_pte_init(pte_t *pte_page, unsigned long paddr, unsigned long paddr_end,
/*
* Create PMD level page table mapping for physical addresses. The virtual
* and physical address have to be aligned at this level.
- * It returns the last physical address mapped.
+ * It returns the last physical address mapped. Allocation errors are
+ * returned with ERR_PTR.
*/
static unsigned long __meminit
phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
@@ -572,7 +573,14 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
}
pte = alloc_low_page();
+ if (!pte)
+ return (unsigned long)ERR_PTR(-ENOMEM);
paddr_last = phys_pte_init(pte, paddr, paddr_end, new_prot, init);
+ /*
+ * phys_{pmd,pud,p4d}_init return allocation errors via ERR_PTR.
+ * phys_pte_init makes no allocations, so it should not error.
+ */
+ BUG_ON(IS_ERR(paddr_last));
spin_lock(&init_mm.page_table_lock);
pmd_populate_kernel_init(&init_mm, pmd, pte, init);
@@ -586,7 +594,8 @@ phys_pmd_init(pmd_t *pmd_page, unsigned long paddr, unsigned long paddr_end,
* Create PUD level page table mapping for physical addresses. The virtual
* and physical address do not have to be aligned at this level. KASLR can
* randomize virtual addresses up to this level.
- * It returns the last physical address mapped.
+ * It returns the last physical address mapped. Allocation errors are
+ * returned with ERR_PTR.
*/
static unsigned long __meminit
phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
@@ -623,6 +632,8 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
paddr_end,
page_size_mask,
prot, init);
+ if (IS_ERR(paddr_last))
+ return paddr_last;
continue;
}
/*
@@ -658,12 +669,22 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
}
pmd = alloc_low_page();
+ if (!pmd)
+ return (unsigned long)ERR_PTR(-ENOMEM);
paddr_last = phys_pmd_init(pmd, paddr, paddr_end,
page_size_mask, prot, init);
+ /*
+ * We might have IS_ERR(paddr_last) if allocation failed, but we should
+ * still update pud before bailing, so that subsequent retries can pick
+ * up on progress (here and in phys_pmd_init) without leaking pmd.
+ */
spin_lock(&init_mm.page_table_lock);
pud_populate_init(&init_mm, pud, pmd, init);
spin_unlock(&init_mm.page_table_lock);
+
+ if (IS_ERR(paddr_last))
+ return paddr_last;
}
update_page_count(PG_LEVEL_1G, pages);
@@ -707,16 +728,26 @@ phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
pud = pud_offset(p4d, 0);
paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
page_size_mask, prot, init);
+ if (IS_ERR(paddr_last))
+ return paddr_last;
continue;
}
pud = alloc_low_page();
+ if (!pud)
+ return (unsigned long)ERR_PTR(-ENOMEM);
paddr_last = phys_pud_init(pud, paddr, __pa(vaddr_end),
page_size_mask, prot, init);
spin_lock(&init_mm.page_table_lock);
p4d_populate_init(&init_mm, p4d, pud, init);
spin_unlock(&init_mm.page_table_lock);
+
+ /*
+ * Bail only after updating p4d to keep progress from pud across retries.
+ */
+ if (IS_ERR(paddr_last))
+ return paddr_last;
}
return paddr_last;
@@ -748,10 +779,14 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
__pa(vaddr_end),
page_size_mask,
prot, init);
+ if (IS_ERR(paddr_last))
+ return paddr_last;
continue;
}
p4d = alloc_low_page();
+ if (!p4d)
+ return (unsigned long)ERR_PTR(-ENOMEM);
paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
page_size_mask, prot, init);
@@ -763,6 +798,13 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
(pud_t *) p4d, init);
spin_unlock(&init_mm.page_table_lock);
+
+ /*
+ * Bail only after updating pgd/p4d to keep progress from p4d across retries.
+ */
+ if (IS_ERR(paddr_last))
+ return paddr_last;
+
pgd_changed = true;
}
@@ -777,7 +819,8 @@ __kernel_physical_mapping_init(unsigned long paddr_start,
* Create page table mapping for the physical memory for specific physical
* addresses. Note that it can only be used to populate non-present entries.
* The virtual and physical addresses have to be aligned on PMD level
- * down. It returns the last physical address mapped.
+ * down. It returns the last physical address mapped. Allocation errors are
+ * returned with ERR_PTR.
*/
unsigned long __meminit
kernel_physical_mapping_init(unsigned long paddr_start,
@@ -980,8 +1023,11 @@ int arch_add_memory(int nid, u64 start, u64 size,
{
unsigned long start_pfn = start >> PAGE_SHIFT;
unsigned long nr_pages = size >> PAGE_SHIFT;
+ unsigned long ret = 0;
- init_memory_mapping(start, start + size, params->pgprot);
+ ret = init_memory_mapping(start, start + size, params->pgprot);
+ if (IS_ERR(ret))
+ return (int)PTR_ERR(ret);
return add_pages(nid, start_pfn, nr_pages, params);
}
--
2.39.5
* [PATCH v2 2/2] x86/mm: Use GFP_KERNEL for alloc_low_pages() after boot
2025-06-09 10:32 [PATCH v2 0/2] x86/mm: Improve alloc handling of phys_*_init() Em Sharnoff
2025-06-09 10:33 ` [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init() Em Sharnoff
@ 2025-06-09 10:34 ` Em Sharnoff
1 sibling, 0 replies; 6+ messages in thread
From: Em Sharnoff @ 2025-06-09 10:34 UTC (permalink / raw)
To: linux-kernel, x86, linux-mm
Cc: Ingo Molnar, H. Peter Anvin, Dave Hansen, Andy Lutomirski,
Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
Edgecombe, Rick P, Oleg Vasilev, Arthur Petukhovsky, Stefan Radig,
Misha Sakhnov
Currently it's GFP_ATOMIC; GFP_KERNEL is more appropriate, since none
of the post-boot callers run in atomic context.
From Ingo M. [1]
> There's no real reason why it should be GFP_ATOMIC AFAICS, other than
> some historic inertia that nobody bothered to fix.
and previously Mike R. [2]
> The few callers that effectively use page allocator for the direct map
> updates are gart_iommu_init() and memory hotplug. Neither of them
> happen in an atomic context so there is no reason to use GFP_ATOMIC
> for these allocations.
>
> Replace GFP_ATOMIC with GFP_KERNEL to avoid using atomic reserves for
> allocations that do not require that.
[1]: https://lore.kernel.org/all/aEE6_S2a-1tk1dtI@gmail.com/
[2]: https://lore.kernel.org/all/20211111110241.25968-5-rppt@kernel.org/
Signed-off-by: Em Sharnoff <sharnoff@neon.tech>
---
Changelog:
- v2: Add this patch
---
arch/x86/mm/init.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
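As a side note, one way to make the process-context assumption explicit
(a hypothetical hardening, not part of this patch) would be to assert
it in the post-boot branch:

	if (after_bootmem) {
		unsigned int order;

		might_sleep();	/* GFP_KERNEL may block; callers must allow it */
		order = get_order((unsigned long)num << PAGE_SHIFT);
		return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
	}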
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 82dd5ce03dd6..bb5fe21f4794 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -131,7 +131,7 @@ __ref void *alloc_low_pages(unsigned int num)
unsigned int order;
order = get_order((unsigned long)num << PAGE_SHIFT);
- return (void *)__get_free_pages(GFP_ATOMIC | __GFP_ZERO, order);
+ return (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
}
if ((pgt_buf_end + num) > pgt_buf_top || !can_use_brk_pgt) {
--
2.39.5
* Re: [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init()
2025-06-09 10:33 ` [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init() Em Sharnoff
@ 2025-06-09 17:15 ` kernel test robot
2025-06-10 10:19 ` Em Sharnoff
2025-06-09 17:56 ` kernel test robot
1 sibling, 1 reply; 6+ messages in thread
From: kernel test robot @ 2025-06-09 17:15 UTC (permalink / raw)
To: Em Sharnoff, linux-kernel, x86, linux-mm
Cc: llvm, oe-kbuild-all, Ingo Molnar, H. Peter Anvin, Dave Hansen,
Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
Edgecombe, Rick P, Oleg Vasilev, Arthur Petukhovsky, Stefan Radig,
Misha Sakhnov
Hi Em,
kernel test robot noticed the following build errors:
[auto build test ERROR on 82f2b0b97b36ee3fcddf0f0780a9a0825d52fec3]
url: https://github.com/intel-lab-lkp/linux/commits/Em-Sharnoff/x86-mm-Handle-alloc-failure-in-phys_-_init/20250609-183537
base: 82f2b0b97b36ee3fcddf0f0780a9a0825d52fec3
patch link: https://lore.kernel.org/r/25c5e747-107f-4450-8eb0-11b2f0dab14d%40neon.tech
patch subject: [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init()
config: i386-buildonly-randconfig-002-20250609 (https://download.01.org/0day-ci/archive/20250610/202506100041.N8Bgx8q0-lkp@intel.com/config)
compiler: clang version 20.1.2 (https://github.com/llvm/llvm-project 58df0ef89dd64126512e4ee27b4ac3fd8ddf6247)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250610/202506100041.N8Bgx8q0-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506100041.N8Bgx8q0-lkp@intel.com/
All errors (new ones prefixed by >>):
>> arch/x86/mm/init.c:555:14: error: incompatible integer to pointer conversion passing 'unsigned long' to parameter of type 'const void *' [-Wint-conversion]
555 | if (IS_ERR(ret))
| ^~~
include/linux/err.h:68:60: note: passing argument to parameter 'ptr' here
68 | static inline bool __must_check IS_ERR(__force const void *ptr)
| ^
1 error generated.
vim +555 arch/x86/mm/init.c
531
532 /*
533 * Setup the direct mapping of the physical memory at PAGE_OFFSET.
534 * This runs before bootmem is initialized and gets pages directly from
535 * the physical memory. To access them they are temporarily mapped.
536 * Allocation errors are returned with ERR_PTR.
537 */
538 unsigned long __ref init_memory_mapping(unsigned long start,
539 unsigned long end, pgprot_t prot)
540 {
541 struct map_range mr[NR_RANGE_MR];
542 unsigned long ret = 0;
543 int nr_range, i;
544
545 pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
546 start, end - 1);
547
548 memset(mr, 0, sizeof(mr));
549 nr_range = split_mem_range(mr, 0, start, end);
550
551 for (i = 0; i < nr_range; i++) {
552 ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
553 mr[i].page_size_mask,
554 prot);
> 555 if (IS_ERR(ret))
556 return ret;
557 }
558
559 add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
560
561 return ret >> PAGE_SHIFT;
562 }
563
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init()
2025-06-09 10:33 ` [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init() Em Sharnoff
2025-06-09 17:15 ` kernel test robot
@ 2025-06-09 17:56 ` kernel test robot
1 sibling, 0 replies; 6+ messages in thread
From: kernel test robot @ 2025-06-09 17:56 UTC (permalink / raw)
To: Em Sharnoff, linux-kernel, x86, linux-mm
Cc: oe-kbuild-all, Ingo Molnar, H. Peter Anvin, Dave Hansen,
Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
Edgecombe, Rick P, Oleg Vasilev, Arthur Petukhovsky, Stefan Radig,
Misha Sakhnov
Hi Em,
kernel test robot noticed the following build warnings:
[auto build test WARNING on 82f2b0b97b36ee3fcddf0f0780a9a0825d52fec3]
url: https://github.com/intel-lab-lkp/linux/commits/Em-Sharnoff/x86-mm-Handle-alloc-failure-in-phys_-_init/20250609-183537
base: 82f2b0b97b36ee3fcddf0f0780a9a0825d52fec3
patch link: https://lore.kernel.org/r/25c5e747-107f-4450-8eb0-11b2f0dab14d%40neon.tech
patch subject: [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init()
config: i386-buildonly-randconfig-006-20250609 (https://download.01.org/0day-ci/archive/20250610/202506100135.4iTfYLoH-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250610/202506100135.4iTfYLoH-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202506100135.4iTfYLoH-lkp@intel.com/
All warnings (new ones prefixed by >>):
arch/x86/mm/init.c: In function 'init_memory_mapping':
>> arch/x86/mm/init.c:555:28: warning: passing argument 1 of 'IS_ERR' makes pointer from integer without a cast [-Wint-conversion]
555 | if (IS_ERR(ret))
| ^~~
| |
| long unsigned int
In file included from include/linux/string.h:11,
from arch/x86/include/asm/page_32.h:18,
from arch/x86/include/asm/page.h:14,
from arch/x86/include/asm/thread_info.h:12,
from include/linux/thread_info.h:60,
from include/linux/spinlock.h:60,
from include/linux/mmzone.h:8,
from include/linux/gfp.h:7,
from arch/x86/mm/init.c:1:
include/linux/err.h:68:60: note: expected 'const void *' but argument is of type 'long unsigned int'
68 | static inline bool __must_check IS_ERR(__force const void *ptr)
| ~~~~~~~~~~~~^~~
vim +/IS_ERR +555 arch/x86/mm/init.c
531
532 /*
533 * Setup the direct mapping of the physical memory at PAGE_OFFSET.
534 * This runs before bootmem is initialized and gets pages directly from
535 * the physical memory. To access them they are temporarily mapped.
536 * Allocation errors are returned with ERR_PTR.
537 */
538 unsigned long __ref init_memory_mapping(unsigned long start,
539 unsigned long end, pgprot_t prot)
540 {
541 struct map_range mr[NR_RANGE_MR];
542 unsigned long ret = 0;
543 int nr_range, i;
544
545 pr_debug("init_memory_mapping: [mem %#010lx-%#010lx]\n",
546 start, end - 1);
547
548 memset(mr, 0, sizeof(mr));
549 nr_range = split_mem_range(mr, 0, start, end);
550
551 for (i = 0; i < nr_range; i++) {
552 ret = kernel_physical_mapping_init(mr[i].start, mr[i].end,
553 mr[i].page_size_mask,
554 prot);
> 555 if (IS_ERR(ret))
556 return ret;
557 }
558
559 add_pfn_range_mapped(start >> PAGE_SHIFT, ret >> PAGE_SHIFT);
560
561 return ret >> PAGE_SHIFT;
562 }
563
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v2 1/2] x86/mm: Handle alloc failure in phys_*_init()
2025-06-09 17:15 ` kernel test robot
@ 2025-06-10 10:19 ` Em Sharnoff
0 siblings, 0 replies; 6+ messages in thread
From: Em Sharnoff @ 2025-06-10 10:19 UTC (permalink / raw)
To: kernel test robot, linux-kernel, x86, linux-mm
Cc: llvm, oe-kbuild-all, Ingo Molnar, H. Peter Anvin, Dave Hansen,
Andy Lutomirski, Peter Zijlstra, Thomas Gleixner, Borislav Petkov,
Edgecombe, Rick P, Oleg Vasilev, Arthur Petukhovsky, Stefan Radig,
Misha Sakhnov
Apologies for not catching this. New version here:
https://lore.kernel.org/all/a31e3b89-5040-4426-9ce8-d674b8554aa1@neon.tech/
Em