* [PATCH] powerpc/mem: Move CMA reservations to arch_mm_preinit
@ 2026-02-28 18:47 Ritesh Harjani (IBM)
2026-03-10 18:00 ` Dan Horák
2026-03-15 4:01 ` Madhavan Srinivasan
0 siblings, 2 replies; 3+ messages in thread
From: Ritesh Harjani (IBM) @ 2026-02-28 18:47 UTC (permalink / raw)
To: linuxppc-dev
Cc: linux-mm, Madhavan Srinivasan, Mike Rapoport, Sourabh Jain,
Michael Ellerman, Donet Tom, Hari Bathini, Mahesh J Salgaonkar,
Ritesh Harjani (IBM)
Commit 4267739cabb8 ("arch, mm: consolidate initialization of SPARSE memory model")
changed the initialization order of "pageblock_order" from...
start_kernel()
 - setup_arch()
    - initmem_init()
       - sparse_init()
          - set_pageblock_order(); // this sets the pageblock_order
    - xxx_cma_reserve();
to...
start_kernel()
 - setup_arch()
    - xxx_cma_reserve();
 - mm_core_init_early()
    - free_area_init()
       - sparse_init()
          - set_pageblock_order() // this sets the pageblock_order.
This means pageblock_order is no longer initialized before these CMA
reservation calls, hence we see CMA failures like...
[ 0.000000] kvm_cma_reserve: reserving 3276 MiB for global area
[ 0.000000] cma: pageblock_order not yet initialized. Called during early boot?
[ 0.000000] cma: Failed to reserve 3276 MiB
....
[ 0.000000][ T0] cma: pageblock_order not yet initialized. Called during early boot?
[ 0.000000][ T0] cma: Failed to reserve 1024 MiB
This patch moves these CMA reservations to arch_mm_preinit(), which runs
from mm_core_init() after pageblock_order is initialized, but before
memblock releases the free memory to the buddy allocator.
Fixes: 4267739cabb8 ("arch, mm: consolidate initialization of SPARSE memory model")
Suggested-by: Mike Rapoport <rppt@kernel.org>
Reported-and-tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Closes: https://lore.kernel.org/linuxppc-dev/4c338a29-d190-44f3-8874-6cfa0a031f0b@linux.ibm.com/
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
---
arch/powerpc/kernel/setup-common.c | 10 ----------
arch/powerpc/mm/mem.c | 14 ++++++++++++++
2 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index cb5b73adc250..b1761909c23f 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -35,7 +35,6 @@
#include <linux/of_irq.h>
#include <linux/hugetlb.h>
#include <linux/pgtable.h>
-#include <asm/kexec.h>
#include <asm/io.h>
#include <asm/paca.h>
#include <asm/processor.h>
@@ -995,15 +994,6 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
- /*
- * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM and
- * hugetlb. These must be called after initmem_init(), so that
- * pageblock_order is initialised.
- */
- fadump_cma_init();
- kdump_cma_reserve();
- kvm_cma_reserve();
-
early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
if (ppc_md.setup_arch)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index a985fc96b953..b7982d0243d4 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -30,6 +30,10 @@
#include <asm/setup.h>
#include <asm/fixmap.h>
+#include <asm/fadump.h>
+#include <asm/kexec.h>
+#include <asm/kvm_ppc.h>
+
#include <mm/mmu_decl.h>
unsigned long long memory_limit __initdata;
@@ -268,6 +272,16 @@ void __init paging_init(void)
void __init arch_mm_preinit(void)
{
+
+ /*
+ * Reserve large chunks of memory for use by CMA for kdump, fadump, KVM
+ * and hugetlb. These must be called after pageblock_order is
+ * initialised.
+ */
+ fadump_cma_init();
+ kdump_cma_reserve();
+ kvm_cma_reserve();
+
/*
* book3s is limited to 16 page sizes due to encoding this in
* a 4-bit field for slices.
--
2.53.0
* Re: [PATCH] powerpc/mem: Move CMA reservations to arch_mm_preinit
2026-02-28 18:47 [PATCH] powerpc/mem: Move CMA reservations to arch_mm_preinit Ritesh Harjani (IBM)
@ 2026-03-10 18:00 ` Dan Horák
2026-03-15 4:01 ` Madhavan Srinivasan
1 sibling, 0 replies; 3+ messages in thread
From: Dan Horák @ 2026-03-10 18:00 UTC (permalink / raw)
To: Ritesh Harjani (IBM)
Cc: linuxppc-dev, linux-mm, Madhavan Srinivasan, Mike Rapoport,
Sourabh Jain, Michael Ellerman, Donet Tom, Hari Bathini,
Mahesh J Salgaonkar
Hi Ritesh,
On Sun, 1 Mar 2026 00:17:59 +0530
"Ritesh Harjani (IBM)" <ritesh.list@gmail.com> wrote:
> commit 4267739cabb8 ("arch, mm: consolidate initialization of SPARSE memory model"),
> changed the initialization order of "pageblock_order" from...
> start_kernel()
> - setup_arch()
> - initmem_init()
> - sparse_init()
> - set_pageblock_order(); // this sets the pageblock_order
> - xxx_cma_reserve();
>
> to...
> start_kernel()
> - setup_arch()
> - xxx_cma_reserve();
> - mm_core_init_early()
> - free_area_init()
> - sparse_init()
> - set_pageblock_order() // this sets the pageblock_order.
>
> So this means, pageblock_order is not initialized before these cma
> reservation function calls, hence we are seeing CMA failures like...
>
> [ 0.000000] kvm_cma_reserve: reserving 3276 MiB for global area
> [ 0.000000] cma: pageblock_order not yet initialized. Called during early boot?
> [ 0.000000] cma: Failed to reserve 3276 MiB
> ....
> [ 0.000000][ T0] cma: pageblock_order not yet initialized. Called during early boot?
> [ 0.000000][ T0] cma: Failed to reserve 1024 MiB
>
> This patch moves these CMA reservations to arch_mm_preinit() which
> happens in mm_core_init() (which happens after pageblock_order is
> initialized), but before the memblock moves the free memory to buddy.
>
> Fixes: 4267739cabb8 ("arch, mm: consolidate initialization of SPARSE memory model")
> Suggested-by: Mike Rapoport <rppt@kernel.org>
> Reported-and-tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
> Closes: https://lore.kernel.org/linuxppc-dev/4c338a29-d190-44f3-8874-6cfa0a031f0b@linux.ibm.com/
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Awesome, this fixes the KVM initialization issues in 7.0-rc. I successfully
started a KVM guest on a Power8-based host system.
...
Mar 10 16:48:11 fedora kernel: NODE_DATA(0) allocated [mem 0x3ff4f9c00-0x3ff50197f]
Mar 10 16:48:11 fedora kernel: kvm_cma_reserve: reserving 819 MiB for global area
Mar 10 16:48:11 fedora kernel: cma: pageblock_order not yet initialized. Called during early boot?
Mar 10 16:48:11 fedora kernel: cma: Failed to reserve 819 MiB
Mar 10 16:48:11 fedora kernel: rfi-flush: ori type flush available
Mar 10 16:48:11 fedora kernel: rfi-flush: patched 12 locations (ori type flush)
...
This was the error message with 7.0-rc3.
Tested-by: Dan Horák <dan@danny.cz>
Dan
* Re: [PATCH] powerpc/mem: Move CMA reservations to arch_mm_preinit
2026-02-28 18:47 [PATCH] powerpc/mem: Move CMA reservations to arch_mm_preinit Ritesh Harjani (IBM)
2026-03-10 18:00 ` Dan Horák
@ 2026-03-15 4:01 ` Madhavan Srinivasan
1 sibling, 0 replies; 3+ messages in thread
From: Madhavan Srinivasan @ 2026-03-15 4:01 UTC (permalink / raw)
To: linuxppc-dev, Ritesh Harjani (IBM)
Cc: linux-mm, Mike Rapoport, Sourabh Jain, Michael Ellerman,
Donet Tom, Hari Bathini, Mahesh J Salgaonkar
On Sun, 01 Mar 2026 00:17:59 +0530, Ritesh Harjani (IBM) wrote:
> commit 4267739cabb8 ("arch, mm: consolidate initialization of SPARSE memory model"),
> changed the initialization order of "pageblock_order" from...
> start_kernel()
> - setup_arch()
> - initmem_init()
> - sparse_init()
> - set_pageblock_order(); // this sets the pageblock_order
> - xxx_cma_reserve();
>
> [...]
Applied to powerpc/fixes.
[1/1] powerpc/mem: Move CMA reservations to arch_mm_preinit
https://git.kernel.org/powerpc/c/0a8321dde01ffdbd9455a028194d57484def59eb
cheers