* [PATCH 00/28] arch, mm: consolidate hugetlb early reservation
@ 2025-12-28 12:39 Mike Rapoport
2025-12-28 12:39 ` [PATCH 01/28] alpha: introduce arch_zone_limits_init() Mike Rapoport
` (27 more replies)
0 siblings, 28 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Hi,
The order in which early memory reservation for hugetlb happens depends on
the architecture, on configuration options and on command line parameters.
Some architectures rely on the core MM to call hugetlb_bootmem_alloc()
while others call it very early to allow pre-allocation of HVO-style
vmemmap.
When hugetlb_cma is supported by an architecture, it is initialized during
setup_arch(), and later the hugetlb_init code needs to figure out whether
that happened or not.
To make everything consistent and unified, both the reservation of hugetlb
memory from bootmem and the creation of CMA areas for hugetlb must be done
from core MM initialization, and on its own that would have been a simple
change.
However, HVO-style pre-initialization ordering requirements slightly
complicate things: for HVO pre-init to work, sparse and memory map
initialization must happen after the hugetlb reservations.
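Roughly, the intended ordering then becomes (a sketch of the intent and of
the placement, not the literal code in the patches):

	setup_arch();			/* no free_area_init() here anymore */
	...
	/* core MM initialization: */
	hugetlb_bootmem_alloc();	/* hugetlb bootmem (and CMA) reservations */
	sparse_init();
	free_area_init(max_zone_pfns);	/* memory map after the reservations */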
This required pulling the call to free_area_init() out of the setup_arch()
path and moving it to MM initialization, and this is what the first 23
patches do.
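With that, each architecture is left with just the zone limits calculation,
and every per-arch patch converges on the same call pair (a sketch; the
core-side wiring lands in patches 22 and 23):

	unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };

	arch_zone_limits_init(max_zone_pfn);
	free_area_init(max_zone_pfn);

Once all architectures provide arch_zone_limits_init(), core MM can invoke
the callback and call free_area_init() by itself.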
These changes are deliberately split into per-arch patches that change how
the zone limits are calculated for each architecture; patches 22 and 23
then just remove the calls to free_area_init() and sparse_init() from
arch/*.
Patch 24 is a simple cleanup for MIPS.
Patches 25 and 26 actually consolidate the hugetlb reservations, and
patches 27 and 28 perform some follow-up cleanups.
I tried to trim the distribution list; although it's still quite long, if
you feel that someone was wrongly excluded please add them back.
The changes are also available in git:
https://git.kernel.org/pub/scm/linux/kernel/git/rppt/linux.git/log/?h=hugetlb-init/v1
Mike Rapoport (Microsoft) (28):
alpha: introduce arch_zone_limits_init()
arc: introduce arch_zone_limits_init()
arm: introduce arch_zone_limits_init()
arm64: introduce arch_zone_limits_init()
csky: introduce arch_zone_limits_init()
hexagon: introduce arch_zone_limits_init()
loongarch: introduce arch_zone_limits_init()
m68k: introduce arch_zone_limits_init()
microblaze: introduce arch_zone_limits_init()
mips: introduce arch_zone_limits_init()
nios2: introduce arch_zone_limits_init()
openrisc: introduce arch_zone_limits_init()
parisc: introduce arch_zone_limits_init()
powerpc: introduce arch_zone_limits_init()
riscv: introduce arch_zone_limits_init()
s390: introduce arch_zone_limits_init()
sh: introduce arch_zone_limits_init()
sparc: introduce arch_zone_limits_init()
um: introduce arch_zone_limits_init()
x86: introduce arch_zone_limits_init()
xtensa: introduce arch_zone_limits_init()
arch, mm: consolidate initialization of nodes, zones and memory map
arch, mm: consolidate initialization of SPARSE memory model
mips: drop paging_init()
x86: don't reserve hugetlb memory in setup_arch()
mm, arch: consolidate hugetlb CMA reservation
mm/hugetlb: drop hugetlb_cma_check()
Revert "mm/hugetlb: deal with multiple calls to hugetlb_bootmem_alloc"
.../driver-api/cxl/linux/early-boot.rst | 2 +-
Documentation/mm/memory-model.rst | 3 -
.../translations/zh_CN/mm/memory-model.rst | 2 -
arch/alpha/kernel/setup.c | 1 -
arch/alpha/mm/init.c | 16 ++--
arch/arc/mm/init.c | 37 ++++----
arch/arm/mm/init.c | 25 +----
arch/arm64/include/asm/hugetlb.h | 2 -
arch/arm64/mm/hugetlbpage.c | 10 +-
arch/arm64/mm/init.c | 39 ++++----
arch/csky/kernel/setup.c | 16 ++--
arch/hexagon/mm/init.c | 19 +---
arch/loongarch/include/asm/pgtable.h | 2 -
arch/loongarch/kernel/setup.c | 10 --
arch/loongarch/mm/init.c | 6 +-
arch/m68k/mm/init.c | 8 +-
arch/m68k/mm/mcfmmu.c | 3 -
arch/m68k/mm/motorola.c | 6 +-
arch/m68k/mm/sun3mmu.c | 9 --
arch/microblaze/mm/init.c | 22 ++---
arch/mips/include/asm/pgalloc.h | 2 -
arch/mips/include/asm/pgtable.h | 2 +-
arch/mips/kernel/setup.c | 15 +--
arch/mips/loongson64/numa.c | 10 +-
arch/mips/mm/init.c | 8 +-
arch/mips/sgi-ip27/ip27-memory.c | 8 +-
arch/nios2/mm/init.c | 12 +--
arch/openrisc/mm/init.c | 10 +-
arch/parisc/mm/init.c | 11 +--
arch/powerpc/include/asm/hugetlb.h | 5 -
arch/powerpc/include/asm/setup.h | 4 +
arch/powerpc/kernel/setup-common.c | 1 -
arch/powerpc/mm/hugetlbpage.c | 11 +--
arch/powerpc/mm/mem.c | 27 ++----
arch/powerpc/mm/numa.c | 2 -
arch/riscv/mm/hugetlbpage.c | 8 ++
arch/riscv/mm/init.c | 10 +-
arch/s390/kernel/setup.c | 2 -
arch/s390/mm/hugetlbpage.c | 8 ++
arch/s390/mm/init.c | 13 ++-
arch/sh/mm/init.c | 12 +--
arch/sparc/mm/init_64.c | 17 +---
arch/sparc/mm/srmmu.c | 17 ++--
arch/um/kernel/mem.c | 10 +-
arch/x86/kernel/setup.c | 5 -
arch/x86/mm/hugetlbpage.c | 8 ++
arch/x86/mm/init.c | 8 +-
arch/x86/mm/init_32.c | 2 -
arch/x86/mm/init_64.c | 4 -
arch/x86/mm/mm_internal.h | 1 -
arch/xtensa/mm/init.c | 14 +--
include/linux/hugetlb.h | 12 +--
include/linux/mm.h | 5 +-
include/linux/mmzone.h | 2 -
init/main.c | 1 +
mm/hugetlb.c | 13 ---
mm/hugetlb_cma.c | 33 ++++---
mm/hugetlb_cma.h | 5 -
mm/hugetlb_vmemmap.c | 11 ---
mm/internal.h | 6 ++
mm/mm_init.c | 94 +++++++++++--------
61 files changed, 263 insertions(+), 424 deletions(-)
base-commit: 8f0b4cce4481fb22653697cced8d0d04027cb1e8
--
2.51.0
* [PATCH 01/28] alpha: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 02/28] arc: " Mike Rapoport
` (26 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/alpha/mm/init.c | 15 ++++++++++-----
include/linux/mm.h | 1 +
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c
index 4c5ab9cd8a0a..cd0cb1abde5f 100644
--- a/arch/alpha/mm/init.c
+++ b/arch/alpha/mm/init.c
@@ -208,12 +208,8 @@ callback_init(void * kernel_end)
return kernel_end;
}
-/*
- * paging_init() sets up the memory map.
- */
-void __init paging_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfn)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
unsigned long dma_pfn;
dma_pfn = virt_to_phys((char *)MAX_DMA_ADDRESS) >> PAGE_SHIFT;
@@ -221,8 +217,17 @@ void __init paging_init(void)
max_zone_pfn[ZONE_DMA] = dma_pfn;
max_zone_pfn[ZONE_NORMAL] = max_pfn;
+}
+
+/*
+ * paging_init() sets up the memory map.
+ */
+void __init paging_init(void)
+{
+ unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
/* Initialize mem_map[]. */
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
/* Initialize the kernel's ZERO_PGE. */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 15076261d0c2..628c0e0ac313 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3552,6 +3552,7 @@ static inline unsigned long get_num_physpages(void)
* free_area_init(max_zone_pfns);
*/
void free_area_init(unsigned long *max_zone_pfn);
+void arch_zone_limits_init(unsigned long *max_zone_pfn);
unsigned long node_map_pfn_alignment(void);
extern unsigned long absent_pages_in_range(unsigned long start_pfn,
unsigned long end_pfn);
--
2.51.0
* [PATCH 02/28] arc: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
2025-12-28 12:39 ` [PATCH 01/28] alpha: introduce arch_zone_limits_init() Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 03/28] arm: " Mike Rapoport
` (25 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/arc/mm/init.c | 34 ++++++++++++++++++++--------------
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
index a73cc94f806e..ff7974d38011 100644
--- a/arch/arc/mm/init.c
+++ b/arch/arc/mm/init.c
@@ -75,6 +75,25 @@ void __init early_init_dt_add_memory_arch(u64 base, u64 size)
base, TO_MB(size), !in_use ? "Not used":"");
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfn)
+{
+ /*----------------- node/zones setup --------------------------*/
+ max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
+
+#ifdef CONFIG_HIGHMEM
+ /*
+ * max_high_pfn should be ok here for both HIGHMEM and HIGHMEM+PAE.
+ * For HIGHMEM without PAE max_high_pfn should be less than
+ * min_low_pfn to guarantee that these two regions don't overlap.
+ * For PAE case highmem is greater than lowmem, so it is natural
+ * to use max_high_pfn.
+ *
+ * In both cases, holes should be handled by pfn_valid().
+ */
+ max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn;
+#endif
+}
+
/*
* First memory setup routine called from setup_arch()
* 1. setup swapper's mm @init_mm
@@ -122,9 +141,6 @@ void __init setup_arch_memory(void)
memblock_dump_all();
- /*----------------- node/zones setup --------------------------*/
- max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
-
#ifdef CONFIG_HIGHMEM
/*
* On ARC (w/o PAE) HIGHMEM addresses are actually smaller (0 based)
@@ -139,21 +155,11 @@ void __init setup_arch_memory(void)
min_high_pfn = PFN_DOWN(high_mem_start);
max_high_pfn = PFN_DOWN(high_mem_start + high_mem_sz);
- /*
- * max_high_pfn should be ok here for both HIGHMEM and HIGHMEM+PAE.
- * For HIGHMEM without PAE max_high_pfn should be less than
- * min_low_pfn to guarantee that these two regions don't overlap.
- * For PAE case highmem is greater than lowmem, so it is natural
- * to use max_high_pfn.
- *
- * In both cases, holes should be handled by pfn_valid().
- */
- max_zone_pfn[ZONE_HIGHMEM] = max_high_pfn;
-
arch_pfn_offset = min(min_low_pfn, min_high_pfn);
kmap_init();
#endif /* CONFIG_HIGHMEM */
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
--
2.51.0
* [PATCH 03/28] arm: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
2025-12-28 12:39 ` [PATCH 01/28] alpha: introduce arch_zone_limits_init() Mike Rapoport
2025-12-28 12:39 ` [PATCH 02/28] arc: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 04/28] arm64: " Mike Rapoport
` (24 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/arm/mm/init.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index 54bdca025c9f..bdcc3639681f 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -107,18 +107,23 @@ void __init setup_dma_zone(const struct machine_desc *mdesc)
#endif
}
-static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
- unsigned long max_high)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfn)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
#ifdef CONFIG_ZONE_DMA
- max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low);
+ max_zone_pfn[ZONE_DMA] = min(arm_dma_pfn_limit, max_low_pfn);
#endif
- max_zone_pfn[ZONE_NORMAL] = max_low;
+ max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
- max_zone_pfn[ZONE_HIGHMEM] = max_high;
+ max_zone_pfn[ZONE_HIGHMEM] = max_pfn;
#endif
+}
+
+static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
+ unsigned long max_high)
+{
+ unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
+
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
--
2.51.0
* [PATCH 04/28] arm64: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (2 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 03/28] arm: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 05/28] csky: " Mike Rapoport
` (23 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
While at it, rename zone_sizes_init() to dma_limits_init() to better
reflect what that function does.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/arm64/mm/init.c | 22 +++++++++++++++++-----
1 file changed, 17 insertions(+), 5 deletions(-)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 524d34a0e921..06815d34cc11 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -118,7 +118,21 @@ static phys_addr_t __init max_zone_phys(phys_addr_t zone_limit)
return min(zone_limit, memblock_end_of_DRAM() - 1) + 1;
}
-static void __init zone_sizes_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ phys_addr_t __maybe_unused dma32_phys_limit =
+ max_zone_phys(DMA_BIT_MASK(32));
+
+#ifdef CONFIG_ZONE_DMA
+ max_zone_pfns[ZONE_DMA] = PFN_DOWN(max_zone_phys(zone_dma_limit));
+#endif
+#ifdef CONFIG_ZONE_DMA32
+ max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
+#endif
+ max_zone_pfns[ZONE_NORMAL] = max_pfn;
+}
+
+static void __init dma_limits_init(void)
{
unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
phys_addr_t __maybe_unused acpi_zone_dma_limit;
@@ -139,17 +153,15 @@ static void __init zone_sizes_init(void)
if (memblock_start_of_DRAM() < U32_MAX)
zone_dma_limit = min(zone_dma_limit, U32_MAX);
arm64_dma_phys_limit = max_zone_phys(zone_dma_limit);
- max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
#ifdef CONFIG_ZONE_DMA32
- max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
if (!arm64_dma_phys_limit)
arm64_dma_phys_limit = dma32_phys_limit;
#endif
if (!arm64_dma_phys_limit)
arm64_dma_phys_limit = PHYS_MASK + 1;
- max_zone_pfns[ZONE_NORMAL] = max_pfn;
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
@@ -319,7 +331,7 @@ void __init bootmem_init(void)
* done after the fixed reservations
*/
sparse_init();
- zone_sizes_init();
+ dma_limits_init();
/*
* Reserve the CMA area after arm64_dma_phys_limit was initialised.
--
2.51.0
* [PATCH 05/28] csky: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (3 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 04/28] arm64: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-29 1:25 ` Guo Ren
2025-12-28 12:39 ` [PATCH 06/28] hexagon: " Mike Rapoport
` (22 subsequent siblings)
27 siblings, 1 reply; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/csky/kernel/setup.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
index e0d6ca86ea8c..8968815d93e6 100644
--- a/arch/csky/kernel/setup.c
+++ b/arch/csky/kernel/setup.c
@@ -51,6 +51,14 @@ static void __init setup_initrd(void)
}
#endif
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+#ifdef CONFIG_HIGHMEM
+ max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
+#endif
+}
+
static void __init csky_memblock_init(void)
{
unsigned long lowmem_size = PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
@@ -83,12 +91,9 @@ static void __init csky_memblock_init(void)
setup_initrd();
#endif
- max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
-
mmu_init(min_low_pfn, max_low_pfn);
#ifdef CONFIG_HIGHMEM
- max_zone_pfn[ZONE_HIGHMEM] = max_pfn;
highstart_pfn = max_low_pfn;
highend_pfn = max_pfn;
@@ -97,6 +102,7 @@ static void __init csky_memblock_init(void)
dma_contiguous_reserve(0);
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
--
2.51.0
* [PATCH 06/28] hexagon: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (4 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 05/28] csky: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 07/28] loongarch: " Mike Rapoport
` (21 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/hexagon/mm/init.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c
index 34eb9d424b96..e2c9487d8d34 100644
--- a/arch/hexagon/mm/init.c
+++ b/arch/hexagon/mm/init.c
@@ -54,6 +54,18 @@ void sync_icache_dcache(pte_t pte)
__vmcache_idsync(addr, PAGE_SIZE);
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ /*
+ * This is not particularly well documented anywhere, but
+ * give ZONE_NORMAL all the memory, including the big holes
+ * left by the kernel+bootmem_map which are already left as reserved
+ * in the bootmem_map; free_area_init should see those bits and
+ * adjust accordingly.
+ */
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
/*
* In order to set up page allocator "nodes",
* somebody has to call free_area_init() for UMA.
@@ -65,16 +77,7 @@ static void __init paging_init(void)
{
unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
- /*
- * This is not particularly well documented anywhere, but
- * give ZONE_NORMAL all the memory, including the big holes
- * left by the kernel+bootmem_map which are already left as reserved
- * in the bootmem_map; free_area_init should see those bits and
- * adjust accordingly.
- */
-
- max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
-
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn); /* sets up the zonelists and mem_map */
/*
--
2.51.0
* [PATCH 07/28] loongarch: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (5 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 06/28] hexagon: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 08/28] m68k: " Mike Rapoport
` (20 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/loongarch/mm/init.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index 0946662afdd6..17235f87eafb 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -60,15 +60,19 @@ int __ref page_is_ram(unsigned long pfn)
return memblock_is_memory(addr) && !memblock_is_reserved(addr);
}
-void __init paging_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
#ifdef CONFIG_ZONE_DMA32
max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
#endif
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
+void __init paging_init(void)
+{
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
--
2.51.0
* [PATCH 08/28] m68k: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (6 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 07/28] loongarch: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 09/28] microblaze: " Mike Rapoport
` (19 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Since all variants of m68k add all memory to ZONE_DMA, it is possible to
use a unified implementation of arch_zone_limits_init() that sets the end
of ZONE_DMA to memblock_end_of_DRAM().
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/m68k/mm/init.c | 7 ++++++-
arch/m68k/mm/mcfmmu.c | 2 +-
arch/m68k/mm/motorola.c | 2 +-
arch/m68k/mm/sun3mmu.c | 2 +-
4 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c
index 488411af1b3f..6b1d9d2434b5 100644
--- a/arch/m68k/mm/init.c
+++ b/arch/m68k/mm/init.c
@@ -40,6 +40,11 @@
void *empty_zero_page;
EXPORT_SYMBOL(empty_zero_page);
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_DMA] = PFN_DOWN(memblock_end_of_DRAM());
+}
+
#ifdef CONFIG_MMU
int m68k_virt_to_node_shift;
@@ -69,7 +74,7 @@ void __init paging_init(void)
high_memory = (void *) end_mem;
empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
- max_zone_pfn[ZONE_DMA] = end_mem >> PAGE_SHIFT;
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
index 19a75029036c..24a6f7bbd1ce 100644
--- a/arch/m68k/mm/mcfmmu.c
+++ b/arch/m68k/mm/mcfmmu.c
@@ -73,7 +73,7 @@ void __init paging_init(void)
}
current->mm = NULL;
- max_zone_pfn[ZONE_DMA] = PFN_DOWN(_ramend);
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 62283bc2ed79..d6ccd23caf61 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -517,6 +517,6 @@ void __init paging_init(void)
if (node_present_pages(i))
node_set_state(i, N_NORMAL_MEMORY);
- max_zone_pfn[ZONE_DMA] = memblock_end_of_DRAM();
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
index 1ecf6bdd08bf..fdd69cc4240c 100644
--- a/arch/m68k/mm/sun3mmu.c
+++ b/arch/m68k/mm/sun3mmu.c
@@ -82,7 +82,7 @@ void __init paging_init(void)
current->mm = NULL;
/* memory sizing is a hack stolen from motorola.c.. hope it works for us */
- max_zone_pfn[ZONE_DMA] = ((unsigned long)high_memory) >> PAGE_SHIFT;
+ arch_zone_limits_init(max_zone_pfn);
/* I really wish I knew why the following change made things better... -- Sam */
free_area_init(max_zone_pfn);
--
2.51.0
* [PATCH 09/28] microblaze: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (7 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 08/28] m68k: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 10/28] mips: " Mike Rapoport
` (18 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/microblaze/mm/init.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 31d475cdb1c5..54da60b81094 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -54,6 +54,16 @@ static void __init highmem_init(void)
}
#endif /* CONFIG_HIGHMEM */
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+#ifdef CONFIG_HIGHMEM
+ max_zone_pfns[ZONE_DMA] = max_low_pfn;
+ max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
+#else
+ max_zone_pfns[ZONE_DMA] = max_pfn;
+#endif
+}
+
/*
* paging_init() sets up the page tables - in fact we've already done this.
*/
@@ -71,13 +81,8 @@ static void __init paging_init(void)
#ifdef CONFIG_HIGHMEM
highmem_init();
-
- zones_size[ZONE_DMA] = max_low_pfn;
- zones_size[ZONE_HIGHMEM] = max_pfn;
-#else
- zones_size[ZONE_DMA] = max_pfn;
#endif
-
+ arch_zone_limits_init(zones_size);
/* We don't have holes in memory map */
free_area_init(zones_size);
}
--
2.51.0
* [PATCH 10/28] mips: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (8 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 09/28] microblaze: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 11/28] nios2: " Mike Rapoport
` (17 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/mips/loongson64/numa.c | 9 +++++++--
arch/mips/mm/init.c | 14 +++++++++-----
arch/mips/sgi-ip27/ip27-memory.c | 7 ++++++-
3 files changed, 22 insertions(+), 8 deletions(-)
diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c
index 95d5f553ce19..f72a58f87878 100644
--- a/arch/mips/loongson64/numa.c
+++ b/arch/mips/loongson64/numa.c
@@ -154,13 +154,18 @@ static __init void prom_meminit(void)
}
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_DMA32] = MAX_DMA32_PFN;
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES] = {0, };
pagetable_init();
- zones_size[ZONE_DMA32] = MAX_DMA32_PFN;
- zones_size[ZONE_NORMAL] = max_low_pfn;
+ arch_zone_limits_init(zones_size);
free_area_init(zones_size);
}
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index a673d3d68254..ab08249cfede 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -394,12 +394,8 @@ void maar_init(void)
}
#ifndef CONFIG_NUMA
-void __init paging_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
- pagetable_init();
-
#ifdef CONFIG_ZONE_DMA
max_zone_pfns[ZONE_DMA] = MAX_DMA_PFN;
#endif
@@ -417,7 +413,15 @@ void __init paging_init(void)
max_zone_pfns[ZONE_HIGHMEM] = max_low_pfn;
}
#endif
+}
+
+void __init paging_init(void)
+{
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+
+ pagetable_init();
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
diff --git a/arch/mips/sgi-ip27/ip27-memory.c b/arch/mips/sgi-ip27/ip27-memory.c
index 2b3e46e2e607..babeb0e07687 100644
--- a/arch/mips/sgi-ip27/ip27-memory.c
+++ b/arch/mips/sgi-ip27/ip27-memory.c
@@ -406,11 +406,16 @@ void __init prom_meminit(void)
}
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
void __init paging_init(void)
{
unsigned long zones_size[MAX_NR_ZONES] = {0, };
pagetable_init();
- zones_size[ZONE_NORMAL] = max_low_pfn;
+ arch_zone_limits_init(zones_size);
free_area_init(zones_size);
}
--
2.51.0
* [PATCH 11/28] nios2: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (9 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 10/28] mips: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-29 13:55 ` Dinh Nguyen
2025-12-28 12:39 ` [PATCH 12/28] openrisc: " Mike Rapoport
` (16 subsequent siblings)
27 siblings, 1 reply; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/nios2/mm/init.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 94efa3de3933..2cb666a65d9e 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -38,6 +38,11 @@
pgd_t *pgd_current;
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
/*
* paging_init() continues the virtual memory environment setup which
* was begun by the code in arch/head.S.
@@ -51,8 +56,7 @@ void __init paging_init(void)
pagetable_init();
pgd_current = swapper_pg_dir;
- max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
-
+ arch_zone_limits_init(max_zone_pfn);
/* pass the memory from the bootmem allocator to the main allocator */
free_area_init(max_zone_pfn);
--
2.51.0
* [PATCH 12/28] openrisc: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (10 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 11/28] nios2: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 13/28] parisc: " Mike Rapoport
` (15 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/openrisc/mm/init.c | 12 ++++++++----
1 file changed, 8 insertions(+), 4 deletions(-)
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 9382d9a0ec78..67de93e7a685 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -39,15 +39,19 @@
int mem_init_done;
-static void __init zone_sizes_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
/*
* We use only ZONE_NORMAL
*/
- max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
+static void __init zone_sizes_init(void)
+{
+ unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
--
2.51.0
* [PATCH 13/28] parisc: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (11 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 12/28] openrisc: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 14/28] powerpc: " Mike Rapoport
` (14 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/parisc/mm/init.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index 14270715d754..dc5bd3efe738 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -693,12 +693,16 @@ static void __init fixmap_init(void)
} while (addr < end);
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_NORMAL] = PFN_DOWN(memblock_end_of_DRAM());
+}
+
static void __init parisc_bootmem_free(void)
{
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
- max_zone_pfn[0] = memblock_end_of_DRAM();
-
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
--
2.51.0
* [PATCH 14/28] powerpc: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (12 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 13/28] parisc: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 15/28] riscv: " Mike Rapoport
` (13 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/powerpc/mm/mem.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 3ddbfdbfa941..32c496bfab4f 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -221,13 +221,23 @@ static int __init mark_nonram_nosave(void)
* anyway) will take a first dip into ZONE_NORMAL and get otherwise served by
* ZONE_DMA.
*/
-static unsigned long max_zone_pfns[MAX_NR_ZONES];
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+#ifdef CONFIG_ZONE_DMA
+ max_zone_pfns[ZONE_DMA] = min(zone_dma_limit, max_low_pfn - 1) + 1;
+#endif
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+#ifdef CONFIG_HIGHMEM
+ max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
+#endif
+}
/*
* paging_init() sets up the page tables - in fact we've already done this.
*/
void __init paging_init(void)
{
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
unsigned long long total_ram = memblock_phys_mem_size();
phys_addr_t top_of_ram = memblock_end_of_DRAM();
int zone_dma_bits;
@@ -259,15 +269,7 @@ void __init paging_init(void)
zone_dma_limit = DMA_BIT_MASK(zone_dma_bits);
-#ifdef CONFIG_ZONE_DMA
- max_zone_pfns[ZONE_DMA] = min(max_low_pfn,
- 1UL << (zone_dma_bits - PAGE_SHIFT));
-#endif
- max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
-#ifdef CONFIG_HIGHMEM
- max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
-#endif
-
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
mark_nonram_nosave();
--
2.51.0
* [PATCH 15/28] riscv: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (13 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 14/28] powerpc: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 16/28] s390: " Mike Rapoport
` (12 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/riscv/mm/init.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index addb8a9305be..97e8661fbcff 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -79,15 +79,19 @@ uintptr_t _dtb_early_pa __initdata;
phys_addr_t dma32_phys_limit __initdata;
-static void __init zone_sizes_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0, };
-
#ifdef CONFIG_ZONE_DMA32
max_zone_pfns[ZONE_DMA32] = PFN_DOWN(dma32_phys_limit);
#endif
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
+static void __init zone_sizes_init(void)
+{
+ unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0, };
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
--
2.51.0
* [PATCH 16/28] s390: introduce arch_zone_limits_init()
2025-12-28 12:39 [PATCH 00/28] arch, mm: consolidate hugetlb early reservation Mike Rapoport
` (14 preceding siblings ...)
2025-12-28 12:39 ` [PATCH 15/28] riscv: " Mike Rapoport
@ 2025-12-28 12:39 ` Mike Rapoport
2025-12-28 12:39 ` [PATCH 17/28] sh: " Mike Rapoport
` (11 subsequent siblings)
27 siblings, 0 replies; 34+ messages in thread
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
Cc: Alex Shi, Alexander Gordeev, Andreas Larsson, Borislav Petkov,
Brian Cain, Christophe Leroy (CS GROUP), Catalin Marinas,
David S. Miller, Dave Hansen, David Hildenbrand, Dinh Nguyen,
Geert Uytterhoeven, Guo Ren, Heiko Carstens, Helge Deller,
Huacai Chen, Ingo Molnar, Johannes Berg,
John Paul Adrian Glaubitz, Jonathan Corbet, Liam R. Howlett,
Lorenzo Stoakes, Magnus Lindholm, Matt Turner, Max Filippov,
Michael Ellerman, Michal Hocko, Michal Simek, Mike Rapoport,
Muchun Song, Oscar Salvador, Palmer Dabbelt, Pratyush Yadav,
Richard Weinberger, Russell King, Stafford Horne,
Suren Baghdasaryan, Thomas Bogendoerfer, Thomas Gleixner,
Vasily Gorbik, Vineet Gupta, Vlastimil Babka, Will Deacon, x86,
linux-alpha, linux-arm-kernel, linux-csky, linux-cxl, linux-doc,
linux-hexagon, linux-kernel, linux-m68k, linux-mips, linux-mm,
linux-openrisc, linux-parisc, linux-riscv, linux-s390, linux-sh,
linux-snps-arc, linux-um, linuxppc-dev, loongarch, sparclinux
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move calculations of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, and thus there won't be a
need to call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/s390/mm/init.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index e4953453d254..1c11ad84dddb 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -86,6 +86,12 @@ static void __init setup_zero_pages(void)
zero_page_mask = ((PAGE_SIZE << order) - 1) & PAGE_MASK;
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_DMA] = virt_to_pfn(MAX_DMA_ADDRESS);
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
/*
* paging_init() sets up the page tables
*/
@@ -97,8 +103,7 @@ void __init paging_init(void)
sparse_init();
zone_dma_limit = DMA_BIT_MASK(31);
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
- max_zone_pfns[ZONE_DMA] = virt_to_pfn(MAX_DMA_ADDRESS);
- max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
--
2.51.0
* [PATCH 17/28] sh: introduce arch_zone_limits_init()
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move the calculation of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, so there will be no need to
call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/sh/mm/init.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 99e302eeeec1..5e7e63642611 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -264,6 +264,11 @@ static void __init early_reserve_mem(void)
reserve_crashkernel();
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+}
+
void __init paging_init(void)
{
unsigned long max_zone_pfns[MAX_NR_ZONES];
@@ -322,7 +327,7 @@ void __init paging_init(void)
kmap_coherent_init();
memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
- max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
--
2.51.0
* [PATCH 18/28] sparc: introduce arch_zone_limits_init()
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move the calculation of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, so there will be no need to
call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/sparc/mm/init_64.c | 6 ++++++
arch/sparc/mm/srmmu.c | 12 ++++++++----
2 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index df9f7c444c39..fbaad449dfc9 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2279,6 +2279,11 @@ static void __init reduce_memory(phys_addr_t limit_ram)
memblock_enforce_memory_limit(limit_ram);
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_NORMAL] = last_valid_pfn;
+}
+
void __init paging_init(void)
{
unsigned long end_pfn, shift, phys_base;
@@ -2461,6 +2466,7 @@ void __init paging_init(void)
max_zone_pfns[ZONE_NORMAL] = end_pfn;
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index f8fb4911d360..81e90151db90 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -884,6 +884,13 @@ static void __init map_kernel(void)
void (*poke_srmmu)(void) = NULL;
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_DMA] = max_low_pfn;
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
+ max_zone_pfns[ZONE_HIGHMEM] = highend_pfn;
+}
+
void __init srmmu_paging_init(void)
{
int i;
@@ -967,10 +974,7 @@ void __init srmmu_paging_init(void)
{
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
- max_zone_pfn[ZONE_DMA] = max_low_pfn;
- max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
- max_zone_pfn[ZONE_HIGHMEM] = highend_pfn;
-
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
}
--
2.51.0
* [PATCH 19/28] um: introduce arch_zone_limits_init()
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move the calculation of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, so there will be no need to
call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/um/kernel/mem.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 39c4a7e21c6f..2ac4e9debedd 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -84,6 +84,11 @@ void __init mem_init(void)
kmalloc_ok = 1;
}
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
+{
+ max_zone_pfns[ZONE_NORMAL] = high_physmem >> PAGE_SHIFT;
+}
+
void __init paging_init(void)
{
unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
@@ -94,7 +99,7 @@ void __init paging_init(void)
panic("%s: Failed to allocate %lu bytes align=%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);
- max_zone_pfn[ZONE_NORMAL] = high_physmem >> PAGE_SHIFT;
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
}
--
2.51.0
* [PATCH 20/28] x86: introduce arch_zone_limits_init()
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move the calculation of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, so there will be no need to
call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/x86/mm/init.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 8bf6ad4b9400..e7ef605a18d6 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -997,12 +997,8 @@ void __init free_initrd_mem(unsigned long start, unsigned long end)
}
#endif
-void __init zone_sizes_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
- memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-
#ifdef CONFIG_ZONE_DMA
max_zone_pfns[ZONE_DMA] = min(MAX_DMA_PFN, max_low_pfn);
#endif
@@ -1013,7 +1009,15 @@ void __init zone_sizes_init(void)
#ifdef CONFIG_HIGHMEM
max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
#endif
+}
+
+void __init zone_sizes_init(void)
+{
+ unsigned long max_zone_pfns[MAX_NR_ZONES];
+
+ memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
+ arch_zone_limits_init(max_zone_pfns);
free_area_init(max_zone_pfns);
}
--
2.51.0
* [PATCH 21/28] xtensa: introduce arch_zone_limits_init()
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Move the calculation of zone limits to a dedicated arch_zone_limits_init()
function.
Later, the MM core will use this function as an architecture-specific
callback during node and zone initialization, so there will be no need to
call free_area_init() from every architecture.
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/xtensa/mm/init.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index cc52733a0649..60299f359a3c 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -116,15 +116,19 @@ static void __init print_vm_layout(void)
(unsigned long)(__bss_stop - __bss_start) >> 10);
}
-void __init zones_init(void)
+void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
{
- /* All pages are DMA-able, so we put them all in the DMA zone. */
- unsigned long max_zone_pfn[MAX_NR_ZONES] = {
- [ZONE_NORMAL] = max_low_pfn,
+ max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
#ifdef CONFIG_HIGHMEM
- [ZONE_HIGHMEM] = max_pfn,
+ max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
#endif
- };
+}
+
+void __init zones_init(void)
+{
+ unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
+
+ arch_zone_limits_init(max_zone_pfn);
free_area_init(max_zone_pfn);
print_vm_layout();
}
--
2.51.0
* [PATCH 22/28] arch, mm: consolidate initialization of nodes, zones and memory map
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
To initialize node, zone and memory map data structures, every
architecture calls free_area_init() during setup_arch() and passes it an
array of zone limits.
Besides the code duplication, this creates "interesting" ordering cases
between the allocation and initialization of hugetlb and the memory map:
some architectures allocate hugetlb pages very early in setup_arch(), some
only create hugetlb CMA areas in setup_arch(), and some delay hugetlb
allocations until mm_core_init().
With the arch_zone_limits_init() helper now available on all
architectures, it is no longer necessary to call free_area_init() from
architecture setup code. Instead, core MM initialization can call
arch_zone_limits_init() in a single place.
This makes it possible to unify the ordering of hugetlb allocation and
memory map initialization.
Add an mm_core_init_early() function that is called immediately after
setup_arch(). It initializes the zone limits and sets up initial estimates
of the memory available for early allocations of large system hashes,
using a zone_limits_init() function split out of the current
free_area_init().
The zone limits must be known prior to the first call to
alloc_large_system_hash() because it implicitly relies on knowledge of the
ZONE_HIGHMEM extents.
Remove the call to free_area_init() from architecture-specific code and
call the remaining part of free_area_init(), which initializes the node
structures and the memory map, from mm_core_init() after the hugetlb
allocations are done.
After this refactoring it is possible to consolidate hugetlb allocations
and eliminate the differences in ordering of hugetlb and memory map
initialization among architectures.
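In simplified form, the resulting boot ordering looks like this (a sketch
based on the init/main.c and mm/mm_init.c hunks below; unrelated
initialization steps are omitted):

	start_kernel()
	    setup_arch()              /* no longer calls free_area_init() */
	    mm_core_init_early()
	        zone_limits_init()    /* arch_zone_limits_init() + estimates */
	    ...
	    mm_core_init()
	        hugetlb_bootmem_alloc()  /* hugetlb reservations first */
	        free_area_init()         /* then nodes, zones, memory map */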
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/alpha/mm/init.c | 9 +--
arch/arc/mm/init.c | 5 --
arch/arm/mm/init.c | 16 -----
arch/arm64/mm/init.c | 4 --
arch/csky/kernel/setup.c | 4 --
arch/hexagon/mm/init.c | 12 ----
arch/loongarch/include/asm/pgtable.h | 2 -
arch/loongarch/kernel/setup.c | 2 -
arch/loongarch/mm/init.c | 8 ---
arch/m68k/mm/init.c | 3 -
arch/m68k/mm/mcfmmu.c | 3 -
arch/m68k/mm/motorola.c | 6 +-
arch/m68k/mm/sun3mmu.c | 9 ---
arch/microblaze/mm/init.c | 7 ---
arch/mips/loongson64/numa.c | 4 --
arch/mips/mm/init.c | 5 --
arch/mips/sgi-ip27/ip27-memory.c | 4 --
arch/nios2/mm/init.c | 6 --
arch/openrisc/mm/init.c | 10 ----
arch/parisc/mm/init.c | 9 ---
arch/powerpc/mm/mem.c | 4 --
arch/riscv/mm/init.c | 9 ---
arch/s390/mm/init.c | 5 --
arch/sh/mm/init.c | 5 --
arch/sparc/mm/init_64.c | 11 ----
arch/sparc/mm/srmmu.c | 7 ---
arch/um/kernel/mem.c | 5 --
arch/x86/mm/init.c | 10 ----
arch/x86/mm/init_32.c | 1 -
arch/x86/mm/init_64.c | 2 -
arch/x86/mm/mm_internal.h | 1 -
arch/xtensa/mm/init.c | 4 --
include/linux/mm.h | 4 +-
init/main.c | 1 +
mm/mm_init.c | 90 ++++++++++++++++------------
35 files changed, 57 insertions(+), 230 deletions(-)
diff --git a/arch/alpha/mm/init.c b/arch/alpha/mm/init.c
index cd0cb1abde5f..9531cbc761c0 100644
--- a/arch/alpha/mm/init.c
+++ b/arch/alpha/mm/init.c
@@ -220,17 +220,10 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfn)
}
/*
- * paging_init() sets up the memory map.
+ * paging_init() initializes the kernel's ZERO_PGE.
*/
void __init paging_init(void)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
-
- /* Initialize mem_map[]. */
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
-
- /* Initialize the kernel's ZERO_PGE. */
memset(absolute_pointer(ZERO_PGE), 0, PAGE_SIZE);
}
diff --git a/arch/arc/mm/init.c b/arch/arc/mm/init.c
index ff7974d38011..a5e92f46e5d1 100644
--- a/arch/arc/mm/init.c
+++ b/arch/arc/mm/init.c
@@ -102,8 +102,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfn)
*/
void __init setup_arch_memory(void)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
setup_initial_init_mm(_text, _etext, _edata, _end);
/* first page of system - kernel .vector starts here */
@@ -158,9 +156,6 @@ void __init setup_arch_memory(void)
arch_pfn_offset = min(min_low_pfn, min_high_pfn);
kmap_init();
#endif /* CONFIG_HIGHMEM */
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
}
void __init arch_mm_preinit(void)
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index bdcc3639681f..a8f7b4084715 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -118,15 +118,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfn)
#endif
}
-static void __init zone_sizes_init(unsigned long min, unsigned long max_low,
- unsigned long max_high)
-{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
-}
-
#ifdef CONFIG_HAVE_ARCH_PFN_VALID
int pfn_valid(unsigned long pfn)
{
@@ -222,13 +213,6 @@ void __init bootmem_init(void)
* done after the fixed reservations
*/
sparse_init();
-
- /*
- * Now free the memory - free_area_init needs
- * the sparse mem_map arrays initialized by sparse_init()
- * for memmap_init_zone(), otherwise all PFNs are invalid.
- */
- zone_sizes_init(min_low_pfn, max_low_pfn, max_pfn);
}
/*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 06815d34cc11..3641e88ea871 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -134,7 +134,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
static void __init dma_limits_init(void)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
phys_addr_t __maybe_unused acpi_zone_dma_limit;
phys_addr_t __maybe_unused dt_zone_dma_limit;
phys_addr_t __maybe_unused dma32_phys_limit =
@@ -160,9 +159,6 @@ static void __init dma_limits_init(void)
#endif
if (!arm64_dma_phys_limit)
arm64_dma_phys_limit = PHYS_MASK + 1;
-
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
}
int pfn_is_map_memory(unsigned long pfn)
diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
index 8968815d93e6..4bf3c01ead3a 100644
--- a/arch/csky/kernel/setup.c
+++ b/arch/csky/kernel/setup.c
@@ -63,7 +63,6 @@ static void __init csky_memblock_init(void)
{
unsigned long lowmem_size = PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
unsigned long sseg_size = PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET);
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
signed long size;
memblock_reserve(__pa(_start), _end - _start);
@@ -101,9 +100,6 @@ static void __init csky_memblock_init(void)
memblock_set_current_limit(PFN_PHYS(max_low_pfn));
dma_contiguous_reserve(0);
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
}
void __init setup_arch(char **cmdline_p)
diff --git a/arch/hexagon/mm/init.c b/arch/hexagon/mm/init.c
index e2c9487d8d34..07086dbd33fd 100644
--- a/arch/hexagon/mm/init.c
+++ b/arch/hexagon/mm/init.c
@@ -66,20 +66,8 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
}
-/*
- * In order to set up page allocator "nodes",
- * somebody has to call free_area_init() for UMA.
- *
- * In this mode, we only have one pg_data_t
- * structure: contig_mem_data.
- */
static void __init paging_init(void)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn); /* sets up the zonelists and mem_map */
-
/*
* Set the init_mm descriptors "context" value to point to the
* initial kernel segment table's physical address.
diff --git a/arch/loongarch/include/asm/pgtable.h b/arch/loongarch/include/asm/pgtable.h
index f41a648a3d9e..c33b3bcb733e 100644
--- a/arch/loongarch/include/asm/pgtable.h
+++ b/arch/loongarch/include/asm/pgtable.h
@@ -353,8 +353,6 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
return pte;
}
-extern void paging_init(void);
-
#define pte_none(pte) (!(pte_val(pte) & ~_PAGE_GLOBAL))
#define pte_present(pte) (pte_val(pte) & (_PAGE_PRESENT | _PAGE_PROTNONE))
#define pte_no_exec(pte) (pte_val(pte) & _PAGE_NO_EXEC)
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
index 20cb6f306456..708ac025db71 100644
--- a/arch/loongarch/kernel/setup.c
+++ b/arch/loongarch/kernel/setup.c
@@ -621,8 +621,6 @@ void __init setup_arch(char **cmdline_p)
prefill_possible_map();
#endif
- paging_init();
-
#ifdef CONFIG_KASAN
kasan_init();
#endif
diff --git a/arch/loongarch/mm/init.c b/arch/loongarch/mm/init.c
index 17235f87eafb..c331bf69d2ec 100644
--- a/arch/loongarch/mm/init.c
+++ b/arch/loongarch/mm/init.c
@@ -68,14 +68,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
}
-void __init paging_init(void)
-{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
-}
-
void __ref free_initmem(void)
{
free_initmem_default(POISON_FREE_INITMEM);
diff --git a/arch/m68k/mm/init.c b/arch/m68k/mm/init.c
index 6b1d9d2434b5..53b71f786c27 100644
--- a/arch/m68k/mm/init.c
+++ b/arch/m68k/mm/init.c
@@ -69,13 +69,10 @@ void __init paging_init(void)
* page_alloc get different views of the world.
*/
unsigned long end_mem = memory_end & PAGE_MASK;
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
high_memory = (void *) end_mem;
empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
}
#endif /* CONFIG_MMU */
diff --git a/arch/m68k/mm/mcfmmu.c b/arch/m68k/mm/mcfmmu.c
index 24a6f7bbd1ce..3418fd864237 100644
--- a/arch/m68k/mm/mcfmmu.c
+++ b/arch/m68k/mm/mcfmmu.c
@@ -39,7 +39,6 @@ void __init paging_init(void)
pte_t *pg_table;
unsigned long address, size;
unsigned long next_pgtable;
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
int i;
empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
@@ -73,8 +72,6 @@ void __init paging_init(void)
}
current->mm = NULL;
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
}
int cf_tlb_miss(struct pt_regs *regs, int write, int dtlb, int extension_word)
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index d6ccd23caf61..127a3fa69f4c 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -429,7 +429,6 @@ DECLARE_VM_GET_PAGE_PROT
*/
void __init paging_init(void)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
unsigned long min_addr, max_addr;
unsigned long addr;
int i;
@@ -511,12 +510,9 @@ void __init paging_init(void)
set_fc(USER_DATA);
#ifdef DEBUG
- printk ("before free_area_init\n");
+ printk ("before node_set_state\n");
#endif
for (i = 0; i < m68k_num_memory; i++)
if (node_present_pages(i))
node_set_state(i, N_NORMAL_MEMORY);
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
}
diff --git a/arch/m68k/mm/sun3mmu.c b/arch/m68k/mm/sun3mmu.c
index fdd69cc4240c..c801677f7df8 100644
--- a/arch/m68k/mm/sun3mmu.c
+++ b/arch/m68k/mm/sun3mmu.c
@@ -41,7 +41,6 @@ void __init paging_init(void)
unsigned long address;
unsigned long next_pgtable;
unsigned long bootmem_end;
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
unsigned long size;
empty_zero_page = memblock_alloc_or_panic(PAGE_SIZE, PAGE_SIZE);
@@ -80,14 +79,6 @@ void __init paging_init(void)
mmu_emu_init(bootmem_end);
current->mm = NULL;
-
- /* memory sizing is a hack stolen from motorola.c.. hope it works for us */
- arch_zone_limits_init(max_zone_pfn);
-
- /* I really wish I knew why the following change made things better... -- Sam */
- free_area_init(max_zone_pfn);
-
-
}
static const pgprot_t protection_map[16] = {
diff --git a/arch/microblaze/mm/init.c b/arch/microblaze/mm/init.c
index 54da60b81094..848cdee1380c 100644
--- a/arch/microblaze/mm/init.c
+++ b/arch/microblaze/mm/init.c
@@ -69,22 +69,15 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
*/
static void __init paging_init(void)
{
- unsigned long zones_size[MAX_NR_ZONES];
int idx;
/* Setup fixmaps */
for (idx = 0; idx < __end_of_fixed_addresses; idx++)
clear_fixmap(idx);
- /* Clean every zones */
- memset(zones_size, 0, sizeof(zones_size));
-
#ifdef CONFIG_HIGHMEM
highmem_init();
#endif
- arch_zone_limits_init(zones_size);
- /* We don't have holes in memory map */
- free_area_init(zones_size);
}
void __init setup_memory(void)
diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c
index f72a58f87878..2cd95020df08 100644
--- a/arch/mips/loongson64/numa.c
+++ b/arch/mips/loongson64/numa.c
@@ -162,11 +162,7 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
void __init paging_init(void)
{
- unsigned long zones_size[MAX_NR_ZONES] = {0, };
-
pagetable_init();
- arch_zone_limits_init(zones_size);
- free_area_init(zones_size);
}
/* All PCI device belongs to logical Node-0 */
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index ab08249cfede..c479c42141c3 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -417,12 +417,7 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
void __init paging_init(void)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
pagetable_init();
-
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
}
#ifdef CONFIG_64BIT
diff --git a/arch/mips/sgi-ip27/ip27-memory.c b/arch/mips/sgi-ip27/ip27-memory.c
index babeb0e07687..082651facf4f 100644
--- a/arch/mips/sgi-ip27/ip27-memory.c
+++ b/arch/mips/sgi-ip27/ip27-memory.c
@@ -413,9 +413,5 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
void __init paging_init(void)
{
- unsigned long zones_size[MAX_NR_ZONES] = {0, };
-
pagetable_init();
- arch_zone_limits_init(zones_size);
- free_area_init(zones_size);
}
diff --git a/arch/nios2/mm/init.c b/arch/nios2/mm/init.c
index 2cb666a65d9e..6b22f1995c16 100644
--- a/arch/nios2/mm/init.c
+++ b/arch/nios2/mm/init.c
@@ -51,15 +51,9 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
*/
void __init paging_init(void)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
pagetable_init();
pgd_current = swapper_pg_dir;
- arch_zone_limits_init(max_zone_pfn);
- /* pass the memory from the bootmem allocator to the main allocator */
- free_area_init(max_zone_pfn);
-
flush_dcache_range((unsigned long)empty_zero_page,
(unsigned long)empty_zero_page + PAGE_SIZE);
}
diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
index 67de93e7a685..78fb0734cdbc 100644
--- a/arch/openrisc/mm/init.c
+++ b/arch/openrisc/mm/init.c
@@ -47,14 +47,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
}
-static void __init zone_sizes_init(void)
-{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
-}
-
extern const char _s_kernel_ro[], _e_kernel_ro[];
/*
@@ -145,8 +137,6 @@ void __init paging_init(void)
map_ram();
- zone_sizes_init();
-
/* self modifying code ;) */
/* Since the old TLB miss handler has been running up until now,
* the kernel pages are still all RW, so we can still modify the
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index dc5bd3efe738..ce6f09ab7a90 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -698,14 +698,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
max_zone_pfns[ZONE_NORMAL] = PFN_DOWN(memblock_end_of_DRAM());
}
-static void __init parisc_bootmem_free(void)
-{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
-}
-
void __init paging_init(void)
{
setup_bootmem();
@@ -716,7 +708,6 @@ void __init paging_init(void)
flush_tlb_all_local(NULL);
sparse_init();
- parisc_bootmem_free();
}
static void alloc_btlb(unsigned long start, unsigned long end, int *slot,
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 32c496bfab4f..72d4993192a6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -237,7 +237,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
*/
void __init paging_init(void)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
unsigned long long total_ram = memblock_phys_mem_size();
phys_addr_t top_of_ram = memblock_end_of_DRAM();
int zone_dma_bits;
@@ -269,9 +268,6 @@ void __init paging_init(void)
zone_dma_limit = DMA_BIT_MASK(zone_dma_bits);
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
-
mark_nonram_nosave();
}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 97e8661fbcff..79b4792578c4 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -87,14 +87,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
}
-static void __init zone_sizes_init(void)
-{
- unsigned long max_zone_pfns[MAX_NR_ZONES] = { 0, };
-
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
-}
-
#if defined(CONFIG_MMU) && defined(CONFIG_DEBUG_VM)
#define LOG2_SZ_1K ilog2(SZ_1K)
@@ -1443,7 +1435,6 @@ void __init misc_mem_init(void)
/* The entire VMEMMAP region has been populated. Flush TLB for this region */
local_flush_tlb_kernel_range(VMEMMAP_START, VMEMMAP_END);
#endif
- zone_sizes_init();
arch_reserve_crashkernel();
memblock_dump_all();
}
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 1c11ad84dddb..9ec608b5cbb1 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -97,14 +97,9 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
*/
void __init paging_init(void)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
vmem_map_init();
sparse_init();
zone_dma_limit = DMA_BIT_MASK(31);
- memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
}
void mark_rodata_ro(void)
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 5e7e63642611..3edee854b755 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -271,7 +271,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
void __init paging_init(void)
{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
unsigned long vaddr, end;
sh_mv.mv_mem_init();
@@ -325,10 +324,6 @@ void __init paging_init(void)
page_table_range_init(vaddr, end, swapper_pg_dir);
kmap_coherent_init();
-
- memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
}
unsigned int mem_init_done = 0;
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index fbaad449dfc9..931f872ce84a 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -2459,17 +2459,6 @@ void __init paging_init(void)
kernel_physical_mapping_init();
- {
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
- memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-
- max_zone_pfns[ZONE_NORMAL] = end_pfn;
-
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
- }
-
printk("Booting Linux...\n");
}
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 81e90151db90..1b24c5e8d73d 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -970,13 +970,6 @@ void __init srmmu_paging_init(void)
flush_tlb_all();
sparc_context_init(num_contexts);
-
- {
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
- }
}
void mmu_info(struct seq_file *m)
diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 2ac4e9debedd..89c8c8b94a79 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -91,16 +91,11 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
void __init paging_init(void)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
-
empty_zero_page = (unsigned long *) memblock_alloc_low(PAGE_SIZE,
PAGE_SIZE);
if (!empty_zero_page)
panic("%s: Failed to allocate %lu bytes align=%lx\n",
__func__, PAGE_SIZE, PAGE_SIZE);
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
}
/*
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index e7ef605a18d6..e52a262d3207 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -1011,16 +1011,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
#endif
}
-void __init zone_sizes_init(void)
-{
- unsigned long max_zone_pfns[MAX_NR_ZONES];
-
- memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-
- arch_zone_limits_init(max_zone_pfns);
- free_area_init(max_zone_pfns);
-}
-
__visible DEFINE_PER_CPU_ALIGNED(struct tlb_state, cpu_tlbstate) = {
.loaded_mm = &init_mm,
.next_asid = 1,
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index 8a34fff6ab2b..b55172118c91 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -655,7 +655,6 @@ void __init paging_init(void)
*/
olpc_dt_build_devicetree();
sparse_init();
- zone_sizes_init();
}
/*
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 9983017ecbe0..4daa40071c9f 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -843,8 +843,6 @@ void __init paging_init(void)
*/
node_clear_state(0, N_MEMORY);
node_clear_state(0, N_NORMAL_MEMORY);
-
- zone_sizes_init();
}
#define PAGE_UNUSED 0xFD
diff --git a/arch/x86/mm/mm_internal.h b/arch/x86/mm/mm_internal.h
index 097aadc250f7..7c4a41235323 100644
--- a/arch/x86/mm/mm_internal.h
+++ b/arch/x86/mm/mm_internal.h
@@ -17,7 +17,6 @@ unsigned long kernel_physical_mapping_init(unsigned long start,
unsigned long kernel_physical_mapping_change(unsigned long start,
unsigned long end,
unsigned long page_size_mask);
-void zone_sizes_init(void);
extern int after_bootmem;
diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
index 60299f359a3c..fe83a68335da 100644
--- a/arch/xtensa/mm/init.c
+++ b/arch/xtensa/mm/init.c
@@ -126,10 +126,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
void __init zones_init(void)
{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0, };
-
- arch_zone_limits_init(max_zone_pfn);
- free_area_init(max_zone_pfn);
print_vm_layout();
}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 628c0e0ac313..64d6f9c15ef1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -45,6 +45,7 @@ struct pt_regs;
struct folio_batch;
void arch_mm_preinit(void);
+void mm_core_init_early(void);
void mm_core_init(void);
void init_mm_internals(void);
@@ -3536,7 +3537,7 @@ static inline unsigned long get_num_physpages(void)
}
/*
- * Using memblock node mappings, an architecture may initialise its
+ * FIXME: Using memblock node mappings, an architecture may initialise its
* zones, allocate the backing mem_map and account for memory holes in an
* architecture independent manner.
*
@@ -3551,7 +3552,6 @@ static inline unsigned long get_num_physpages(void)
* memblock_add_node(base, size, nid, MEMBLOCK_NONE)
* free_area_init(max_zone_pfns);
*/
-void free_area_init(unsigned long *max_zone_pfn);
void arch_zone_limits_init(unsigned long *max_zone_pfn);
unsigned long node_map_pfn_alignment(void);
extern unsigned long absent_pages_in_range(unsigned long start_pfn,
diff --git a/init/main.c b/init/main.c
index b84818ad9685..445b5643ecec 100644
--- a/init/main.c
+++ b/init/main.c
@@ -1025,6 +1025,7 @@ void start_kernel(void)
page_address_init();
pr_notice("%s", linux_banner);
setup_arch(&command_line);
+ mm_core_init_early();
/* Static keys and static calls are needed by LSMs */
jump_label_init();
static_call_init();
diff --git a/mm/mm_init.c b/mm/mm_init.c
index fc2a6f1e518f..43ef7a3501b9 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1808,30 +1808,14 @@ static void __init set_high_memory(void)
high_memory = phys_to_virt(highmem - 1) + 1;
}
-/**
- * free_area_init - Initialise all pg_data_t and zone data
- * @max_zone_pfn: an array of max PFNs for each zone
- *
- * This will call free_area_init_node() for each active node in the system.
- * Using the page ranges provided by memblock_set_node(), the size of each
- * zone in each node and their holes is calculated. If the maximum PFN
- * between two adjacent zones match, it is assumed that the zone is empty.
- * For example, if arch_max_dma_pfn == arch_max_dma32_pfn, it is assumed
- * that arch_max_dma32_pfn has no pages. It is also assumed that a zone
- * starts where the previous one ended. For example, ZONE_DMA32 starts
- * at arch_max_dma_pfn.
- */
-void __init free_area_init(unsigned long *max_zone_pfn)
+static void __init zone_limits_init(void)
{
+ unsigned long max_zone_pfn[MAX_NR_ZONES] = { 0 };
unsigned long start_pfn, end_pfn;
- int i, nid, zone;
+ int i, zone, nid;
bool descending;
- /* Record where the zone boundaries are */
- memset(arch_zone_lowest_possible_pfn, 0,
- sizeof(arch_zone_lowest_possible_pfn));
- memset(arch_zone_highest_possible_pfn, 0,
- sizeof(arch_zone_highest_possible_pfn));
+ arch_zone_limits_init(max_zone_pfn);
start_pfn = PHYS_PFN(memblock_start_of_DRAM());
descending = arch_has_descending_max_zone_pfns();
@@ -1882,15 +1866,45 @@ void __init free_area_init(unsigned long *max_zone_pfn)
}
/*
- * Print out the early node map, and initialize the
- * subsection-map relative to active online memory ranges to
- * enable future "sub-section" extensions of the memory map.
+ * Print out the early node map, and initialize the N_MEMORY nodes
+ * bitmask.
*/
pr_info("Early memory node ranges\n");
for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
pr_info(" node %3d: [mem %#018Lx-%#018Lx]\n", nid,
(u64)start_pfn << PAGE_SHIFT,
((u64)end_pfn << PAGE_SHIFT) - 1);
+ node_set_state(nid, N_MEMORY);
+ }
+
+ calc_nr_kernel_pages();
+ /* disable hash distribution for systems with a single node */
+ fixup_hashdist();
+ set_high_memory();
+}
+
+/**
+ * free_area_init - Initialise all pg_data_t and zone data
+ *
+ * This will call free_area_init_node() for each active node in the system.
+ * Using the page ranges provided by memblock_set_node(), the size of each
+ * zone in each node and their holes is calculated. If the maximum PFN
+ * between two adjacent zones match, it is assumed that the zone is empty.
+ * For example, if arch_max_dma_pfn == arch_max_dma32_pfn, it is assumed
+ * that arch_max_dma32_pfn has no pages. It is also assumed that a zone
+ * starts where the previous one ended. For example, ZONE_DMA32 starts
+ * at arch_max_dma_pfn.
+ */
+static void __init free_area_init(void)
+{
+ unsigned long start_pfn, end_pfn;
+ int i, nid;
+
+ /*
+ * Initialize the subsection-map relative to active online memory
+ * ranges to enable future "sub-section" extensions of the memory map.
+ */
+ for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
subsection_map_init(start_pfn, end_pfn - start_pfn);
}
@@ -1909,29 +1923,22 @@ void __init free_area_init(unsigned long *max_zone_pfn)
free_area_init_node(nid);
/*
- * No sysfs hierarchy will be created via register_node()
- *for memory-less node because here it's not marked as N_MEMORY
- *and won't be set online later. The benefit is userspace
- *program won't be confused by sysfs files/directories of
- *memory-less node. The pgdat will get fully initialized by
- *hotadd_init_pgdat() when memory is hotplugged into this node.
+ * No sysfs hierarchy will be created via register_node() for
+ * memory-less node because here it's not marked as N_MEMORY
+ * and won't be set online later. The benefit is userspace
+ * program won't be confused by sysfs files/directories of
+ * memory-less node. The pgdat will get fully initialized by
+ * hotadd_init_pgdat() when memory is hotplugged into this
+ * node.
*/
- if (pgdat->node_present_pages) {
- node_set_state(nid, N_MEMORY);
+ if (pgdat->node_present_pages)
check_for_memory(pgdat);
- }
}
for_each_node_state(nid, N_MEMORY)
sparse_vmemmap_init_nid_late(nid);
- calc_nr_kernel_pages();
memmap_init();
-
- /* disable hash distribution for systems with a single node */
- fixup_hashdist();
-
- set_high_memory();
}
/**
@@ -2681,6 +2688,11 @@ void __init __weak mem_init(void)
{
}
+void __init mm_core_init_early(void)
+{
+ zone_limits_init();
+}
+
/*
* Set up kernel memory allocators
*/
@@ -2689,6 +2701,8 @@ void __init mm_core_init(void)
arch_mm_preinit();
hugetlb_bootmem_alloc();
+ free_area_init();
+
/* Initializations relying on SMP setup */
BUILD_BUG_ON(MAX_ZONELISTS > 2);
build_all_zonelists(NULL);
--
2.51.0
* [PATCH 23/28] arch, mm: consolidate initialization of SPARSE memory model
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Every architecture calls sparse_init() during setup_arch() although the
data structures created by sparse_init() are not used until the
initialization of the core MM.
Besides the code duplication, calling sparse_init() from
architecture-specific code causes the ordering of vmemmap and HVO
initialization to differ between architectures.
Move the call to sparse_init() from architecture-specific code to
mm_core_init() to ensure that the vmemmap and HVO initialization order is
always the same.
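A simplified view of the resulting order inside the core, based on the
mm/mm_init.c hunk below:

	static void __init free_area_init(void)
	{
		sparse_init();	/* memory sections and vmemmap first */

		/* subsection maps, per-node init, memmap_init(), ... */
	}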
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
Documentation/mm/memory-model.rst | 3 ---
Documentation/translations/zh_CN/mm/memory-model.rst | 2 --
arch/alpha/kernel/setup.c | 1 -
arch/arm/mm/init.c | 6 ------
arch/arm64/mm/init.c | 6 ------
arch/csky/kernel/setup.c | 2 --
arch/loongarch/kernel/setup.c | 8 --------
arch/mips/kernel/setup.c | 11 -----------
arch/parisc/mm/init.c | 2 --
arch/powerpc/include/asm/setup.h | 4 ++++
arch/powerpc/mm/mem.c | 5 -----
arch/powerpc/mm/numa.c | 2 --
arch/riscv/mm/init.c | 1 -
arch/s390/mm/init.c | 1 -
arch/sh/mm/init.c | 2 --
arch/sparc/mm/init_64.c | 2 --
arch/x86/mm/init_32.c | 1 -
arch/x86/mm/init_64.c | 2 --
include/linux/mmzone.h | 2 --
mm/internal.h | 6 ++++++
mm/mm_init.c | 2 ++
21 files changed, 12 insertions(+), 59 deletions(-)
diff --git a/Documentation/mm/memory-model.rst b/Documentation/mm/memory-model.rst
index 7957122039e8..199b11328f4f 100644
--- a/Documentation/mm/memory-model.rst
+++ b/Documentation/mm/memory-model.rst
@@ -97,9 +97,6 @@ sections:
`mem_section` objects and the number of rows is calculated to fit
all the memory sections.
-The architecture setup code should call sparse_init() to
-initialize the memory sections and the memory maps.
-
With SPARSEMEM there are two possible ways to convert a PFN to the
corresponding `struct page` - a "classic sparse" and "sparse
vmemmap". The selection is made at build time and it is determined by
diff --git a/Documentation/translations/zh_CN/mm/memory-model.rst b/Documentation/translations/zh_CN/mm/memory-model.rst
index 77ec149a970c..c0c5d8ecd880 100644
--- a/Documentation/translations/zh_CN/mm/memory-model.rst
+++ b/Documentation/translations/zh_CN/mm/memory-model.rst
@@ -83,8 +83,6 @@ SPARSEMEM模型将物理内存显示为一个部分的集合。一个区段用me
每一行包含价值 `PAGE_SIZE` 的 `mem_section` 对象,行数的计算是为了适应所有的
内存区。
-架构设置代码应该调用sparse_init()来初始化内存区和内存映射。
-
通过SPARSEMEM,有两种可能的方式将PFN转换为相应的 `struct page` --"classic sparse"和
"sparse vmemmap"。选择是在构建时进行的,它由 `CONFIG_SPARSEMEM_VMEMMAP` 的
值决定。
diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
index bebdffafaee8..f0af444a69a4 100644
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -607,7 +607,6 @@ setup_arch(char **cmdline_p)
/* Find our memory. */
setup_memory(kernel_end);
memblock_set_bottom_up(true);
- sparse_init();
/* First guess at cpu cache sizes. Do this before init_arch. */
determine_cpu_caches(cpu->type);
diff --git a/arch/arm/mm/init.c b/arch/arm/mm/init.c
index a8f7b4084715..0cc1bf04686d 100644
--- a/arch/arm/mm/init.c
+++ b/arch/arm/mm/init.c
@@ -207,12 +207,6 @@ void __init bootmem_init(void)
early_memtest((phys_addr_t)min_low_pfn << PAGE_SHIFT,
(phys_addr_t)max_low_pfn << PAGE_SHIFT);
-
- /*
- * sparse_init() tries to allocate memory from memblock, so must be
- * done after the fixed reservations
- */
- sparse_init();
}
/*
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 3641e88ea871..9d271aff7652 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -321,12 +321,6 @@ void __init bootmem_init(void)
#endif
kvm_hyp_reserve();
-
- /*
- * sparse_init() tries to allocate memory from memblock, so must be
- * done after the fixed reservations
- */
- sparse_init();
dma_limits_init();
/*
diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
index 4bf3c01ead3a..45c98dcf7f50 100644
--- a/arch/csky/kernel/setup.c
+++ b/arch/csky/kernel/setup.c
@@ -123,8 +123,6 @@ void __init setup_arch(char **cmdline_p)
setup_smp();
#endif
- sparse_init();
-
fixaddr_init();
#ifdef CONFIG_HIGHMEM
diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
index 708ac025db71..d6a1ff0e16f1 100644
--- a/arch/loongarch/kernel/setup.c
+++ b/arch/loongarch/kernel/setup.c
@@ -402,14 +402,6 @@ static void __init arch_mem_init(char **cmdline_p)
check_kernel_sections_mem();
- /*
- * In order to reduce the possibility of kernel panic when failed to
- * get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate
- * low memory as small as possible before swiotlb_init(), so make
- * sparse_init() using top-down allocation.
- */
- memblock_set_bottom_up(false);
- sparse_init();
memblock_set_bottom_up(true);
swiotlb_init(true, SWIOTLB_VERBOSE);
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 11b9b6b63e19..d36d89d01fa4 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -614,7 +614,6 @@ static void __init bootcmdline_init(void)
* kernel but generic memory management system is still entirely uninitialized.
*
* o bootmem_init()
- * o sparse_init()
* o paging_init()
* o dma_contiguous_reserve()
*
@@ -665,16 +664,6 @@ static void __init arch_mem_init(char **cmdline_p)
mips_parse_crashkernel();
device_tree_init();
- /*
- * In order to reduce the possibility of kernel panic when failed to
- * get IO TLB memory under CONFIG_SWIOTLB, it is better to allocate
- * low memory as small as possible before plat_swiotlb_setup(), so
- * make sparse_init() using top-down allocation.
- */
- memblock_set_bottom_up(false);
- sparse_init();
- memblock_set_bottom_up(true);
-
plat_swiotlb_setup();
dma_contiguous_reserve(PFN_PHYS(max_low_pfn));
diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
index ce6f09ab7a90..6a39e031e5ff 100644
--- a/arch/parisc/mm/init.c
+++ b/arch/parisc/mm/init.c
@@ -706,8 +706,6 @@ void __init paging_init(void)
fixmap_init();
flush_cache_all_local(); /* start with known state */
flush_tlb_all_local(NULL);
-
- sparse_init();
}
static void alloc_btlb(unsigned long start, unsigned long end, int *slot,
diff --git a/arch/powerpc/include/asm/setup.h b/arch/powerpc/include/asm/setup.h
index 50a92b24628d..6d60ea4868ab 100644
--- a/arch/powerpc/include/asm/setup.h
+++ b/arch/powerpc/include/asm/setup.h
@@ -20,7 +20,11 @@ extern void reloc_got2(unsigned long);
void check_for_initrd(void);
void mem_topology_setup(void);
+#ifdef CONFIG_NUMA
void initmem_init(void);
+#else
+static inline void initmem_init(void) {}
+#endif
void setup_panic(void);
#define ARCH_PANIC_TIMEOUT 180
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 72d4993192a6..30f56d601e56 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -182,11 +182,6 @@ void __init mem_topology_setup(void)
memblock_set_node(0, PHYS_ADDR_MAX, &memblock.memory, 0);
}
-void __init initmem_init(void)
-{
- sparse_init();
-}
-
/* mark pages that don't exist as nosave */
static int __init mark_nonram_nosave(void)
{
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 603a0f652ba6..f4cf3ae036de 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1213,8 +1213,6 @@ void __init initmem_init(void)
setup_node_data(nid, start_pfn, end_pfn);
}
- sparse_init();
-
/*
* We need the numa_cpu_lookup_table to be accurate for all CPUs,
* even before we online them, so that we can use cpu_to_{node,mem}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 79b4792578c4..11ac4041afc0 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -1430,7 +1430,6 @@ void __init misc_mem_init(void)
{
early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
arch_numa_init();
- sparse_init();
#ifdef CONFIG_SPARSEMEM_VMEMMAP
/* The entire VMEMMAP region has been populated. Flush TLB for this region */
local_flush_tlb_kernel_range(VMEMMAP_START, VMEMMAP_END);
diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
index 9ec608b5cbb1..3c20475cbee2 100644
--- a/arch/s390/mm/init.c
+++ b/arch/s390/mm/init.c
@@ -98,7 +98,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
void __init paging_init(void)
{
vmem_map_init();
- sparse_init();
zone_dma_limit = DMA_BIT_MASK(31);
}
diff --git a/arch/sh/mm/init.c b/arch/sh/mm/init.c
index 3edee854b755..464a3a63e2fa 100644
--- a/arch/sh/mm/init.c
+++ b/arch/sh/mm/init.c
@@ -227,8 +227,6 @@ static void __init do_init_bootmem(void)
node_set_online(0);
plat_mem_setup();
-
- sparse_init();
}
static void __init early_reserve_mem(void)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 931f872ce84a..4f7bdb18774b 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -1615,8 +1615,6 @@ static unsigned long __init bootmem_init(unsigned long phys_base)
/* XXX cpu notifier XXX */
- sparse_init();
-
return end_pfn;
}
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index b55172118c91..0908c44d51e6 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -654,7 +654,6 @@ void __init paging_init(void)
* NOTE: at this point the bootmem allocator is fully available.
*/
olpc_dt_build_devicetree();
- sparse_init();
}
/*
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4daa40071c9f..df2261fa4f98 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -833,8 +833,6 @@ void __init initmem_init(void)
void __init paging_init(void)
{
- sparse_init();
-
/*
* clear the default setting with node 0
* note: don't use nodes_clear here, that is really clearing when
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 75ef7c9f9307..6a7db0fee54a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -2285,9 +2285,7 @@ static inline unsigned long next_present_section_nr(unsigned long section_nr)
#define pfn_to_nid(pfn) (0)
#endif
-void sparse_init(void);
#else
-#define sparse_init() do {} while (0)
#define sparse_index_init(_sec, _nid) do {} while (0)
#define sparse_vmemmap_init_nid_early(_nid) do {} while (0)
#define sparse_vmemmap_init_nid_late(_nid) do {} while (0)
diff --git a/mm/internal.h b/mm/internal.h
index e430da900430..dc5316c68664 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -860,6 +860,12 @@ void memmap_init_range(unsigned long, int, unsigned long, unsigned long,
unsigned long, enum meminit_context, struct vmem_altmap *, int,
bool);
+#ifdef CONFIG_SPARSEMEM
+void sparse_init(void);
+#else
+static inline void sparse_init(void) {}
+#endif /* CONFIG_SPARSEMEM */
+
#if defined CONFIG_COMPACTION || defined CONFIG_CMA
/*
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 43ef7a3501b9..027d53073393 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1900,6 +1900,8 @@ static void __init free_area_init(void)
unsigned long start_pfn, end_pfn;
int i, nid;
+ sparse_init();
+
/*
* Initialize the subsection-map relative to active online memory
* ranges to enable future "sub-section" extensions of the memory map.
--
2.51.0
* [PATCH 24/28] mips: drop paging_init()
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
All three variants of paging_init() on MIPS are wrappers for
pagetable_init().
Instead of having three identical wrappers, call pagetable_init() directly
from setup_arch() and remove the unnecessary paging_init() functions.
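For reference, each of the three removed definitions was the same trivial
wrapper, which is why setup_arch() can simply call pagetable_init()
directly:

	/* removed from loongson64/numa.c, mm/init.c and sgi-ip27/ip27-memory.c */
	void __init paging_init(void)
	{
		pagetable_init();
	}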
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/mips/include/asm/pgalloc.h | 2 --
arch/mips/include/asm/pgtable.h | 2 +-
arch/mips/kernel/setup.c | 4 ++--
arch/mips/loongson64/numa.c | 5 -----
arch/mips/mm/init.c | 5 -----
arch/mips/sgi-ip27/ip27-memory.c | 5 -----
6 files changed, 3 insertions(+), 20 deletions(-)
diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h
index 7a04381efa0b..6efd4a58bf10 100644
--- a/arch/mips/include/asm/pgalloc.h
+++ b/arch/mips/include/asm/pgalloc.h
@@ -101,6 +101,4 @@ static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
#endif /* __PAGETABLE_PUD_FOLDED */
-extern void pagetable_init(void);
-
#endif /* _ASM_PGALLOC_H */
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 9c06a612d33a..fa7b935f947c 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -56,7 +56,7 @@ extern unsigned long zero_page_mask;
(virt_to_page((void *)(empty_zero_page + (((unsigned long)(vaddr)) & zero_page_mask))))
#define __HAVE_COLOR_ZERO_PAGE
-extern void paging_init(void);
+extern void pagetable_init(void);
/*
* Conversion functions: convert a page and protection to a page entry,
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index d36d89d01fa4..7622aad0f0b3 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -614,7 +614,7 @@ static void __init bootcmdline_init(void)
* kernel but generic memory management system is still entirely uninitialized.
*
* o bootmem_init()
- * o paging_init()
+ * o pagetable_init()
* o dma_contiguous_reserve()
*
* At this stage the bootmem allocator is ready to use.
@@ -778,7 +778,7 @@ void __init setup_arch(char **cmdline_p)
prefill_possible_map();
cpu_cache_init();
- paging_init();
+ pagetable_init();
memblock_dump_all();
diff --git a/arch/mips/loongson64/numa.c b/arch/mips/loongson64/numa.c
index 2cd95020df08..16ffb32cca50 100644
--- a/arch/mips/loongson64/numa.c
+++ b/arch/mips/loongson64/numa.c
@@ -160,11 +160,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
}
-void __init paging_init(void)
-{
- pagetable_init();
-}
-
/* All PCI device belongs to logical Node-0 */
int pcibus_to_node(struct pci_bus *bus)
{
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index c479c42141c3..cd04200d0573 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -415,11 +415,6 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
#endif
}
-void __init paging_init(void)
-{
- pagetable_init();
-}
-
#ifdef CONFIG_64BIT
static struct kcore_list kcore_kseg0;
#endif
diff --git a/arch/mips/sgi-ip27/ip27-memory.c b/arch/mips/sgi-ip27/ip27-memory.c
index 082651facf4f..4317f5ae1fd1 100644
--- a/arch/mips/sgi-ip27/ip27-memory.c
+++ b/arch/mips/sgi-ip27/ip27-memory.c
@@ -410,8 +410,3 @@ void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
{
max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
}
-
-void __init paging_init(void)
-{
- pagetable_init();
-}
--
2.51.0
* [PATCH 25/28] x86: don't reserve hugetlb memory in setup_arch()
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Commit 665eaf313314 ("x86/setup: call hugetlb_bootmem_alloc early")
added an early call to hugetlb_bootmem_alloc() to setup_arch() to allow
HVO-style pre-initialization of the vmemmap on x86.
With the ordering of hugetlb reservation vs memory map initialization
sorted out in core MM, this no longer needs to be an architecture-specific
quirk.
Drop the call to hugetlb_bootmem_alloc() from x86::setup_arch().
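For reference, the generic path that now orders the early reservation
looks roughly like this (a simplified sketch of mm_core_init() as it
stands after the earlier patches, not verbatim kernel code):

	void __init mm_core_init(void)
	{
		arch_mm_preinit();
		hugetlb_bootmem_alloc();	/* before the memory map is built */
		free_area_init();		/* calls sparse_init(), see patch 23 */
		/* ... */
	}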
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/x86/kernel/setup.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1b2edd07a3e1..e2318fa9b1bb 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1191,7 +1191,6 @@ void __init setup_arch(char **cmdline_p)
if (boot_cpu_has(X86_FEATURE_GBPAGES)) {
hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
- hugetlb_bootmem_alloc();
}
/*
--
2.51.0
* [PATCH 26/28] mm, arch: consolidate hugetlb CMA reservation
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
Every architecture that supports the hugetlb_cma command line parameter
reserves CMA areas for hugetlb during setup_arch().
This obfuscates the ordering of hugetlb CMA initialization with respect to
the rest of core MM initialization.
Introduce an arch_hugetlb_cma_order() callback to allow architectures to
report the desired order-per-bit of the CMA areas and provide a weak
implementation of arch_hugetlb_cma_order() for architectures that don't
support hugetlb with CMA.
Use this callback in hugetlb_cma_reserve() instead of passing the order as
a parameter, and call hugetlb_cma_reserve() from mm_core_init() rather than
have it spread over architecture-specific code.
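The mechanism is the usual weak-default-plus-arch-override pattern; in
condensed form (simplified from the diffs below):

	/* mm/hugetlb_cma.c: default for architectures without hugetlb CMA */
	unsigned int __weak arch_hugetlb_cma_order(void)
	{
		return 0;	/* order 0 means "not supported" */
	}

	/* arch/x86/mm/hugetlbpage.c: an architecture overrides the default */
	unsigned int __init arch_hugetlb_cma_order(void)
	{
		if (boot_cpu_has(X86_FEATURE_GBPAGES))
			return PUD_SHIFT - PAGE_SHIFT;	/* 1G pages */

		return 0;
	}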
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
.../driver-api/cxl/linux/early-boot.rst | 2 +-
arch/arm64/include/asm/hugetlb.h | 2 --
arch/arm64/mm/hugetlbpage.c | 10 +++-------
arch/arm64/mm/init.c | 9 ---------
arch/powerpc/include/asm/hugetlb.h | 5 -----
arch/powerpc/kernel/setup-common.c | 1 -
arch/powerpc/mm/hugetlbpage.c | 11 ++++-------
arch/riscv/mm/hugetlbpage.c | 8 ++++++++
arch/riscv/mm/init.c | 2 --
arch/s390/kernel/setup.c | 2 --
arch/s390/mm/hugetlbpage.c | 8 ++++++++
arch/x86/kernel/setup.c | 4 ----
arch/x86/mm/hugetlbpage.c | 8 ++++++++
include/linux/hugetlb.h | 6 ++++--
mm/hugetlb_cma.c | 19 ++++++++++++++-----
mm/mm_init.c | 2 ++
16 files changed, 52 insertions(+), 47 deletions(-)
diff --git a/Documentation/driver-api/cxl/linux/early-boot.rst b/Documentation/driver-api/cxl/linux/early-boot.rst
index a7fc6fc85fbe..414481f33819 100644
--- a/Documentation/driver-api/cxl/linux/early-boot.rst
+++ b/Documentation/driver-api/cxl/linux/early-boot.rst
@@ -125,7 +125,7 @@ The contiguous memory allocator (CMA) enables reservation of contiguous memory
regions on NUMA nodes during early boot. However, CMA cannot reserve memory
on NUMA nodes that are not online during early boot. ::
- void __init hugetlb_cma_reserve(int order) {
+ void __init hugetlb_cma_reserve(void) {
if (!node_online(nid))
/* do not allow reservations */
}
diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
index 44c1f757bfcf..e6f8ff3cc630 100644
--- a/arch/arm64/include/asm/hugetlb.h
+++ b/arch/arm64/include/asm/hugetlb.h
@@ -56,8 +56,6 @@ extern void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
#define __HAVE_ARCH_HUGE_PTEP_GET
extern pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
-void __init arm64_hugetlb_cma_reserve(void);
-
#define huge_ptep_modify_prot_start huge_ptep_modify_prot_start
extern pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep);
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 1d90a7e75333..f8dd58ab67a8 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -36,16 +36,12 @@
* huge pages could still be served from those areas.
*/
#ifdef CONFIG_CMA
-void __init arm64_hugetlb_cma_reserve(void)
+unsigned int arch_hugetlb_cma_order(void)
{
- int order;
-
if (pud_sect_supported())
- order = PUD_SHIFT - PAGE_SHIFT;
- else
- order = CONT_PMD_SHIFT - PAGE_SHIFT;
+ return PUD_SHIFT - PAGE_SHIFT;
- hugetlb_cma_reserve(order);
+ return CONT_PMD_SHIFT - PAGE_SHIFT;
}
#endif /* CONFIG_CMA */
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9d271aff7652..96711b8578fd 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -311,15 +311,6 @@ void __init bootmem_init(void)
arch_numa_init();
- /*
- * must be done after arch_numa_init() which calls numa_init() to
- * initialize node_online_map that gets used in hugetlb_cma_reserve()
- * while allocating required CMA size across online nodes.
- */
-#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
- arm64_hugetlb_cma_reserve();
-#endif
-
kvm_hyp_reserve();
dma_limits_init();
diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index 86326587e58d..6d32a4299445 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -68,7 +68,6 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep,
pte_t pte, int dirty);
-void gigantic_hugetlb_cma_reserve(void) __init;
#include <asm-generic/hugetlb.h>
#else /* ! CONFIG_HUGETLB_PAGE */
@@ -77,10 +76,6 @@ static inline void flush_hugetlb_page(struct vm_area_struct *vma,
{
}
-static inline void __init gigantic_hugetlb_cma_reserve(void)
-{
-}
-
static inline void __init hugetlbpage_init_defaultsize(void)
{
}
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index c8c42b419742..cb5b73adc250 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -1003,7 +1003,6 @@ void __init setup_arch(char **cmdline_p)
fadump_cma_init();
kdump_cma_reserve();
kvm_cma_reserve();
- gigantic_hugetlb_cma_reserve();
early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index d3c1b749dcfc..558fafb82b8a 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -200,18 +200,15 @@ static int __init hugetlbpage_init(void)
arch_initcall(hugetlbpage_init);
-void __init gigantic_hugetlb_cma_reserve(void)
+unsigned int __init arch_hugetlb_cma_order(void)
{
- unsigned long order = 0;
-
if (radix_enabled())
- order = PUD_SHIFT - PAGE_SHIFT;
+ return PUD_SHIFT - PAGE_SHIFT;
else if (!firmware_has_feature(FW_FEATURE_LPAR) && mmu_psize_defs[MMU_PAGE_16G].shift)
/*
* For pseries we do use ibm,expected#pages for reserving 16G pages.
*/
- order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;
+ return mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;
- if (order)
- hugetlb_cma_reserve(order);
+ return 0;
}
diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
index 375dd96bb4a0..a6d217112cf4 100644
--- a/arch/riscv/mm/hugetlbpage.c
+++ b/arch/riscv/mm/hugetlbpage.c
@@ -447,3 +447,11 @@ static __init int gigantic_pages_init(void)
}
arch_initcall(gigantic_pages_init);
#endif
+
+unsigned int __init arch_hugetlb_cma_order(void)
+{
+ if (IS_ENABLED(CONFIG_64BIT))
+ return PUD_SHIFT - PAGE_SHIFT;
+
+ return 0;
+}
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 11ac4041afc0..848efeb9e163 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -311,8 +311,6 @@ static void __init setup_bootmem(void)
memblock_reserve(dtb_early_pa, fdt_totalsize(dtb_early_va));
dma_contiguous_reserve(dma32_phys_limit);
- if (IS_ENABLED(CONFIG_64BIT))
- hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
}
#ifdef CONFIG_RELOCATABLE
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index c1fe0b53c5ac..b60284328fe3 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -963,8 +963,6 @@ void __init setup_arch(char **cmdline_p)
setup_uv();
dma_contiguous_reserve(ident_map_size);
vmcp_cma_reserve();
- if (cpu_has_edat2())
- hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
reserve_crashkernel();
#ifdef CONFIG_CRASH_DUMP
diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
index d42e61c7594e..d93417d1e53c 100644
--- a/arch/s390/mm/hugetlbpage.c
+++ b/arch/s390/mm/hugetlbpage.c
@@ -255,3 +255,11 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
else
return false;
}
+
+unsigned int __init arch_hugetlb_cma_order(void)
+{
+ if (cpu_has_edat2())
+ return PUD_SHIFT - PAGE_SHIFT;
+
+ return 0;
+}
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index e2318fa9b1bb..e1efe3975aa0 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -1189,10 +1189,6 @@ void __init setup_arch(char **cmdline_p)
initmem_init();
dma_contiguous_reserve(max_pfn_mapped << PAGE_SHIFT);
- if (boot_cpu_has(X86_FEATURE_GBPAGES)) {
- hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
- }
-
/*
* Reserve memory for crash kernel after SRAT is parsed so that it
* won't consume hotpluggable memory.
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 58f7f2bd535d..3b26621c9128 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -42,3 +42,11 @@ static __init int gigantic_pages_init(void)
arch_initcall(gigantic_pages_init);
#endif
#endif
+
+unsigned int __init arch_hugetlb_cma_order(void)
+{
+ if (boot_cpu_has(X86_FEATURE_GBPAGES))
+ return PUD_SHIFT - PAGE_SHIFT;
+
+ return 0;
+}
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 019a1c5281e4..08fc332e88a7 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -279,6 +279,8 @@ void fixup_hugetlb_reservations(struct vm_area_struct *vma);
void hugetlb_split(struct vm_area_struct *vma, unsigned long addr);
int hugetlb_vma_lock_alloc(struct vm_area_struct *vma);
+unsigned int arch_hugetlb_cma_order(void);
+
#else /* !CONFIG_HUGETLB_PAGE */
static inline void hugetlb_dup_vma_private(struct vm_area_struct *vma)
@@ -1316,9 +1318,9 @@ static inline spinlock_t *huge_pte_lock(struct hstate *h,
}
#if defined(CONFIG_HUGETLB_PAGE) && defined(CONFIG_CMA)
-extern void __init hugetlb_cma_reserve(int order);
+extern void __init hugetlb_cma_reserve(void);
#else
-static inline __init void hugetlb_cma_reserve(int order)
+static inline __init void hugetlb_cma_reserve(void)
{
}
#endif
diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index e8e4dc7182d5..b1eb5998282c 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -134,12 +134,24 @@ static int __init cmdline_parse_hugetlb_cma_only(char *p)
early_param("hugetlb_cma_only", cmdline_parse_hugetlb_cma_only);
-void __init hugetlb_cma_reserve(int order)
+unsigned int __weak arch_hugetlb_cma_order(void)
{
- unsigned long size, reserved, per_node;
+ return 0;
+}
+
+void __init hugetlb_cma_reserve(void)
+{
+ unsigned long size, reserved, per_node, order;
bool node_specific_cma_alloc = false;
int nid;
+ if (!hugetlb_cma_size)
+ return;
+
+ order = arch_hugetlb_cma_order();
+ if (!order)
+ return;
+
/*
* HugeTLB CMA reservation is required for gigantic
* huge pages which could not be allocated via the
@@ -149,9 +161,6 @@ void __init hugetlb_cma_reserve(int order)
VM_WARN_ON(order <= MAX_PAGE_ORDER);
cma_reserve_called = true;
- if (!hugetlb_cma_size)
- return;
-
hugetlb_bootmem_set_nodes();
for (nid = 0; nid < MAX_NUMNODES; nid++) {
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 027d53073393..11491e455d17 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -2701,6 +2701,8 @@ void __init mm_core_init_early(void)
void __init mm_core_init(void)
{
arch_mm_preinit();
+
+ hugetlb_cma_reserve();
hugetlb_bootmem_alloc();
free_area_init();
--
2.51.0
* [PATCH 27/28] mm/hugetlb: drop hugetlb_cma_check()
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
hugetlb_cma_check() was required when the ordering of hugetlb_cma_reserve()
and hugetlb_bootmem_alloc() was architecture dependent.
Since hugetlb_cma_reserve() is now always called before hugetlb_bootmem_alloc(),
there is no need to check whether hugetlb_cma_reserve() was already called.
Drop the unneeded hugetlb_cma_check() function.
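After this change the unsupported-architecture warning is issued directly
on the reservation path, which is now deterministic; roughly (condensed
from the hunks below, not a literal listing):

	void __init hugetlb_cma_reserve(void)
	{
		unsigned long order;

		if (!hugetlb_cma_size)	/* no hugetlb_cma= on the command line */
			return;

		order = arch_hugetlb_cma_order();
		if (!order) {
			/* replaces the late check in hugetlb_cma_check() */
			pr_warn("hugetlb_cma: the option isn't supported by current arch\n");
			return;
		}

		/* ... reserve the per-node CMA areas ... */
	}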
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
mm/hugetlb.c | 1 -
mm/hugetlb_cma.c | 16 +++-------------
mm/hugetlb_cma.h | 5 -----
3 files changed, 3 insertions(+), 19 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 51273baec9e5..82b322ae3fdc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4159,7 +4159,6 @@ static int __init hugetlb_init(void)
}
}
- hugetlb_cma_check();
hugetlb_init_hstates();
gather_bootmem_prealloc();
report_hugepages();
diff --git a/mm/hugetlb_cma.c b/mm/hugetlb_cma.c
index b1eb5998282c..f5e79103e110 100644
--- a/mm/hugetlb_cma.c
+++ b/mm/hugetlb_cma.c
@@ -85,9 +85,6 @@ hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid, bool node_exact)
return m;
}
-
-static bool cma_reserve_called __initdata;
-
static int __init cmdline_parse_hugetlb_cma(char *p)
{
int nid, count = 0;
@@ -149,8 +146,10 @@ void __init hugetlb_cma_reserve(void)
return;
order = arch_hugetlb_cma_order();
- if (!order)
+ if (!order) {
+ pr_warn("hugetlb_cma: the option isn't supported by current arch\n");
return;
+ }
/*
* HugeTLB CMA reservation is required for gigantic
@@ -159,7 +158,6 @@ void __init hugetlb_cma_reserve(void)
* breaking this assumption.
*/
VM_WARN_ON(order <= MAX_PAGE_ORDER);
- cma_reserve_called = true;
hugetlb_bootmem_set_nodes();
@@ -253,14 +251,6 @@ void __init hugetlb_cma_reserve(void)
hugetlb_cma_size = 0;
}
-void __init hugetlb_cma_check(void)
-{
- if (!hugetlb_cma_size || cma_reserve_called)
- return;
-
- pr_warn("hugetlb_cma: the option isn't supported by current arch\n");
-}
-
bool hugetlb_cma_exclusive_alloc(void)
{
return hugetlb_cma_only;
diff --git a/mm/hugetlb_cma.h b/mm/hugetlb_cma.h
index 2c2ec8a7e134..78186839df3a 100644
--- a/mm/hugetlb_cma.h
+++ b/mm/hugetlb_cma.h
@@ -8,7 +8,6 @@ struct folio *hugetlb_cma_alloc_folio(int order, gfp_t gfp_mask,
int nid, nodemask_t *nodemask);
struct huge_bootmem_page *hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid,
bool node_exact);
-void hugetlb_cma_check(void);
bool hugetlb_cma_exclusive_alloc(void);
unsigned long hugetlb_cma_total_size(void);
void hugetlb_cma_validate_params(void);
@@ -31,10 +30,6 @@ struct huge_bootmem_page *hugetlb_cma_alloc_bootmem(struct hstate *h, int *nid,
return NULL;
}
-static inline void hugetlb_cma_check(void)
-{
-}
-
static inline bool hugetlb_cma_exclusive_alloc(void)
{
return false;
--
2.51.0
^ permalink raw reply related [flat|nested] 34+ messages in thread
* [PATCH 28/28] Revert "mm/hugetlb: deal with multiple calls to hugetlb_bootmem_alloc"
From: Mike Rapoport @ 2025-12-28 12:39 UTC (permalink / raw)
To: Andrew Morton
From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
hugetlb_bootmem_alloc() is called only once, so there is no need to check
at its entry whether it was already called.
Other checks performed during HVO initialization are also no longer
necessary, because sparse_init(), which calls hugetlb_vmemmap_init_early()
and hugetlb_vmemmap_init_late(), is always called after
hugetlb_bootmem_alloc().
This reverts commit d58b2498200724e4f8c12d71a5953da03c8c8bdf.
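With the series fully applied the early hugetlb path is strictly ordered
by core MM, which is what turns the reverted guards into dead code;
schematically (function names as used throughout this series, not a
literal listing):

	void __init mm_core_init(void)
	{
		arch_mm_preinit();
		hugetlb_cma_reserve();		/* patch 26 */
		hugetlb_bootmem_alloc();	/* runs exactly once */
		free_area_init();		/* calls sparse_init(), which drives
						 * hugetlb_vmemmap_init_early()/_late()
						 * after the bootmem allocation */
		/* ... */
	}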
Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
include/linux/hugetlb.h | 6 ------
mm/hugetlb.c | 12 ------------
mm/hugetlb_vmemmap.c | 11 -----------
3 files changed, 29 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 08fc332e88a7..c8b1a6dd2d46 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -175,7 +175,6 @@ extern int sysctl_hugetlb_shm_group __read_mostly;
extern struct list_head huge_boot_pages[MAX_NUMNODES];
void hugetlb_bootmem_alloc(void);
-bool hugetlb_bootmem_allocated(void);
extern nodemask_t hugetlb_bootmem_nodes;
void hugetlb_bootmem_set_nodes(void);
@@ -1300,11 +1299,6 @@ static inline bool hugetlbfs_pagecache_present(
static inline void hugetlb_bootmem_alloc(void)
{
}
-
-static inline bool hugetlb_bootmem_allocated(void)
-{
- return false;
-}
#endif /* CONFIG_HUGETLB_PAGE */
static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 82b322ae3fdc..e5a350c83d75 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4470,21 +4470,11 @@ void __init hugetlb_bootmem_set_nodes(void)
}
}
-static bool __hugetlb_bootmem_allocated __initdata;
-
-bool __init hugetlb_bootmem_allocated(void)
-{
- return __hugetlb_bootmem_allocated;
-}
-
void __init hugetlb_bootmem_alloc(void)
{
struct hstate *h;
int i;
- if (__hugetlb_bootmem_allocated)
- return;
-
hugetlb_bootmem_set_nodes();
for (i = 0; i < MAX_NUMNODES; i++)
@@ -4498,8 +4488,6 @@ void __init hugetlb_bootmem_alloc(void)
if (hstate_is_gigantic(h))
hugetlb_hstate_alloc_pages(h);
}
-
- __hugetlb_bootmem_allocated = true;
}
/*
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 9d01f883fd71..a9280259e12a 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -794,14 +794,6 @@ void __init hugetlb_vmemmap_init_early(int nid)
struct huge_bootmem_page *m = NULL;
void *map;
- /*
- * Noting to do if bootmem pages were not allocated
- * early in boot, or if HVO wasn't enabled in the
- * first place.
- */
- if (!hugetlb_bootmem_allocated())
- return;
-
if (!READ_ONCE(vmemmap_optimize_enabled))
return;
@@ -847,9 +839,6 @@ void __init hugetlb_vmemmap_init_late(int nid)
struct hstate *h;
void *map;
- if (!hugetlb_bootmem_allocated())
- return;
-
if (!READ_ONCE(vmemmap_optimize_enabled))
return;
--
2.51.0
* Re: [PATCH 25/28] x86: don't reserve hugetlb memory in setup_arch()
From: Sergey Shtylyov @ 2025-12-28 13:29 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
On 12/28/25 3:39 PM, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> Commit 665eaf313314 ("x86/setup: call hugetlb_bootmem_alloc early")
> added an early call to hugetlb_bootmem_alloc() to setup_arch() to allow
> HVO-style pre-initialization of the vmemmap on x86.
>
> With the ordering of hugetlb reservation vs memory map initialization
> sorted out in core MM, this no longer needs to be an architecture-specific
> quirk.
>
> Drop the call to hugetlb_bootmem_alloc() from x86::setup_arch().
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
> arch/x86/kernel/setup.c | 1 -
> 1 file changed, 1 deletion(-)
>
> diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
> index 1b2edd07a3e1..e2318fa9b1bb 100644
> --- a/arch/x86/kernel/setup.c
> +++ b/arch/x86/kernel/setup.c
> @@ -1191,7 +1191,6 @@ void __init setup_arch(char **cmdline_p)
>
> if (boot_cpu_has(X86_FEATURE_GBPAGES)) {
> hugetlb_cma_reserve(PUD_SHIFT - PAGE_SHIFT);
> - hugetlb_bootmem_alloc();
> }
You need to drop {} now, no? But seeing that this *if* gets dropped
altogether in the next patch, you may as well ignore me... :-)
[...]
MBR, Sergey
* Re: [PATCH 05/28] csky: introduce arch_zone_limits_init()
From: Guo Ren @ 2025-12-29 1:25 UTC (permalink / raw)
To: Mike Rapoport
On Sun, Dec 28, 2025 at 8:41 PM Mike Rapoport <rppt@kernel.org> wrote:
>
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> Move calculations of zone limits to a dedicated arch_zone_limits_init()
> function.
>
> Later, the MM core will use this function as an architecture-specific
> callback during node and zone initialization, and thus there won't be a
> need to call free_area_init() from every architecture.
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
> arch/csky/kernel/setup.c | 12 +++++++++---
> 1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
> index e0d6ca86ea8c..8968815d93e6 100644
> --- a/arch/csky/kernel/setup.c
> +++ b/arch/csky/kernel/setup.c
> @@ -51,6 +51,14 @@ static void __init setup_initrd(void)
> }
> #endif
>
> +void __init arch_zone_limits_init(unsigned long *max_zone_pfns)
> +{
> + max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
> +#ifdef CONFIG_HIGHMEM
> + max_zone_pfns[ZONE_HIGHMEM] = max_pfn;
> +#endif
> +}
> +
LGTM!
Acked-by: Guo Ren <guoren@kernel.org>
> static void __init csky_memblock_init(void)
> {
> unsigned long lowmem_size = PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
> @@ -83,12 +91,9 @@ static void __init csky_memblock_init(void)
> setup_initrd();
> #endif
>
> - max_zone_pfn[ZONE_NORMAL] = max_low_pfn;
> -
> mmu_init(min_low_pfn, max_low_pfn);
>
> #ifdef CONFIG_HIGHMEM
> - max_zone_pfn[ZONE_HIGHMEM] = max_pfn;
>
> highstart_pfn = max_low_pfn;
> highend_pfn = max_pfn;
> @@ -97,6 +102,7 @@ static void __init csky_memblock_init(void)
>
> dma_contiguous_reserve(0);
>
> + arch_zone_limits_init(max_zone_pfn);
> free_area_init(max_zone_pfn);
> }
>
> --
> 2.51.0
>
--
Best Regards
Guo Ren
* Re: [PATCH 27/28] mm/hugetlb: drop hugetlb_cma_check()
From: Muchun Song @ 2025-12-29 3:13 UTC (permalink / raw)
To: Mike Rapoport
> On Dec 28, 2025, at 20:39, Mike Rapoport <rppt@kernel.org> wrote:
>
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> hugetlb_cma_check() was required when the ordering of hugetlb_cma_reserve()
> and hugetlb_bootmem_alloc() was architecture dependent.
>
> Since hugetlb_cma_reserve() is now always called before hugetlb_bootmem_alloc(),
> there is no need to check whether hugetlb_cma_reserve() was already called.
>
> Drop the unneeded hugetlb_cma_check() function.
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Nice cleanup.
Acked-by: Muchun Song <muchun.song@linux.dev>
Thanks.
* Re: [PATCH 28/28] Revert "mm/hugetlb: deal with multiple calls to hugetlb_bootmem_alloc"
From: Muchun Song @ 2025-12-29 3:13 UTC (permalink / raw)
To: Mike Rapoport
> On Dec 28, 2025, at 20:39, Mike Rapoport <rppt@kernel.org> wrote:
>
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> hugetlb_bootmem_alloc() is called only once, so there is no need to check
> at its entry whether it was already called.
>
> Other checks performed during HVO initialization are also no longer
> necessary, because sparse_init(), which calls hugetlb_vmemmap_init_early()
> and hugetlb_vmemmap_init_late(), is always called after
> hugetlb_bootmem_alloc().
>
> This reverts commit d58b2498200724e4f8c12d71a5953da03c8c8bdf.
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Nice cleanup.
Acked-by: Muchun Song <muchun.song@linux.dev>
Thanks.
* Re: [PATCH 11/28] nios2: introduce arch_zone_limits_init()
From: Dinh Nguyen @ 2025-12-29 13:55 UTC (permalink / raw)
To: Mike Rapoport, Andrew Morton
On 12/28/25 06:39, Mike Rapoport wrote:
> From: "Mike Rapoport (Microsoft)" <rppt@kernel.org>
>
> Move calculations of zone limits to a dedicated arch_zone_limits_init()
> function.
>
> Later, the MM core will use this function as an architecture-specific
> callback during node and zone initialization, and thus there won't be a
> need to call free_area_init() from every architecture.
>
> Signed-off-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
> arch/nios2/mm/init.c | 8 ++++++--
> 1 file changed, 6 insertions(+), 2 deletions(-)
>
Acked-by: Dinh Nguyen <dinguyen@kernel.org>