* [merged mm-stable] mm-prepare-to-move-subsection_map_init-to-mm-sparse-vmemmapc.patch removed from -mm tree
@ 2026-03-29 0:42 Andrew Morton
From: Andrew Morton @ 2026-03-29 0:42 UTC
To: mm-commits, yuanchu, weixugc, vbabka, surenb, sidhartha.kumar,
rppt, osalvador, mhocko, ljs, liam.howlett, axelrasmussen, david,
akpm
The quilt patch titled
     Subject: mm: prepare to move subsection_map_init() to mm/sparse-vmemmap.c
has been removed from the -mm tree.  Its filename was
     mm-prepare-to-move-subsection_map_init-to-mm-sparse-vmemmapc.patch

This patch was dropped because it was merged into the mm-stable branch
of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
------------------------------------------------------
From: "David Hildenbrand (Arm)" <david@kernel.org>
Subject: mm: prepare to move subsection_map_init() to mm/sparse-vmemmap.c
Date: Fri, 20 Mar 2026 23:13:43 +0100
We want to move subsection_map_init() to mm/sparse-vmemmap.c.

To prepare for getting rid of subsection_map_init() in mm/sparse.c
completely, use a static inline function for !CONFIG_SPARSEMEM_VMEMMAP.

While at it, move the declaration to internal.h and rename it to
"sparse_init_subsection_map()".
Link: https://lkml.kernel.org/r/20260320-sparsemem_cleanups-v2-11-096addc8800d@kernel.org
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
Reviewed-by: Lorenzo Stoakes (Oracle) <ljs@kernel.org>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: Axel Rasmussen <axelrasmussen@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@kernel.org>
Cc: Wei Xu <weixugc@google.com>
Cc: Yuanchu Xie <yuanchu@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
include/linux/mmzone.h | 3 ---
mm/internal.h | 12 ++++++++++++
mm/mm_init.c | 2 +-
mm/sparse.c | 6 +-----
4 files changed, 14 insertions(+), 9 deletions(-)
--- a/include/linux/mmzone.h~mm-prepare-to-move-subsection_map_init-to-mm-sparse-vmemmapc
+++ a/include/linux/mmzone.h
@@ -1982,8 +1982,6 @@ struct mem_section_usage {
unsigned long pageblock_flags[0];
};
-void subsection_map_init(unsigned long pfn, unsigned long nr_pages);
-
struct page;
struct page_ext;
struct mem_section {
@@ -2376,7 +2374,6 @@ static inline unsigned long next_present
#define sparse_vmemmap_init_nid_early(_nid) do {} while (0)
#define sparse_vmemmap_init_nid_late(_nid) do {} while (0)
#define pfn_in_present_section pfn_valid
-#define subsection_map_init(_pfn, _nr_pages) do {} while (0)
#endif /* CONFIG_SPARSEMEM */
/*
--- a/mm/internal.h~mm-prepare-to-move-subsection_map_init-to-mm-sparse-vmemmapc
+++ a/mm/internal.h
@@ -959,12 +959,24 @@ void memmap_init_range(unsigned long, in
unsigned long, enum meminit_context, struct vmem_altmap *, int,
bool);
+/*
+ * mm/sparse.c
+ */
#ifdef CONFIG_SPARSEMEM
void sparse_init(void);
#else
static inline void sparse_init(void) {}
#endif /* CONFIG_SPARSEMEM */
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+void sparse_init_subsection_map(unsigned long pfn, unsigned long nr_pages);
+#else
+static inline void sparse_init_subsection_map(unsigned long pfn,
+ unsigned long nr_pages)
+{
+}
+#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+
#if defined CONFIG_COMPACTION || defined CONFIG_CMA
/*
--- a/mm/mm_init.c~mm-prepare-to-move-subsection_map_init-to-mm-sparse-vmemmapc
+++ a/mm/mm_init.c
@@ -1896,7 +1896,7 @@ static void __init free_area_init(void)
pr_info(" node %3d: [mem %#018Lx-%#018Lx]\n", nid,
(u64)start_pfn << PAGE_SHIFT,
((u64)end_pfn << PAGE_SHIFT) - 1);
- subsection_map_init(start_pfn, end_pfn - start_pfn);
+ sparse_init_subsection_map(start_pfn, end_pfn - start_pfn);
}
/* Initialise every node */
--- a/mm/sparse.c~mm-prepare-to-move-subsection_map_init-to-mm-sparse-vmemmapc
+++ a/mm/sparse.c
@@ -185,7 +185,7 @@ static void subsection_mask_set(unsigned
bitmap_set(map, idx, end - idx + 1);
}
-void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
+void __init sparse_init_subsection_map(unsigned long pfn, unsigned long nr_pages)
{
int end_sec_nr = pfn_to_section_nr(pfn + nr_pages - 1);
unsigned long nr, start_sec_nr = pfn_to_section_nr(pfn);
@@ -207,10 +207,6 @@ void __init subsection_map_init(unsigned
nr_pages -= pfns;
}
}
-#else
-void __init subsection_map_init(unsigned long pfn, unsigned long nr_pages)
-{
-}
#endif
/* Record a memory area against a node. */
_
Patches currently in -mm which might be from david@kernel.org are