public inbox for linux-mm@kvack.org
* [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
@ 2026-04-21 12:55 Yuan Liu
  2026-04-21 12:55 ` [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init() Yuan Liu
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Yuan Liu @ 2026-04-21 12:55 UTC (permalink / raw)
  To: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang
  Cc: linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

This series cleans up the overlap memory map init check and
optimizes the zone contiguous check when changing a zone's pfn range.

In addition to providing a significant improvement for VM hotplug
(see the second patch for details), it also benefits CXL hotplug; see:
https://lore.kernel.org/all/20260409023552.GA2807@AE/

v3 link:
    https://lore.kernel.org/all/20260408031615.1831922-1-yuan1.liu@intel.com/

v4 changes:
    Add a new patch to clean up the overlap memory map init check

Yuan Liu (2):
  mm: move overlap memory map init check to memmap_init()
  mm/memory hotplug/unplug: Optimize zone contiguous check when changing
    pfn range

 Documentation/mm/physical_memory.rst | 13 +++++
 drivers/base/memory.c                |  6 ++
 include/linux/mmzone.h               | 47 ++++++++++++++++
 mm/internal.h                        |  8 +--
 mm/memory_hotplug.c                  | 12 +---
 mm/mm_init.c                         | 82 +++++++++++-----------------
 6 files changed, 100 insertions(+), 68 deletions(-)

-- 
2.47.3



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-21 12:55 [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range Yuan Liu
@ 2026-04-21 12:55 ` Yuan Liu
  2026-04-22  1:11   ` Wei Yang
  2026-04-25  9:01   ` Mike Rapoport
  2026-04-21 12:55 ` [PATCH v4 2/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range Yuan Liu
  2026-04-22  7:46 ` [PATCH v4 0/2] " David Hildenbrand (Arm)
  2 siblings, 2 replies; 15+ messages in thread
From: Yuan Liu @ 2026-04-21 12:55 UTC (permalink / raw)
  To: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang
  Cc: linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

Move the overlap memmap init check from memmap_init_range() into
memmap_init().

When mirrored kernelcore is enabled, avoid memory map initialization
for overlap regions. There are two cases that may overlap: a mirror
memory region assigned to the movable zone, or a non-mirror memory
region assigned to a non-movable zone but falling within the movable
zone range.
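The two overlap cases can be sketched as a small userspace model (the
helper and parameter names below are made up for illustration; the real
check in memmap_init() operates on memblock regions and
zone_movable_pfn[]):

```c
#include <assert.h>
#include <stdbool.h>

enum { TOY_ZONE_NORMAL, TOY_ZONE_MOVABLE };

/*
 * Toy model of the overlap decision under kernelcore=mirror:
 * case 1: a mirror region paired with the movable zone is an overlap;
 * case 2: a non-mirror region paired with a non-movable zone is an
 *         overlap when it starts inside the movable-zone range.
 */
static bool skip_overlap_init(bool mirrored_kernelcore, bool region_is_mirror,
			      int zone, unsigned long region_start_pfn,
			      unsigned long zone_movable_start_pfn)
{
	if (!mirrored_kernelcore)
		return false;
	if (region_is_mirror && zone == TOY_ZONE_MOVABLE)
		return true;			/* case 1 */
	if (!region_is_mirror && zone != TOY_ZONE_MOVABLE &&
	    region_start_pfn >= zone_movable_start_pfn)
		return true;			/* case 2 */
	return false;
}
```

Note that this sketch reproduces the logic of the patch as posted; the
review discussion in this thread examines how case 2 behaves when no
mirror regions exist and the movable start pfn stays 0.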

Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
---
 mm/mm_init.c | 37 +++++++++++++------------------------
 1 file changed, 13 insertions(+), 24 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index df34797691bd..2b5233060504 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
 	}
 }
 
-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped init */
-static bool __meminit
-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
-{
-	static struct memblock_region *r;
-
-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
-			for_each_mem_region(r) {
-				if (*pfn < memblock_region_memory_end_pfn(r))
-					break;
-			}
-		}
-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
-		    memblock_is_mirror(r)) {
-			*pfn = memblock_region_memory_end_pfn(r);
-			return true;
-		}
-	}
-	return false;
-}
-
 /*
  * Only struct pages that correspond to ranges defined by memblock.memory
  * are zeroed and initialized by going through __init_single_page() during
@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		 * function.  They do not exist on hotplugged memory.
 		 */
 		if (context == MEMINIT_EARLY) {
-			if (overlap_memmap_init(zone, &pfn))
-				continue;
 			if (defer_init(nid, pfn, zone_end_pfn)) {
 				deferred_struct_pages = true;
 				break;
@@ -971,6 +947,7 @@ static void __init memmap_init(void)
 
 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
 		struct pglist_data *node = NODE_DATA(nid);
+		struct memblock_region *r = &memblock.memory.regions[i];
 
 		for (j = 0; j < MAX_NR_ZONES; j++) {
 			struct zone *zone = node->node_zones + j;
@@ -978,6 +955,18 @@ static void __init memmap_init(void)
 			if (!populated_zone(zone))
 				continue;
 
+			if (mirrored_kernelcore) {
+				const bool is_mirror = memblock_is_mirror(r);
+				const bool is_movable_zone = (j == ZONE_MOVABLE);
+
+				if (is_mirror && is_movable_zone)
+					continue;
+
+				if (!is_mirror && !is_movable_zone &&
+				    start_pfn >= zone_movable_pfn[nid])
+					continue;
+			}
+
 			memmap_init_zone_range(zone, start_pfn, end_pfn,
 					       &hole_pfn);
 			zone_id = j;
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v4 2/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
  2026-04-21 12:55 [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range Yuan Liu
  2026-04-21 12:55 ` [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init() Yuan Liu
@ 2026-04-21 12:55 ` Yuan Liu
  2026-04-22  7:46 ` [PATCH v4 0/2] " David Hildenbrand (Arm)
  2 siblings, 0 replies; 15+ messages in thread
From: Yuan Liu @ 2026-04-21 12:55 UTC (permalink / raw)
  To: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang
  Cc: linux-mm, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

When move_pfn_range_to_zone() or remove_pfn_range_from_zone() updates a
zone, set_zone_contiguous() rescans the entire zone pageblock-by-pageblock
to rebuild zone->contiguous. For large zones this is a significant cost
during memory hotplug and hot-unplug.

Add a new zone member, pages_with_online_memmap, that tracks the number
of pages within the zone span that have an online memory map (including
present pages and memory holes whose memory map has been initialized).
When spanned_pages == pages_with_online_memmap, the zone is contiguous
and pfn_to_page() can be called on any PFN in the zone span without
further pfn_valid() checks.

Only pages that fall within the current zone span are accounted towards
pages_with_online_memmap. A "too small" value is safe; it merely
prevents detecting a contiguous zone.
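The bookkeeping can be illustrated with a userspace model (the struct
and helper names below are simplified stand-ins for the zone fields and
the set/clear_zone_contiguous() helpers this patch adds, not kernel
code):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the zone fields used by this patch. */
struct toy_zone {
	unsigned long spanned_pages;		/* pages in the zone span */
	unsigned long pages_with_online_memmap;	/* pages whose memmap is online */
	bool contiguous;
};

/* Hotplug paths first invalidate the flag before changing the span... */
static void toy_clear_zone_contiguous(struct toy_zone *zone)
{
	zone->contiguous = false;
}

/*
 * ...then re-evaluate it in O(1) afterwards: the zone is contiguous
 * exactly when every spanned page has an online memory map, which
 * replaces the old pageblock-by-pageblock rescan of the whole zone.
 */
static void toy_set_zone_contiguous(struct toy_zone *zone)
{
	if (zone->spanned_pages == zone->pages_with_online_memmap)
		zone->contiguous = true;
}
```

Undercounting pages_with_online_memmap only costs the contiguity
optimization; overcounting would be the dangerous direction, which is
why only pages inside the current zone span are accounted.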

The following VM memory hotplug test cases [1], run in the environment
[2], show that this optimization significantly reduces memory hotplug
time [3].

+----------------+------+---------------+--------------+----------------+
|                | Size | Time (before) | Time (after) | Time Reduction |
|                +------+---------------+--------------+----------------+
| Plug Memory    | 256G |      10s      |      3s      |       70%      |
|                +------+---------------+--------------+----------------+
|                | 512G |      36s      |      7s      |       81%      |
+----------------+------+---------------+--------------+----------------+

+----------------+------+---------------+--------------+----------------+
|                | Size | Time (before) | Time (after) | Time Reduction |
|                +------+---------------+--------------+----------------+
| Unplug Memory  | 256G |      11s      |      4s      |       64%      |
|                +------+---------------+--------------+----------------+
|                | 512G |      36s      |      9s      |       75%      |
+----------------+------+---------------+--------------+----------------+

[1] Qemu commands to hotplug 256G/512G memory for a VM:
    object_add memory-backend-ram,id=hotmem0,size=256G/512G,share=on
    device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
    qom-set vmem1 requested-size 256G/512G (Plug Memory)
    qom-set vmem1 requested-size 0G (Unplug Memory)

[2] Hardware     : Intel Icelake server
    Guest Kernel : v7.0-rc4
    Qemu         : v9.0.0

    Launch VM    :
    qemu-system-x86_64 -accel kvm -cpu host \
    -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
    -drive file=./seed.img,format=raw,if=virtio \
    -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
    -m 2G,slots=10,maxmem=2052472M \
    -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
    -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
    -nographic -machine q35 \
    -nic user,hostfwd=tcp::3000-:22

    Guest kernel auto-onlines newly added memory blocks:
    echo online > /sys/devices/system/memory/auto_online_blocks

[3] The time from typing the QEMU commands in [1] until the output of
    'grep MemTotal /proc/meminfo' on the guest reflects that all
    hotplugged memory is recognized.

Reported-by: Nanhai Zou <nanhai.zou@intel.com>
Reported-by: Chen Zhang <zhangchen.kidd@jd.com>
Tested-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Tim Chen <tim.c.chen@linux.intel.com>
Reviewed-by: Qiuxu Zhuo <qiuxu.zhuo@intel.com>
Reviewed-by: Yu C Chen <yu.c.chen@intel.com>
Reviewed-by: Pan Deng <pan.deng@intel.com>
Reviewed-by: Nanhai Zou <nanhai.zou@intel.com>
Co-developed-by: Tianyou Li <tianyou.li@intel.com>
Signed-off-by: Tianyou Li <tianyou.li@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
---
 Documentation/mm/physical_memory.rst | 13 ++++++++
 drivers/base/memory.c                |  6 ++++
 include/linux/mmzone.h               | 47 ++++++++++++++++++++++++++++
 mm/internal.h                        |  8 +----
 mm/memory_hotplug.c                  | 12 ++-----
 mm/mm_init.c                         | 45 +++++++++++---------------
 6 files changed, 87 insertions(+), 44 deletions(-)

diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
index b76183545e5b..0aa65e6b5499 100644
--- a/Documentation/mm/physical_memory.rst
+++ b/Documentation/mm/physical_memory.rst
@@ -483,6 +483,19 @@ General
   ``present_pages`` should use ``get_online_mems()`` to get a stable value. It
   is initialized by ``calculate_node_totalpages()``.
 
+``pages_with_online_memmap``
+  Tracks pages within the zone that have an online memory map (present pages
+  and memory holes whose memory map has been initialized). When
+  ``spanned_pages`` == ``pages_with_online_memmap``, ``pfn_to_page()`` can be
+  performed without further checks on any PFN within the zone span.
+
+  Note: this counter may temporarily undercount when pages with an online
+  memory map exist outside the current zone span. This can only happen during
+  boot, when initializing the memory map of pages that do not fall into any
+  zone span. Growing the zone to cover such pages and later shrinking it back
+  may result in a "too small" value. This is safe: it merely prevents
+  detecting a contiguous zone.
+
 ``present_early_pages``
   The present pages existing within the zone located on memory available since
   early boot, excluding hotplugged memory. Defined only when
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index a3091924918b..2b6b4e5508af 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -246,6 +246,7 @@ static int memory_block_online(struct memory_block *mem)
 		nr_vmemmap_pages = mem->altmap->free;
 
 	mem_hotplug_begin();
+	clear_zone_contiguous(zone);
 	if (nr_vmemmap_pages) {
 		ret = mhp_init_memmap_on_memory(start_pfn, nr_vmemmap_pages, zone);
 		if (ret)
@@ -270,6 +271,7 @@ static int memory_block_online(struct memory_block *mem)
 
 	mem->zone = zone;
 out:
+	set_zone_contiguous(zone);
 	mem_hotplug_done();
 	return ret;
 }
@@ -282,6 +284,7 @@ static int memory_block_offline(struct memory_block *mem)
 	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
 	unsigned long nr_vmemmap_pages = 0;
+	struct zone *zone;
 	int ret;
 
 	if (!mem->zone)
@@ -294,7 +297,9 @@ static int memory_block_offline(struct memory_block *mem)
 	if (mem->altmap)
 		nr_vmemmap_pages = mem->altmap->free;
 
+	zone = mem->zone;
 	mem_hotplug_begin();
+	clear_zone_contiguous(zone);
 	if (nr_vmemmap_pages)
 		adjust_present_page_count(pfn_to_page(start_pfn), mem->group,
 					  -nr_vmemmap_pages);
@@ -314,6 +319,7 @@ static int memory_block_offline(struct memory_block *mem)
 
 	mem->zone = NULL;
 out:
+	set_zone_contiguous(zone);
 	mem_hotplug_done();
 	return ret;
 }
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3e51190a55e4..d4dd37a7222a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -943,6 +943,20 @@ struct zone {
 	 * cma pages is present pages that are assigned for CMA use
 	 * (MIGRATE_CMA).
 	 *
+	 * pages_with_online_memmap tracks pages within the zone that have
+	 * an online memory map (present pages and memory holes whose memory
+	 * map has been initialized). When spanned_pages ==
+	 * pages_with_online_memmap, pfn_to_page() can be performed without
+	 * further checks on any PFN within the zone span.
+	 *
+	 * Note: this counter may temporarily undercount when pages with an
+	 * online memory map exist outside the current zone span. This can
+	 * only happen during boot, when initializing the memory map of
+	 * pages that do not fall into any zone span. Growing the zone to
+	 * cover such pages and later shrinking it back may result in a
+	 * "too small" value. This is safe: it merely prevents detecting a
+	 * contiguous zone.
+	 *
 	 * So present_pages may be used by memory hotplug or memory power
 	 * management logic to figure out unmanaged pages by checking
 	 * (present_pages - managed_pages). And managed_pages should be used
@@ -967,6 +981,7 @@ struct zone {
 	atomic_long_t		managed_pages;
 	unsigned long		spanned_pages;
 	unsigned long		present_pages;
+	unsigned long		pages_with_online_memmap;
 #if defined(CONFIG_MEMORY_HOTPLUG)
 	unsigned long		present_early_pages;
 #endif
@@ -1601,6 +1616,38 @@ static inline bool zone_is_zone_device(const struct zone *zone)
 }
 #endif
 
+/**
+ * zone_is_contiguous - test whether a zone is contiguous
+ * @zone: the zone to test.
+ *
+ * In a contiguous zone, it is valid to call pfn_to_page() on any PFN in the
+ * spanned zone without requiring pfn_valid() or pfn_to_online_page() checks.
+ *
+ * Note that missing synchronization with memory offlining makes any PFN
+ * traversal prone to races.
+ *
+ * ZONE_DEVICE zones are always marked non-contiguous.
+ *
+ * Return: true if contiguous, otherwise false.
+ */
+static inline bool zone_is_contiguous(const struct zone *zone)
+{
+	return zone->contiguous;
+}
+
+static inline void set_zone_contiguous(struct zone *zone)
+{
+	if (zone_is_zone_device(zone))
+		return;
+	if (zone->spanned_pages == zone->pages_with_online_memmap)
+		zone->contiguous = true;
+}
+
+static inline void clear_zone_contiguous(struct zone *zone)
+{
+	zone->contiguous = false;
+}
+
 /*
  * Returns true if a zone has pages managed by the buddy allocator.
  * All the reclaim decisions have to use this function rather than
diff --git a/mm/internal.h b/mm/internal.h
index cb0af847d7d9..92fee035c3f2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -793,21 +793,15 @@ extern struct page *__pageblock_pfn_to_page(unsigned long start_pfn,
 static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 				unsigned long end_pfn, struct zone *zone)
 {
-	if (zone->contiguous)
+	if (zone_is_contiguous(zone))
 		return pfn_to_page(start_pfn);
 
 	return __pageblock_pfn_to_page(start_pfn, end_pfn, zone);
 }
 
-void set_zone_contiguous(struct zone *zone);
 bool pfn_range_intersects_zones(int nid, unsigned long start_pfn,
 			   unsigned long nr_pages);
 
-static inline void clear_zone_contiguous(struct zone *zone)
-{
-	zone->contiguous = false;
-}
-
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void __putback_isolated_page(struct page *page, unsigned int order,
 				    int mt);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bc805029da51..3f73fcb042cf 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -565,18 +565,13 @@ void remove_pfn_range_from_zone(struct zone *zone,
 
 	/*
 	 * Zone shrinking code cannot properly deal with ZONE_DEVICE. So
-	 * we will not try to shrink the zones - which is okay as
-	 * set_zone_contiguous() cannot deal with ZONE_DEVICE either way.
+	 * we will not try to shrink it.
 	 */
 	if (zone_is_zone_device(zone))
 		return;
 
-	clear_zone_contiguous(zone);
-
 	shrink_zone_span(zone, start_pfn, start_pfn + nr_pages);
 	update_pgdat_span(pgdat);
-
-	set_zone_contiguous(zone);
 }
 
 /**
@@ -753,8 +748,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 	struct pglist_data *pgdat = zone->zone_pgdat;
 	int nid = pgdat->node_id;
 
-	clear_zone_contiguous(zone);
-
 	if (zone_is_empty(zone))
 		init_currently_empty_zone(zone, start_pfn, nr_pages);
 	resize_zone_range(zone, start_pfn, nr_pages);
@@ -782,8 +775,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
 	memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
 			 MEMINIT_HOTPLUG, altmap, migratetype,
 			 isolate_pageblock);
-
-	set_zone_contiguous(zone);
 }
 
 struct auto_movable_stats {
@@ -1079,6 +1070,7 @@ void adjust_present_page_count(struct page *page, struct memory_group *group,
 	if (early_section(__pfn_to_section(page_to_pfn(page))))
 		zone->present_early_pages += nr_pages;
 	zone->present_pages += nr_pages;
+	zone->pages_with_online_memmap += nr_pages;
 	zone->zone_pgdat->node_present_pages += nr_pages;
 
 	if (group && movable)
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 2b5233060504..165921e22a89 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -820,9 +820,9 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
  *   zone/node above the hole except for the trailing pages in the last
  *   section that will be appended to the zone/node below.
  */
-static void __init init_unavailable_range(unsigned long spfn,
-					  unsigned long epfn,
-					  int zone, int node)
+static unsigned long __init init_unavailable_range(unsigned long spfn,
+						   unsigned long epfn,
+						   int zone, int node)
 {
 	unsigned long pfn;
 	u64 pgcnt = 0;
@@ -836,6 +836,7 @@ static void __init init_unavailable_range(unsigned long spfn,
 	if (pgcnt)
 		pr_info("On node %d, zone %s: %lld pages in unavailable ranges\n",
 			node, zone_names[zone], pgcnt);
+	return pgcnt;
 }
 
 /*
@@ -932,9 +933,21 @@ static void __init memmap_init_zone_range(struct zone *zone,
 	memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
 			  zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
 			  false);
+	zone->pages_with_online_memmap += end_pfn - start_pfn;
 
-	if (*hole_pfn < start_pfn)
-		init_unavailable_range(*hole_pfn, start_pfn, zone_id, nid);
+	if (*hole_pfn < start_pfn) {
+		unsigned long hole_start_pfn = *hole_pfn;
+		unsigned long pgcnt;
+
+		if (hole_start_pfn < zone_start_pfn) {
+			init_unavailable_range(hole_start_pfn, zone_start_pfn,
+					       zone_id, nid);
+			hole_start_pfn = zone_start_pfn;
+		}
+		pgcnt = init_unavailable_range(hole_start_pfn, start_pfn,
+					       zone_id, nid);
+		zone->pages_with_online_memmap += pgcnt;
+	}
 
 	*hole_pfn = end_pfn;
 }
@@ -2250,28 +2263,6 @@ void __init init_cma_pageblock(struct page *page)
 }
 #endif
 
-void set_zone_contiguous(struct zone *zone)
-{
-	unsigned long block_start_pfn = zone->zone_start_pfn;
-	unsigned long block_end_pfn;
-
-	block_end_pfn = pageblock_end_pfn(block_start_pfn);
-	for (; block_start_pfn < zone_end_pfn(zone);
-			block_start_pfn = block_end_pfn,
-			 block_end_pfn += pageblock_nr_pages) {
-
-		block_end_pfn = min(block_end_pfn, zone_end_pfn(zone));
-
-		if (!__pageblock_pfn_to_page(block_start_pfn,
-					     block_end_pfn, zone))
-			return;
-		cond_resched();
-	}
-
-	/* We confirm that there is no hole */
-	zone->contiguous = true;
-}
-
 /*
  * Check if a PFN range intersects multiple zones on one or more
  * NUMA nodes. Specify the @nid argument if it is known that this
-- 
2.47.3



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-21 12:55 ` [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init() Yuan Liu
@ 2026-04-22  1:11   ` Wei Yang
  2026-04-22  3:26     ` Wei Yang
  2026-04-22  7:08     ` Liu, Yuan1
  2026-04-25  9:01   ` Mike Rapoport
  1 sibling, 2 replies; 15+ messages in thread
From: Wei Yang @ 2026-04-22  1:11 UTC (permalink / raw)
  To: Yuan Liu
  Cc: David Hildenbrand, Oscar Salvador, Mike Rapoport, Wei Yang,
	linux-mm, Yong Hu, Nanhai Zou, Tim Chen, Qiuxu Zhuo, Yu C Chen,
	Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

On Tue, Apr 21, 2026 at 08:55:07AM -0400, Yuan Liu wrote:
>Move the overlap memmap init check from memmap_init_range() into
>memmap_init().
>
>When mirrored kernelcore is enabled, avoid memory map initialization
>for overlap regions. There are two cases that may overlap: a mirror
>memory region assigned to movable zone, or a non-mirror memory region
>assigned to a non-movable zone but falling within the movable zone
>range.
>
>Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
>---
> mm/mm_init.c | 37 +++++++++++++------------------------
> 1 file changed, 13 insertions(+), 24 deletions(-)
>
>diff --git a/mm/mm_init.c b/mm/mm_init.c
>index df34797691bd..2b5233060504 100644
>--- a/mm/mm_init.c
>+++ b/mm/mm_init.c
>@@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
> 	}
> }
> 
>-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped init */
>-static bool __meminit
>-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>-{
>-	static struct memblock_region *r;
>-
>-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
>-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
>-			for_each_mem_region(r) {
>-				if (*pfn < memblock_region_memory_end_pfn(r))
>-					break;
>-			}
>-		}
>-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
>-		    memblock_is_mirror(r)) {
>-			*pfn = memblock_region_memory_end_pfn(r);
>-			return true;
>-		}
>-	}
>-	return false;
>-}
>-
> /*
>  * Only struct pages that correspond to ranges defined by memblock.memory
>  * are zeroed and initialized by going through __init_single_page() during
>@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
> 		 * function.  They do not exist on hotplugged memory.
> 		 */
> 		if (context == MEMINIT_EARLY) {
>-			if (overlap_memmap_init(zone, &pfn))
>-				continue;
> 			if (defer_init(nid, pfn, zone_end_pfn)) {
> 				deferred_struct_pages = true;
> 				break;
>@@ -971,6 +947,7 @@ static void __init memmap_init(void)
> 
> 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
> 		struct pglist_data *node = NODE_DATA(nid);
>+		struct memblock_region *r = &memblock.memory.regions[i];
> 
> 		for (j = 0; j < MAX_NR_ZONES; j++) {
> 			struct zone *zone = node->node_zones + j;
>@@ -978,6 +955,18 @@ static void __init memmap_init(void)
> 			if (!populated_zone(zone))
> 				continue;
> 
>+			if (mirrored_kernelcore) {
>+				const bool is_mirror = memblock_is_mirror(r);
>+				const bool is_movable_zone = (j == ZONE_MOVABLE);
>+
>+				if (is_mirror && is_movable_zone)
>+					continue;
>+
>+				if (!is_mirror && !is_movable_zone &&
>+				    start_pfn >= zone_movable_pfn[nid])
>+					continue;

IIUC, when mirrored_kernelcore is set but !memblock_has_mirror() or
is_kdump_kernel(), zone_movable_pfn[nid] stays 0.

This means the check above will skip all memory regions.

>+			}
>+
> 			memmap_init_zone_range(zone, start_pfn, end_pfn,
> 					       &hole_pfn);
> 			zone_id = j;
>-- 
>2.47.3

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-22  1:11   ` Wei Yang
@ 2026-04-22  3:26     ` Wei Yang
  2026-04-22  9:28       ` Liu, Yuan1
  2026-04-22  7:08     ` Liu, Yuan1
  1 sibling, 1 reply; 15+ messages in thread
From: Wei Yang @ 2026-04-22  3:26 UTC (permalink / raw)
  To: Wei Yang
  Cc: Yuan Liu, David Hildenbrand, Oscar Salvador, Mike Rapoport,
	linux-mm, Yong Hu, Nanhai Zou, Tim Chen, Qiuxu Zhuo, Yu C Chen,
	Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

On Wed, Apr 22, 2026 at 01:11:26AM +0000, Wei Yang wrote:
>On Tue, Apr 21, 2026 at 08:55:07AM -0400, Yuan Liu wrote:
>>Move the overlap memmap init check from memmap_init_range() into
>>memmap_init().
>>
>>When mirrored kernelcore is enabled, avoid memory map initialization
>>for overlap regions. There are two cases that may overlap: a mirror
>>memory region assigned to movable zone, or a non-mirror memory region
>>assigned to a non-movable zone but falling within the movable zone
>>range.
>>
>>Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
>>---
>> mm/mm_init.c | 37 +++++++++++++------------------------
>> 1 file changed, 13 insertions(+), 24 deletions(-)
>>
>>diff --git a/mm/mm_init.c b/mm/mm_init.c
>>index df34797691bd..2b5233060504 100644
>>--- a/mm/mm_init.c
>>+++ b/mm/mm_init.c
>>@@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
>> 	}
>> }
>> 
>>-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped init */
>>-static bool __meminit
>>-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>>-{
>>-	static struct memblock_region *r;
>>-
>>-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
>>-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
>>-			for_each_mem_region(r) {
>>-				if (*pfn < memblock_region_memory_end_pfn(r))
>>-					break;
>>-			}
>>-		}
>>-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
>>-		    memblock_is_mirror(r)) {
>>-			*pfn = memblock_region_memory_end_pfn(r);
>>-			return true;
>>-		}
>>-	}
>>-	return false;
>>-}
>>-
>> /*
>>  * Only struct pages that correspond to ranges defined by memblock.memory
>>  * are zeroed and initialized by going through __init_single_page() during
>>@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
>> 		 * function.  They do not exist on hotplugged memory.
>> 		 */
>> 		if (context == MEMINIT_EARLY) {
>>-			if (overlap_memmap_init(zone, &pfn))
>>-				continue;
>> 			if (defer_init(nid, pfn, zone_end_pfn)) {
>> 				deferred_struct_pages = true;
>> 				break;
>>@@ -971,6 +947,7 @@ static void __init memmap_init(void)
>> 
>> 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
>> 		struct pglist_data *node = NODE_DATA(nid);
>>+		struct memblock_region *r = &memblock.memory.regions[i];
>> 
>> 		for (j = 0; j < MAX_NR_ZONES; j++) {
>> 			struct zone *zone = node->node_zones + j;
>>@@ -978,6 +955,18 @@ static void __init memmap_init(void)
>> 			if (!populated_zone(zone))
>> 				continue;
>> 
>>+			if (mirrored_kernelcore) {
>>+				const bool is_mirror = memblock_is_mirror(r);
>>+				const bool is_movable_zone = (j == ZONE_MOVABLE);
>>+
>>+				if (is_mirror && is_movable_zone)
>>+					continue;
>>+
>>+				if (!is_mirror && !is_movable_zone &&
>>+				    start_pfn >= zone_movable_pfn[nid])
>>+					continue;
>
>IIUC, when mirrored_kernelcore is set but !memblock_has_mirror() or
>is_kdump_kernel(), zone_movable_pfn[nid] stays 0.
>
>This means the check above will skip all memory regions.
>

I did some tests. When mirrored_kernelcore && !memblock_has_mirror(),
there is no mirror memblock region, which leaves zone_movable_pfn[nid]
at 0.

So the logic above will skip all memory regions.

With the code adjusted as below, my local test passes and the kernel
boots as expected.

From 6351ac79a17edbfd830510fba2959ddc47b17258 Mon Sep 17 00:00:00 2001
From: Wei Yang <richard.weiyang@gmail.com>
Date: Wed, 22 Apr 2026 09:13:24 +0800
Subject: [PATCH] skip overlap region higher level

---
 mm/mm_init.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 79f93f2a90cf..7a85ba58e87f 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -916,8 +916,8 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 		 * function.  They do not exist on hotplugged memory.
 		 */
 		if (context == MEMINIT_EARLY) {
-			if (overlap_memmap_init(zone, &pfn))
-				continue;
+			// if (overlap_memmap_init(zone, &pfn))
+			// 	continue;
 			if (defer_init(nid, pfn, zone_end_pfn)) {
 				deferred_struct_pages = true;
 				break;
@@ -974,6 +974,17 @@ static void __init memmap_init_zone_range(struct zone *zone,
 	*hole_pfn = end_pfn;
 }
 
+static bool __init region_overlapped(struct memblock_region *rgn, unsigned long zone_type)
+{
+	if (zone_type == ZONE_MOVABLE && memblock_is_mirror(rgn))
+		return true;
+
+	if (zone_type == ZONE_NORMAL && !memblock_is_mirror(rgn))
+		return true;
+
+	return false;
+}
+
 static void __init memmap_init(void)
 {
 	unsigned long start_pfn, end_pfn;
@@ -985,10 +996,15 @@ static void __init memmap_init(void)
 
 		for (j = 0; j < MAX_NR_ZONES; j++) {
 			struct zone *zone = node->node_zones + j;
+			struct memblock_region *r = &memblock.memory.regions[i];
 
 			if (!populated_zone(zone))
 				continue;
 
+			if (mirrored_kernelcore && zone_movable_pfn[nid] &&
+			    region_overlapped(r, j))
+				continue;
+
 			memmap_init_zone_range(zone, start_pfn, end_pfn,
 					       &hole_pfn);
 			zone_id = j;
@@ -1257,13 +1273,12 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 			end_pfn = clamp(memblock_region_memory_end_pfn(r),
 					zone_start_pfn, zone_end_pfn);
 
-			if (zone_type == ZONE_MOVABLE &&
-			    memblock_is_mirror(r))
-				nr_absent += end_pfn - start_pfn;
+			if (start_pfn == end_pfn)
+				continue;
 
-			if (zone_type == ZONE_NORMAL &&
-			    !memblock_is_mirror(r))
+			if (region_overlapped(r, zone_type))
 				nr_absent += end_pfn - start_pfn;
+
 		}
 	}
 
I want to confirm: the logic in zone_absent_pages_in_node() only handles
ZONE_NORMAL and ZONE_MOVABLE. So the assumption is that ZONE_MOVABLE can
only overlap with ZONE_NORMAL?

When kernelcore=[nn]M is used, the "highest" populated zone is picked to
be ZONE_MOVABLE, as indicated by find_usable_zone_for_movable(). So it
looks possible for ZONE_DMA32 to be chosen as ZONE_MOVABLE.

For kernelcore=mirror, do we want to eliminate that complexity?

-- 
Wei Yang
Help you, Help me


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* RE: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-22  1:11   ` Wei Yang
  2026-04-22  3:26     ` Wei Yang
@ 2026-04-22  7:08     ` Liu, Yuan1
  1 sibling, 0 replies; 15+ messages in thread
From: Liu, Yuan1 @ 2026-04-22  7:08 UTC (permalink / raw)
  To: Wei Yang
  Cc: David Hildenbrand, Oscar Salvador, Mike Rapoport,
	linux-mm@kvack.org, Hu, Yong, Zou, Nanhai, Tim Chen, Zhuo, Qiuxu,
	Chen, Yu C, Deng, Pan, Li, Tianyou, Chen Zhang,
	linux-kernel@vger.kernel.org


> -----Original Message-----
> From: Wei Yang <richard.weiyang@gmail.com>
> Sent: Wednesday, April 22, 2026 9:11 AM
> To: Liu, Yuan1 <yuan1.liu@intel.com>
> Cc: David Hildenbrand <david@kernel.org>; Oscar Salvador
> <osalvador@suse.de>; Mike Rapoport <rppt@kernel.org>; Wei Yang
> <richard.weiyang@gmail.com>; linux-mm@kvack.org; Hu, Yong
> <yong.hu@intel.com>; Zou, Nanhai <nanhai.zou@intel.com>; Tim Chen
> <tim.c.chen@linux.intel.com>; Zhuo, Qiuxu <qiuxu.zhuo@intel.com>; Chen, Yu
> C <yu.c.chen@intel.com>; Deng, Pan <pan.deng@intel.com>; Li, Tianyou
> <tianyou.li@intel.com>; Chen Zhang <zhangchen.kidd@jd.com>; linux-
> kernel@vger.kernel.org
> Subject: Re: [PATCH v4 1/2] mm: move overlap memory map init check to
> memmap_init()
> 
> On Tue, Apr 21, 2026 at 08:55:07AM -0400, Yuan Liu wrote:
> >Move the overlap memmap init check from memmap_init_range() into
> >memmap_init().
> >
> >When mirrored kernelcore is enabled, avoid memory map initialization
> >for overlap regions. There are two cases that may overlap: a mirror
> >memory region assigned to movable zone, or a non-mirror memory region
> >assigned to a non-movable zone but falling within the movable zone
> >range.
> >
> >Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> >---
> > mm/mm_init.c | 37 +++++++++++++------------------------
> > 1 file changed, 13 insertions(+), 24 deletions(-)
> >
> >diff --git a/mm/mm_init.c b/mm/mm_init.c
> >index df34797691bd..2b5233060504 100644
> >--- a/mm/mm_init.c
> >+++ b/mm/mm_init.c
> >@@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t
> start,
> > 	}
> > }
> >
> >-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped
> init */
> >-static bool __meminit
> >-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
> >-{
> >-	static struct memblock_region *r;
> >-
> >-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
> >-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
> >-			for_each_mem_region(r) {
> >-				if (*pfn < memblock_region_memory_end_pfn(r))
> >-					break;
> >-			}
> >-		}
> >-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
> >-		    memblock_is_mirror(r)) {
> >-			*pfn = memblock_region_memory_end_pfn(r);
> >-			return true;
> >-		}
> >-	}
> >-	return false;
> >-}
> >-
> > /*
> >  * Only struct pages that correspond to ranges defined by
> memblock.memory
> >  * are zeroed and initialized by going through __init_single_page()
> during
> >@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size,
> int nid, unsigned long zone
> > 		 * function.  They do not exist on hotplugged memory.
> > 		 */
> > 		if (context == MEMINIT_EARLY) {
> >-			if (overlap_memmap_init(zone, &pfn))
> >-				continue;
> > 			if (defer_init(nid, pfn, zone_end_pfn)) {
> > 				deferred_struct_pages = true;
> > 				break;
> >@@ -971,6 +947,7 @@ static void __init memmap_init(void)
> >
> > 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
> {
> > 		struct pglist_data *node = NODE_DATA(nid);
> >+		struct memblock_region *r = &memblock.memory.regions[i];
> >
> > 		for (j = 0; j < MAX_NR_ZONES; j++) {
> > 			struct zone *zone = node->node_zones + j;
> >@@ -978,6 +955,18 @@ static void __init memmap_init(void)
> > 			if (!populated_zone(zone))
> > 				continue;
> >
> >+			if (mirrored_kernelcore) {
> >+				const bool is_mirror = memblock_is_mirror(r);
> >+				const bool is_movable_zone = (j == ZONE_MOVABLE);
> >+
> >+				if (is_mirror && is_movable_zone)
> >+					continue;
> >+
> >+				if (!is_mirror && !is_movable_zone &&
> >+				    start_pfn >= zone_movable_pfn[nid])
> >+					continue;
> 
> IIUC, when mirrored_kernelcore is set but !memblock_has_mirror() or
> is_kdump_kernel(), zone_movable_pfn[nid] is kept to be 0.
> 
> This means it will skip all memory regions.

You're right. I verified the case you mentioned, where mirrored_kernelcore
is enabled but there are no mirrored memory blocks, and it indeed has a problem.

Thank you very much for pointing that out.

> >+			}
> >+
> > 			memmap_init_zone_range(zone, start_pfn, end_pfn,
> > 					       &hole_pfn);
> > 			zone_id = j;
> >--
> >2.47.3
> 
> --
> Wei Yang
> Help you, Help me


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
  2026-04-21 12:55 [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range Yuan Liu
  2026-04-21 12:55 ` [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init() Yuan Liu
  2026-04-21 12:55 ` [PATCH v4 2/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range Yuan Liu
@ 2026-04-22  7:46 ` David Hildenbrand (Arm)
  2026-04-22  7:56   ` Liu, Yuan1
  2 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-22  7:46 UTC (permalink / raw)
  To: Yuan Liu, Oscar Salvador, Mike Rapoport, Wei Yang
  Cc: linux-mm, Yong Hu, Nanhai Zou, Tim Chen, Qiuxu Zhuo, Yu C Chen,
	Pan Deng, Tianyou Li, Chen Zhang, linux-kernel

On 4/21/26 14:55, Yuan Liu wrote:
> This series cleans up the overlap memory map init check and
> optimizes zone contiguous check when changing pfn range.
> 
> In addition to providing a significant improvement for VM hotplug
> (see the second patch for reference), it brings benefits for CXL
> hotplug as well. The link is as follows
> https://lore.kernel.org/all/20260409023552.GA2807@AE/
> 
> v3 link:
>     https://lore.kernel.org/all/20260408031615.1831922-1-yuan1.liu@intel.com/
> 
> v4 changes:
>     Add a new patch for clean up overlap memory map init check

Didn't you also want to add a patch to improve shrink_zone_span to check both
sides of the PAGES_PER_SUBSECTION block for fitting nid+zid?

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
  2026-04-22  7:46 ` [PATCH v4 0/2] " David Hildenbrand (Arm)
@ 2026-04-22  7:56   ` Liu, Yuan1
  2026-04-22 19:13     ` David Hildenbrand (Arm)
  0 siblings, 1 reply; 15+ messages in thread
From: Liu, Yuan1 @ 2026-04-22  7:56 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Oscar Salvador, Mike Rapoport, Wei Yang
  Cc: linux-mm@kvack.org, Hu, Yong, Zou, Nanhai, Tim Chen, Zhuo, Qiuxu,
	Chen, Yu C, Deng, Pan, Li, Tianyou, Chen Zhang,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: David Hildenbrand (Arm) <david@kernel.org>
> Sent: Wednesday, April 22, 2026 3:47 PM
> To: Liu, Yuan1 <yuan1.liu@intel.com>; Oscar Salvador <osalvador@suse.de>;
> Mike Rapoport <rppt@kernel.org>; Wei Yang <richard.weiyang@gmail.com>
> Cc: linux-mm@kvack.org; Hu, Yong <yong.hu@intel.com>; Zou, Nanhai
> <nanhai.zou@intel.com>; Tim Chen <tim.c.chen@linux.intel.com>; Zhuo, Qiuxu
> <qiuxu.zhuo@intel.com>; Chen, Yu C <yu.c.chen@intel.com>; Deng, Pan
> <pan.deng@intel.com>; Li, Tianyou <tianyou.li@intel.com>; Chen Zhang
> <zhangchen.kidd@jd.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone
> contiguous check when changing pfn range
> 
> On 4/21/26 14:55, Yuan Liu wrote:
> > This series cleans up the overlap memory map init check and
> > optimizes zone contiguous check when changing pfn range.
> >
> > In addition to providing a significant improvement for VM hotplug
> > (see the second patch for reference), it brings benefits for CXL
> > hotplug as well. The link is as follows
> > https://lore.kernel.org/all/20260409023552.GA2807@AE/
> >
> > v3 link:
> >     https://lore.kernel.org/all/20260408031615.1831922-1-
> yuan1.liu@intel.com/
> >
> > v4 changes:
> >     Add a new patch for clean up overlap memory map init check
> 
> Didn't you also wanted to add a patch to improve shrink_zone_span to check
> both
> sides of the PAGES_PER_SUBSECTION block for fitting nid+zid?

Hi David

My apologies for missing this. I will include it in the next version.
 
> --
> Cheers,
> 
> David

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-22  3:26     ` Wei Yang
@ 2026-04-22  9:28       ` Liu, Yuan1
  2026-04-24  1:05         ` Wei Yang
  0 siblings, 1 reply; 15+ messages in thread
From: Liu, Yuan1 @ 2026-04-22  9:28 UTC (permalink / raw)
  To: Wei Yang
  Cc: David Hildenbrand, Oscar Salvador, Mike Rapoport,
	linux-mm@kvack.org, Hu, Yong, Zou, Nanhai, Tim Chen, Zhuo, Qiuxu,
	Chen, Yu C, Deng, Pan, Li, Tianyou, Chen Zhang,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: Wei Yang <richard.weiyang@gmail.com>
> Sent: Wednesday, April 22, 2026 11:27 AM
> To: Wei Yang <richard.weiyang@gmail.com>
> Cc: Liu, Yuan1 <yuan1.liu@intel.com>; David Hildenbrand
> <david@kernel.org>; Oscar Salvador <osalvador@suse.de>; Mike Rapoport
> <rppt@kernel.org>; linux-mm@kvack.org; Hu, Yong <yong.hu@intel.com>; Zou,
> Nanhai <nanhai.zou@intel.com>; Tim Chen <tim.c.chen@linux.intel.com>;
> Zhuo, Qiuxu <qiuxu.zhuo@intel.com>; Chen, Yu C <yu.c.chen@intel.com>;
> Deng, Pan <pan.deng@intel.com>; Li, Tianyou <tianyou.li@intel.com>; Chen
> Zhang <zhangchen.kidd@jd.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v4 1/2] mm: move overlap memory map init check to
> memmap_init()
> 
> On Wed, Apr 22, 2026 at 01:11:26AM +0000, Wei Yang wrote:
> >On Tue, Apr 21, 2026 at 08:55:07AM -0400, Yuan Liu wrote:
> >>Move the overlap memmap init check from memmap_init_range() into
> >>memmap_init().
> >>
> >>When mirrored kernelcore is enabled, avoid memory map initialization
> >>for overlap regions. There are two cases that may overlap: a mirror
> >>memory region assigned to movable zone, or a non-mirror memory region
> >>assigned to a non-movable zone but falling within the movable zone
> >>range.
> >>
> >>Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> >>---
> >> mm/mm_init.c | 37 +++++++++++++------------------------
> >> 1 file changed, 13 insertions(+), 24 deletions(-)
> >>
> >>diff --git a/mm/mm_init.c b/mm/mm_init.c
> >>index df34797691bd..2b5233060504 100644
> >>--- a/mm/mm_init.c
> >>+++ b/mm/mm_init.c
> >>@@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t
> start,
> >> 	}
> >> }
> >>
> >>-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped
> init */
> >>-static bool __meminit
> >>-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
> >>-{
> >>-	static struct memblock_region *r;
> >>-
> >>-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
> >>-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
> >>-			for_each_mem_region(r) {
> >>-				if (*pfn < memblock_region_memory_end_pfn(r))
> >>-					break;
> >>-			}
> >>-		}
> >>-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
> >>-		    memblock_is_mirror(r)) {
> >>-			*pfn = memblock_region_memory_end_pfn(r);
> >>-			return true;
> >>-		}
> >>-	}
> >>-	return false;
> >>-}
> >>-
> >> /*
> >>  * Only struct pages that correspond to ranges defined by
> memblock.memory
> >>  * are zeroed and initialized by going through __init_single_page()
> during
> >>@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size,
> int nid, unsigned long zone
> >> 		 * function.  They do not exist on hotplugged memory.
> >> 		 */
> >> 		if (context == MEMINIT_EARLY) {
> >>-			if (overlap_memmap_init(zone, &pfn))
> >>-				continue;
> >> 			if (defer_init(nid, pfn, zone_end_pfn)) {
> >> 				deferred_struct_pages = true;
> >> 				break;
> >>@@ -971,6 +947,7 @@ static void __init memmap_init(void)
> >>
> >> 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
> {
> >> 		struct pglist_data *node = NODE_DATA(nid);
> >>+		struct memblock_region *r = &memblock.memory.regions[i];
> >>
> >> 		for (j = 0; j < MAX_NR_ZONES; j++) {
> >> 			struct zone *zone = node->node_zones + j;
> >>@@ -978,6 +955,18 @@ static void __init memmap_init(void)
> >> 			if (!populated_zone(zone))
> >> 				continue;
> >>
> >>+			if (mirrored_kernelcore) {
> >>+				const bool is_mirror = memblock_is_mirror(r);
> >>+				const bool is_movable_zone = (j == ZONE_MOVABLE);
> >>+
> >>+				if (is_mirror && is_movable_zone)
> >>+					continue;
> >>+
> >>+				if (!is_mirror && !is_movable_zone &&
> >>+				    start_pfn >= zone_movable_pfn[nid])
> >>+					continue;
> >
> >IIUC, when mirrored_kernelcore is set but !memblock_has_mirror() or
> >is_kdump_kernel(), zone_movable_pfn[nid] is kept to be 0.
> >
> >This means it will skip all memory regions.
> >
> 
> Did some tests. When mirrored_kernelcore && !memblock_has_mirror(), which
> means there is no is_mirror memblock. This will leave
> zone_movable_pfn[nid] 0.
> 
> So for all memory regions, the above logic will skip them.
> 
> Adjust the code as below, my local test could pass and kernel bootup as
> expected.
> 
> From 6351ac79a17edbfd830510fba2959ddc47b17258 Mon Sep 17 00:00:00 2001
> From: Wei Yang <richard.weiyang@gmail.com>
> Date: Wed, 22 Apr 2026 09:13:24 +0800
> Subject: [PATCH] skip overlap region higher level
> 
> ---
>  mm/mm_init.c | 29 ++++++++++++++++++++++-------
>  1 file changed, 22 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 79f93f2a90cf..7a85ba58e87f 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -916,8 +916,8 @@ void __meminit memmap_init_range(unsigned long size,
> int nid, unsigned long zone
>  		 * function.  They do not exist on hotplugged memory.
>  		 */
>  		if (context == MEMINIT_EARLY) {
> -			if (overlap_memmap_init(zone, &pfn))
> -				continue;
> +			// if (overlap_memmap_init(zone, &pfn))
> +			// 	continue;
>  			if (defer_init(nid, pfn, zone_end_pfn)) {
>  				deferred_struct_pages = true;
>  				break;
> @@ -974,6 +974,17 @@ static void __init memmap_init_zone_range(struct zone
> *zone,
>  	*hole_pfn = end_pfn;
>  }
> 
> +static bool __init region_overlapped(struct memblock_region *rgn,
> unsigned long zone_type)
> +{
> +	if (zone_type == ZONE_MOVABLE && memblock_is_mirror(rgn))
> +		return true;
> +
> +	if (zone_type == ZONE_NORMAL && !memblock_is_mirror(rgn))
> +		return true;
> +
> +	return false;
> +}
> +
>  static void __init memmap_init(void)
>  {
>  	unsigned long start_pfn, end_pfn;
> @@ -985,10 +996,15 @@ static void __init memmap_init(void)
> 
>  		for (j = 0; j < MAX_NR_ZONES; j++) {
>  			struct zone *zone = node->node_zones + j;
> +			struct memblock_region *r = &memblock.memory.regions[i];
> 
>  			if (!populated_zone(zone))
>  				continue;
> 
> +			if (mirrored_kernelcore && zone_movable_pfn[nid] &&
> +			    region_overlapped(r, j))
> +				continue;
> +
>  			memmap_init_zone_range(zone, start_pfn, end_pfn,
>  					       &hole_pfn);
>  			zone_id = j;
> @@ -1257,13 +1273,12 @@ static unsigned long __init
> zone_absent_pages_in_node(int nid,
>  			end_pfn = clamp(memblock_region_memory_end_pfn(r),
>  					zone_start_pfn, zone_end_pfn);
> 
> -			if (zone_type == ZONE_MOVABLE &&
> -			    memblock_is_mirror(r))
> -				nr_absent += end_pfn - start_pfn;
> +			if (start_pfn == end_pfn)
> +				continue;
> 
> -			if (zone_type == ZONE_NORMAL &&
> -			    !memblock_is_mirror(r))
> +			if (region_overlapped(r, zone_type))
>  				nr_absent += end_pfn - start_pfn;
> +
>  		}
>  	}

Hi Wei Yang

I ran some tests based on this patch and didn't observe any issues. 
Thanks for the patch.

> Want to confirm, the logic in zone_absent_pages_in_node() only handle
> ZONE_NORMAL and ZONE_MOVABLE. So the assumption is ZONE_MOVABLE only could
> overlap with ZONE_NORMAL?

I think that since memory below 4GB does not include mirrored memory, and 4GB
is also the upper boundary of ZONE_DMA32, ZONE_MOVABLE may only overlap with
ZONE_NORMAL.

This is just my understanding, so it would be good to get a clearer confirmation
from David and Mike.

void __init arch_zone_limits_init(unsigned long *max_zone_pfns) {
...
    /* 4GB broken PCI/AGP hardware bus master zone */
    #define MAX_DMA32_PFN (1UL << (32 - PAGE_SHIFT))
...
}
static void __init find_zone_movable_pfns_for_nodes(void) {
...
    if (usable_startpfn < PHYS_PFN(SZ_4G)) {
        mem_below_4gb_not_mirrored = true;
        continue;
    }
...
}

> When kernelcore=[nn]M is used, the "highest" populated zone is picked up
> to be
> ZONE_MOVABLE, as indicated by find_usable_zone_for_movable(). So looks it
> is
> possible to choose ZONE_DMA32 as ZONE_MOVABLE.
> 
> For kernelcore=mirror, we want to eliminate the complexity?
>
> --
> Wei Yang
> Help you, Help me


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
  2026-04-22  7:56   ` Liu, Yuan1
@ 2026-04-22 19:13     ` David Hildenbrand (Arm)
  2026-04-23  3:17       ` Liu, Yuan1
  0 siblings, 1 reply; 15+ messages in thread
From: David Hildenbrand (Arm) @ 2026-04-22 19:13 UTC (permalink / raw)
  To: Liu, Yuan1, Oscar Salvador, Mike Rapoport, Wei Yang
  Cc: linux-mm@kvack.org, Hu, Yong, Zou, Nanhai, Tim Chen, Zhuo, Qiuxu,
	Chen, Yu C, Deng, Pan, Li, Tianyou, Chen Zhang,
	linux-kernel@vger.kernel.org

On 4/22/26 09:56, Liu, Yuan1 wrote:
>> -----Original Message-----
>> From: David Hildenbrand (Arm) <david@kernel.org>
>> Sent: Wednesday, April 22, 2026 3:47 PM
>> To: Liu, Yuan1 <yuan1.liu@intel.com>; Oscar Salvador <osalvador@suse.de>;
>> Mike Rapoport <rppt@kernel.org>; Wei Yang <richard.weiyang@gmail.com>
>> Cc: linux-mm@kvack.org; Hu, Yong <yong.hu@intel.com>; Zou, Nanhai
>> <nanhai.zou@intel.com>; Tim Chen <tim.c.chen@linux.intel.com>; Zhuo, Qiuxu
>> <qiuxu.zhuo@intel.com>; Chen, Yu C <yu.c.chen@intel.com>; Deng, Pan
>> <pan.deng@intel.com>; Li, Tianyou <tianyou.li@intel.com>; Chen Zhang
>> <zhangchen.kidd@jd.com>; linux-kernel@vger.kernel.org
>> Subject: Re: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone
>> contiguous check when changing pfn range
>>
>> On 4/21/26 14:55, Yuan Liu wrote:
>>> This series cleans up the overlap memory map init check and
>>> optimizes zone contiguous check when changing pfn range.
>>>
>>> In addition to providing a significant improvement for VM hotplug
>>> (see the second patch for reference), it brings benefits for CXL
>>> hotplug as well. The link is as follows
>>> https://lore.kernel.org/all/20260409023552.GA2807@AE/
>>>
>>> v3 link:
>>>     https://lore.kernel.org/all/20260408031615.1831922-1-
>> yuan1.liu@intel.com/
>>>
>>> v4 changes:
>>>     Add a new patch for clean up overlap memory map init check
>>
>> Didn't you also wanted to add a patch to improve shrink_zone_span to check
>> both
>> sides of the PAGES_PER_SUBSECTION block for fitting nid+zid?
> 
> Hi David
> 
> My apologies for missing this. I will include it in the next version.

And isn't there also the problem with the subsection pfn_valid() handling etc
where I proposed a change?

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
  2026-04-22 19:13     ` David Hildenbrand (Arm)
@ 2026-04-23  3:17       ` Liu, Yuan1
  0 siblings, 0 replies; 15+ messages in thread
From: Liu, Yuan1 @ 2026-04-23  3:17 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Oscar Salvador, Mike Rapoport, Wei Yang
  Cc: linux-mm@kvack.org, Hu, Yong, Zou, Nanhai, Tim Chen, Zhuo, Qiuxu,
	Chen, Yu C, Deng, Pan, Li, Tianyou, Chen Zhang,
	linux-kernel@vger.kernel.org

> -----Original Message-----
> From: David Hildenbrand (Arm) <david@kernel.org>
> Sent: Thursday, April 23, 2026 3:13 AM
> To: Liu, Yuan1 <yuan1.liu@intel.com>; Oscar Salvador <osalvador@suse.de>;
> Mike Rapoport <rppt@kernel.org>; Wei Yang <richard.weiyang@gmail.com>
> Cc: linux-mm@kvack.org; Hu, Yong <yong.hu@intel.com>; Zou, Nanhai
> <nanhai.zou@intel.com>; Tim Chen <tim.c.chen@linux.intel.com>; Zhuo, Qiuxu
> <qiuxu.zhuo@intel.com>; Chen, Yu C <yu.c.chen@intel.com>; Deng, Pan
> <pan.deng@intel.com>; Li, Tianyou <tianyou.li@intel.com>; Chen Zhang
> <zhangchen.kidd@jd.com>; linux-kernel@vger.kernel.org
> Subject: Re: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone
> contiguous check when changing pfn range
> 
> On 4/22/26 09:56, Liu, Yuan1 wrote:
> >> -----Original Message-----
> >> From: David Hildenbrand (Arm) <david@kernel.org>
> >> Sent: Wednesday, April 22, 2026 3:47 PM
> >> To: Liu, Yuan1 <yuan1.liu@intel.com>; Oscar Salvador
> <osalvador@suse.de>;
> >> Mike Rapoport <rppt@kernel.org>; Wei Yang <richard.weiyang@gmail.com>
> >> Cc: linux-mm@kvack.org; Hu, Yong <yong.hu@intel.com>; Zou, Nanhai
> >> <nanhai.zou@intel.com>; Tim Chen <tim.c.chen@linux.intel.com>; Zhuo,
> Qiuxu
> >> <qiuxu.zhuo@intel.com>; Chen, Yu C <yu.c.chen@intel.com>; Deng, Pan
> >> <pan.deng@intel.com>; Li, Tianyou <tianyou.li@intel.com>; Chen Zhang
> >> <zhangchen.kidd@jd.com>; linux-kernel@vger.kernel.org
> >> Subject: Re: [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone
> >> contiguous check when changing pfn range
> >>
> >> On 4/21/26 14:55, Yuan Liu wrote:
> >>> This series cleans up the overlap memory map init check and
> >>> optimizes zone contiguous check when changing pfn range.
> >>>
> >>> In addition to providing a significant improvement for VM hotplug
> >>> (see the second patch for reference), it brings benefits for CXL
> >>> hotplug as well. The link is as follows
> >>> https://lore.kernel.org/all/20260409023552.GA2807@AE/
> >>>
> >>> v3 link:
> >>>     https://lore.kernel.org/all/20260408031615.1831922-1-
> >> yuan1.liu@intel.com/
> >>>
> >>> v4 changes:
> >>>     Add a new patch for clean up overlap memory map init check
> >>
> >> Didn't you also wanted to add a patch to improve shrink_zone_span to
> check
> >> both
> >> sides of the PAGES_PER_SUBSECTION block for fitting nid+zid?
> >
> > Hi David
> >
> > My apologies for missing this. I will include it in the next version.
> 
> And isn't there also the problem with the subsection pfn_valid() handling
> etc
> where I proposed a change?

Hi David

Sorry I didn’t catch your earlier discussion with Wei Yang. I’ve found your 
suggestion and implementation, and will split the pfn_valid/first_valid_pfn
changes into a separate patch for the next version.

For testing, I'll reproduce Wei Yang's setup (memblock_remove() of a
sub-section sized hole inside an early section) and confirm that
zone contiguous stays at 0, matching the behaviour before this series.

Thanks for the reminder, and thanks to Wei for the original report.

> --
> Cheers,
> 
> David

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-22  9:28       ` Liu, Yuan1
@ 2026-04-24  1:05         ` Wei Yang
  2026-04-24  7:49           ` Liu, Yuan1
  0 siblings, 1 reply; 15+ messages in thread
From: Wei Yang @ 2026-04-24  1:05 UTC (permalink / raw)
  To: Liu, Yuan1
  Cc: Wei Yang, David Hildenbrand, Oscar Salvador, Mike Rapoport,
	linux-mm@kvack.org, Hu, Yong, Zou, Nanhai, Tim Chen, Zhuo, Qiuxu,
	Chen, Yu C, Deng, Pan, Li, Tianyou, Chen Zhang,
	linux-kernel@vger.kernel.org

On Wed, Apr 22, 2026 at 09:28:52AM +0000, Liu, Yuan1 wrote:
>> -----Original Message-----
>> From: Wei Yang <richard.weiyang@gmail.com>
>> Sent: Wednesday, April 22, 2026 11:27 AM
>> To: Wei Yang <richard.weiyang@gmail.com>
>> Cc: Liu, Yuan1 <yuan1.liu@intel.com>; David Hildenbrand
>> <david@kernel.org>; Oscar Salvador <osalvador@suse.de>; Mike Rapoport
>> <rppt@kernel.org>; linux-mm@kvack.org; Hu, Yong <yong.hu@intel.com>; Zou,
>> Nanhai <nanhai.zou@intel.com>; Tim Chen <tim.c.chen@linux.intel.com>;
>> Zhuo, Qiuxu <qiuxu.zhuo@intel.com>; Chen, Yu C <yu.c.chen@intel.com>;
>> Deng, Pan <pan.deng@intel.com>; Li, Tianyou <tianyou.li@intel.com>; Chen
>> Zhang <zhangchen.kidd@jd.com>; linux-kernel@vger.kernel.org
>> Subject: Re: [PATCH v4 1/2] mm: move overlap memory map init check to
>> memmap_init()
>> 
>> On Wed, Apr 22, 2026 at 01:11:26AM +0000, Wei Yang wrote:
>> >On Tue, Apr 21, 2026 at 08:55:07AM -0400, Yuan Liu wrote:
>> >>Move the overlap memmap init check from memmap_init_range() into
>> >>memmap_init().
>> >>
>> >>When mirrored kernelcore is enabled, avoid memory map initialization
>> >>for overlap regions. There are two cases that may overlap: a mirror
>> >>memory region assigned to movable zone, or a non-mirror memory region
>> >>assigned to a non-movable zone but falling within the movable zone
>> >>range.
>> >>
>> >>Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
>> >>---
>> >> mm/mm_init.c | 37 +++++++++++++------------------------
>> >> 1 file changed, 13 insertions(+), 24 deletions(-)
>> >>
>> >>diff --git a/mm/mm_init.c b/mm/mm_init.c
>> >>index df34797691bd..2b5233060504 100644
>> >>--- a/mm/mm_init.c
>> >>+++ b/mm/mm_init.c
>> >>@@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t
>> start,
>> >> 	}
>> >> }
>> >>
>> >>-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped
>> init */
>> >>-static bool __meminit
>> >>-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>> >>-{
>> >>-	static struct memblock_region *r;
>> >>-
>> >>-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
>> >>-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
>> >>-			for_each_mem_region(r) {
>> >>-				if (*pfn < memblock_region_memory_end_pfn(r))
>> >>-					break;
>> >>-			}
>> >>-		}
>> >>-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
>> >>-		    memblock_is_mirror(r)) {
>> >>-			*pfn = memblock_region_memory_end_pfn(r);
>> >>-			return true;
>> >>-		}
>> >>-	}
>> >>-	return false;
>> >>-}
>> >>-
>> >> /*
>> >>  * Only struct pages that correspond to ranges defined by
>> memblock.memory
>> >>  * are zeroed and initialized by going through __init_single_page()
>> during
>> >>@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size,
>> int nid, unsigned long zone
>> >> 		 * function.  They do not exist on hotplugged memory.
>> >> 		 */
>> >> 		if (context == MEMINIT_EARLY) {
>> >>-			if (overlap_memmap_init(zone, &pfn))
>> >>-				continue;
>> >> 			if (defer_init(nid, pfn, zone_end_pfn)) {
>> >> 				deferred_struct_pages = true;
>> >> 				break;
>> >>@@ -971,6 +947,7 @@ static void __init memmap_init(void)
>> >>
>> >> 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid)
>> {
>> >> 		struct pglist_data *node = NODE_DATA(nid);
>> >>+		struct memblock_region *r = &memblock.memory.regions[i];
>> >>
>> >> 		for (j = 0; j < MAX_NR_ZONES; j++) {
>> >> 			struct zone *zone = node->node_zones + j;
>> >>@@ -978,6 +955,18 @@ static void __init memmap_init(void)
>> >> 			if (!populated_zone(zone))
>> >> 				continue;
>> >>
>> >>+			if (mirrored_kernelcore) {
>> >>+				const bool is_mirror = memblock_is_mirror(r);
>> >>+				const bool is_movable_zone = (j == ZONE_MOVABLE);
>> >>+
>> >>+				if (is_mirror && is_movable_zone)
>> >>+					continue;
>> >>+
>> >>+				if (!is_mirror && !is_movable_zone &&
>> >>+				    start_pfn >= zone_movable_pfn[nid])
>> >>+					continue;
>> >
>> >IIUC, when mirrored_kernelcore is set but !memblock_has_mirror() or
>> >is_kdump_kernel(), zone_movable_pfn[nid] is kept to be 0.
>> >
>> >This means it will skip all memory regions.
>> >
>> 
>> Did some tests. When mirrored_kernelcore && !memblock_has_mirror(), which
>> means there is no is_mirror memblock. This will leave
>> zone_movable_pfn[nid] 0.
>> 
>> So for all memory regions, the above logic will skip them.
>> 
>> Adjust the code as below, my local test could pass and kernel bootup as
>> expected.
>> 
>> From 6351ac79a17edbfd830510fba2959ddc47b17258 Mon Sep 17 00:00:00 2001
>> From: Wei Yang <richard.weiyang@gmail.com>
>> Date: Wed, 22 Apr 2026 09:13:24 +0800
>> Subject: [PATCH] skip overlap region higher level
>> 
>> ---
>>  mm/mm_init.c | 29 ++++++++++++++++++++++-------
>>  1 file changed, 22 insertions(+), 7 deletions(-)
>> 
>> diff --git a/mm/mm_init.c b/mm/mm_init.c
>> index 79f93f2a90cf..7a85ba58e87f 100644
>> --- a/mm/mm_init.c
>> +++ b/mm/mm_init.c
>> @@ -916,8 +916,8 @@ void __meminit memmap_init_range(unsigned long size,
>> int nid, unsigned long zone
>>  		 * function.  They do not exist on hotplugged memory.
>>  		 */
>>  		if (context == MEMINIT_EARLY) {
>> -			if (overlap_memmap_init(zone, &pfn))
>> -				continue;
>> +			// if (overlap_memmap_init(zone, &pfn))
>> +			// 	continue;
>>  			if (defer_init(nid, pfn, zone_end_pfn)) {
>>  				deferred_struct_pages = true;
>>  				break;
>> @@ -974,6 +974,17 @@ static void __init memmap_init_zone_range(struct zone
>> *zone,
>>  	*hole_pfn = end_pfn;
>>  }
>> 
>> +static bool __init region_overlapped(struct memblock_region *rgn,
>> unsigned long zone_type)
>> +{
>> +	if (zone_type == ZONE_MOVABLE && memblock_is_mirror(rgn))
>> +		return true;
>> +
>> +	if (zone_type == ZONE_NORMAL && !memblock_is_mirror(rgn))
>> +		return true;
>> +
>> +	return false;
>> +}
>> +
>>  static void __init memmap_init(void)
>>  {
>>  	unsigned long start_pfn, end_pfn;
>> @@ -985,10 +996,15 @@ static void __init memmap_init(void)
>> 
>>  		for (j = 0; j < MAX_NR_ZONES; j++) {
>>  			struct zone *zone = node->node_zones + j;
>> +			struct memblock_region *r = &memblock.memory.regions[i];
>> 
>>  			if (!populated_zone(zone))
>>  				continue;
>> 
>> +			if (mirrored_kernelcore && zone_movable_pfn[nid] &&
>> +			    region_overlapped(r, j))
>> +				continue;
>> +
>>  			memmap_init_zone_range(zone, start_pfn, end_pfn,
>>  					       &hole_pfn);
>>  			zone_id = j;
>> @@ -1257,13 +1273,12 @@ static unsigned long __init
>> zone_absent_pages_in_node(int nid,
>>  			end_pfn = clamp(memblock_region_memory_end_pfn(r),
>>  					zone_start_pfn, zone_end_pfn);
>> 
>> -			if (zone_type == ZONE_MOVABLE &&
>> -			    memblock_is_mirror(r))
>> -				nr_absent += end_pfn - start_pfn;
>> +			if (start_pfn == end_pfn)
>> +				continue;
>> 
>> -			if (zone_type == ZONE_NORMAL &&
>> -			    !memblock_is_mirror(r))
>> +			if (region_overlapped(r, zone_type))
>>  				nr_absent += end_pfn - start_pfn;
>> +
>>  		}
>>  	}
>
>Hi Wei Yang
>
>I ran some tests based on this patch and didn't observe any issues. 
>Thanks for the patch.
>

You are welcome.

Well, maybe we need to do something more. Let me explain what I see.

My assumption about the position of mirror memory is:

   When there is mirror memory in the system, all memory in the low zones
   should be mirror memory. Non-mirror memory can only be in the range of
   ZONE_NORMAL.

   And within the range of ZONE_NORMAL:

     * there could be no mirror memory
     * the mirror memory could be at the head or in the middle of ZONE_NORMAL

Take my test machine as an example, 

    MEMBLOCK configuration:
     memory size = 0x000000017ff7dc00 reserved size = 0x0000000005a9a9c2
     memory.cnt  = 0x3
     memory[0x0]     [0x0000000000001000-0x000000000009efff], 0x000000000009e000 bytes on node 0 flags: 0x0
     memory[0x1]     [0x0000000000100000-0x00000000bffdefff], 0x00000000bfedf000 bytes on node 0 flags: 0x0
     memory[0x2]     [0x0000000100000000-0x00000001bfffffff], 0x00000000c0000000 bytes on node 1 flags: 0x0

The first two memblock regions span ZONE_DMA and ZONE_DMA32. The third one
spans ZONE_NORMAL (when kernelcore is not specified).

So I did test with below code change:

@@ -147,6 +148,14 @@ static int __init numa_register_nodes(void)
        }
 
        /* Dump memblock with node info and return. */
+
+       /* Mark mirror by hand */
+       for_each_mem_region(r) {
+               if (i++ < 2)
+                       memblock_mark_mirror(r->base, r->size);
+       }
+

This marks the first two memblock regions as mirror. Then use

        memblock_mark_mirror(0x100000000, 0x40000000);
or 
        memblock_mark_mirror(0x140000000, 0x40000000);

to mark the head 1G or the second 1G of the 3rd memblock region as mirror,
mimicking the overlap case.

So I manually create 3 cases:

A: all ZONE_NORMAL is non-mirror
  memory[0x0]     [0x0000000000001000-0x000000000009efff], node 0 flags: 0x2 mirror
  memory[0x1]     [0x0000000000100000-0x00000000bffdefff], node 0 flags: 0x2 mirror
  memory[0x2]     [0x0000000100000000-0x00000001bfffffff], node 1 flags: 0x0 non-mirror

B: head 1G of ZONE_NORMAL is mirror
  memory[0x0]     [0x0000000000001000-0x000000000009efff], node 0 flags: 0x2 mirror
  memory[0x1]     [0x0000000000100000-0x00000000bffdefff], node 0 flags: 0x2 mirror
  memory[0x2]     [0x0000000100000000-0x000000013fffffff], node 1 flags: 0x2 mirror
  memory[0x3]     [0x0000000140000000-0x00000001bfffffff], node 1 flags: 0x0 non-mirror

C: second 1G of ZONE_NORMAL is mirror
  memory[0x0]     [0x0000000000001000-0x000000000009efff], node 0 flags: 0x2 mirror
  memory[0x1]     [0x0000000000100000-0x00000000bffdefff], node 0 flags: 0x2 mirror
  memory[0x2]     [0x0000000100000000-0x000000013fffffff], node 1 flags: 0x0 non-mirror
  memory[0x3]     [0x0000000140000000-0x000000017fffffff], node 1 flags: 0x2 mirror
  memory[0x4]     [0x0000000180000000-0x00000001bfffffff], node 1 flags: 0x0 non-mirror

The change I proposed works fine for A/B, but for C, pages in
[0x140000000-0x17fffffff] are misplaced.

    Node 1, zone  Normal
            spanned  0
            present  0           <-- missing
            managed  0
    Node 1, zone  Movable
            spanned  786432
            present  524288
            managed  773552      <-- but put in here

The reason is that in adjust_zone_range_for_zone_movable(), ZONE_NORMAL is
truncated, since zone_movable_pfn[nid] equals ZONE_NORMAL's start. So this
range is skipped, and then by "accident" it is initialized to ZONE_MOVABLE by
init_unavailable_range(), and later freed to ZONE_MOVABLE in
__free_pages_core().

After removing this truncation, the zone stats look good.

Node 1, zone   Normal
        spanned  786432
        present  262144
        managed  249310
Node 1, zone  Movable
        spanned  786432
        present  524288
        managed  517223

@@ -1204,10 +1204,7 @@ static void __init adjust_zone_range_for_zone_movable(int nid,
                        *zone_start_pfn < zone_movable_pfn[nid] &&
                        *zone_end_pfn > zone_movable_pfn[nid]) {
                        *zone_end_pfn = zone_movable_pfn[nid];
-
-               /* Check if this whole range is within ZONE_MOVABLE */
-               } else if (*zone_start_pfn >= zone_movable_pfn[nid])
-                       *zone_start_pfn = *zone_end_pfn;
+               }
        }
 }
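
The effect of that truncation can be checked with a small user-space model
(hypothetical names and PFN values; the real kernel function also gates on
mirrored_kernelcore and runs per node):

```c
#include <assert.h>
#include <stdbool.h>

/* User-space model of the truncation in
 * adjust_zone_range_for_zone_movable(): clamp a kernel zone's
 * [start, end) against zone_movable_pfn.  'truncate_whole' selects the
 * original "else if" branch; false models the change in the hunk above. */
static void adjust_range(unsigned long movable_pfn, bool truncate_whole,
			 unsigned long *start, unsigned long *end)
{
	/* Zone straddles the ZONE_MOVABLE boundary: cut off the tail. */
	if (*start < movable_pfn && *end > movable_pfn)
		*end = movable_pfn;
	/* Original behaviour: a zone starting at or above the boundary is
	 * emptied entirely -- this is what misplaces case C's pages. */
	else if (truncate_whole && *start >= movable_pfn)
		*start = *end;
}
```

With truncate_whole=true, a zone whose start equals zone_movable_pfn[nid]
(case C's ZONE_NORMAL) collapses to an empty span; with the truncation
removed, it keeps its range.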

All of the above analysis is based on my assumption about possible mirror
memory positions in the system. If my assumption about mirror memory does not
hold, this may not be true.

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-24  1:05         ` Wei Yang
@ 2026-04-24  7:49           ` Liu, Yuan1
  0 siblings, 0 replies; 15+ messages in thread
From: Liu, Yuan1 @ 2026-04-24  7:49 UTC (permalink / raw)
  To: Wei Yang
  Cc: David Hildenbrand, Oscar Salvador, Mike Rapoport,
	linux-mm@kvack.org, Hu, Yong, Zou, Nanhai, Tim Chen, Zhuo, Qiuxu,
	Chen, Yu C, Deng, Pan, Li, Tianyou, Chen Zhang,
	linux-kernel@vger.kernel.org

[...]
> >> >>diff --git a/mm/mm_init.c b/mm/mm_init.c
> >> >>index df34797691bd..2b5233060504 100644
> >> >>--- a/mm/mm_init.c
> >> >>+++ b/mm/mm_init.c
> >> >>@@ -797,28 +797,6 @@ void __meminit
> reserve_bootmem_region(phys_addr_t
> >> start,
> >> >> 	}
> >> >> }
> >> >>
> >> >>-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an
> overlapped
> >> init */
> >> >>-static bool __meminit
> >> >>-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
> >> >>-{
> >> >>-	static struct memblock_region *r;
> >> >>-
> >> >>-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
> >> >>-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
> >> >>-			for_each_mem_region(r) {
> >> >>-				if (*pfn <
> memblock_region_memory_end_pfn(r))
> >> >>-					break;
> >> >>-			}
> >> >>-		}
> >> >>-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
> >> >>-		    memblock_is_mirror(r)) {
> >> >>-			*pfn = memblock_region_memory_end_pfn(r);
> >> >>-			return true;
> >> >>-		}
> >> >>-	}
> >> >>-	return false;
> >> >>-}
> >> >>-
> >> >> /*
> >> >>  * Only struct pages that correspond to ranges defined by
> >> memblock.memory
> >> >>  * are zeroed and initialized by going through __init_single_page()
> >> during
> >> >>@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long
> size,
> >> int nid, unsigned long zone
> >> >> 		 * function.  They do not exist on hotplugged memory.
> >> >> 		 */
> >> >> 		if (context == MEMINIT_EARLY) {
> >> >>-			if (overlap_memmap_init(zone, &pfn))
> >> >>-				continue;
> >> >> 			if (defer_init(nid, pfn, zone_end_pfn)) {
> >> >> 				deferred_struct_pages = true;
> >> >> 				break;
> >> >>@@ -971,6 +947,7 @@ static void __init memmap_init(void)
> >> >>
> >> >> 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn,
> &nid)
> >> {
> >> >> 		struct pglist_data *node = NODE_DATA(nid);
> >> >>+		struct memblock_region *r = &memblock.memory.regions[i];
> >> >>
> >> >> 		for (j = 0; j < MAX_NR_ZONES; j++) {
> >> >> 			struct zone *zone = node->node_zones + j;
> >> >>@@ -978,6 +955,18 @@ static void __init memmap_init(void)
> >> >> 			if (!populated_zone(zone))
> >> >> 				continue;
> >> >>
> >> >>+			if (mirrored_kernelcore) {
> >> >>+				const bool is_mirror =
> memblock_is_mirror(r);
> >> >>+				const bool is_movable_zone = (j ==
> ZONE_MOVABLE);
> >> >>+
> >> >>+				if (is_mirror && is_movable_zone)
> >> >>+					continue;
> >> >>+
> >> >>+				if (!is_mirror && !is_movable_zone &&
> >> >>+				    start_pfn >= zone_movable_pfn[nid])
> >> >>+					continue;
> >> >
> >> >IIUC, when mirrored_kernelcore is set but !memblock_has_mirror() or
> >> >is_kdump_kernel(), zone_movable_pfn[nid] is kept to be 0.
> >> >
> >> >This means it will skip all memory regions.
> >> >
> >>
> >> Did some tests. When mirrored_kernelcore && !memblock_has_mirror(),
> which
> >> means there is no is_mirror memblock. This will leave
> >> zone_movable_pfn[nid] 0.
> >>
> >> So for all memory regions, the above logic will skip them.
> >>
> >> Adjust the code as below, my local test could pass and kernel bootup as
> >> expected.
> >>
> >> From 6351ac79a17edbfd830510fba2959ddc47b17258 Mon Sep 17 00:00:00 2001
> >> From: Wei Yang <richard.weiyang@gmail.com>
> >> Date: Wed, 22 Apr 2026 09:13:24 +0800
> >> Subject: [PATCH] skip overlap region higher level
> >>
> >> ---
> >>  mm/mm_init.c | 29 ++++++++++++++++++++++-------
> >>  1 file changed, 22 insertions(+), 7 deletions(-)
> >>
> >> diff --git a/mm/mm_init.c b/mm/mm_init.c
> >> index 79f93f2a90cf..7a85ba58e87f 100644
> >> --- a/mm/mm_init.c
> >> +++ b/mm/mm_init.c
> >> @@ -916,8 +916,8 @@ void __meminit memmap_init_range(unsigned long
> size,
> >> int nid, unsigned long zone
> >>  		 * function.  They do not exist on hotplugged memory.
> >>  		 */
> >>  		if (context == MEMINIT_EARLY) {
> >> -			if (overlap_memmap_init(zone, &pfn))
> >> -				continue;
> >> +			// if (overlap_memmap_init(zone, &pfn))
> >> +			// 	continue;
> >>  			if (defer_init(nid, pfn, zone_end_pfn)) {
> >>  				deferred_struct_pages = true;
> >>  				break;
> >> @@ -974,6 +974,17 @@ static void __init memmap_init_zone_range(struct
> zone
> >> *zone,
> >>  	*hole_pfn = end_pfn;
> >>  }
> >>
> >> +static bool __init region_overlapped(struct memblock_region *rgn,
> >> unsigned long zone_type)
> >> +{
> >> +	if (zone_type == ZONE_MOVABLE && memblock_is_mirror(rgn))
> >> +		return true;
> >> +
> >> +	if (zone_type == ZONE_NORMAL && !memblock_is_mirror(rgn))
> >> +		return true;
> >> +
> >> +	return false;
> >> +}
> >> +
> >>  static void __init memmap_init(void)
> >>  {
> >>  	unsigned long start_pfn, end_pfn;
> >> @@ -985,10 +996,15 @@ static void __init memmap_init(void)
> >>
> >>  		for (j = 0; j < MAX_NR_ZONES; j++) {
> >>  			struct zone *zone = node->node_zones + j;
> >> +			struct memblock_region *r = &memblock.memory.regions[i];
> >>
> >>  			if (!populated_zone(zone))
> >>  				continue;
> >>
> >> +			if (mirrored_kernelcore && zone_movable_pfn[nid] &&
> >> +			    region_overlapped(r, j))
> >> +				continue;
> >> +
> >>  			memmap_init_zone_range(zone, start_pfn, end_pfn,
> >>  					       &hole_pfn);
> >>  			zone_id = j;
> >> @@ -1257,13 +1273,12 @@ static unsigned long __init
> >> zone_absent_pages_in_node(int nid,
> >>  			end_pfn = clamp(memblock_region_memory_end_pfn(r),
> >>  					zone_start_pfn, zone_end_pfn);
> >>
> >> -			if (zone_type == ZONE_MOVABLE &&
> >> -			    memblock_is_mirror(r))
> >> -				nr_absent += end_pfn - start_pfn;
> >> +			if (start_pfn == end_pfn)
> >> +				continue;
> >>
> >> -			if (zone_type == ZONE_NORMAL &&
> >> -			    !memblock_is_mirror(r))
> >> +			if (region_overlapped(r, zone_type))
> >>  				nr_absent += end_pfn - start_pfn;
> >> +
> >>  		}
> >>  	}
> >
> >Hi Wei Yang
> >
> >I ran some tests based on this patch and didn't observe any issues.
> >Thanks for the patch.
> >
> 
> You are welcome.
> 
> Well, maybe we need to do something more. Let me explain what I see.
> 
> My assumption of the position of mirror memory is:
> 
>    When there is mirror memory in system, all memory in low zone should be
>    mirror memory. Non-Mirror memory only could be in the range of
> ZONE_NORMAL.
> 
>    And in the range of ZONE_NORMAL
> 
>      * there could be no mirror memory
>      * the mirror memory could be at the head or middle in ZONE_NORMAL
> 
> Take my test machine as an example,
> 
>     MEMBLOCK configuration:
>      memory size = 0x000000017ff7dc00 reserved size = 0x0000000005a9a9c2
>      memory.cnt  = 0x3
>      memory[0x0]     [0x0000000000001000-0x000000000009efff],
> 0x000000000009e000 bytes on node 0 flags: 0x0
>      memory[0x1]     [0x0000000000100000-0x00000000bffdefff],
> 0x00000000bfedf000 bytes on node 0 flags: 0x0
>      memory[0x2]     [0x0000000100000000-0x00000001bfffffff],
> 0x00000000c0000000 bytes on node 1 flags: 0x0
> 
> The first two memblock region span ZONE_DMA and ZONE_DMA32. The third one
> span
> ZONE_NORMAL.(When kernelcore is not specified).
> 
> So I did test with below code change:
> 
> @@ -147,6 +148,14 @@ static int __init numa_register_nodes(void)
>         }
> 
>         /* Dump memblock with node info and return. */
> +
> +       /* Mark mirror by hand */
> +       for_each_mem_region(r) {
> +               if (i++ < 2)
> +                       memblock_mark_mirror(r->base, r->size);
> +       }
> +
> 
> This mark the first two memblock region as mirror. And then use
> 
>         memblock_mark_mirror(0x100000000, 0x40000000);
> or
>         memblock_mark_mirror(0x140000000, 0x40000000);
> 
> To mark the head 1G or second 1G as mirror in the 3rd memblock region to
> mimic
> the overlap case.
> 
> So I manually create 3 cases:
> 
> A: all ZONE_NORMAL is non-mirror
>   memory[0x0]     [0x0000000000001000-0x000000000009efff], node 0 flags:
> 0x2 mirror
>   memory[0x1]     [0x0000000000100000-0x00000000bffdefff], node 0 flags:
> 0x2 mirror
>   memory[0x2]     [0x0000000100000000-0x00000001bfffffff], node 1 flags:
> 0x0 non-mirror
> 
> B: head 1G of ZONE_NORMAL is mirror
>   memory[0x0]     [0x0000000000001000-0x000000000009efff], node 0 flags:
> 0x2 mirror
>   memory[0x1]     [0x0000000000100000-0x00000000bffdefff], node 0 flags:
> 0x2 mirror
>   memory[0x2]     [0x0000000100000000-0x000000013fffffff], node 1 flags:
> 0x2 mirror
>   memory[0x3]     [0x0000000140000000-0x00000001bfffffff], node 1 flags:
> 0x0 non-mirror
> 
> C: second 1G of ZONE_NORMAL is mirror
>   memory[0x0]     [0x0000000000001000-0x000000000009efff], node 0 flags:
> 0x2 mirror
>   memory[0x1]     [0x0000000000100000-0x00000000bffdefff], node 0 flags:
> 0x2 mirror
>   memory[0x2]     [0x0000000100000000-0x000000013fffffff], node 1 flags:
> 0x0 non-mirror
>   memory[0x3]     [0x0000000140000000-0x000000017fffffff], node 1 flags:
> 0x2 mirror
>   memory[0x4]     [0x0000000180000000-0x00000001bfffffff], node 1 flags:
> 0x0 non-mirror
> 
> The change I proposed works fine for A/B, but for C pages in
> [0x140000000-0x17fffffff] is miss placed.
> 
>     Node 1, zone  Normal
>             spanned  0
>             present  0           <-- missing
>             managed  0
>     Node 1, zone  Movable
>             spanned  786432
>             present  524288
>             managed  773552      <-- but put in here

Thanks for the detailed description. I can reproduce the same issue in a VM.

I'm not sure whether case C, where the overlap happens at the beginning of
ZONE_NORMAL, occurs in the real world; this issue is also present in v7.0-rc4
without this patch set.

> The reason is in adjust_zone_range_for_zone_movable(), ZONE_NORMAL is
> truncated, since zone_movable_pfn[nid] equals to ZONE_NORMAL's start. So
> this
> range is skipped and then by "accident" it is initialized by
> init_unavailable_range() to ZONE_MOVABLE. And then it is freed to
> ZONE_MOVABLE
> in __free_pages_core().
> 
> After removing this truncation, the zone stats looks good.
> 
> Node 1, zone   Normal
>         spanned  786432
>         present  262144
>         managed  249310
> Node 1, zone  Movable
>         spanned  786432
>         present  524288
>         managed  517223
> 
> @@ -1204,10 +1204,7 @@ static void __init
> adjust_zone_range_for_zone_movable(int nid,
>                         *zone_start_pfn < zone_movable_pfn[nid] &&
>                         *zone_end_pfn > zone_movable_pfn[nid]) {
>                         *zone_end_pfn = zone_movable_pfn[nid];
> -
> -               /* Check if this whole range is within ZONE_MOVABLE */
> -               } else if (*zone_start_pfn >= zone_movable_pfn[nid])
> -                       *zone_start_pfn = *zone_end_pfn;
> +               }
>         }
>  }

Yes, this change can address the issue for case C.
We also need to check mirrored_kernelcore here; I think the issue only occurs 
when the overlap happens.

> All above analysis is based on my assumption on possible mirror memory
> position in system. If my assumption of mirror memory is not true, this
> may
> not be true.

Yes, I understand your point. If there are no further comments, I will include 
your suggestions in the next version for further review by the community.

If you have any further suggestions, please feel free to let me know.

> --
> Wei Yang
> Help you, Help me


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-21 12:55 ` [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init() Yuan Liu
  2026-04-22  1:11   ` Wei Yang
@ 2026-04-25  9:01   ` Mike Rapoport
  2026-04-26  4:00     ` Wei Yang
  1 sibling, 1 reply; 15+ messages in thread
From: Mike Rapoport @ 2026-04-25  9:01 UTC (permalink / raw)
  To: Yuan Liu
  Cc: David Hildenbrand, Oscar Salvador, Wei Yang, linux-mm, Yong Hu,
	Nanhai Zou, Tim Chen, Qiuxu Zhuo, Yu C Chen, Pan Deng, Tianyou Li,
	Chen Zhang, linux-kernel

Hi,

On Tue, Apr 21, 2026 at 08:55:07AM -0400, Yuan Liu wrote:
> Move the overlap memmap init check from memmap_init_range() into
> memmap_init().
> 
> When mirrored kernelcore is enabled, avoid memory map initialization
> for overlap regions. There are two cases that may overlap: a mirror
> memory region assigned to movable zone, or a non-mirror memory region
> assigned to a non-movable zone but falling within the movable zone
> range.
> 
> Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
> ---
>  mm/mm_init.c | 37 +++++++++++++------------------------
>  1 file changed, 13 insertions(+), 24 deletions(-)
> 
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index df34797691bd..2b5233060504 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
>  	}
>  }
>  
> -/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped init */
> -static bool __meminit
> -overlap_memmap_init(unsigned long zone, unsigned long *pfn)
> -{
> -	static struct memblock_region *r;
> -
> -	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
> -		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
> -			for_each_mem_region(r) {
> -				if (*pfn < memblock_region_memory_end_pfn(r))
> -					break;
> -			}
> -		}
> -		if (*pfn >= memblock_region_memory_base_pfn(r) &&
> -		    memblock_is_mirror(r)) {
> -			*pfn = memblock_region_memory_end_pfn(r);
> -			return true;
> -		}
> -	}
> -	return false;
> -}
> -
>  /*
>   * Only struct pages that correspond to ranges defined by memblock.memory
>   * are zeroed and initialized by going through __init_single_page() during
> @@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
>  		 * function.  They do not exist on hotplugged memory.
>  		 */
>  		if (context == MEMINIT_EARLY) {
> -			if (overlap_memmap_init(zone, &pfn))
> -				continue;
>  			if (defer_init(nid, pfn, zone_end_pfn)) {
>  				deferred_struct_pages = true;
>  				break;
> @@ -971,6 +947,7 @@ static void __init memmap_init(void)
>  
>  	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
>  		struct pglist_data *node = NODE_DATA(nid);
> +		struct memblock_region *r = &memblock.memory.regions[i];

Please move this declaration above struct pglist_data, let's keep reverse
xmas tree where possible.
>  
>  		for (j = 0; j < MAX_NR_ZONES; j++) {
>  			struct zone *zone = node->node_zones + j;
> @@ -978,6 +955,18 @@ static void __init memmap_init(void)
>  			if (!populated_zone(zone))
>  				continue;
>  
> +			if (mirrored_kernelcore) {
> +				const bool is_mirror = memblock_is_mirror(r);
> +				const bool is_movable_zone = (j == ZONE_MOVABLE);
> +
> +				if (is_mirror && is_movable_zone)
> +					continue;
> +
> +				if (!is_mirror && !is_movable_zone &&
> +				    start_pfn >= zone_movable_pfn[nid])
> +					continue;
> +			}
> +

I think this:

			if (mirrored_kernelcore && j == ZONE_MOVABLE &&
			    memblock_is_mirror(r))
				continue;

would be enough to remove overlap_memmap_init() and keep the existing
logic.

I wouldn't deal with the theoretical cases Wei mentioned in this thread for
now and prefer to keep things simple. 
The assumptions that mirrored memory spans a contiguous range below some
limit and that mirrored memory is not removable existed for years and I
don't see why we should change the logic now and complicate the code for
exotic theoretical memory layouts.
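
The difference between the two checks can be modeled as a pure predicate
(user-space sketch, hypothetical names; 'above_movable' stands for
start_pfn >= zone_movable_pfn[nid]):

```c
#include <assert.h>
#include <stdbool.h>

/* Full skip check from the patch: skip mirror regions in ZONE_MOVABLE,
 * and non-mirror regions in kernel zones above zone_movable_pfn. */
static bool skip_full(bool is_mirror, bool is_movable_zone,
		      bool above_movable)
{
	if (is_mirror && is_movable_zone)
		return true;
	if (!is_mirror && !is_movable_zone && above_movable)
		return true;
	return false;
}

/* Simplified check: only skip mirror regions in ZONE_MOVABLE; the
 * non-mirror-in-kernel-zone case is left to the zone-span clamp in
 * memmap_init_zone_range(). */
static bool skip_simple(bool is_mirror, bool is_movable_zone)
{
	return is_mirror && is_movable_zone;
}
```

The one case the simplified check no longer skips, a non-mirror region against
a kernel zone, would have to be filtered out by the zone-span clamp instead;
whether that clamp covers it for all layouts is what the follow-up tests in
this thread probe.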

>  			memmap_init_zone_range(zone, start_pfn, end_pfn,
>  					       &hole_pfn);
>  			zone_id = j;
> -- 
> 2.47.3
> 

-- 
Sincerely yours,
Mike.


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
  2026-04-25  9:01   ` Mike Rapoport
@ 2026-04-26  4:00     ` Wei Yang
  0 siblings, 0 replies; 15+ messages in thread
From: Wei Yang @ 2026-04-26  4:00 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Yuan Liu, David Hildenbrand, Oscar Salvador, Wei Yang, linux-mm,
	Yong Hu, Nanhai Zou, Tim Chen, Qiuxu Zhuo, Yu C Chen, Pan Deng,
	Tianyou Li, Chen Zhang, linux-kernel

On Sat, Apr 25, 2026 at 11:01:42AM +0200, Mike Rapoport wrote:
[...]
>>  
>>  		for (j = 0; j < MAX_NR_ZONES; j++) {
>>  			struct zone *zone = node->node_zones + j;
>> @@ -978,6 +955,18 @@ static void __init memmap_init(void)
>>  			if (!populated_zone(zone))
>>  				continue;
>>  
>> +			if (mirrored_kernelcore) {
>> +				const bool is_mirror = memblock_is_mirror(r);
>> +				const bool is_movable_zone = (j == ZONE_MOVABLE);
>> +
>> +				if (is_mirror && is_movable_zone)
>> +					continue;
>> +
>> +				if (!is_mirror && !is_movable_zone &&
>> +				    start_pfn >= zone_movable_pfn[nid])
>> +					continue;
>> +			}
>> +
>

Hi, Mike

Thanks for your review.

>I think this:
>
>			if (mirrored_kernelcore && j == ZONE_MOVABLE &&
>			    memblock_is_mirror(r))
>				continue;
>
>would be enough to remove overlap_memmap_init() and keep the existing
>logic.
>
>I wouldn't deal the theoretical cases Wei mentioned in this thread for
>now and prefer to keep the things simple. 

That would be great to keep things simple.

>The assumptions that mirrored memory spans a contiguous range below some
>limit and that mirrored memory is not removable existed for years and I
>don't see why we should change the logic now and complicate the code for
>exotic theoretical memory layouts.
>

I don't follow here. It is still not clear to me what the memory layout should be.

IIUC, case C is not real, but cases A/B are.

I took case B as an example and did some tests. Below are my findings.

Here is the memblock layout for case B, with the head 1G of ZONE_NORMAL as
mirror memory.

   MEMBLOCK configuration:
    memory size = 0x000000017ff7dc00 reserved size = 0x0000000005a939c2
    memory.cnt  = 0x4
    memory[0x0]     [0x0000000000001000-0x000000000009efff], node 0 flags: 0x2
    memory[0x1]     [0x0000000000100000-0x00000000bffdefff], node 0 flags: 0x2
    memory[0x2]     [0x0000000100000000-0x000000013fffffff], node 1 flags: 0x2
    memory[0x3]     [0x0000000140000000-0x00000001bfffffff], node 1 flags: 0x0

This meets:

  * mirrored memory spans from low memory up to 0x13fffffff

Then I added the below change along with your suggested change.

@@ -964,6 +964,8 @@ static void __init memmap_init_zone_range(struct zone *zone,
        if (start_pfn >= end_pfn)
                return;

+       pr_info(" [%lx, %lx] init to %s\n",
+                       start_pfn, end_pfn, zone->name);
        memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
                          zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
                          false);

And I see that the last normal memory range is initialized twice.

 [140000, 1c0000] init to Normal
 [140000, 1c0000] init to Movable

Then I removed your suggested change and adjusted the code as below.

@@ -954,6 +954,7 @@ static void __init memmap_init_zone_range(struct zone *zone,
                                          unsigned long end_pfn,
                                          unsigned long *hole_pfn)
 {
+       unsigned long old_start = start_pfn, old_end = end_pfn;
        unsigned long zone_start_pfn = zone->zone_start_pfn;
        unsigned long zone_end_pfn = zone_start_pfn + zone->spanned_pages;
        int nid = zone_to_nid(zone), zone_id = zone_idx(zone);
        start_pfn = clamp(start_pfn, zone_start_pfn, zone_end_pfn);
        end_pfn = clamp(end_pfn, zone_start_pfn, zone_end_pfn);

-       if (start_pfn >= end_pfn)
+       if (start_pfn >= end_pfn) {
+               pr_info(" [%lx, %lx] skipped to %s\n",
+                       old_start, old_end, zone->name);
                return;
+       }

+       pr_info(" [%lx, %lx] init to %s\n",
+                       start_pfn, end_pfn, zone->name);
        memmap_init_range(end_pfn - start_pfn, nid, zone_id, start_pfn,
                          zone_end_pfn, MEMINIT_EARLY, NULL, MIGRATE_MOVABLE,
                          false);

This shows that the current code can already skip the mirror range for
ZONE_MOVABLE with this kind of memory layout, since ZONE_MOVABLE doesn't span it.

  [100000, 140000] skipped to Movable
  [140000, 1c0000] init to Normal
  [140000, 1c0000] init to Movable
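
The skipped/init lines above follow directly from this clamp; a minimal
user-space model (hypothetical names, PFN values taken from the case-B log):

```c
#include <assert.h>
#include <stdbool.h>

/* User-space model of the clamp at the top of memmap_init_zone_range():
 * a memblock range is initialized for a zone only where it intersects
 * that zone's span; otherwise it is skipped. */
static bool clamp_to_zone(unsigned long start, unsigned long end,
			  unsigned long zone_start, unsigned long zone_end,
			  unsigned long *out_start, unsigned long *out_end)
{
	if (start < zone_start)
		start = zone_start;
	if (end > zone_end)
		end = zone_end;
	if (start >= end)
		return false;	/* no intersection: "skipped" in the log */
	*out_start = start;
	*out_end = end;
	return true;
}
```

The [0x100000, 0x140000) range clamps empty against Movable's span and is
skipped, while [0x140000, 0x1c0000) intersects the spans of two populated
zones, which is why it shows up in the log twice.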

So I am not sure what the real mirrored memory layout could be. Would you mind
giving more detail to help me get on the right track?

>>  			memmap_init_zone_range(zone, start_pfn, end_pfn,
>>  					       &hole_pfn);
>>  			zone_id = j;
>> -- 
>> 2.47.3
>> 
>
>-- 
>Sincerely yours,
>Mike.

-- 
Wei Yang
Help you, Help me


^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2026-04-26  4:00 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2026-04-21 12:55 [PATCH v4 0/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range Yuan Liu
2026-04-21 12:55 ` [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init() Yuan Liu
2026-04-22  1:11   ` Wei Yang
2026-04-22  3:26     ` Wei Yang
2026-04-22  9:28       ` Liu, Yuan1
2026-04-24  1:05         ` Wei Yang
2026-04-24  7:49           ` Liu, Yuan1
2026-04-22  7:08     ` Liu, Yuan1
2026-04-25  9:01   ` Mike Rapoport
2026-04-26  4:00     ` Wei Yang
2026-04-21 12:55 ` [PATCH v4 2/2] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range Yuan Liu
2026-04-22  7:46 ` [PATCH v4 0/2] " David Hildenbrand (Arm)
2026-04-22  7:56   ` Liu, Yuan1
2026-04-22 19:13     ` David Hildenbrand (Arm)
2026-04-23  3:17       ` Liu, Yuan1

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox