Date: Wed, 22 Apr 2026 03:26:46 +0000
From: Wei Yang
To: Wei Yang
Cc: Yuan Liu, David Hildenbrand, Oscar Salvador, Mike Rapoport,
	linux-mm@kvack.org, Yong Hu, Nanhai Zou, Tim Chen, Qiuxu Zhuo,
	Yu C Chen, Pan Deng, Tianyou Li, Chen Zhang,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 1/2] mm: move overlap memory map init check to memmap_init()
Message-ID: <20260422032646.vtkdoliudlxgtpv3@master>
References: <20260421125508.2317429-1-yuan1.liu@intel.com>
	<20260421125508.2317429-2-yuan1.liu@intel.com>
	<20260422011126.thu67icgj5qfbecj@master>
In-Reply-To: <20260422011126.thu67icgj5qfbecj@master>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Apr 22, 2026 at 01:11:26AM +0000, Wei Yang wrote:
>On Tue, Apr 21, 2026 at 08:55:07AM -0400, Yuan Liu wrote:
>>Move the overlap memmap init check from memmap_init_range() into
>>memmap_init().
>>
>>When mirrored kernelcore is enabled, avoid memory map initialization
>>for overlap regions. There are two cases that may overlap: a mirror
>>memory region assigned to movable zone, or a non-mirror memory region
>>assigned to a non-movable zone but falling within the movable zone
>>range.
>>
>>Signed-off-by: Yuan Liu
>>---
>> mm/mm_init.c | 37 +++++++++++++------------------------
>> 1 file changed, 13 insertions(+), 24 deletions(-)
>>
>>diff --git a/mm/mm_init.c b/mm/mm_init.c
>>index df34797691bd..2b5233060504 100644
>>--- a/mm/mm_init.c
>>+++ b/mm/mm_init.c
>>@@ -797,28 +797,6 @@ void __meminit reserve_bootmem_region(phys_addr_t start,
>> 	}
>> }
>>
>>-/* If zone is ZONE_MOVABLE but memory is mirrored, it is an overlapped init */
>>-static bool __meminit
>>-overlap_memmap_init(unsigned long zone, unsigned long *pfn)
>>-{
>>-	static struct memblock_region *r;
>>-
>>-	if (mirrored_kernelcore && zone == ZONE_MOVABLE) {
>>-		if (!r || *pfn >= memblock_region_memory_end_pfn(r)) {
>>-			for_each_mem_region(r) {
>>-				if (*pfn < memblock_region_memory_end_pfn(r))
>>-					break;
>>-			}
>>-		}
>>-		if (*pfn >= memblock_region_memory_base_pfn(r) &&
>>-		    memblock_is_mirror(r)) {
>>-			*pfn = memblock_region_memory_end_pfn(r);
>>-			return true;
>>-		}
>>-	}
>>-	return false;
>>-}
>>-
>> /*
>>  * Only struct pages that correspond to ranges defined by memblock.memory
>>  * are zeroed and initialized by going through __init_single_page() during
>>@@ -905,8 +883,6 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
>>  * function. They do not exist on hotplugged memory.
>>  */
>> 	if (context == MEMINIT_EARLY) {
>>-		if (overlap_memmap_init(zone, &pfn))
>>-			continue;
>> 		if (defer_init(nid, pfn, zone_end_pfn)) {
>> 			deferred_struct_pages = true;
>> 			break;
>>@@ -971,6 +947,7 @@ static void __init memmap_init(void)
>>
>> 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
>> 		struct pglist_data *node = NODE_DATA(nid);
>>+		struct memblock_region *r = &memblock.memory.regions[i];
>>
>> 		for (j = 0; j < MAX_NR_ZONES; j++) {
>> 			struct zone *zone = node->node_zones + j;
>>@@ -978,6 +955,18 @@ static void __init memmap_init(void)
>> 			if (!populated_zone(zone))
>> 				continue;
>>
>>+			if (mirrored_kernelcore) {
>>+				const bool is_mirror = memblock_is_mirror(r);
>>+				const bool is_movable_zone = (j == ZONE_MOVABLE);
>>+
>>+				if (is_mirror && is_movable_zone)
>>+					continue;
>>+
>>+				if (!is_mirror && !is_movable_zone &&
>>+				    start_pfn >= zone_movable_pfn[nid])
>>+					continue;
>
>IIUC, when mirrored_kernelcore is set but !memblock_has_mirror() or
>is_kdump_kernel(), zone_movable_pfn[nid] is kept to be 0.
>
>This means it will skip all memory regions.
>

I did some tests.

When mirrored_kernelcore && !memblock_has_mirror(), there is no mirrored
memblock region, so zone_movable_pfn[nid] stays 0. The logic above will
then skip all memory regions.

With the adjustment below, my local tests pass and the kernel boots up
as expected.

From 6351ac79a17edbfd830510fba2959ddc47b17258 Mon Sep 17 00:00:00 2001
From: Wei Yang
Date: Wed, 22 Apr 2026 09:13:24 +0800
Subject: [PATCH] skip overlap region higher level

---
 mm/mm_init.c | 29 ++++++++++++++++++++++-------
 1 file changed, 22 insertions(+), 7 deletions(-)

diff --git a/mm/mm_init.c b/mm/mm_init.c
index 79f93f2a90cf..7a85ba58e87f 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -916,8 +916,8 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
  * function. They do not exist on hotplugged memory.
  */
 	if (context == MEMINIT_EARLY) {
-		if (overlap_memmap_init(zone, &pfn))
-			continue;
+		// if (overlap_memmap_init(zone, &pfn))
+		// 	continue;
 		if (defer_init(nid, pfn, zone_end_pfn)) {
 			deferred_struct_pages = true;
 			break;
@@ -974,6 +974,17 @@ static void __init memmap_init_zone_range(struct zone *zone,
 	*hole_pfn = end_pfn;
 }

+static bool __init region_overlapped(struct memblock_region *rgn, unsigned long zone_type)
+{
+	if (zone_type == ZONE_MOVABLE && memblock_is_mirror(rgn))
+		return true;
+
+	if (zone_type == ZONE_NORMAL && !memblock_is_mirror(rgn))
+		return true;
+
+	return false;
+}
+
 static void __init memmap_init(void)
 {
 	unsigned long start_pfn, end_pfn;
@@ -985,10 +996,15 @@ static void __init memmap_init(void)

 	for (j = 0; j < MAX_NR_ZONES; j++) {
 		struct zone *zone = node->node_zones + j;
+		struct memblock_region *r = &memblock.memory.regions[i];

 		if (!populated_zone(zone))
 			continue;

+		if (mirrored_kernelcore && zone_movable_pfn[nid] &&
+		    region_overlapped(r, j))
+			continue;
+
 		memmap_init_zone_range(zone, start_pfn, end_pfn,
 				       &hole_pfn);
 		zone_id = j;
@@ -1257,13 +1273,12 @@ static unsigned long __init zone_absent_pages_in_node(int nid,
 		end_pfn = clamp(memblock_region_memory_end_pfn(r),
 				zone_start_pfn, zone_end_pfn);

-		if (zone_type == ZONE_MOVABLE &&
-		    memblock_is_mirror(r))
-			nr_absent += end_pfn - start_pfn;
+		if (start_pfn == end_pfn)
+			continue;

-		if (zone_type == ZONE_NORMAL &&
-		    !memblock_is_mirror(r))
+		if (region_overlapped(r, zone_type))
 			nr_absent += end_pfn - start_pfn;
+	}
 }

I want to confirm: the logic in zone_absent_pages_in_node() only handles
ZONE_NORMAL and ZONE_MOVABLE. So the assumption is that ZONE_MOVABLE can
only overlap with ZONE_NORMAL?

When kernelcore=[nn]M is used, the "highest" populated zone is picked to
host ZONE_MOVABLE, as indicated by find_usable_zone_for_movable(). So it
looks possible for ZONE_DMA32 to be chosen as ZONE_MOVABLE. For
kernelcore=mirror, do we want to leave that complexity out?

--
Wei Yang
Help you, Help me