Date: Mon, 20 Apr 2026 17:03:28 +0300
From: Mike Rapoport
To: "Liu, Yuan1"
Cc: "David Hildenbrand (Arm)", Oscar Salvador, Wei Yang, linux-mm@kvack.org,
 "Hu, Yong", "Zou, Nanhai", Tim Chen, "Zhuo, Qiuxu", "Chen, Yu C",
 "Deng, Pan", "Li, Tianyou", Chen Zhang, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/memory hotplug/unplug: Optimize zone contiguous check when changing pfn range
References: <20260408031615.1831922-1-yuan1.liu@intel.com>
 <17b821b6-0176-43d5-92f7-fe2a0c4f70cf@kernel.org>
 <12b8ba83-54b1-454e-b787-2d2e967c9b58@kernel.org>

On Fri, Apr 17, 2026 at 06:34:50AM +0000, Liu, Yuan1 wrote:
> > >>> sashiko had several comments
> > >>> https://sashiko.dev/#/patchset/20260408031615.1831922-1-yuan1.liu%40intel.com
> > >>>
> > >>> I skipped the ones related to hotplug, but in the mm_init part the
> > >>> comment about zones that can have overlapping physical spans when
> > >>> mirrored kernelcore is enabled seems valid.
> > >
> > > Hi David & Mike
> > >
> > > I've spent some time working through these issues to better understand them.
> > > For the overlapping physical spans (mirrored kernelcore), should I avoid
> > > counting overlap_memmap_init in memmap_init_range in the next version?
> > > For example, change it as follows:
> > >
> > > +unsigned long __meminit
> > > +memmap_init_range(unsigned long size, int nid, unsigned long zone,
> > > +		unsigned long start_pfn, unsigned long zone_end_pfn,
> > > 		enum meminit_context context,
> > > 		struct vmem_altmap *altmap, int migratetype,
> > > 		bool isolate_pageblock)
> > > {
> > > 	unsigned long pfn, end_pfn = start_pfn + size;
> > > +	unsigned long nr_init = 0;
> > > 	struct page *page;
> > >
> > > 	if (highest_memmap_pfn < end_pfn - 1)
> > > @@ -893,7 +897,7 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
> > > 	if (zone == ZONE_DEVICE) {
> > > 		if (!altmap)
> > > -			return;
> > > +			return 0;
> > >
> > > 		if (start_pfn == altmap->base_pfn)
> > > 			start_pfn += altmap->reserve;
> > > @@ -911,6 +915,7 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
> > > 	if (defer_init(nid, pfn, zone_end_pfn)) {
> > > 		deferred_struct_pages = true;
> > > +		nr_init += end_pfn - pfn;
> >
> > It's confusing. Could the remaining range also include overlapping inits?
> >
> > Maybe the whole "skip overlapping init" should actually be handled on a
> > higher level?
> >
> > I guess we'd want to skip any memblock_is_mirror(r) regions entirely.
> >
> > @Mike?
>
> Hi Mike
>
> David suggested moving the overlap handling to a higher level and
> skipping memblock_is_mirror() regions entirely. I think this makes sense.
>
> Would this work for you, or do you have a different preference?

Looks about right :)

> Something like this
>
> static void __init memmap_init(void)
> {
> 	...
> 	for_each_mem_pfn_range(i, MAX_NUMNODES, &start_pfn, &end_pfn, &nid) {
> 		struct pglist_data *node = NODE_DATA(nid);
> 		bool is_mirror = mirrored_kernelcore &&
> 				 memblock_is_mirror(&memblock.memory.regions[i]);

I'd add a local memblock_region variable.

> 		for (j = 0; j < MAX_NR_ZONES; j++) {
> 			...
> 			if (is_mirror && j == ZONE_MOVABLE)
> 				continue;
>
> 			memmap_init_zone_range(zone, start_pfn, end_pfn,
> 					       &hole_pfn);
>
> Best Regards,
> Liu, Yuan1

-- 
Sincerely yours,
Mike.