From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Fri, 19 Dec 2025 11:37:56 +0200
From: Mike Rapoport
To: Tianyou Li
Cc: David Hildenbrand, Oscar Salvador, Wei Yang, linux-mm@kvack.org,
	Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo, Yu C Chen,
	Pan Deng, Chen Zhang, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v6 2/2] mm/memory hotplug: fix zone->contiguous always false when hotplug
References: <20251215130437.3914342-1-tianyou.li@intel.com>
	<20251215130437.3914342-3-tianyou.li@intel.com>
In-Reply-To: <20251215130437.3914342-3-tianyou.li@intel.com>

Hi,

On Mon, Dec 15, 2025 at 09:04:37PM +0800, Tianyou Li wrote:
> From: Yuan Liu
>
> set_zone_contiguous() uses __pageblock_pfn_to_page() to check that the
> whole pageblock is in the same zone. This assumes the memory section is
> already online; otherwise __pageblock_pfn_to_page() returns NULL and
> the zone is marked non-contiguous.
> When move_pfn_range_to_zone() called set_zone_contiguous(), the memory
> sections were not yet online, so the result was always false.
>
> To fix this issue, remove the set_zone_contiguous() call from
> move_pfn_range_to_zone() and invoke it after the memory sections have
> been onlined.
>
> remove_pfn_range_from_zone() does not have this issue because the
> memory sections are still online when set_zone_contiguous() is invoked.

Since the fix is relevant even without the optimization patch, can we
please reorder the patches so that the fix will be the first in the
series? Then it can be applied to stable trees as well.
> Reviewed-by: Tianyou Li
> Reviewed-by: Nanhai Zou
> Signed-off-by: Yuan Liu
> ---
>  mm/memory_hotplug.c | 18 ++++++++++++++----
>  1 file changed, 14 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 12839032ad42..0220021f6a68 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -810,8 +810,7 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  {
>  	struct pglist_data *pgdat = zone->zone_pgdat;
>  	int nid = pgdat->node_id;
> -	const enum zone_contig_state new_contiguous_state =
> -		zone_contig_state_after_growing(zone, start_pfn, nr_pages);
> +
>  	clear_zone_contiguous(zone);
>
>  	if (zone_is_empty(zone))
> @@ -841,8 +840,6 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  	memmap_init_range(nr_pages, nid, zone_idx(zone), start_pfn, 0,
>  			  MEMINIT_HOTPLUG, altmap, migratetype,
>  			  isolate_pageblock);
> -
> -	set_zone_contiguous(zone, new_contiguous_state);
>  }
>
>  struct auto_movable_stats {
> @@ -1151,6 +1148,7 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  {
>  	unsigned long end_pfn = pfn + nr_pages;
>  	int ret, i;
> +	enum zone_contig_state new_contiguous_state = ZONE_CONTIG_NO;
>
>  	ret = kasan_add_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
>  	if (ret)
> @@ -1165,6 +1163,14 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  	if (mhp_off_inaccessible)
>  		page_init_poison(pfn_to_page(pfn), sizeof(struct page) * nr_pages);
>
> +	/*
> +	 * If the allocated memmap pages are not in a full section, keep the
> +	 * contiguous state as ZONE_CONTIG_NO.
> +	 */
> +	if (IS_ALIGNED(end_pfn, PAGES_PER_SECTION))
> +		new_contiguous_state = zone_contig_state_after_growing(zone,
> +								       pfn, nr_pages);
> +
>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE,
>  			       false);
>
> @@ -1183,6 +1189,7 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  	if (nr_pages >= PAGES_PER_SECTION)
>  		online_mem_sections(pfn, ALIGN_DOWN(end_pfn, PAGES_PER_SECTION));
>
> +	set_zone_contiguous(zone, new_contiguous_state);
>  	return ret;
>  }
>
> @@ -1221,6 +1228,7 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
>  	};
>  	const int nid = zone_to_nid(zone);
>  	int need_zonelists_rebuild = 0;
> +	enum zone_contig_state new_contiguous_state = ZONE_CONTIG_NO;
>  	unsigned long flags;
>  	int ret;
>
> @@ -1235,6 +1243,7 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
>  		     !IS_ALIGNED(pfn + nr_pages, PAGES_PER_SECTION)))
>  		return -EINVAL;
>
> +	new_contiguous_state = zone_contig_state_after_growing(zone, pfn, nr_pages);
>
>  	/* associate pfn range with the zone */
>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_MOVABLE,
>
> @@ -1273,6 +1282,7 @@ int online_pages(unsigned long pfn, unsigned long nr_pages,
>  	}
>
>  	online_pages_range(pfn, nr_pages);
> +	set_zone_contiguous(zone, new_contiguous_state);
>  	adjust_present_page_count(pfn_to_page(pfn), group, nr_pages);
>
>  	if (node_arg.nid >= 0)
> --
> 2.47.1
>

-- 
Sincerely yours,
Mike.