Date: Tue, 18 Nov 2025 07:13:49 +0200
From: Mike Rapoport
To: Tianyou Li
Cc: David Hildenbrand, Oscar Salvador, linux-mm@kvack.org, Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo, Yu C Chen, Pan Deng, Chen Zhang, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm/memory hotplug/unplug: Optimize zone->contiguous update when move pfn range
References: <20251117033052.371890-1-tianyou.li@intel.com>
In-Reply-To: <20251117033052.371890-1-tianyou.li@intel.com>

On Mon, Nov 17, 2025 at 11:30:52AM +0800, Tianyou Li wrote:
> When move_pfn_range_to_zone() is invoked, it updates zone->contiguous by
> checking the new zone's pfn range from beginning to end, regardless of
> the previous state of the old zone. When the zone's pfn range is large,
> the cost of traversing that range to update zone->contiguous can be
> significant.
> 
> Add fast paths to quickly detect cases where the zone is definitely not
> contiguous, without scanning the new zone: if the new range does not
> overlap the previous range, contiguous must be false; if the new range
> is adjacent to the previous range, only the new range needs to be
> checked; and if the newly added pages cannot fill the hole in the
> previous zone, contiguous must be false.
> 
> The following memory-hotplug test cases for a VM [1], run in the
> environment [2], show that this optimization significantly reduces the
> memory hotplug time [3].
> 
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Reduction |
> |                +------+---------------+--------------+----------------+
> | Memory Hotplug | 256G | 10s           | 3s           | 70%            |
> |                +------+---------------+--------------+----------------+
> |                | 512G | 33s           | 8s           | 76%            |
> +----------------+------+---------------+--------------+----------------+
> 
> [1] Qemu commands to hotplug 512G memory for a VM:
>     object_add memory-backend-ram,id=hotmem0,size=512G,share=on
>     device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
>     qom-set vmem1 requested-size 512G
> 
> [2] Hardware     : Intel Icelake server
>     Guest Kernel : v6.18-rc2
>     Qemu         : v9.0.0
> 
>     Launch VM:
>     qemu-system-x86_64 -accel kvm -cpu host \
>       -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
>       -drive file=./seed.img,format=raw,if=virtio \
>       -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
>       -m 2G,slots=10,maxmem=2052472M \
>       -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
>       -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
>       -nographic -machine q35 \
>       -nic user,hostfwd=tcp::3000-:22
> 
>     Guest kernel auto-onlines newly added memory blocks:
>     echo online > /sys/devices/system/memory/auto_online_blocks
> 
> [3] The time from typing the QEMU commands in [1] to when the output of
>     'grep MemTotal /proc/meminfo' on the guest reflects that all
>     hotplugged memory is recognized.
> 
> Reported-by: Nanhai Zou
> Reported-by: Chen Zhang
> Tested-by: Yuan Liu
> Reviewed-by: Tim Chen
> Reviewed-by: Qiuxu Zhuo
> Reviewed-by: Yu C Chen
> Reviewed-by: Pan Deng
> Reviewed-by: Nanhai Zou
> Signed-off-by: Tianyou Li
> ---
>  mm/internal.h       |  3 +++
>  mm/memory_hotplug.c | 48 ++++++++++++++++++++++++++++++++++++++++++++-
>  mm/mm_init.c        | 31 ++++++++++++++++++++++-------
>  3 files changed, 74 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 1561fc2ff5b8..734caae6873c 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -734,6 +734,9 @@ void set_zone_contiguous(struct zone *zone);
>  bool pfn_range_intersects_zones(int nid, unsigned long start_pfn,
>  		unsigned long nr_pages);
>  
> +bool check_zone_contiguous(struct zone *zone, unsigned long start_pfn,
> +		unsigned long nr_pages);
> +
>  static inline void clear_zone_contiguous(struct zone *zone)
>  {
>  	zone->contiguous = false;
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 0be83039c3b5..96c003271b8e 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -723,6 +723,47 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
>  
>  }
>  
> +static void __meminit update_zone_contiguous(struct zone *zone,
> +		bool old_contiguous, unsigned long old_start_pfn,
> +		unsigned long old_nr_pages, unsigned long old_absent_pages,
> +		unsigned long new_start_pfn, unsigned long new_nr_pages)
> +{
> +	unsigned long old_end_pfn = old_start_pfn + old_nr_pages;
> +	unsigned long new_end_pfn = new_start_pfn + new_nr_pages;
> +	unsigned long new_filled_pages = 0;
> +
> +	/*
> +	 * If the moved pfn range does not intersect with the old zone span,
> +	 * the contiguous property is surely false.
> +	 */
> +	if (new_end_pfn < old_start_pfn || new_start_pfn > old_end_pfn)
> +		return;
> +
> +	/*
> +	 * If the moved pfn range is adjacent to the old zone span,
> +	 * check the range to the left or to the right.
> +	 */
> +	if (new_end_pfn == old_start_pfn || new_start_pfn == old_end_pfn) {
> +		zone->contiguous = old_contiguous &&
> +			check_zone_contiguous(zone, new_start_pfn, new_nr_pages);
> +		return;

The check for adjacency of the new range to the zone can be moved to the
beginning of move_pfn_range_to_zone(), and it will already optimize the
common case when we hotplug memory into a contiguous zone.

> +	}
> +
> +	/*
> +	 * If the old zone's hole is larger than the newly filled pages,
> +	 * the contiguous property is surely false.
> +	 */
> +	new_filled_pages = new_end_pfn - old_start_pfn;
> +	if (new_start_pfn > old_start_pfn)
> +		new_filled_pages -= new_start_pfn - old_start_pfn;
> +	if (new_end_pfn > old_end_pfn)
> +		new_filled_pages -= new_end_pfn - old_end_pfn;
> +	if (new_filled_pages < old_absent_pages)
> +		return;

Let's just check that we don't add enough pages to cover the hole:

	if (nr_new_pages < old_absent_pages)
		return;

and if we do, go to the slow path and walk the pageblocks.

> +
> +	set_zone_contiguous(zone);
> +}
> +

-- 
Sincerely yours,
Mike.
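For illustration, the overall fast-path ordering under discussion can be
sketched as standalone C. This is a hypothetical sketch, not the kernel
implementation: the function and parameter names are invented,
`contiguous_scan` stands in for the expensive pageblock walk
(check_zone_contiguous() in the patch), and `old_absent_pages` is assumed to
be the number of missing pages (holes) in the old zone span.

```c
#include <stdbool.h>

/* Stand-in for the expensive pageblock walk over [start_pfn, start_pfn + nr_pages). */
typedef bool (*scan_fn)(unsigned long start_pfn, unsigned long nr_pages);

/*
 * Hypothetical sketch: decide whether the zone is contiguous after moving
 * [new_start_pfn, new_start_pfn + new_nr_pages) into it, taking the fast
 * paths discussed above before falling back to a full scan.
 */
static bool zone_contiguous_after_move(bool old_contiguous,
				       unsigned long old_start_pfn,
				       unsigned long old_nr_pages,
				       unsigned long old_absent_pages,
				       unsigned long new_start_pfn,
				       unsigned long new_nr_pages,
				       scan_fn contiguous_scan)
{
	unsigned long old_end_pfn = old_start_pfn + old_nr_pages;
	unsigned long new_end_pfn = new_start_pfn + new_nr_pages;
	unsigned long span_start, span_end;

	/* Adjacent range (the common hotplug case): scan only the new range. */
	if (new_end_pfn == old_start_pfn || new_start_pfn == old_end_pfn)
		return old_contiguous &&
		       contiguous_scan(new_start_pfn, new_nr_pages);

	/* Disjoint range: the resulting span necessarily contains a hole. */
	if (new_end_pfn < old_start_pfn || new_start_pfn > old_end_pfn)
		return false;

	/* Too few new pages to plug all holes in the old span. */
	if (new_nr_pages < old_absent_pages)
		return false;

	/* Slow path: walk the whole resulting span. */
	span_start = old_start_pfn < new_start_pfn ? old_start_pfn : new_start_pfn;
	span_end = old_end_pfn > new_end_pfn ? old_end_pfn : new_end_pfn;
	return contiguous_scan(span_start, span_end - span_start);
}

/* Trivial scan stub for experimentation: pretend every pfn is present. */
static bool scan_all_present(unsigned long start_pfn, unsigned long nr_pages)
{
	(void)start_pfn;
	(void)nr_pages;
	return true;
}
```

The adjacency check comes first so that, as suggested above, the common case
of hotplugging memory next to a contiguous zone never reaches the full scan.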