From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 20 Nov 2025 14:00:07 +0200
From: Mike Rapoport
To: Tianyou Li
Cc: David Hildenbrand, Oscar Salvador, Wei Yang, linux-mm@kvack.org,
	Yong Hu, Nanhai Zou, Yuan Liu, Tim Chen, Qiuxu Zhuo, Yu C Chen,
	Pan Deng, Chen Zhang, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3] mm/memory hotplug/unplug: Optimize zone->contiguous update when move pfn range
References: <20251119114252.oykrczprf3ecd7ak@master>
 <20251119140657.3845818-1-tianyou.li@intel.com>
In-Reply-To: <20251119140657.3845818-1-tianyou.li@intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Hi,

Please start a new thread when sending a new version of a patch next time.
And as Wei mentioned, wait a bit for the discussion on vN to settle before
sending vN+1.

On Wed, Nov 19, 2025 at 10:06:57PM +0800, Tianyou Li wrote:
> When move_pfn_range_to_zone() is invoked, it updates zone->contiguous by
> checking the new zone's pfn range from the beginning to the end, regardless
> of the previous state of the old zone. When the zone's pfn range is large,
> the cost of traversing the pfn range to update zone->contiguous can be
> significant.
>
> Add fast paths to quickly detect cases where the zone's contiguity can be
> decided without scanning the new zone. The cases are: when the new range
> does not overlap the previous range, contiguous must be false; when the
> new range is adjacent to the previous range, only the new range needs to
> be checked; when the newly added pages cannot fill the hole in the
> previous zone, contiguous must be false.
>
> The following test cases of memory hotplug for a VM [1], tested in the
> environment [2], show that this optimization can significantly reduce the
> memory hotplug time [3].
>
> +----------------+------+---------------+--------------+----------------+
> |                | Size | Time (before) | Time (after) | Time Reduction |
> |                +------+---------------+--------------+----------------+
> | Memory Hotplug | 256G | 10s           | 2s           | 80%            |
> |                +------+---------------+--------------+----------------+
> |                | 512G | 33s           | 6s           | 81%            |
> +----------------+------+---------------+--------------+----------------+
>
> [1] Qemu commands to hotplug 512G memory for a VM:
>     object_add memory-backend-ram,id=hotmem0,size=512G,share=on
>     device_add virtio-mem-pci,id=vmem1,memdev=hotmem0,bus=port1
>     qom-set vmem1 requested-size 512G
>
> [2] Hardware     : Intel Icelake server
>     Guest Kernel : v6.18-rc2
>     Qemu         : v9.0.0
>
>     Launch VM:
>     qemu-system-x86_64 -accel kvm -cpu host \
>       -drive file=./Centos10_cloud.qcow2,format=qcow2,if=virtio \
>       -drive file=./seed.img,format=raw,if=virtio \
>       -smp 3,cores=3,threads=1,sockets=1,maxcpus=3 \
>       -m 2G,slots=10,maxmem=2052472M \
>       -device pcie-root-port,id=port1,bus=pcie.0,slot=1,multifunction=on \
>       -device pcie-root-port,id=port2,bus=pcie.0,slot=2 \
>       -nographic -machine q35 \
>       -nic user,hostfwd=tcp::3000-:22
>
>     Guest kernel auto-onlines newly added memory blocks:
>     echo online > /sys/devices/system/memory/auto_online_blocks
>
> [3] The time from typing the QEMU commands in [1] to when the output of
>     'grep MemTotal /proc/meminfo' on Guest reflects that all hotplugged
>     memory is recognized.
>
> Reported-by: Nanhai Zou
> Reported-by: Chen Zhang
> Tested-by: Yuan Liu
> Reviewed-by: Tim Chen
> Reviewed-by: Qiuxu Zhuo
> Reviewed-by: Yu C Chen
> Reviewed-by: Pan Deng
> Reviewed-by: Nanhai Zou
> Reviewed-by: Yuan Liu
> Signed-off-by: Tianyou Li
> ---
>  mm/memory_hotplug.c | 51 ++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 48 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 0be83039c3b5..aed1827a2778 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -723,6 +723,51 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
>  
>  }
>  
> +static bool __meminit check_zone_contiguous_fast(struct zone *zone,
> +		unsigned long start_pfn, unsigned long nr_pages)
> +{
> +	const unsigned long end_pfn = start_pfn + nr_pages;
> +
> +	/*
> +	 * Given the moved pfn range's contiguous property is always true,
> +	 * under the conditional of empty zone, the contiguous property should
> +	 * be true.
> +	 */
> +	if (zone_is_empty(zone)) {
> +		zone->contiguous = true;

I don't think it's safe to set zone->contiguous until the end of
move_pfn_range_to_zone(). See commit feee6b298916 ("mm/memory_hotplug:
shrink zones when offlining memory").

check_zone_contiguous_fast() should only check whether the zone remains
contiguous after hotplug, or is certainly discontiguous, but should not
set zone->contiguous. It still must be cleared before resizing the zone
and set after the initialization of the memory map.

> +		return true;
> +	}
> +
> +	/*
> +	 * If the moved pfn range does not intersect with the original zone span,
> +	 * the contiguous property is surely false.
> +	 */
> +	if (end_pfn < zone->zone_start_pfn || start_pfn > zone_end_pfn(zone)) {
> +		zone->contiguous = false;
> +		return true;
> +	}
> +
> +	/*
> +	 * If the moved pfn range is adjacent to the original zone span, given
> +	 * the moved pfn range's contiguous property is always true, the zone's
> +	 * contiguous property inherited from the original value.
> +	 */
> +	if (end_pfn == zone->zone_start_pfn || start_pfn == zone_end_pfn(zone))
> +		return true;
> +
> +	/*
> +	 * If the original zone's hole larger than the moved pages in the range,
> +	 * the contiguous property is surely false.
> +	 */
> +	if (nr_pages < (zone->spanned_pages - zone->present_pages)) {
> +		zone->contiguous = false;
> +		return true;
> +	}
> +
> +	clear_zone_contiguous(zone);
> +	return false;
> +}
> +
>  #ifdef CONFIG_ZONE_DEVICE
>  static void section_taint_zone_device(unsigned long pfn)
>  {
> @@ -752,8 +797,7 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  {
>  	struct pglist_data *pgdat = zone->zone_pgdat;
>  	int nid = pgdat->node_id;
> -
> -	clear_zone_contiguous(zone);
> +	const bool fast_path = check_zone_contiguous_fast(zone, start_pfn, nr_pages);
>  
>  	if (zone_is_empty(zone))
>  		init_currently_empty_zone(zone, start_pfn, nr_pages);
> @@ -783,7 +827,8 @@ void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
>  		MEMINIT_HOTPLUG, altmap, migratetype,
>  		isolate_pageblock);
>  
> -	set_zone_contiguous(zone);
> +	if (!fast_path)
> +		set_zone_contiguous(zone);
> }
>  
>  struct auto_movable_stats {
> -- 
> 2.47.1
> 

-- 
Sincerely yours,
Mike.