From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 8 Apr 2026 08:13:43 -0700
From: Breno Leitao
To: "Vlastimil Babka (SUSE)"
Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
	Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kas@kernel.org, shakeel.butt@linux.dev,
	usama.arif@linux.dev, kernel-team@meta.com
Subject: Re: [PATCH] mm/vmstat: spread vmstat_update requeue across the stat interval
References: <20260401-vmstat-v1-1-b68ce4a35055@debian.org>
X-Mailing-List: linux-kernel@vger.kernel.org
X-Debian-User: leitao

On Wed, Apr 08, 2026 at 12:13:04PM +0200, Vlastimil Babka (SUSE) wrote:
> On 4/7/26 17:39, Breno Leitao wrote:
> > On Thu, Apr 02, 2026 at 06:33:17AM -0700, Breno Leitao wrote:
> >> 1) The interval duration now varies per CPU. Specifically, vmstat_update()
> >>    is scheduled at sysctl_stat_interval*2 for the highest CPU with my
> >>    proposed change, rather than a uniform sysctl_stat_interval across
> >>    all CPUs.
> >>    (as you raised in the first email)
> >
> > However, we certainly don't want scenario (1) where the delay varies per
> > CPU, resulting in the last CPU having vmstat_update() scheduled every
> > 2 seconds instead of 1 second.
>
> Indeed.
> ....
>
> So you have 72 cpus, the vmstat interval is 1s, and what's the CONFIG_HZ?

Yes, CONFIG_HZ=1000 in my configuration.

> If it's 1000, it means 13 jiffies per cpu. Would changing the
> round_jiffies_common() implementation to add 13 jiffies per cpu instead of 3
> have the same effect?

That approach would increase the spread in round_jiffies_common(), but it
brings us back to scenario (1): each CPU would have a variable rescheduling
delay, which defeats the purpose of maintaining consistent timing.

> Link: https://github.com/leitao/linux/commit/ac200164df1bda45ee8504cc3db5bff5b696245e [1]
> Link: https://github.com/leitao/linux/commit/baa2ea6ea4c4c2b1df689de6db0a2a6f119e51be [2]
>
> commit 41b7aaa1a51f07fc1f0db0614d140fbca78463d3
> Author: Breno Leitao
> Date:   Tue Apr 7 07:56:35 2026 -0700
>
>     mm/vmstat: spread per-cpu vmstat work to reduce zone->lock contention
>
>     vmstat_shepherd() queues all per-cpu vmstat_update work with zero delay,
>     and vmstat_update() re-queues itself with round_jiffies_relative(), which
>     clusters timers near the same second boundary due to the small per-CPU
>     spread in round_jiffies_common(). On many-CPU systems this causes
>     thundering-herd contention on zone->lock when multiple CPUs
>     simultaneously call refresh_cpu_vm_stats() -> decay_pcp_high() ->
>     free_pcppages_bulk().
>
>     Introduce vmstat_spread_delay() to assign each CPU a unique offset
>     distributed evenly across sysctl_stat_interval. The shepherd uses this
>     when initially queuing per-cpu work, and vmstat_update re-queues with a
>     plain sysctl_stat_interval to preserve the spread (round_jiffies_relative
>     would snap CPUs back to the same boundary).
>
>     Signed-off-by: Breno Leitao
>
> I think this approach could have the following problems:
>
> - the initially spread delays can drift over time, there's nothing keeping
>   them in sync
> - not using round_jiffies_relative() means firing at other times than other
>   timers that are using the rounding, so this could be working against the
>   power savings effects of rounding
> - it's a vmstat-specific workaround for some yet unclear underlying
>   suboptimality that's likely not vmstat specific

I believe the issue is that vmstat's current use of round_jiffies_relative()
fundamentally solves a different problem from the one we are trying to solve.
The round_jiffies_relative() mechanism is designed for a different purpose,
as documented:

"""
By rounding these timers to whole seconds, all such timers will fire at
the same time, rather than at various times spread out. The goal of this
is to have the CPU wake up less, which saves power.
"""

Backing up a bit, let me summarize the problem:

1) vmstat_shepherd() starts all vmstat_update workers simultaneously (or
   nearly so)
2) All vmstat_update() workers then reschedule themselves for the next
   second with only 216ms of variance across 72 cores

Upon further reflection, issue (2) wouldn't exist if we simply avoided
starting all the per-cpu work items simultaneously in the first place.

That said, I wrote some hacky instrumentation to measure how often
vmstat_update() runs, and it is not uncommon to see vmstat_update() invoked
again only a few jiffies after its previous run.

Debugging further, I found a race in which vmstat_shepherd() schedules
vmstat_update() even while it is already running, which happens a lot,
given they both run at the same interval:

commit 86e27727b6cc9fbe8bf67818a065105fd30bc7e8 (HEAD -> b4/vmstat)
Author: Breno Leitao
Date:   Wed Apr 8 08:11:01 2026 -0700

    mm/vmstat: warn when shepherd schedules vmstat_update while it is running

    vmstat_shepherd checks delayed_work_pending() to decide whether to queue
    vmstat_update for a CPU.
    However, delayed_work_pending() only tests the WORK_STRUCT_PENDING_BIT,
    which is cleared as soon as a worker thread picks up the work for
    execution. This means that while vmstat_update is actively running on a
    CPU, delayed_work_pending() returns false, and the shepherd may queue
    another invocation if need_update() also returns true (which it can,
    since per-cpu counters are still being flushed mid-execution).

    Add a WARN_ONCE to make this race visible: fire a warning when the
    shepherd is about to queue vmstat_update for a CPU where it is currently
    executing, i.e., work_busy() reports WORK_BUSY_RUNNING but
    delayed_work_pending() does not prevent the re-queue.

    Signed-off-by: Breno Leitao

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 2370c6fb1fcd6..8d53242e7aa66 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2139,8 +2139,12 @@ static void vmstat_shepherd(struct work_struct *w)
 		if (cpu_is_isolated(cpu))
 			continue;
 
-		if (!delayed_work_pending(dw) && need_update(cpu))
+		if (!delayed_work_pending(dw) && need_update(cpu)) {
+			WARN_ONCE(work_busy(&dw->work) & WORK_BUSY_RUNNING,
+				  "cpu%d: vmstat_update already running, scheduling again\n",
+				  cpu);
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
+		}
 	}
 
 	cond_resched();

The fix is a one-line change: replace !delayed_work_pending(dw) with
!work_busy(&dw->work).