Date: Wed, 8 Apr 2026 10:00:53 -0700
From: Breno Leitao
To: "Vlastimil Babka (SUSE)"
Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett",
 Mike Rapoport, Suren Baghdasaryan, Michal Hocko, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kas@kernel.org, shakeel.butt@linux.dev,
 usama.arif@linux.dev, kernel-team@meta.com
Subject: Re: [PATCH] mm/vmstat: spread vmstat_update requeue across the stat interval
Message-ID:
References: <20260401-vmstat-v1-1-b68ce4a35055@debian.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Apr 08, 2026 at 08:13:43AM -0700, Breno Leitao wrote:
> On Wed, Apr 08, 2026 at 12:13:04PM +0200, Vlastimil Babka (SUSE) wrote:
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 2370c6fb1fcd6..8d53242e7aa66 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -2139,8 +2139,12 @@ static void vmstat_shepherd(struct work_struct *w)
> 		if (cpu_is_isolated(cpu))
> 			continue;
>
> -		if (!delayed_work_pending(dw) && need_update(cpu))
> +		if (!delayed_work_pending(dw) && need_update(cpu)) {
> +			WARN_ONCE(work_busy(&dw->work) & WORK_BUSY_RUNNING,
> +				  "cpu%d: vmstat_update already running, scheduling again\n",
> +				  cpu);
> 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
> +		}
> 	}
>
> 	cond_resched();

The fix is a one-line change:

	!delayed_work_pending(dw) → !work_busy(&dw->work)

In my testing, this race condition occurs more frequently than expected,
likely due to the timer configurations we've been discussing throughout
this thread. I developed a diagnostic patch to monitor how often the
vmstat_update worker is scheduled, and it consistently reports
inter-invocation gaps far shorter than the intended stat interval.
Avoiding rescheduling a worker that is already running also reduces
contention in the stress-ng test case.

commit d725f0664b70aa5c677215b0fc1abc0117aaf114
Author: Breno Leitao
Date:   Wed Apr 8 09:01:02 2026 -0700

mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update

vmstat_shepherd uses delayed_work_pending() to check whether
vmstat_update is already scheduled for a given CPU before queuing it.
However, delayed_work_pending() only tests WORK_STRUCT_PENDING_BIT,
which is cleared the moment a worker thread picks up the work to
execute it.

This means that while vmstat_update is actively running on a CPU,
delayed_work_pending() returns false. If need_update() also returns
true at that point (per-cpu counters not yet zeroed mid-flush), the
shepherd queues a second invocation with delay=0, causing vmstat_update
to run again immediately after finishing.

On a 72-CPU system this race is readily observable: before the fix,
many CPUs show invocation gaps well below 500 jiffies (the minimum
round_jiffies_relative() can produce), with the most extreme cases
reaching 0 jiffies, i.e. vmstat_update called twice within the same
jiffy.

Fix this by replacing delayed_work_pending() with work_busy(), which
returns non-zero for both WORK_BUSY_PENDING (timer armed or work
queued) and WORK_BUSY_RUNNING (work currently executing). The shepherd
now correctly skips a CPU in all busy states.

After the fix, all sub-jiffy and most sub-100-jiffy gaps disappear.
The remaining early invocations have gaps in the 700–999 jiffy range,
attributable to round_jiffies_relative() rounding to a nearby
whole-second boundary rather than to this race.

Each spurious vmstat_update invocation has a measurable side effect:
refresh_cpu_vm_stats() calls decay_pcp_high() for every zone, which
drains idle per-CPU pages back to the buddy allocator via
free_pcppages_bulk(), taking the zone spinlock each time. Eliminating
the double-scheduling therefore reduces zone lock contention directly.

On a 72-CPU stress-ng workload measured with perf lock contention:

  free_pcppages_bulk contention count: ~55% reduction
  free_pcppages_bulk total wait time:  ~57% reduction
  free_pcppages_bulk max wait time:    ~47% reduction

Note: work_busy() is inherently racy: between the check and the
subsequent queue_delayed_work_on() call, vmstat_update can finish
execution, leaving the work neither pending nor running. In that narrow
window the shepherd can still queue a second invocation. After the fix,
this residual race is rare and produces only occasional small gaps, a
significant improvement over the systematic double-scheduling seen with
delayed_work_pending().

Signed-off-by: Breno Leitao

diff --git a/mm/vmstat.c b/mm/vmstat.c
index d59eff1582547..5489549241b51 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2156,7 +2156,7 @@ static void vmstat_shepherd(struct work_struct *w)
 		if (cpu_is_isolated(cpu))
 			continue;
 
-		if (!delayed_work_pending(dw) && need_update(cpu))
+		if (!work_busy(&dw->work) && need_update(cpu))
 			queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
 	}