From: Breno Leitao <leitao@debian.org>
To: "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <ljs@kernel.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
kas@kernel.org, shakeel.butt@linux.dev, usama.arif@linux.dev,
kernel-team@meta.com
Subject: Re: [PATCH] mm/vmstat: spread vmstat_update requeue across the stat interval
Date: Wed, 8 Apr 2026 10:00:53 -0700 [thread overview]
Message-ID: <adaGcf2hKNGHwVPd@gmail.com> (raw)
In-Reply-To: <adZopu5wjXIR5HOR@gmail.com>
On Wed, Apr 08, 2026 at 08:13:43AM -0700, Breno Leitao wrote:
> On Wed, Apr 08, 2026 at 12:13:04PM +0200, Vlastimil Babka (SUSE) wrote:
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 2370c6fb1fcd6..8d53242e7aa66 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -2139,8 +2139,12 @@ static void vmstat_shepherd(struct work_struct *w)
> if (cpu_is_isolated(cpu))
> continue;
>
> - if (!delayed_work_pending(dw) && need_update(cpu))
> + if (!delayed_work_pending(dw) && need_update(cpu)) {
> + WARN_ONCE(work_busy(&dw->work) & WORK_BUSY_RUNNING,
> + "cpu%d: vmstat_update already running, scheduling again\n",
> + cpu);
> queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
> + }
> }
>
> cond_resched();
>
> The fix is a one-line change: !delayed_work_pending(dw) → !work_busy(&dw->work)
In my testing, this race condition occurs more frequently than expected,
likely due to the timer configurations we've been discussing throughout
this thread.

I developed a diagnostic patch to monitor how often the vmstat_update
worker gets scheduled, and the results consistently show invocation gaps
far below the expected stat interval. Avoiding rescheduling a worker that
is already running also reduces the contention seen in the stress-ng test
case.
commit d725f0664b70aa5c677215b0fc1abc0117aaf114
Author: Breno Leitao <leitao@debian.org>
Date: Wed Apr 8 09:01:02 2026 -0700
mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update
vmstat_shepherd uses delayed_work_pending() to check whether
vmstat_update is already scheduled for a given CPU before queuing it.
However, delayed_work_pending() only tests WORK_STRUCT_PENDING_BIT,
which is cleared the moment a worker thread picks up the work to
execute it.
This means that while vmstat_update is actively running on a CPU,
delayed_work_pending() returns false. If need_update() also returns
true at that point (per-cpu counters not yet zeroed mid-flush), the
shepherd queues a second invocation with delay=0, causing vmstat_update
to run again immediately after finishing.
On a 72-CPU system this race is readily observable: before the fix,
many CPUs show invocation gaps well below 500 jiffies (the minimum
round_jiffies_relative() can produce), with the most extreme cases
reaching 0 jiffies, i.e. vmstat_update called twice within the same
jiffy.
Fix this by replacing delayed_work_pending() with work_busy(), which
returns non-zero for both WORK_BUSY_PENDING (timer armed or work
queued) and WORK_BUSY_RUNNING (work currently executing). The shepherd
now correctly skips a CPU in all busy states.
After the fix, all sub-jiffy and most sub-100-jiffy gaps disappear.
The remaining early invocations have gaps in the 700-999 jiffy range,
attributable to round_jiffies_relative() aligning to a nearer
whole-second boundary rather than to this race.
Each spurious vmstat_update invocation has a measurable side effect:
refresh_cpu_vm_stats() calls decay_pcp_high() for every zone, which
drains idle per-CPU pages back to the buddy allocator via
free_pcppages_bulk(), taking the zone spinlock each time. Eliminating
the double-scheduling therefore reduces zone lock contention directly.
On a 72-CPU stress-ng workload measured with perf lock contention:
free_pcppages_bulk contention count: ~55% reduction
free_pcppages_bulk total wait time: ~57% reduction
free_pcppages_bulk max wait time: ~47% reduction
Note: work_busy() is inherently racy: between the check and the
subsequent queue_delayed_work_on() call, vmstat_update can finish
execution, leaving the work neither pending nor running. In that
narrow window the shepherd can still queue a second invocation.
After the fix, this residual race is rare and produces only occasional
small gaps, a significant improvement over the systematic
double-scheduling seen with delayed_work_pending().
Signed-off-by: Breno Leitao <leitao@debian.org>
diff --git a/mm/vmstat.c b/mm/vmstat.c
index d59eff1582547..5489549241b51 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2156,7 +2156,7 @@ static void vmstat_shepherd(struct work_struct *w)
if (cpu_is_isolated(cpu))
continue;
- if (!delayed_work_pending(dw) && need_update(cpu))
+ if (!work_busy(&dw->work) && need_update(cpu))
queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
}
Thread overview: 24+ messages
2026-04-01 13:57 [PATCH] mm/vmstat: spread vmstat_update requeue across the stat interval Breno Leitao
2026-04-01 14:25 ` Johannes Weiner
2026-04-01 14:39 ` Breno Leitao
2026-04-01 14:57 ` Johannes Weiner
2026-04-01 14:47 ` Breno Leitao
2026-04-01 15:01 ` Kiryl Shutsemau
2026-04-01 15:23 ` Usama Arif
2026-04-01 15:43 ` Breno Leitao
2026-04-01 15:50 ` Usama Arif
2026-04-01 15:52 ` Breno Leitao
2026-04-01 17:46 ` Vlastimil Babka (SUSE)
2026-04-02 12:40 ` Vlastimil Babka (SUSE)
2026-04-02 13:33 ` Breno Leitao
2026-04-07 15:39 ` Breno Leitao
2026-04-08 10:13 ` Vlastimil Babka (SUSE)
2026-04-08 15:13 ` Breno Leitao
2026-04-08 17:00 ` Breno Leitao [this message]
2026-04-09 9:36 ` Vlastimil Babka (SUSE)
2026-04-09 12:27 ` Dmitry Ilvokhin
2026-04-09 9:17 ` Vlastimil Babka (SUSE)
2026-04-02 12:43 ` Dmitry Ilvokhin
2026-04-02 7:18 ` Michal Hocko
2026-04-02 12:49 ` Matthew Wilcox
2026-04-02 13:26 ` Breno Leitao