public inbox for linux-mm@kvack.org
From: Breno Leitao <leitao@debian.org>
To: "Vlastimil Babka (SUSE)" <vbabka@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	 David Hildenbrand <david@kernel.org>,
	Lorenzo Stoakes <ljs@kernel.org>,
	 "Liam R. Howlett" <Liam.Howlett@oracle.com>,
	Mike Rapoport <rppt@kernel.org>,
	 Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>,
	linux-mm@kvack.org,  linux-kernel@vger.kernel.org,
	kas@kernel.org, shakeel.butt@linux.dev, usama.arif@linux.dev,
	 kernel-team@meta.com
Subject: Re: [PATCH] mm/vmstat: spread vmstat_update requeue across the stat interval
Date: Wed, 8 Apr 2026 08:13:43 -0700	[thread overview]
Message-ID: <adZopu5wjXIR5HOR@gmail.com> (raw)
In-Reply-To: <a9129d39-f9dc-4f09-b951-203c8e28b600@kernel.org>

On Wed, Apr 08, 2026 at 12:13:04PM +0200, Vlastimil Babka (SUSE) wrote:
> On 4/7/26 17:39, Breno Leitao wrote:
> > On Thu, Apr 02, 2026 at 06:33:17AM -0700, Breno Leitao wrote:
> >> 1) The interval duration now varies per CPU. Specifically, vmstat_update()
> >>    is scheduled at sysctl_stat_interval*2 for the highest CPU with my
> >>    proposed change, rather than a uniform sysctl_stat_interval across
> >>    all CPUs. (as you raised in the first email)
> > However, we certainly don't want scenario (1) where the delay varies per
> > CPU, resulting in the last CPU having vmstat_update() scheduled every
> > 2 seconds instead of 1 second.
>
> Indeed.
>
....
>
> So you have 72 cpus, the vmstat interval is 1s, and what's the CONFIG_HZ?

Yes, CONFIG_HZ=1000 in my configuration.

> If it's 1000, it means 13 jiffies per cpu. Would changing the
> round_jiffies_common() implementation to add 13 jiffies per cpu instead of 3
> have the same effect?

That approach would increase the spread in round_jiffies_common(), but
it brings us back to scenario 1 - each CPU would have a variable
rescheduling delay, which defeats the purpose of maintaining consistent
timing.

>
> Link: https://github.com/leitao/linux/commit/ac200164df1bda45ee8504cc3db5bff5b696245e [1]
> Link: https://github.com/leitao/linux/commit/baa2ea6ea4c4c2b1df689de6db0a2a6f119e51be [2]
>
>
> commit 41b7aaa1a51f07fc1f0db0614d140fbca78463d3
> Author: Breno Leitao <leitao@debian.org>
> Date:   Tue Apr 7 07:56:35 2026 -0700
>
>     mm/vmstat: spread per-cpu vmstat work to reduce zone->lock contention
>
>     vmstat_shepherd() queues all per-cpu vmstat_update work with zero delay,
>     and vmstat_update() re-queues itself with round_jiffies_relative(), which
>     clusters timers near the same second boundary due to the small per-CPU
>     spread in round_jiffies_common(). On many-CPU systems this causes
>     thundering-herd contention on zone->lock when multiple CPUs
>     simultaneously call refresh_cpu_vm_stats() -> decay_pcp_high() ->
>     free_pcppages_bulk().
>
>     Introduce vmstat_spread_delay() to assign each CPU a unique offset
>     distributed evenly across sysctl_stat_interval. The shepherd uses this
>     when initially queuing per-cpu work, and vmstat_update re-queues with a
>     plain sysctl_stat_interval to preserve the spread (round_jiffies_relative
>     would snap CPUs back to the same boundary).
>
>     Signed-off-by: Breno Leitao <leitao@debian.org>
>
> I think this approach could have the following problems:
>
> - the initially spread delays can drift over time, there's nothing keeping
> them in sync
> - not using round_jiffies_relative() means firing at other times than other
> timers that are using the rounding, so this could be working against the
> power savings effects of rounding
> - it's a vmstat-specific workaround for some yet unclear underlying
> suboptimality that's likely not vmstat specific

I believe the issue is that round_jiffies_relative() fundamentally
solves a different problem from the one we are trying to solve here.
It exists to consolidate timers for power savings, as its
documentation says:

 """
  By rounding these timers to whole seconds, all such timers will fire
  at the same time, rather than at various times spread out. The goal
  of this is to have the CPU wake up less, which saves power.
 """

Backing up a bit, let me summarize the problem:

1) vmstat_shepherd() starts all vmstat_update workers simultaneously
   (or nearly so)

2) All vmstat_update() workers then reschedule themselves for the next
   second with only 216ms of variance across 72 cores

Upon further reflection, issue (2) wouldn't exist if we simply avoided
starting all workqueues simultaneously in the first place.

That said, I wrote some hacky instrumentation to measure how often
vmstat_update() runs, and it is not uncommon for it to run again only a
few jiffies after the previous invocation.

Debugging further, I found a race where vmstat_shepherd() schedules
vmstat_update() even while it is already running, which happens a lot,
given they both run at the same interval:

commit 86e27727b6cc9fbe8bf67818a065105fd30bc7e8 (HEAD -> b4/vmstat)
Author: Breno Leitao <leitao@debian.org>
Date:   Wed Apr 8 08:11:01 2026 -0700

    mm/vmstat: warn when shepherd schedules vmstat_update while it is running

    vmstat_shepherd checks delayed_work_pending() to decide whether to
    queue vmstat_update for a CPU. However, delayed_work_pending() only
    tests the WORK_STRUCT_PENDING_BIT, which is cleared as soon as a
    worker thread picks up the work for execution.

    This means that while vmstat_update is actively running on a CPU,
    delayed_work_pending() returns false, and the shepherd may queue
    another invocation if need_update() also returns true (which it can,
    since per-cpu counters are still being flushed mid-execution).

    Add a WARN_ONCE to make this race visible: fire a warning when the
    shepherd is about to queue vmstat_update for a CPU where it is
    currently executing, i.e., work_busy() reports WORK_BUSY_RUNNING
    but delayed_work_pending() does not prevent the re-queue.

    Signed-off-by: Breno Leitao <leitao@debian.org>

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 2370c6fb1fcd6..8d53242e7aa66 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -2139,8 +2139,12 @@ static void vmstat_shepherd(struct work_struct *w)
                        if (cpu_is_isolated(cpu))
                                continue;

-                       if (!delayed_work_pending(dw) && need_update(cpu))
+                       if (!delayed_work_pending(dw) && need_update(cpu)) {
+                               WARN_ONCE(work_busy(&dw->work) & WORK_BUSY_RUNNING,
+                                         "cpu%d: vmstat_update already running, scheduling again\n",
+                                         cpu);
                                queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);
+                       }
                }

                cond_resched();

The fix is a one-line change: !delayed_work_pending(dw) → !work_busy(&dw->work)


Thread overview: 21+ messages
2026-04-01 13:57 [PATCH] mm/vmstat: spread vmstat_update requeue across the stat interval Breno Leitao
2026-04-01 14:25 ` Johannes Weiner
2026-04-01 14:39   ` Breno Leitao
2026-04-01 14:57     ` Johannes Weiner
2026-04-01 14:47 ` Breno Leitao
2026-04-01 15:01 ` Kiryl Shutsemau
2026-04-01 15:23 ` Usama Arif
2026-04-01 15:43   ` Breno Leitao
2026-04-01 15:50     ` Usama Arif
2026-04-01 15:52       ` Breno Leitao
2026-04-01 17:46 ` Vlastimil Babka (SUSE)
2026-04-02 12:40   ` Vlastimil Babka (SUSE)
2026-04-02 13:33     ` Breno Leitao
2026-04-07 15:39       ` Breno Leitao
2026-04-08 10:13         ` Vlastimil Babka (SUSE)
2026-04-08 15:13           ` Breno Leitao [this message]
2026-04-08 17:00             ` Breno Leitao
2026-04-02 12:43   ` Dmitry Ilvokhin
2026-04-02  7:18 ` Michal Hocko
2026-04-02 12:49 ` Matthew Wilcox
2026-04-02 13:26   ` Breno Leitao
