public inbox for linux-kernel@vger.kernel.org
From: Tim Chen <tim.c.chen@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>, Chen Yu <yu.c.chen@intel.com>,
	Doug Nelson <doug.nelson@intel.com>,
	Mohini Narkhede <mohini.narkhede@intel.com>,
	linux-kernel@vger.kernel.org,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Shrikanth Hegde <sshegde@linux.ibm.com>,
	K Prateek Nayak <kprateek.nayak@amd.com>
Subject: Re: [RESEND PATCH] sched/fair: Skip sched_balance_running cmpxchg when balance is not due
Date: Mon, 13 Oct 2025 14:54:19 -0700	[thread overview]
Message-ID: <aa3d20e6d451e0d0b812fe16e9d403c1033feeaa.camel@linux.intel.com> (raw)
In-Reply-To: <20251013142638.GM3245006@noisy.programming.kicks-ass.net>

On Mon, 2025-10-13 at 16:26 +0200, Peter Zijlstra wrote:
> On Thu, Oct 02, 2025 at 04:00:12PM -0700, Tim Chen wrote:
> 
> > During load balancing, balancing at the LLC level and above must be
> > serialized. 
> 
> I would argue the wording here, there is no *must*, they *are*. Per the
> current rules SD_NUMA and up get SD_SERIALIZE.
> 
> This is a *very* old thing, done by Christoph Lameter back when he was
> at SGI. I'm not sure this default is still valid or not. Someone would
> have to investigate. An alternative would be moving it into
> node_reclaim_distance or somesuch.
> 
> > The scheduler currently checks the atomic
> > `sched_balance_running` flag before verifying whether a balance is
> > actually due. This causes high contention, as multiple CPUs may attempt
> > to acquire the flag concurrently.
> 
> Right.
> 
> > On a 2-socket Granite Rapids system with sub-NUMA clustering enabled
> > and running OLTP workloads, 7.6% of CPU cycles were spent on cmpxchg
> > operations for `sched_balance_running`. In most cases, the attempt
> > aborts immediately after acquisition because the load balance time is
> > not yet due.
> 
> So I'm not sure I understand the situation, @continue_balancing should
> limit this concurrency to however many groups are on this domain -- your
> granite thing with SNC on would have something like 6 groups?

That's a good point.  But I think the contention involves more than
6 CPUs.

The hierarchy would be

SMT
NUMA-level1
NUMA-level2
NUMA-level3
NUMA-level4

There would be multiple CPUs that are first in their SMT group, with
continue_balancing=1, going up the hierarchy and attempting the
cmpxchg at the first NUMA domain level, before calling
should_we_balance(), finding that they are not first in the NUMA
domain, setting continue_balancing=0, and aborting. Those CPUs are
in the same L3.
But at the same time, there could be CPUs in other sockets doing
cmpxchg on sched_balance_running.

In our experiment, we had not applied the
NUMA hierarchy fix [1]. The system can have up to 4 NUMA domains,
with 3 NUMA levels spanning different sockets.
That means there can be up to three concurrent cmpxchg attempts
per level from CPUs in separate sockets, which makes things worse.

[1] https://lore.kernel.org/lkml/cover.1759515405.git.tim.c.chen@linux.intel.com/

> 
> > Fix this by checking whether a balance is due *before* trying to
> > acquire `sched_balance_running`. This avoids many wasted acquisitions
> > and reduces the cmpxchg overhead in `sched_balance_domain()` from 7.6%
> > to 0.05%. As a result, OLTP throughput improves by 11%.
> 
> Yeah, I see no harm flipping this, but the Changelog needs help.
> 
> > Reviewed-by: Chen Yu <yu.c.chen@intel.com>
> > Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
> > Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
> > ---
> >  kernel/sched/fair.c | 16 ++++++++--------
> >  1 file changed, 8 insertions(+), 8 deletions(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8ce56a8d507f..bedd785c4a39 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -12126,13 +12126,13 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
> >  
> >  		interval = get_sd_balance_interval(sd, busy);
> >  
> > -		need_serialize = sd->flags & SD_SERIALIZE;
> > -		if (need_serialize) {
> > -			if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
> > -				goto out;
> > -		}
> > -
> >  		if (time_after_eq(jiffies, sd->last_balance + interval)) {
> > +			need_serialize = sd->flags & SD_SERIALIZE;
> > +			if (need_serialize) {
> > +				if (atomic_cmpxchg_acquire(&sched_balance_running, 0, 1))
> > +					goto out;
> > +			}
> > +
> >  			if (sched_balance_rq(cpu, rq, sd, idle, &continue_balancing)) {
> >  				/*
> >  				 * The LBF_DST_PINNED logic could have changed
> > @@ -12144,9 +12144,9 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
> >  			}
> >  			sd->last_balance = jiffies;
> >  			interval = get_sd_balance_interval(sd, busy);
> > +			if (need_serialize)
> > +				atomic_set_release(&sched_balance_running, 0);
> >  		}
> > -		if (need_serialize)
> > -			atomic_set_release(&sched_balance_running, 0);
> >  out:
> >  		if (time_after(next_balance, sd->last_balance + interval)) {
> >  			next_balance = sd->last_balance + interval;
> 
> Instead of making the indenting worse, could we make it better?

Yes, this is much better.  Thanks.

Tim

> 
> ---
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e743d9d0576c..6318834ff42a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -12215,6 +12215,8 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
>  		}
>  
>  		interval = get_sd_balance_interval(sd, busy);
> +		if (time_before(jiffies, sd->last_balance + interval))
> +			goto out;
>  
>  		need_serialize = sd->flags & SD_SERIALIZE;
>  		if (need_serialize) {
> @@ -12222,19 +12224,18 @@ static void sched_balance_domains(struct rq *rq, enum cpu_idle_type idle)
>  				goto out;
>  		}
>  
> -		if (time_after_eq(jiffies, sd->last_balance + interval)) {
> -			if (sched_balance_rq(cpu, rq, sd, idle, &continue_balancing)) {
> -				/*
> -				 * The LBF_DST_PINNED logic could have changed
> -				 * env->dst_cpu, so we can't know our idle
> -				 * state even if we migrated tasks. Update it.
> -				 */
> -				idle = idle_cpu(cpu);
> -				busy = !idle && !sched_idle_cpu(cpu);
> -			}
> -			sd->last_balance = jiffies;
> -			interval = get_sd_balance_interval(sd, busy);
> +		if (sched_balance_rq(cpu, rq, sd, idle, &continue_balancing)) {
> +			/*
> +			 * The LBF_DST_PINNED logic could have changed
> +			 * env->dst_cpu, so we can't know our idle
> +			 * state even if we migrated tasks. Update it.
> +			 */
> +			idle = idle_cpu(cpu);
> +			busy = !idle && !sched_idle_cpu(cpu);
>  		}
> +		sd->last_balance = jiffies;
> +		interval = get_sd_balance_interval(sd, busy);
> +
>  		if (need_serialize)
>  			atomic_set_release(&sched_balance_running, 0);
>  out:


Thread overview: 19+ messages
2025-10-02 23:00 [RESEND PATCH] sched/fair: Skip sched_balance_running cmpxchg when balance is not due Tim Chen
2025-10-03  5:23 ` Shrikanth Hegde
2025-10-03 16:37   ` Tim Chen
2025-10-13 14:26 ` Peter Zijlstra
2025-10-13 16:32   ` Chen, Yu C
2025-10-13 16:41     ` Shrikanth Hegde
2025-10-13 16:43       ` Chen, Yu C
2025-10-14  9:26     ` Peter Zijlstra
2025-10-13 21:54   ` Tim Chen [this message]
2025-10-14  9:24     ` Peter Zijlstra
2025-10-14  9:33       ` Shrikanth Hegde
2025-10-14  9:42         ` Peter Zijlstra
2025-10-14  9:51           ` Shrikanth Hegde
2025-10-16 14:03           ` Shrikanth Hegde
2025-10-22 17:42             ` Shrikanth Hegde
2025-10-14 13:50       ` Srikar Dronamraju
2025-10-14 13:59         ` Peter Zijlstra
2025-10-14 14:28       ` Shrikanth Hegde
2025-10-14 18:05       ` Tim Chen
