From: Peter Zijlstra <peterz@infradead.org>
To: K Prateek Nayak <kprateek.nayak@amd.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>,
Shrikanth Hegde <sshegde@linux.ibm.com>,
"Chen, Yu C" <yu.c.chen@intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Ingo Molnar <mingo@kernel.org>,
Doug Nelson <doug.nelson@intel.com>,
Mohini Narkhede <mohini.narkhede@intel.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] sched: Skip useless sched_balance_running acquisition if load balance is not due
Date: Thu, 17 Apr 2025 14:01:30 +0200
Message-ID: <20250417120130.GE17910@noisy.programming.kicks-ass.net>
In-Reply-To: <7a5a5f1f-0bbc-4a63-b2aa-67bc6c724b2d@amd.com>
On Thu, Apr 17, 2025 at 05:01:37PM +0530, K Prateek Nayak wrote:
> On 4/16/2025 3:17 PM, Vincent Guittot wrote:
> > >
> > > Sorry, forgot to add.
> > >
> > > Do we really need newidle running all the way up to NUMA? Or is it
> > > enough if it runs up to PKG, with the regular (idle) balance taking
> > > care of NUMA by serializing it?
> > >
> > > - if (sd->flags & SD_BALANCE_NEWIDLE) {
> > > + if (sd->flags & SD_BALANCE_NEWIDLE && !(sd->flags & SD_SERIALIZE)) {
> >
> > Why not just clearing SD_BALANCE_NEWIDLE in your sched domain when you
> > set SD_SERIALIZE
>
> I've some questions around "sched_balance_running":
>
> o Since this is a single flag across the entire system, it also implies
> CPUs cannot concurrently do load balancing across different NUMA
> domains. That seems reasonable, since a load balance at a lower NUMA
> domain can potentially change the "nr_numa_running" and
> "nr_preferred_running" stats for the higher domain. But if that is the
> case, a newidle balance at a lower NUMA domain can interfere with a
> concurrent busy / newidle load balance at a higher NUMA domain.
> Is this expected? Should newidle balance be serialized too?
Serializing new-idle might create too much idle time.
> (P.S. I copied over the serialize logic from sched_balance_domains()
> into sched_balance_newidle() and did not see any difference in my
> testing but perhaps there are benchmarks out there that care for
> this)
>
> o If the intention of SD_SERIALIZE was indeed what the comment above
> "sched_balance_running" states - that it "serializes load-balancing
> passes over large domains (above the NODE topology level)" - then a
> question specific to x86: when enabling SNC on Intel or NPS on AMD
> servers, the first NUMA domain is in fact as big as the NODE (now PKG
> domain), if not smaller. Is it okay to clear SD_SERIALIZE for these
> domains since they are small enough now?
You'll have to dive into the history here, but IIRC it was from SGI back
in the day, where NUMA factors were quite large and load-balancing
across NUMA was a pain.

Small really isn't the criterion, but inter-node latency might be; we
also have this node_reclaim_distance thing.

Not quite sure what makes sense; someone should tinker I suppose, see
what works with today's hardware.
Thread overview: 23+ messages
2025-04-16 3:58 [PATCH] sched: Skip useless sched_balance_running acquisition if load balance is not due Tim Chen
2025-04-16 5:30 ` Shrikanth Hegde
2025-04-16 6:28 ` Chen, Yu C
2025-04-16 9:16 ` Shrikanth Hegde
2025-04-16 9:29 ` Shrikanth Hegde
2025-04-16 9:47 ` Vincent Guittot
2025-04-16 14:14 ` Shrikanth Hegde
2025-04-17 11:10 ` K Prateek Nayak
2025-04-18 15:02 ` Vincent Guittot
2025-04-18 17:55 ` Shrikanth Hegde
2025-04-17 11:31 ` K Prateek Nayak
2025-04-17 12:01 ` Peter Zijlstra [this message]
2025-04-18 5:26 ` K Prateek Nayak
2025-04-18 9:28 ` Peter Zijlstra
2025-04-18 12:13 ` K Prateek Nayak
2025-04-16 16:19 ` Tim Chen
2025-04-16 17:11 ` Shrikanth Hegde
2025-04-17 9:19 ` Shrikanth Hegde
2025-04-17 17:12 ` Tim Chen
2025-05-29 9:00 ` K Prateek Nayak
2025-06-04 4:26 ` Chen, Yu C
2025-06-06 13:51 ` Vincent Guittot
2025-10-27 18:06 ` Mel Gorman