From: Andrea Righi <arighi@nvidia.com>
To: Yury Norov <yury.norov@gmail.com>
Cc: Tejun Heo <tj@kernel.org>, David Vernet <void@manifault.com>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/3] sched_ext: Introduce per-NUMA idle cpumasks
Date: Wed, 4 Dec 2024 09:47:00 +0100
Message-ID: <Z1AXBHsCuX_6SOra@gpd3>
In-Reply-To: <Z0-kovS-Ba9CaP9J@yury-ThinkPad>
On Tue, Dec 03, 2024 at 04:38:58PM -0800, Yury Norov wrote:
> On Tue, Dec 03, 2024 at 02:04:15PM -1000, Tejun Heo wrote:
...
> > > +static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
> > > +{
> > > + int start = cpu_to_node(smp_processor_id());
> > > + int node, cpu;
> > > +
> > > + for_each_node_state_wrap(node, N_ONLINE, start) {
> > > + /*
> > > + * scx_pick_idle_cpu_from_node() can be expensive and redundant
> > > + * if none of the CPUs in the NUMA node can be used (according
> > > + * to cpus_allowed).
> > > + *
> > > + * Therefore, check if the NUMA node is usable in advance to
> > > + * save some CPU cycles.
> > > + */
> > > + if (!cpumask_intersects(cpumask_of_node(node), cpus_allowed))
> > > + continue;
> > > + cpu = scx_pick_idle_cpu_from_node(node, cpus_allowed, flags);
> > > + if (cpu >= 0)
> > > + return cpu;
> >
> > This is fine for now but it'd be ideal if the iteration is in inter-node
> > distance order so that each CPU radiates from local node to the furthest
> > ones.
>
> cpumask_local_spread() does exactly that - traverses CPUs in NUMA-aware
> order. Or we can use for_each_numa_hop_mask() iterator, which does the
> same thing more efficiently.
Nice, for_each_numa_hop_mask() seems to be exactly what I need: it also
takes a starting node, so with it we don't need to introduce
for_each_online_node_wrap() and the other new *_wrap() helpers.
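
Something along these lines, maybe (untested sketch; note that
scx_pick_idle_cpu_from_mask() here is a hypothetical helper that would
scan the given cpumask, minus the CPUs already visited, for an idle
CPU):

static s32 scx_pick_idle_cpu(const struct cpumask *cpus_allowed, u64 flags)
{
	const struct cpumask *mask, *prev = cpu_none_mask;
	int node = cpu_to_node(smp_processor_id());
	s32 cpu = -EBUSY;

	/*
	 * sched_numa_hop_mask(), used by the iterator, dereferences
	 * RCU-protected data, so hold rcu_read_lock() across the loop.
	 */
	rcu_read_lock();
	for_each_numa_hop_mask(mask, node) {
		/*
		 * Hop masks are cumulative: each one also contains all
		 * the CPUs of the closer hops, so exclude @prev from the
		 * scan to avoid re-visiting CPUs.
		 */
		cpu = scx_pick_idle_cpu_from_mask(mask, prev,
						  cpus_allowed, flags);
		if (cpu >= 0)
			break;
		prev = mask;
	}
	rcu_read_unlock();

	return cpu;
}

That would also preserve the cpumask_intersects() shortcut from the
current patch: the helper can bail out early when (mask & ~prev) doesn't
intersect cpus_allowed.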
Thanks,
-Andrea
Thread overview: 10+ messages
2024-12-03 15:36 [PATCHSET v3 sched_ext/for-6.13] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2024-12-03 15:36 ` [PATCH 1/3] nodemask: Introduce for_each_node_mask_wrap/for_each_node_state_wrap() Andrea Righi
2024-12-03 16:27 ` Yury Norov
2024-12-03 15:36 ` [PATCH 2/3] sched_ext: Introduce per-NUMA idle cpumasks Andrea Righi
2024-12-04 0:04 ` Tejun Heo
2024-12-04 0:38 ` Yury Norov
2024-12-04 8:47 ` Andrea Righi [this message]
2024-12-04 8:41 ` Andrea Righi
2024-12-04 18:53 ` Tejun Heo
2024-12-03 15:36 ` [PATCH 3/3] sched_ext: get rid of the scx_selcpu_topo_numa logic Andrea Righi