public inbox for linux-kernel@vger.kernel.org
From: Tejun Heo <tj@kernel.org>
To: Andrea Righi <arighi@nvidia.com>
Cc: David Vernet <void@manifault.com>,
	Changwoo Min <changwoo@igalia.com>,
	Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>, Mel Gorman <mgorman@suse.de>,
	Valentin Schneider <vschneid@redhat.com>,
	Ian May <ianm@nvidia.com>,
	bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 6/6] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers
Date: Sat, 8 Feb 2025 20:31:27 -1000	[thread overview]
Message-ID: <Z6hLvxEKFlgmIeOQ@slm.duckdns.org> (raw)
In-Reply-To: <Z6chqn0Xf6xhL5gA@gpd3>

Hello,

On Sat, Feb 08, 2025 at 10:19:38AM +0100, Andrea Righi wrote:
...
> > This is contingent on scx_builtin_idle_per_node, right? It's confusing for
> > a CPU -> node mapping function to return NUMA_NO_NODE depending on an ops
> > flag. Shouldn't this be a generic mapping function?
> 
> The idea is that BPF schedulers can use this kfunc to determine the right
> idle cpumask to use, for example a typical usage could be:
> 
>   int node = scx_bpf_cpu_node(prev_cpu);
>   s32 cpu = scx_bpf_pick_idle_cpu_in_node(p->cpus_ptr, node, SCX_PICK_IDLE_IN_NODE);
> 
> Or:
> 
>   int node = scx_bpf_cpu_node(prev_cpu);
>   const struct cpumask *idle_cpumask = scx_bpf_get_idle_cpumask_node(node);
> 
> When SCX_OPS_BUILTIN_IDLE_PER_NODE is disabled, we need to point to the
> global idle cpumask, which is identified by NUMA_NO_NODE, so this is why we
> can return NUMA_NO_NODE from scx_bpf_cpu_node().
> 
> Do you think we should make this clearer / document this better, or do
> you think we should use a different API?

I think this is too error-prone. It'd be really easy for users to assume
that scx_bpf_cpu_node() always returns the NUMA node for the given CPU,
which can lead to really subtle surprises. Why even allow e.g.
scx_bpf_get_idle_cpumask_node() if IDLE_PER_NODE is not enabled?

Thanks.

-- 
tejun


Thread overview: 34+ messages
2025-02-07 20:40 [PATCHSET v10 sched_ext/for-6.15] sched_ext: split global idle cpumask into per-NUMA cpumasks Andrea Righi
2025-02-07 20:40 ` [PATCH 1/6] mm/numa: Introduce numa_nearest_nodemask() Andrea Righi
2025-02-09 17:40   ` Yury Norov
2025-02-10  8:28     ` Andrea Righi
2025-02-10 16:41       ` Yury Norov
2025-02-10 16:51         ` Andrea Righi
2025-02-07 20:40 ` [PATCH 2/6] sched/topology: Introduce for_each_numa_node() iterator Andrea Righi
2025-02-07 21:46   ` Tejun Heo
2025-02-07 21:55     ` Andrea Righi
2025-02-07 21:56       ` Tejun Heo
2025-02-09 17:51         ` Yury Norov
2025-02-09 17:50   ` Yury Norov
2025-02-07 20:40 ` [PATCH 3/6] sched_ext: idle: Introduce SCX_OPS_BUILTIN_IDLE_PER_NODE Andrea Righi
2025-02-07 20:40 ` [PATCH 4/6] sched_ext: idle: introduce SCX_PICK_IDLE_IN_NODE Andrea Righi
2025-02-07 22:02   ` Tejun Heo
2025-02-07 20:40 ` [PATCH 5/6] sched_ext: idle: Per-node idle cpumasks Andrea Righi
2025-02-07 22:30   ` Tejun Heo
2025-02-08  8:47     ` Andrea Righi
2025-02-09 18:07   ` Yury Norov
2025-02-10 16:57     ` Yury Norov
2025-02-11  7:32       ` Andrea Righi
2025-02-11  7:41         ` Andrea Righi
2025-02-11  9:50           ` Andrea Righi
2025-02-11 14:19             ` Yury Norov
2025-02-11 14:34               ` Andrea Righi
2025-02-11 14:45                 ` Andrea Righi
2025-02-11 16:38                   ` Steven Rostedt
2025-02-11 18:05                     ` Andrea Righi
2025-02-07 20:40 ` [PATCH 6/6] sched_ext: idle: Introduce node-aware idle cpu kfunc helpers Andrea Righi
2025-02-07 22:39   ` Tejun Heo
2025-02-08  9:19     ` Andrea Righi
2025-02-09  6:31       ` Tejun Heo [this message]
2025-02-09  8:11         ` Andrea Righi
2025-02-10  6:01           ` Tejun Heo
