From: Andrea Righi <arighi@nvidia.com>
To: Tejun Heo <tj@kernel.org>
Cc: David Vernet <void@manifault.com>,
Changwoo Min <changwoo@igalia.com>,
Joel Fernandes <joelagnelf@nvidia.com>,
bpf@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks
Date: Tue, 18 Mar 2025 08:31:29 +0100 [thread overview]
Message-ID: <Z9khUVcHNfnQuN-u@gpd3> (raw)
In-Reply-To: <Z9hoa5iPpDEOnXKt@slm.duckdns.org>
On Mon, Mar 17, 2025 at 08:22:35AM -1000, Tejun Heo wrote:
...
> > + /*
> > + * If the task is allowed to run on all CPUs, simply use the
> > + * architecture's cpumask directly. Otherwise, compute the
> > + * intersection of the architecture's cpumask and the task's
> > + * allowed cpumask.
> > + */
> > + if (!cpus || p->nr_cpus_allowed >= num_possible_cpus() ||
> > + cpumask_subset(cpus, p->cpus_ptr))
> > + return cpus;
> > +
> > + if (!cpumask_equal(cpus, p->cpus_ptr) &&
>
> Hmm... isn't this covered by the preceding cpumask_subset() test? Here, cpus
> is not a subset of p->cpus_ptr, so how can it be the same as p->cpus_ptr?
>
> > + cpumask_and(local_cpus, cpus, p->cpus_ptr))
> > + return local_cpus;
> > +
> > + return NULL;
I'm also wondering whether there's really a benefit in checking
cpumask_subset() first and doing cpumask_and() only when needed, or if
we should just do cpumask_and() unconditionally. It's true that we can
save some writes, but they're done on a temporary local per-CPU cpumask,
so they shouldn't introduce cache contention.
-Andrea
Thread overview: 15+ messages
2025-03-17 17:53 [PATCHSET v4 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-17 17:53 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
2025-03-17 18:22 ` Tejun Heo
2025-03-18 4:43 ` Andrea Righi
2025-03-18 7:31 ` Andrea Righi [this message]
2025-03-18 17:31 ` Tejun Heo
2025-03-17 17:53 ` [PATCH 2/6] sched_ext: idle: Explicitly pass allowed cpumask to scx_select_cpu_dfl() Andrea Righi
2025-03-17 17:53 ` [PATCH 3/6] sched_ext: idle: Accept an arbitrary cpumask in scx_select_cpu_dfl() Andrea Righi
2025-03-17 17:53 ` [PATCH 4/6] sched_ext: idle: Introduce scx_bpf_select_cpu_and() Andrea Righi
2025-03-17 17:53 ` [PATCH 5/6] selftests/sched_ext: Add test for scx_bpf_select_cpu_and() Andrea Righi
2025-03-17 17:53 ` [PATCH 6/6] sched_ext: idle: Deprecate scx_bpf_select_cpu_dfl() Andrea Righi
-- strict thread matches above, loose matches on Subject: below --
2025-03-20 7:36 [PATCHSET v5 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-20 7:36 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi
2025-03-20 16:49 ` Tejun Heo
2025-03-20 22:08 ` Andrea Righi
2025-03-21 22:10 [PATCHSET v6 sched_ext/for-6.15] sched_ext: Enhance built-in idle selection with allowed CPUs Andrea Righi
2025-03-21 22:10 ` [PATCH 1/6] sched_ext: idle: Extend topology optimizations to all tasks Andrea Righi