From: Breno Leitao <leitao@debian.org>
To: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>,
linux-kernel@vger.kernel.org, kernel-team@meta.com,
kernel test robot <lkp@intel.com>
Subject: Re: [PATCH] workqueue: validate cpumask_first() result in llc_populate_cpu_shard_id()
Date: Fri, 10 Apr 2026 03:42:00 -0700 [thread overview]
Message-ID: <adjTrRL-PrCRUd57@gmail.com> (raw)
In-Reply-To: <adi8Llt3tW-EwpPt@slm.duckdns.org>
Hello Tejun,
On Thu, Apr 09, 2026 at 11:00:30PM -1000, Tejun Heo wrote:
> On Fri, Apr 10, 2026 at 01:49:50AM -0700, Breno Leitao wrote:
> > In llc_populate_cpu_shard_id(), cpumask_first(sibling_cpus) is used to
> > find the leader CPU, and the result is then used to index into
> > cpu_shard_id[]. Add a bounds check with WARN_ON_ONCE to guard against
> > unexpected values before using it as an array index.
> >
> > Store the result in a local variable to make the code clearer, as well
> > as to avoid calling cpumask_first() twice.
> >
> > Fixes: 5920d046f7ae3 ("workqueue: add WQ_AFFN_CACHE_SHARD affinity scope")
> ...
> > @@ -8318,7 +8319,11 @@ static void __init llc_populate_cpu_shard_id(const struct cpumask *pod_cpus,
> > * The siblings' shard MUST be the same as the leader.
> > * never split threads in the same core.
> > */
> > - cpu_shard_id[c] = cpu_shard_id[cpumask_first(sibling_cpus)];
> > + leader = cpumask_first(sibling_cpus);
> > +
> > + if (WARN_ON_ONCE(leader >= nr_cpu_ids))
> > + continue;
> > + cpu_shard_id[c] = cpu_shard_id[leader];
>
> sibling_cpus can't be empty, right?
Correct. sibling_cpus will always have at least 'c' set.
> This is mostly to shut up the reported
> compiler warning? If so, can you please note that in a comment and the
> description?
Sure. Is something like the following acceptable?
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 083d8fe301f46..5dc304cdfa7f9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -8300,6 +8300,7 @@ static void __init llc_populate_cpu_shard_id(const struct cpumask *pod_cpus,
int cores_in_shard = 0;
/* This is a cursor for the shards. Go from zero to nr_shards - 1*/
int shard_id = 0;
+ int leader;
int c;
/* Iterate at every CPU for a given LLC pod, and assign it a shard */
@@ -8318,7 +8319,17 @@ static void __init llc_populate_cpu_shard_id(const struct cpumask *pod_cpus,
* The siblings' shard MUST be the same as the leader.
* never split threads in the same core.
*/
- cpu_shard_id[c] = cpu_shard_id[cpumask_first(sibling_cpus)];
+ leader = cpumask_first(sibling_cpus);
+
+ /*
+ * sibling_cpus cannot be empty here since 'c'
+ * is always set in it. This check silences a
+ * compiler warning about using the unchecked
+ * cpumask_first() result as an array index.
+ */
+ if (WARN_ON_ONCE(leader >= nr_cpu_ids))
+ continue;
+ cpu_shard_id[c] = cpu_shard_id[leader];
}
}
Thread overview: 4+ messages
2026-04-10 8:49 [PATCH] workqueue: validate cpumask_first() result in llc_populate_cpu_shard_id() Breno Leitao
2026-04-10 9:00 ` Tejun Heo
2026-04-10 10:42 ` Breno Leitao [this message]
2026-04-10 17:36 ` Tejun Heo