From: "Chuck Lever" <cel@kernel.org>
To: "Breno Leitao" <leitao@debian.org>, "Tejun Heo" <tj@kernel.org>,
"Lai Jiangshan" <jiangshanlai@gmail.com>,
"Andrew Morton" <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, puranjay@kernel.org,
linux-crypto@vger.kernel.org, linux-btrfs@vger.kernel.org,
linux-fsdevel@vger.kernel.org,
"Michael van der Westhuizen" <rmikey@meta.com>,
kernel-team@meta.com, "Chuck Lever" <chuck.lever@oracle.com>
Subject: Re: [PATCH v2 0/5] workqueue: Introduce a sharded cache affinity scope
Date: Mon, 23 Mar 2026 10:11:07 -0400 [thread overview]
Message-ID: <04af531d-d8a3-4fbb-993d-e1da2df62a03@app.fastmail.com> (raw)
In-Reply-To: <20260320-workqueue_sharded-v2-0-8372930931af@debian.org>
On Fri, Mar 20, 2026, at 1:56 PM, Breno Leitao wrote:
> TL;DR: Some modern processors have many CPUs per LLC (L3 cache), and
> unbound workqueues using the default affinity (WQ_AFFN_CACHE) collapse
> to a single worker pool, causing heavy spinlock (pool->lock) contention.
> Create a new affinity (WQ_AFFN_CACHE_SHARD) that caps each pool at
> wq_cache_shard_size CPUs (default 8).
>
> Changes from RFC:
>
> * wq_cache_shard_size is in terms of cores (not vCPU). So,
> wq_cache_shard_size=8 means the pool will have 8 cores and their siblings,
> like 16 threads/CPUs if SMT=1
My concern about the "cores per shard" approach is that it
improves the default situation little or not at all for
moderately-sized machines.
A machine with one L3 and 10 cores will go from one UNBOUND
pool to only two. For virtual machines commonly deployed as
cloud instances, which are typically 2-, 4-, or 8-core systems
(up to 16 threads), there will still be significant contention
for UNBOUND workers.
IOW, if you want good scaling, human intervention (via a
boot command-line option) is still needed.
--
Chuck Lever
Thread overview: 13+ messages
2026-03-20 17:56 [PATCH v2 0/5] workqueue: Introduce a sharded cache affinity scope Breno Leitao
2026-03-20 17:56 ` [PATCH v2 1/5] workqueue: fix typo in WQ_AFFN_SMT comment Breno Leitao
2026-03-20 17:56 ` [PATCH v2 2/5] workqueue: add WQ_AFFN_CACHE_SHARD affinity scope Breno Leitao
2026-03-23 22:43 ` Tejun Heo
2026-03-20 17:56 ` [PATCH v2 3/5] workqueue: set WQ_AFFN_CACHE_SHARD as the default " Breno Leitao
2026-03-20 17:56 ` [PATCH v2 4/5] tools/workqueue: add CACHE_SHARD support to wq_dump.py Breno Leitao
2026-03-20 17:56 ` [PATCH v2 5/5] workqueue: add test_workqueue benchmark module Breno Leitao
2026-03-23 14:11 ` Chuck Lever [this message]
2026-03-23 15:10 ` [PATCH v2 0/5] workqueue: Introduce a sharded cache affinity scope Breno Leitao
2026-03-23 15:28 ` Chuck Lever
2026-03-23 16:26 ` Breno Leitao
2026-03-23 18:04 ` Chuck Lever
2026-03-23 18:19 ` Tejun Heo