public inbox for linux-nvme@lists.infradead.org
* [PATCH v2 0/3] replace old wq(s), added WQ_PERCPU to alloc_workqueue
@ 2026-02-23 10:23 Marco Crivellari
  2026-02-23 10:23 ` [PATCH v2 1/3] nvmet: replace use of system_wq with system_percpu_wq Marco Crivellari
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Marco Crivellari @ 2026-02-23 10:23 UTC (permalink / raw)
  To: linux-kernel, linux-nvme
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Christoph Hellwig, Sagi Grimberg, Chaitanya Kulkarni, Justin Tee,
	Naresh Gottumukkala, Paul Ely

Hi,

=== Current situation: problems ===

Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
set to the housekeeping CPUs, while for !WQ_UNBOUND workqueues the local CPU
is selected.

This leads to different behavior when a work item is scheduled on an
isolated CPU, depending on whether the "delay" value is 0 or greater
than 0:
        schedule_delayed_work(, 0);

This is handled by __queue_work(), which queues the work item on the
current local (isolated) CPU, while:

        schedule_delayed_work(, 1);

will move the timer to a housekeeping CPU and schedule the work there.

Currently, if a user enqueues a work item with schedule_delayed_work(), the
workqueue used is "system_wq" (a per-cpu wq), while queue_delayed_work()
uses WORK_CPU_UNBOUND (used when no CPU is specified). The same applies to
schedule_work(), which uses system_wq, and to queue_work(), which again
makes use of WORK_CPU_UNBOUND.
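
For illustration, the wrapper pairing looks roughly like this (simplified
sketch of the current wrappers in include/linux/workqueue.h, not part of
this series):

        static inline bool schedule_work(struct work_struct *work)
        {
                /* hard-codes the per-cpu system_wq */
                return queue_work(system_wq, work);
        }

        static inline bool queue_work(struct workqueue_struct *wq,
                                      struct work_struct *work)
        {
                /* no CPU specified: WORK_CPU_UNBOUND */
                return queue_work_on(WORK_CPU_UNBOUND, wq, work);
        }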

This lack of consistency cannot be addressed without refactoring the API.

=== Recent changes to the WQ API ===

This series follows up on the recent changes in the workqueue API:

- commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
- commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The old workqueues will be removed in a future release cycle.

=== Introduced Changes by this series ===

1) [P 1] Replace uses of system_wq

    system_wq is a per-CPU workqueue, but its name does not make that
    clear. Because of that, system_wq has been replaced with
    system_percpu_wq.

2) [P 2-3] add WQ_PERCPU to all relevant alloc_workqueue() users

    This change adds the new WQ_PERCPU flag to explicitly request a
    per-cpu workqueue from alloc_workqueue() when WQ_UNBOUND has not
    been specified.
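
As an illustration, the two kinds of change look roughly like this
(hypothetical hunks, shown only as examples; the actual call sites are in
the diffstat below):

        /* [P 1]: make the per-cpu system workqueue explicit */
        - queue_work(system_wq, &work);
        + queue_work(system_percpu_wq, &work);

        /* [P 2-3]: make per-cpu behavior explicit at allocation time */
        - wq = alloc_workqueue("example-wq", WQ_MEM_RECLAIM, 0);
        + wq = alloc_workqueue("example-wq", WQ_MEM_RECLAIM | WQ_PERCPU, 0);

Neither change alters current behavior; it only makes the per-cpu choice
explicit ahead of the old defaults being removed.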


Thanks!

---
Changes in v2:
- improved commit logs

- rebased on v7.0-rc1

Marco Crivellari (3):
  nvmet: replace use of system_wq with system_percpu_wq
  nvme: add WQ_PERCPU to alloc_workqueue users
  nvmet-fc: add WQ_PERCPU to alloc_workqueue users

 drivers/nvme/target/admin-cmd.c        | 2 +-
 drivers/nvme/target/core.c             | 5 +++--
 drivers/nvme/target/fabrics-cmd-auth.c | 2 +-
 drivers/nvme/target/fc.c               | 6 +++---
 drivers/nvme/target/tcp.c              | 2 +-
 5 files changed, 9 insertions(+), 8 deletions(-)

-- 
2.51.1




end of thread, other threads:[~2026-03-24 15:31 UTC | newest]

Thread overview: 10+ messages
2026-02-23 10:23 [PATCH v2 0/3] replace old wq(s), added WQ_PERCPU to alloc_workqueue Marco Crivellari
2026-02-23 10:23 ` [PATCH v2 1/3] nvmet: replace use of system_wq with system_percpu_wq Marco Crivellari
2026-03-20  7:57   ` Christoph Hellwig
2026-03-20  8:30     ` Sebastian Andrzej Siewior
2026-02-23 10:23 ` [PATCH v2 2/3] nvme: add WQ_PERCPU to alloc_workqueue users Marco Crivellari
2026-03-20  7:58   ` Christoph Hellwig
2026-02-23 10:23 ` [PATCH v2 3/3] nvmet-fc: " Marco Crivellari
2026-03-20  7:58   ` Christoph Hellwig
2026-03-24 15:23 ` [PATCH v2 0/3] replace old wq(s), added WQ_PERCPU to alloc_workqueue Keith Busch
2026-03-24 15:30   ` Marco Crivellari
