linux-kernel.vger.kernel.org archive mirror
* [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq
@ 2025-06-14 13:35 Marco Crivellari
  2025-06-14 13:35 ` [PATCH v5 1/3] Workqueue: add system_percpu_wq and system_dfl_wq Marco Crivellari
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Marco Crivellari @ 2025-06-14 13:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko

Hi!

Below is a summary of a discussion about the Workqueue API and cpu isolation
considerations. Details and more information are available here:

    "workqueue: Always use wq_select_unbound_cpu() for WORK_CPU_UNBOUND."
    https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/

=== Current situation: problems ===

Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
set to the housekeeping CPUs, for !WQ_UNBOUND the local CPU is selected.

This leads to different behavior depending on whether a work item scheduled
on an isolated CPU has a "delay" value of 0 or greater than 0:
    schedule_delayed_work(, 0);

This is handled by __queue_work(), which queues the work item on the
current local (isolated) CPU, while:

    schedule_delayed_work(, 1);

will move the timer to a housekeeping CPU and schedule the work there.

Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

=== Plan and future steps ===

This patchset is the first step of a refactoring needed to address the
aforementioned points; in the long term it will also have a positive
impact on cpu isolation, moving away from per-cpu workqueues in favor
of an unbound model.

These are the main steps:
1)  API refactoring (introduced by this patchset)
    -   Make the system wq names clearer and more uniform, both per-cpu
        and unbound, to avoid any possible confusion about what should
        be used.

    -   Introduction of WQ_PERCPU: this flag is the complement of WQ_UNBOUND;
        it is introduced in this patchset and applied to all callers that do
        not currently use WQ_UNBOUND.

        WQ_UNBOUND will be removed in a future release cycle.

        Most users don't need to be per-cpu because they don't have
        locality requirements; because of that, a future step will be to
        make "unbound" the default behavior.

2)  Check who really needs to be per-cpu
    -   Remove the WQ_PERCPU flag where it is not strictly required.

3)  Add a new API (prefer local cpu)
    -   There are users that don't require local execution, as mentioned
        above; even so, local execution can yield a performance gain.

        This new API will prefer local execution, without requiring it.

=== Introduced Changes by this patchset ===

1)  [P1] add system_percpu_wq and system_dfl_wq

    system_wq is a per-CPU workqueue, but its name does not make that clear.
    system_unbound_wq is to be used when locality is not required.

    Because of that, system_percpu_wq and system_dfl_wq have been
    introduced in order to replace, in the future, system_wq and
    system_unbound_wq.

2)  [P2] add new WQ_PERCPU flag

    This patch adds the new WQ_PERCPU flag to explicitly request per-cpu
    behavior. WQ_UNBOUND will be removed in a future release cycle.

3)  [P3] Doc change about WQ_PERCPU

    Added a short section about WQ_PERCPU and a Note under WQ_UNBOUND
    mentioning that it will be removed in the future.

---
Changes in v5:
-	workqueue(s) early init allocation
-	Doc fixes

Changes in v4:
-   Step back from the previous version: first add the new wq(s) and the new
    flag (WQ_PERCPU), addressing all the other changes later.

Changes in v3:
-   The introduction of the new wq(s) and the WQ_PERCPU flag has been moved
    into separate patches (1 for the wq(s) and 1 for WQ_PERCPU).
-   WQ_PERCPU is now added to all the alloc_workqueue() callers in separate
    patches, addressing a few subsystems first (fs, mm, net).

Changes in v2:
-   The introduction of the WQ_PERCPU change has been merged with the
    alloc_workqueue() patch that passes the WQ_PERCPU flag explicitly to
    every caller.
-   Two drivers in the code were not matched by Coccinelle; WQ_PERCPU was
    added there as well.
-   WQ_PERCPU added to __WQ_BH_ALLOWS.
-   queue_work() now prints a warning (pr_warn_once()) if a user is using the
    old wq, and redirects the wrong / old wq to the new one.
-   Changes to workqueue.rst about the WQ_PERCPU flag and a Note about the
    future of WQ_UNBOUND.


Marco Crivellari (3):
  Workqueue: add system_percpu_wq and system_dfl_wq
  Workqueue: add new WQ_PERCPU flag
  [Doc] Workqueue: add WQ_PERCPU

 Documentation/core-api/workqueue.rst |  6 ++++++
 include/linux/workqueue.h            |  9 ++++++---
 kernel/workqueue.c                   | 13 +++++++++----
 3 files changed, 21 insertions(+), 7 deletions(-)

-- 
2.49.0


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v5 1/3] Workqueue: add system_percpu_wq and system_dfl_wq
  2025-06-14 13:35 [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Marco Crivellari
@ 2025-06-14 13:35 ` Marco Crivellari
  2025-06-23 23:49   ` Hillf Danton
  2025-06-14 13:35 ` [PATCH v5 2/3] Workqueue: add new WQ_PERCPU flag Marco Crivellari
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Marco Crivellari @ 2025-06-14 13:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko

Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

system_wq is a per-CPU workqueue, yet nothing in its name indicates the
CPU affinity constraint, which is very often not required by users. Make
this clear by adding system_percpu_wq.

system_unbound_wq should be the default workqueue so as not to enforce
locality constraints for random work whenever it's not required.

Add system_dfl_wq to encourage its use when unbound execution is appropriate.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 include/linux/workqueue.h |  8 +++++---
 kernel/workqueue.c        | 13 +++++++++----
 2 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 6e30f275da77..502ec4a5e32c 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -427,7 +427,7 @@ enum wq_consts {
 /*
  * System-wide workqueues which are always present.
  *
- * system_wq is the one used by schedule[_delayed]_work[_on]().
+ * system_percpu_wq is the one used by schedule[_delayed]_work[_on]().
  * Multi-CPU multi-threaded.  There are users which expect relatively
  * short queue flush time.  Don't queue works which can run for too
  * long.
@@ -438,7 +438,7 @@ enum wq_consts {
  * system_long_wq is similar to system_wq but may host long running
  * works.  Queue flushing might take relatively long.
  *
- * system_unbound_wq is unbound workqueue.  Workers are not bound to
+ * system_dfl_wq is unbound workqueue.  Workers are not bound to
  * any specific CPU, not concurrency managed, and all queued works are
  * executed immediately as long as max_active limit is not reached and
  * resources are available.
@@ -455,10 +455,12 @@ enum wq_consts {
  * system_bh[_highpri]_wq are convenience interface to softirq. BH work items
  * are executed in the queueing CPU's BH context in the queueing order.
  */
-extern struct workqueue_struct *system_wq;
+extern struct workqueue_struct *system_wq; /* use system_percpu_wq, this will be removed */
+extern struct workqueue_struct *system_percpu_wq;
 extern struct workqueue_struct *system_highpri_wq;
 extern struct workqueue_struct *system_long_wq;
 extern struct workqueue_struct *system_unbound_wq;
+extern struct workqueue_struct *system_dfl_wq;
 extern struct workqueue_struct *system_freezable_wq;
 extern struct workqueue_struct *system_power_efficient_wq;
 extern struct workqueue_struct *system_freezable_power_efficient_wq;
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 97f37b5bae66..9047f658ccf2 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -505,12 +505,16 @@ static struct kthread_worker *pwq_release_worker __ro_after_init;
 
 struct workqueue_struct *system_wq __ro_after_init;
 EXPORT_SYMBOL(system_wq);
+struct workqueue_struct *system_percpu_wq __ro_after_init;
+EXPORT_SYMBOL(system_percpu_wq);
 struct workqueue_struct *system_highpri_wq __ro_after_init;
 EXPORT_SYMBOL_GPL(system_highpri_wq);
 struct workqueue_struct *system_long_wq __ro_after_init;
 EXPORT_SYMBOL_GPL(system_long_wq);
 struct workqueue_struct *system_unbound_wq __ro_after_init;
 EXPORT_SYMBOL_GPL(system_unbound_wq);
+struct workqueue_struct *system_dfl_wq __ro_after_init;
+EXPORT_SYMBOL_GPL(system_dfl_wq);
 struct workqueue_struct *system_freezable_wq __ro_after_init;
 EXPORT_SYMBOL_GPL(system_freezable_wq);
 struct workqueue_struct *system_power_efficient_wq __ro_after_init;
@@ -7829,10 +7833,11 @@ void __init workqueue_init_early(void)
 	}
 
 	system_wq = alloc_workqueue("events", 0, 0);
+	system_percpu_wq = alloc_workqueue("events", 0, 0);
 	system_highpri_wq = alloc_workqueue("events_highpri", WQ_HIGHPRI, 0);
 	system_long_wq = alloc_workqueue("events_long", 0, 0);
-	system_unbound_wq = alloc_workqueue("events_unbound", WQ_UNBOUND,
-					    WQ_MAX_ACTIVE);
+	system_unbound_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);
+	system_dfl_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);
 	system_freezable_wq = alloc_workqueue("events_freezable",
 					      WQ_FREEZABLE, 0);
 	system_power_efficient_wq = alloc_workqueue("events_power_efficient",
@@ -7843,8 +7848,8 @@ void __init workqueue_init_early(void)
 	system_bh_wq = alloc_workqueue("events_bh", WQ_BH, 0);
 	system_bh_highpri_wq = alloc_workqueue("events_bh_highpri",
 					       WQ_BH | WQ_HIGHPRI, 0);
-	BUG_ON(!system_wq || !system_highpri_wq || !system_long_wq ||
-	       !system_unbound_wq || !system_freezable_wq ||
+	BUG_ON(!system_wq || !system_percpu_wq|| !system_highpri_wq || !system_long_wq ||
+	       !system_unbound_wq || !system_freezable_wq || !system_dfl_wq ||
 	       !system_power_efficient_wq ||
 	       !system_freezable_power_efficient_wq ||
 	       !system_bh_wq || !system_bh_highpri_wq);
-- 
2.49.0



* [PATCH v5 2/3] Workqueue: add new WQ_PERCPU flag
  2025-06-14 13:35 [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Marco Crivellari
  2025-06-14 13:35 ` [PATCH v5 1/3] Workqueue: add system_percpu_wq and system_dfl_wq Marco Crivellari
@ 2025-06-14 13:35 ` Marco Crivellari
  2025-06-14 13:35 ` [PATCH v5 3/3] [Doc] Workqueue: add WQ_PERCPU Marco Crivellari
  2025-06-16 18:35 ` [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Tejun Heo
  3 siblings, 0 replies; 11+ messages in thread
From: Marco Crivellari @ 2025-06-14 13:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko

Currently, if a user enqueues a work item using schedule_delayed_work(), the
wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a cpu is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

This patch adds a new WQ_PERCPU flag to explicitly request per-CPU
behavior. Both flags will coexist for one release cycle to allow callers
to transition their calls.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 include/linux/workqueue.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
index 502ec4a5e32c..6347b9b3e472 100644
--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -401,6 +401,7 @@ enum wq_flags {
 	 * http://thread.gmane.org/gmane.linux.kernel/1480396
 	 */
 	WQ_POWER_EFFICIENT	= 1 << 7,
+	WQ_PERCPU		= 1 << 8, /* bound to a specific cpu */
 
 	__WQ_DESTROYING		= 1 << 15, /* internal: workqueue is destroying */
 	__WQ_DRAINING		= 1 << 16, /* internal: workqueue is draining */
-- 
2.49.0



* [PATCH v5 3/3] [Doc] Workqueue: add WQ_PERCPU
  2025-06-14 13:35 [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Marco Crivellari
  2025-06-14 13:35 ` [PATCH v5 1/3] Workqueue: add system_percpu_wq and system_dfl_wq Marco Crivellari
  2025-06-14 13:35 ` [PATCH v5 2/3] Workqueue: add new WQ_PERCPU flag Marco Crivellari
@ 2025-06-14 13:35 ` Marco Crivellari
  2025-06-16 18:35 ` [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Tejun Heo
  3 siblings, 0 replies; 11+ messages in thread
From: Marco Crivellari @ 2025-06-14 13:35 UTC (permalink / raw)
  To: linux-kernel
  Cc: Tejun Heo, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko

The workqueue documentation is updated with a description of the newly
added flag, WQ_PERCPU.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 Documentation/core-api/workqueue.rst | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Documentation/core-api/workqueue.rst b/Documentation/core-api/workqueue.rst
index e295835fc116..165ca73e8351 100644
--- a/Documentation/core-api/workqueue.rst
+++ b/Documentation/core-api/workqueue.rst
@@ -183,6 +183,12 @@ resources, scheduled and executed.
   BH work items cannot sleep. All other features such as delayed queueing,
   flushing and canceling are supported.
 
+``WQ_PERCPU``
+  Work items queued to a per-cpu wq are bound to a specific CPU.
+  This flag is the right choice when cpu locality is important.
+
+  This flag is the complement of ``WQ_UNBOUND``.
+
 ``WQ_UNBOUND``
   Work items queued to an unbound wq are served by the special
   worker-pools which host workers which are not bound to any
-- 
2.49.0



* Re: [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq
  2025-06-14 13:35 [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Marco Crivellari
                   ` (2 preceding siblings ...)
  2025-06-14 13:35 ` [PATCH v5 3/3] [Doc] Workqueue: add WQ_PERCPU Marco Crivellari
@ 2025-06-16 18:35 ` Tejun Heo
  2025-06-17 13:08   ` Frederic Weisbecker
  3 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2025-06-16 18:35 UTC (permalink / raw)
  To: Marco Crivellari
  Cc: linux-kernel, Lai Jiangshan, Thomas Gleixner, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Michal Hocko

On Sat, Jun 14, 2025 at 03:35:28PM +0200, Marco Crivellari wrote:
> Marco Crivellari (3):
>   Workqueue: add system_percpu_wq and system_dfl_wq
>   Workqueue: add new WQ_PERCPU flag
>   [Doc] Workqueue: add WQ_PERCPU

Applied 1-3 to wq/for-6.17. I applied as-is but the third patch didn't need
to be separate. Maybe something to consider for future.

Thanks.

-- 
tejun


* Re: [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq
  2025-06-16 18:35 ` [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Tejun Heo
@ 2025-06-17 13:08   ` Frederic Weisbecker
  2025-06-17 18:14     ` Tejun Heo
  0 siblings, 1 reply; 11+ messages in thread
From: Frederic Weisbecker @ 2025-06-17 13:08 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Marco Crivellari, linux-kernel, Lai Jiangshan, Thomas Gleixner,
	Sebastian Andrzej Siewior, Michal Hocko

Le Mon, Jun 16, 2025 at 08:35:32AM -1000, Tejun Heo a écrit :
> On Sat, Jun 14, 2025 at 03:35:28PM +0200, Marco Crivellari wrote:
> > Marco Crivellari (3):
> >   Workqueue: add system_percpu_wq and system_dfl_wq
> >   Workqueue: add new WQ_PERCPU flag
> >   [Doc] Workqueue: add WQ_PERCPU
> 
> Applied 1-3 to wq/for-6.17. I applied as-is but the third patch didn't need
> to be separate. Maybe something to consider for future.

If this is for the next merge window, I guess the easiest is to wait for it
before sending patches to other subsystems to convert them?

I guess we could shortcut that with providing a branch that other subsystems
could pull from but that doesn't look convenient...

Thanks.

> 
> Thanks.
> 
> -- 
> tejun

-- 
Frederic Weisbecker
SUSE Labs


* Re: [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq
  2025-06-17 13:08   ` Frederic Weisbecker
@ 2025-06-17 18:14     ` Tejun Heo
  2025-06-17 18:54       ` Tejun Heo
  0 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2025-06-17 18:14 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Marco Crivellari, linux-kernel, Lai Jiangshan, Thomas Gleixner,
	Sebastian Andrzej Siewior, Michal Hocko

On Tue, Jun 17, 2025 at 03:08:30PM +0200, Frederic Weisbecker wrote:
> Le Mon, Jun 16, 2025 at 08:35:32AM -1000, Tejun Heo a écrit :
> > On Sat, Jun 14, 2025 at 03:35:28PM +0200, Marco Crivellari wrote:
> > > Marco Crivellari (3):
> > >   Workqueue: add system_percpu_wq and system_dfl_wq
> > >   Workqueue: add new WQ_PERCPU flag
> > >   [Doc] Workqueue: add WQ_PERCPU
> > 
> > Applied 1-3 to wq/for-6.17. I applied as-is but the third patch didn't need
> > to be separate. Maybe something to consider for future.
> 
> If this is for the next merge window, I guess the easiest is to wait for it
> before sending patches to other subsystems to convert them?
> 
> I guess we could shortcut that with providing a branch that other subsystems
> could pull from but that doesn't look convenient...

Oh yeah, I said I was gonna do that and promptly forgot. I'll set up a
separate branch based on v6.15.

Thanks.

-- 
tejun


* Re: [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq
  2025-06-17 18:14     ` Tejun Heo
@ 2025-06-17 18:54       ` Tejun Heo
  2025-06-20 16:13         ` Marco Crivellari
  0 siblings, 1 reply; 11+ messages in thread
From: Tejun Heo @ 2025-06-17 18:54 UTC (permalink / raw)
  To: Frederic Weisbecker
  Cc: Marco Crivellari, linux-kernel, Lai Jiangshan, Thomas Gleixner,
	Sebastian Andrzej Siewior, Michal Hocko

On Tue, Jun 17, 2025 at 08:14:48AM -1000, Tejun Heo wrote:
> On Tue, Jun 17, 2025 at 03:08:30PM +0200, Frederic Weisbecker wrote:
> > Le Mon, Jun 16, 2025 at 08:35:32AM -1000, Tejun Heo a écrit :
> > > On Sat, Jun 14, 2025 at 03:35:28PM +0200, Marco Crivellari wrote:
> > > > Marco Crivellari (3):
> > > >   Workqueue: add system_percpu_wq and system_dfl_wq
> > > >   Workqueue: add new WQ_PERCPU flag
> > > >   [Doc] Workqueue: add WQ_PERCPU
> > > 
> > > Applied 1-3 to wq/for-6.17. I applied as-is but the third patch didn't need
> > > to be separate. Maybe something to consider for future.
> > 
> > If this is for the next merge window, I guess the easiest is to wait for it
> > before sending patches to other subsystems to convert them?
> > 
> > I guess we could shortcut that with providing a branch that other subsystems
> > could pull from but that doesn't look convenient...
> 
> Oh yeah, I said I was gonna do that and promptly forgot. I'll set up a
> separate branch based on v6.15.

Okay, I folded the doc patch into the second one and applied them to the
following branch.

 git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git WQ_PERCPU

This is v6.15 + only the two patches and should be easy to pull into any
devel branch.

Thanks.

-- 
tejun


* Re: [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq
  2025-06-17 18:54       ` Tejun Heo
@ 2025-06-20 16:13         ` Marco Crivellari
  2025-06-23 19:23           ` Tejun Heo
  0 siblings, 1 reply; 11+ messages in thread
From: Marco Crivellari @ 2025-06-20 16:13 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Frederic Weisbecker, linux-kernel, Lai Jiangshan, Thomas Gleixner,
	Sebastian Andrzej Siewior, Michal Hocko

Hi,

Just a quick question Tejun: when do you expect to receive the other patches?
Should I wait till the next rc1?

I just want to check the work again, but they are ready.

Thanks!


On Tue, Jun 17, 2025 at 8:54 PM Tejun Heo <tj@kernel.org> wrote:
>
> On Tue, Jun 17, 2025 at 08:14:48AM -1000, Tejun Heo wrote:
> > On Tue, Jun 17, 2025 at 03:08:30PM +0200, Frederic Weisbecker wrote:
> > > Le Mon, Jun 16, 2025 at 08:35:32AM -1000, Tejun Heo a écrit :
> > > > On Sat, Jun 14, 2025 at 03:35:28PM +0200, Marco Crivellari wrote:
> > > > > Marco Crivellari (3):
> > > > >   Workqueue: add system_percpu_wq and system_dfl_wq
> > > > >   Workqueue: add new WQ_PERCPU flag
> > > > >   [Doc] Workqueue: add WQ_PERCPU
> > > >
> > > > Applied 1-3 to wq/for-6.17. I applied as-is but the third patch didn't need
> > > > to be separate. Maybe something to consider for future.
> > >
> > > If this is for the next merge window, I guess the easiest is to wait for it
> > > before sending patches to other subsystems to convert them?
> > >
> > > I guess we could shortcut that with providing a branch that other subsystems
> > > could pull from but that doesn't look convenient...
> >
> > Oh yeah, I said I was gonna do that and promptly forgot. I'll set up a
> > separate branch based on v6.15.
>
> Okay, I folded the doc patch into the second one and applied them to the
> following branch.
>
>  git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git WQ_PERCPU
>
> This is v6.15 + only the two patches and should be easy to pull into any
> devel branch.
>
> Thanks.
>
> --
> tejun



-- 

Marco Crivellari
L3 Support Engineer, Technology & Product
marco.crivellari@suse.com


* Re: [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq
  2025-06-20 16:13         ` Marco Crivellari
@ 2025-06-23 19:23           ` Tejun Heo
  0 siblings, 0 replies; 11+ messages in thread
From: Tejun Heo @ 2025-06-23 19:23 UTC (permalink / raw)
  To: Marco Crivellari
  Cc: Frederic Weisbecker, linux-kernel, Lai Jiangshan, Thomas Gleixner,
	Sebastian Andrzej Siewior, Michal Hocko

Hello,

On Fri, Jun 20, 2025 at 06:13:09PM +0200, Marco Crivellari wrote:
> Just a quick question Tejun: when do you expect to receive the other patches?
> Should I wait till the next rc1?
> 
> I just want to check the work again, but they are ready.

So, I can route the patches through the wq tree but I shouldn't do so unless
subsystem maintainers want to do so for the specific subsystem. Waiting for
rc1 is an option but not the only one. You can send out subsystem-specific
patchdes to the subsystem maintainers and me cc'd with:

- Explanation on what's going on and why.

- What needs to happen if the subsystem wants to route the patch (pull the
  wq branch with the prep changes).

- Offer the option to route the changes through a wq branch.

There are no hard rules on how to do this but it's all about making
logistics understandable and easy for the involved subsystems.

Thanks.

-- 
tejun


* Re: [PATCH v5 1/3] Workqueue: add system_percpu_wq and system_dfl_wq
  2025-06-14 13:35 ` [PATCH v5 1/3] Workqueue: add system_percpu_wq and system_dfl_wq Marco Crivellari
@ 2025-06-23 23:49   ` Hillf Danton
  0 siblings, 0 replies; 11+ messages in thread
From: Hillf Danton @ 2025-06-23 23:49 UTC (permalink / raw)
  To: Marco Crivellari
  Cc: linux-kernel, Tejun Heo, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Michal Hocko

On Sat, 14 Jun 2025 15:35:29 +0200 Marco Crivellari wrote:
> @@ -7829,10 +7833,11 @@ void __init workqueue_init_early(void)
>  	}
>  
>  	system_wq = alloc_workqueue("events", 0, 0);
> +	system_percpu_wq = alloc_workqueue("events", 0, 0);

Different workqueue names are prefered until system_wq is cut off.

>  	system_highpri_wq = alloc_workqueue("events_highpri", WQ_HIGHPRI, 0);
>  	system_long_wq = alloc_workqueue("events_long", 0, 0);
> -	system_unbound_wq = alloc_workqueue("events_unbound", WQ_UNBOUND,
> -					    WQ_MAX_ACTIVE);
> +	system_unbound_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);
> +	system_dfl_wq = alloc_workqueue("events_unbound", WQ_UNBOUND, WQ_MAX_ACTIVE);

Ditto


end of thread, other threads:[~2025-06-23 23:49 UTC | newest]

Thread overview: 11+ messages
2025-06-14 13:35 [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Marco Crivellari
2025-06-14 13:35 ` [PATCH v5 1/3] Workqueue: add system_percpu_wq and system_dfl_wq Marco Crivellari
2025-06-23 23:49   ` Hillf Danton
2025-06-14 13:35 ` [PATCH v5 2/3] Workqueue: add new WQ_PERCPU flag Marco Crivellari
2025-06-14 13:35 ` [PATCH v5 3/3] [Doc] Workqueue: add WQ_PERCPU Marco Crivellari
2025-06-16 18:35 ` [PATCH v5 0/3] Workqueue: add WQ_PERCPU, system_dfl_wq and system_percpu_wq Tejun Heo
2025-06-17 13:08   ` Frederic Weisbecker
2025-06-17 18:14     ` Tejun Heo
2025-06-17 18:54       ` Tejun Heo
2025-06-20 16:13         ` Marco Crivellari
2025-06-23 19:23           ` Tejun Heo
