public inbox for linux-rdma@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue
@ 2025-11-01 16:31 Marco Crivellari
  2025-11-01 16:31 ` [PATCH 1/5] RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq Marco Crivellari
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Marco Crivellari @ 2025-11-01 16:31 UTC (permalink / raw)
  To: linux-kernel, linux-rdma
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Jason Gunthorpe, Leon Romanovsky, Dennis Dalessandro,
	Yishai Hadas

Hi,

=== Current situation: problems ===

Let's consider a nohz_full system with isolated CPUs: wq_unbound_cpumask is
set to the housekeeping CPUs, while for !WQ_UNBOUND workqueues the local CPU
is selected.

This leads to different behavior when a work item is scheduled on an
isolated CPU, depending on whether the "delay" value is 0 or greater than 0:
        schedule_delayed_work(, 0);

This is handled by __queue_work(), which queues the work item on the
current local (isolated) CPU, while:

        schedule_delayed_work(, 1);

will move the timer to a housekeeping CPU and schedule the work there.

Currently, when a user enqueues a work item with schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.
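The asymmetry described above can be seen in the wrapper definitions
themselves. The following is a simplified sketch of the in-kernel helpers
(see include/linux/workqueue.h), not a standalone program:

```c
/* schedule_work() hardcodes the per-CPU system_wq ... */
static inline bool schedule_work(struct work_struct *work)
{
	return queue_work(system_wq, work);
}

/* ... while queue_work() enqueues with WORK_CPU_UNBOUND, i.e.
 * "no specific CPU requested", regardless of which wq is passed. */
static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}
```

So the per-CPU default comes from the choice of wq, while the CPU hint is a
separate parameter, which is why the two naming schemes read inconsistently.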

=== Recent changes to the WQ API ===

The following commits introduced the recent changes to the Workqueue API:

- commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
- commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")

The old workqueues will be removed in a future release cycle.

=== Introduced Changes by this series ===

1) [P 1]  Replace uses of system_unbound_wq

    system_unbound_wq is to be used when locality is not required.

    Because of that, system_unbound_wq has been replaced with
    system_dfl_wq, to make sure it is the default choice when locality
    is not important.

    system_dfl_wq has the same behavior as the old system_unbound_wq.

2) [P 2-5] WQ_PERCPU added to alloc_workqueue()

    This change adds a new WQ_PERCPU flag to explicitly request
    alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified.
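For alloc_workqueue() callers, the migration amounts to stating the
previously implicit default explicitly (a sketch; the wq name and variable
are hypothetical):

```c
/* Before this series: passing no type flag silently meant per-CPU. */
wq = alloc_workqueue("example_wq", 0, 0);

/* After: per-CPU behavior is requested explicitly. Once all callers are
 * converted, WQ_UNBOUND can be dropped and unbound becomes the default. */
wq = alloc_workqueue("example_wq", WQ_PERCPU, 0);
```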


Thanks!


Marco Crivellari (5):
  RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with
    system_dfl_wq
  RDMA/core: WQ_PERCPU added to alloc_workqueue users
  hfi1: WQ_PERCPU added to alloc_workqueue users
  RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
  IB/rdmavt: WQ_PERCPU added to alloc_workqueue users

 drivers/infiniband/core/cm.c      | 2 +-
 drivers/infiniband/core/device.c  | 4 ++--
 drivers/infiniband/core/ucma.c    | 2 +-
 drivers/infiniband/hw/hfi1/init.c | 4 ++--
 drivers/infiniband/hw/hfi1/opfn.c | 4 ++--
 drivers/infiniband/hw/mlx4/cm.c   | 2 +-
 drivers/infiniband/hw/mlx5/odp.c  | 4 ++--
 drivers/infiniband/sw/rdmavt/cq.c | 3 ++-
 8 files changed, 13 insertions(+), 12 deletions(-)

-- 
2.51.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/5] RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq
  2025-11-01 16:31 [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
@ 2025-11-01 16:31 ` Marco Crivellari
  2025-11-01 16:31 ` [PATCH 2/5] RDMA/core: WQ_PERCPU added to alloc_workqueue users Marco Crivellari
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Marco Crivellari @ 2025-11-01 16:31 UTC (permalink / raw)
  To: linux-kernel, linux-rdma
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Jason Gunthorpe, Leon Romanovsky

Currently, when a user enqueues a work item with schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.

This lack of consistency cannot be addressed without refactoring the API.

system_unbound_wq should be the default workqueue, so that locality
constraints are not enforced on work items that do not require them.

system_dfl_wq has been added to encourage its use whenever unbound work is
intended.

The old system_unbound_wq will be kept for a few release cycles.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 drivers/infiniband/core/ucma.c   | 2 +-
 drivers/infiniband/hw/mlx5/odp.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
index f86ece701db6..ec3be65a2b88 100644
--- a/drivers/infiniband/core/ucma.c
+++ b/drivers/infiniband/core/ucma.c
@@ -366,7 +366,7 @@ static int ucma_event_handler(struct rdma_cm_id *cm_id,
 	if (event->event == RDMA_CM_EVENT_DEVICE_REMOVAL) {
 		xa_lock(&ctx_table);
 		if (xa_load(&ctx_table, ctx->id) == ctx)
-			queue_work(system_unbound_wq, &ctx->close_work);
+			queue_work(system_dfl_wq, &ctx->close_work);
 		xa_unlock(&ctx_table);
 	}
 	return 0;
diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
index 0e8ae85af5a6..6441abdf1f3b 100644
--- a/drivers/infiniband/hw/mlx5/odp.c
+++ b/drivers/infiniband/hw/mlx5/odp.c
@@ -265,7 +265,7 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
 
 	/* Freeing a MR is a sleeping operation, so bounce to a work queue */
 	INIT_WORK(&mr->odp_destroy.work, free_implicit_child_mr_work);
-	queue_work(system_unbound_wq, &mr->odp_destroy.work);
+	queue_work(system_dfl_wq, &mr->odp_destroy.work);
 }
 
 static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
@@ -2093,6 +2093,6 @@ int mlx5_ib_advise_mr_prefetch(struct ib_pd *pd,
 		destroy_prefetch_work(work);
 		return rc;
 	}
-	queue_work(system_unbound_wq, &work->work);
+	queue_work(system_dfl_wq, &work->work);
 	return 0;
 }
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 2/5] RDMA/core: WQ_PERCPU added to alloc_workqueue users
  2025-11-01 16:31 [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
  2025-11-01 16:31 ` [PATCH 1/5] RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq Marco Crivellari
@ 2025-11-01 16:31 ` Marco Crivellari
  2025-11-01 16:31 ` [PATCH 3/5] hfi1: " Marco Crivellari
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Marco Crivellari @ 2025-11-01 16:31 UTC (permalink / raw)
  To: linux-kernel, linux-rdma
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Jason Gunthorpe, Leon Romanovsky

Currently, when a user enqueues a work item with schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This change adds a new WQ_PERCPU flag to explicitly request
alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 drivers/infiniband/core/cm.c     | 2 +-
 drivers/infiniband/core/device.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
index 01bede8ba105..47d0022cadac 100644
--- a/drivers/infiniband/core/cm.c
+++ b/drivers/infiniband/core/cm.c
@@ -4518,7 +4518,7 @@ static int __init ib_cm_init(void)
 	get_random_bytes(&cm.random_id_operand, sizeof cm.random_id_operand);
 	INIT_LIST_HEAD(&cm.timewait_list);
 
-	cm.wq = alloc_workqueue("ib_cm", 0, 1);
+	cm.wq = alloc_workqueue("ib_cm", WQ_PERCPU, 1);
 	if (!cm.wq) {
 		ret = -ENOMEM;
 		goto error2;
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index b4f3c835844a..13e8a1714bbd 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -3021,7 +3021,7 @@ static int __init ib_core_init(void)
 {
 	int ret = -ENOMEM;
 
-	ib_wq = alloc_workqueue("infiniband", 0, 0);
+	ib_wq = alloc_workqueue("infiniband", WQ_PERCPU, 0);
 	if (!ib_wq)
 		return -ENOMEM;
 
@@ -3031,7 +3031,7 @@ static int __init ib_core_init(void)
 		goto err;
 
 	ib_comp_wq = alloc_workqueue("ib-comp-wq",
-			WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS, 0);
+			WQ_HIGHPRI | WQ_MEM_RECLAIM | WQ_SYSFS | WQ_PERCPU, 0);
 	if (!ib_comp_wq)
 		goto err_unbound;
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 3/5] hfi1: WQ_PERCPU added to alloc_workqueue users
  2025-11-01 16:31 [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
  2025-11-01 16:31 ` [PATCH 1/5] RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq Marco Crivellari
  2025-11-01 16:31 ` [PATCH 2/5] RDMA/core: WQ_PERCPU added to alloc_workqueue users Marco Crivellari
@ 2025-11-01 16:31 ` Marco Crivellari
  2025-11-01 16:31 ` [PATCH 4/5] RDMA/mlx4: " Marco Crivellari
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Marco Crivellari @ 2025-11-01 16:31 UTC (permalink / raw)
  To: linux-kernel, linux-rdma
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Jason Gunthorpe, Leon Romanovsky, Dennis Dalessandro

Currently, when a user enqueues a work item with schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This change adds a new WQ_PERCPU flag to explicitly request
alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

CC: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 drivers/infiniband/hw/hfi1/init.c | 4 ++--
 drivers/infiniband/hw/hfi1/opfn.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index b35f92e7d865..e4aef102dac0 100644
--- a/drivers/infiniband/hw/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -745,8 +745,8 @@ static int create_workqueues(struct hfi1_devdata *dd)
 			ppd->hfi1_wq =
 				alloc_workqueue(
 				    "hfi%d_%d",
-				    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
-				    WQ_MEM_RECLAIM,
+				    WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM |
+				    WQ_PERCPU,
 				    HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES,
 				    dd->unit, pidx);
 			if (!ppd->hfi1_wq)
diff --git a/drivers/infiniband/hw/hfi1/opfn.c b/drivers/infiniband/hw/hfi1/opfn.c
index 370a5a8eaa71..6e0e3458d202 100644
--- a/drivers/infiniband/hw/hfi1/opfn.c
+++ b/drivers/infiniband/hw/hfi1/opfn.c
@@ -305,8 +305,8 @@ void opfn_trigger_conn_request(struct rvt_qp *qp, u32 bth1)
 int opfn_init(void)
 {
 	opfn_wq = alloc_workqueue("hfi_opfn",
-				  WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE |
-				  WQ_MEM_RECLAIM,
+				  WQ_SYSFS | WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_MEM_RECLAIM |
+				  WQ_PERCPU,
 				  HFI1_MAX_ACTIVE_WORKQUEUE_ENTRIES);
 	if (!opfn_wq)
 		return -ENOMEM;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 4/5] RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
  2025-11-01 16:31 [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
                   ` (2 preceding siblings ...)
  2025-11-01 16:31 ` [PATCH 3/5] hfi1: " Marco Crivellari
@ 2025-11-01 16:31 ` Marco Crivellari
  2025-11-01 16:31 ` [PATCH 5/5] IB/rdmavt: " Marco Crivellari
  2025-12-02 13:22 ` [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
  5 siblings, 0 replies; 9+ messages in thread
From: Marco Crivellari @ 2025-11-01 16:31 UTC (permalink / raw)
  To: linux-kernel, linux-rdma
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Jason Gunthorpe, Leon Romanovsky, Yishai Hadas

Currently, when a user enqueues a work item with schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This change adds a new WQ_PERCPU flag to explicitly request
alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

CC: Yishai Hadas <yishaih@nvidia.com>
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 drivers/infiniband/hw/mlx4/cm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/hw/mlx4/cm.c b/drivers/infiniband/hw/mlx4/cm.c
index 12b481d138cf..03aacd526860 100644
--- a/drivers/infiniband/hw/mlx4/cm.c
+++ b/drivers/infiniband/hw/mlx4/cm.c
@@ -591,7 +591,7 @@ void mlx4_ib_cm_paravirt_clean(struct mlx4_ib_dev *dev, int slave)
 
 int mlx4_ib_cm_init(void)
 {
-	cm_wq = alloc_workqueue("mlx4_ib_cm", 0, 0);
+	cm_wq = alloc_workqueue("mlx4_ib_cm", WQ_PERCPU, 0);
 	if (!cm_wq)
 		return -ENOMEM;
 
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* [PATCH 5/5] IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
  2025-11-01 16:31 [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
                   ` (3 preceding siblings ...)
  2025-11-01 16:31 ` [PATCH 4/5] RDMA/mlx4: " Marco Crivellari
@ 2025-11-01 16:31 ` Marco Crivellari
  2025-12-02 13:22 ` [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
  5 siblings, 0 replies; 9+ messages in thread
From: Marco Crivellari @ 2025-11-01 16:31 UTC (permalink / raw)
  To: linux-kernel, linux-rdma
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
	Jason Gunthorpe, Leon Romanovsky, Dennis Dalessandro

Currently, when a user enqueues a work item with schedule_delayed_work(), the
wq used is "system_wq" (a per-CPU wq), while queue_delayed_work() uses
WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
schedule_work(), which uses system_wq, and queue_work(), which again makes
use of WORK_CPU_UNBOUND.
This lack of consistency cannot be addressed without refactoring the API.

alloc_workqueue() treats all queues as per-CPU by default, while unbound
workqueues must opt-in via WQ_UNBOUND.

This default is suboptimal: most workloads benefit from unbound queues,
allowing the scheduler to place worker threads where they’re needed and
reducing noise when CPUs are isolated.

This change adds a new WQ_PERCPU flag to explicitly request
alloc_workqueue() to be per-cpu when WQ_UNBOUND has not been specified.

With the introduction of the WQ_PERCPU flag (equivalent to !WQ_UNBOUND),
any alloc_workqueue() caller that doesn’t explicitly specify WQ_UNBOUND
must now use WQ_PERCPU.

Once migration is complete, WQ_UNBOUND can be removed and unbound will
become the implicit default.

CC: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
 drivers/infiniband/sw/rdmavt/cq.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
index 0ca2743f1075..e7835ca70e2b 100644
--- a/drivers/infiniband/sw/rdmavt/cq.c
+++ b/drivers/infiniband/sw/rdmavt/cq.c
@@ -518,7 +518,8 @@ int rvt_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *entry)
  */
 int rvt_driver_cq_init(void)
 {
-	comp_vector_wq = alloc_workqueue("%s", WQ_HIGHPRI | WQ_CPU_INTENSIVE,
+	comp_vector_wq = alloc_workqueue("%s",
+					 WQ_HIGHPRI | WQ_CPU_INTENSIVE | WQ_PERCPU,
 					 0, "rdmavt_cq");
 	if (!comp_vector_wq)
 		return -ENOMEM;
-- 
2.51.0


^ permalink raw reply related	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue
  2025-11-01 16:31 [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
                   ` (4 preceding siblings ...)
  2025-11-01 16:31 ` [PATCH 5/5] IB/rdmavt: " Marco Crivellari
@ 2025-12-02 13:22 ` Marco Crivellari
  2025-12-02 19:17   ` Jason Gunthorpe
  5 siblings, 1 reply; 9+ messages in thread
From: Marco Crivellari @ 2025-12-02 13:22 UTC (permalink / raw)
  To: linux-kernel, linux-rdma
  Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
	Sebastian Andrzej Siewior, Michal Hocko, Jason Gunthorpe,
	Leon Romanovsky, Dennis Dalessandro, Yishai Hadas

Hi,

On Sat, Nov 1, 2025 at 5:31 PM Marco Crivellari
<marco.crivellari@suse.com> wrote:
> Marco Crivellari (5):
>   RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with
>     system_dfl_wq
>   RDMA/core: WQ_PERCPU added to alloc_workqueue users
>   hfi1: WQ_PERCPU added to alloc_workqueue users
>   RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
>   IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
>
>  drivers/infiniband/core/cm.c      | 2 +-
>  drivers/infiniband/core/device.c  | 4 ++--
>  drivers/infiniband/core/ucma.c    | 2 +-
>  drivers/infiniband/hw/hfi1/init.c | 4 ++--
>  drivers/infiniband/hw/hfi1/opfn.c | 4 ++--
>  drivers/infiniband/hw/mlx4/cm.c   | 2 +-
>  drivers/infiniband/hw/mlx5/odp.c  | 4 ++--
>  drivers/infiniband/sw/rdmavt/cq.c | 3 ++-
>  8 files changed, 13 insertions(+), 12 deletions(-)

Gentle ping.

Thanks!

-- 

Marco Crivellari

L3 Support Engineer, Technology & Product

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue
  2025-12-02 13:22 ` [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
@ 2025-12-02 19:17   ` Jason Gunthorpe
  2025-12-03 13:46     ` Marco Crivellari
  0 siblings, 1 reply; 9+ messages in thread
From: Jason Gunthorpe @ 2025-12-02 19:17 UTC (permalink / raw)
  To: Marco Crivellari
  Cc: linux-kernel, linux-rdma, Tejun Heo, Lai Jiangshan,
	Frederic Weisbecker, Sebastian Andrzej Siewior, Michal Hocko,
	Leon Romanovsky, Dennis Dalessandro, Yishai Hadas

On Tue, Dec 02, 2025 at 02:22:55PM +0100, Marco Crivellari wrote:
> Hi,
> 
> On Sat, Nov 1, 2025 at 5:31 PM Marco Crivellari
> <marco.crivellari@suse.com> wrote:
> > Marco Crivellari (5):
> >   RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with
> >     system_dfl_wq
> >   RDMA/core: WQ_PERCPU added to alloc_workqueue users
> >   hfi1: WQ_PERCPU added to alloc_workqueue users
> >   RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
> >   IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
> >
> >  drivers/infiniband/core/cm.c      | 2 +-
> >  drivers/infiniband/core/device.c  | 4 ++--
> >  drivers/infiniband/core/ucma.c    | 2 +-
> >  drivers/infiniband/hw/hfi1/init.c | 4 ++--
> >  drivers/infiniband/hw/hfi1/opfn.c | 4 ++--
> >  drivers/infiniband/hw/mlx4/cm.c   | 2 +-
> >  drivers/infiniband/hw/mlx5/odp.c  | 4 ++--
> >  drivers/infiniband/sw/rdmavt/cq.c | 3 ++-
> >  8 files changed, 13 insertions(+), 12 deletions(-)
> 
> Gentle ping.

It looks like it was picked up, the thank you email must have become lost:

5c467151f6197d IB/isert: add WQ_PERCPU to alloc_workqueue users
65d21dee533755 IB/iser: add WQ_PERCPU to alloc_workqueue users
7196156b0ce3dc IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
5267feda50680c RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
5f93287fa9d0db hfi1: WQ_PERCPU added to alloc_workqueue users
e60c5583b661da RDMA/core: WQ_PERCPU added to alloc_workqueue users
f673fb3449fcd8 RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq

Jason

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue
  2025-12-02 19:17   ` Jason Gunthorpe
@ 2025-12-03 13:46     ` Marco Crivellari
  0 siblings, 0 replies; 9+ messages in thread
From: Marco Crivellari @ 2025-12-03 13:46 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: linux-kernel, linux-rdma, Tejun Heo, Lai Jiangshan,
	Frederic Weisbecker, Sebastian Andrzej Siewior, Michal Hocko,
	Leon Romanovsky, Dennis Dalessandro, Yishai Hadas

On Tue, Dec 2, 2025 at 8:17 PM Jason Gunthorpe <jgg@ziepe.ca> wrote:
> It looks like it was picked up, the thank you email must have become lost:
>
> 5c467151f6197d IB/isert: add WQ_PERCPU to alloc_workqueue users
> 65d21dee533755 IB/iser: add WQ_PERCPU to alloc_workqueue users
> 7196156b0ce3dc IB/rdmavt: WQ_PERCPU added to alloc_workqueue users
> 5267feda50680c RDMA/mlx4: WQ_PERCPU added to alloc_workqueue users
> 5f93287fa9d0db hfi1: WQ_PERCPU added to alloc_workqueue users
> e60c5583b661da RDMA/core: WQ_PERCPU added to alloc_workqueue users
> f673fb3449fcd8 RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq
>
> Jason

Aha, thank you and sorry for the useless email!

-- 

Marco Crivellari

L3 Support Engineer, Technology & Product

^ permalink raw reply	[flat|nested] 9+ messages in thread

end of thread, other threads:[~2025-12-03 13:46 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-11-01 16:31 [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
2025-11-01 16:31 ` [PATCH 1/5] RDMA/core: RDMA/mlx5: replace use of system_unbound_wq with system_dfl_wq Marco Crivellari
2025-11-01 16:31 ` [PATCH 2/5] RDMA/core: WQ_PERCPU added to alloc_workqueue users Marco Crivellari
2025-11-01 16:31 ` [PATCH 3/5] hfi1: " Marco Crivellari
2025-11-01 16:31 ` [PATCH 4/5] RDMA/mlx4: " Marco Crivellari
2025-11-01 16:31 ` [PATCH 5/5] IB/rdmavt: " Marco Crivellari
2025-12-02 13:22 ` [PATCH 0/5] replaced system_unbound_wq, added WQ_PERCPU to alloc_workqueue Marco Crivellari
2025-12-02 19:17   ` Jason Gunthorpe
2025-12-03 13:46     ` Marco Crivellari

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox