* [PATCH net-next 1/2] ipmr: Replace use of system_unbound_wq with system_dfl_wq
2026-05-11 13:47 [PATCH net-next 0/2] net: Replace system_unbound_wq with system_dfl_wq Marco Crivellari
@ 2026-05-11 13:47 ` Marco Crivellari
2026-05-11 13:47 ` [PATCH net-next 2/2] ipvs: " Marco Crivellari
1 sibling, 0 replies; 3+ messages in thread
From: Marco Crivellari @ 2026-05-11 13:47 UTC (permalink / raw)
To: linux-kernel, netdev
Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
Simon Horman, Eric Dumazet, David S . Miller, Jakub Kicinski,
Paolo Abeni, David Ahern, Ido Schimmel, Simon Horman
This patch continues the effort to refactor workqueue APIs, which began
with the changes that introduced new workqueues and a new alloc_workqueue flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
The point of the refactoring is to eventually make workqueues unbound by
default so that their workload placement is optimized by the scheduler.
Before that can happen, workqueue users must be converted to the new,
better-named workqueues, with no intended behaviour change:
system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq
This way, the obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.
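For reference, each conversion is a one-line drop-in at the call site. A
minimal sketch of the pattern (hypothetical module code for illustration,
not part of this patch; it assumes the system_dfl_wq symbol introduced by
commit 128ea9f6ccfb):

```c
/* Hypothetical illustration of the conversion pattern; not part of this
 * patch. Assumes system_dfl_wq from <linux/workqueue.h> as introduced by
 * commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq").
 */
#include <linux/workqueue.h>

static void my_work_fn(struct work_struct *work)
{
	/* deferred processing; free to run on any CPU */
}

static DECLARE_WORK(my_work, my_work_fn);

static void schedule_example(void)
{
	/* Before: explicitly unbound placement */
	/* queue_work(system_unbound_wq, &my_work); */

	/* After: same behaviour under the new name */
	queue_work(system_dfl_wq, &my_work);
}
```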
Cc: David Ahern <dsahern@kernel.org>
Cc: Ido Schimmel <idosch@nvidia.com>
Cc: Simon Horman <horms@kernel.org>
Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
net/ipv4/ipmr_base.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
index 3930d612c3de..867b24beded1 100644
--- a/net/ipv4/ipmr_base.c
+++ b/net/ipv4/ipmr_base.c
@@ -39,7 +39,7 @@ static void __mr_free_table(struct work_struct *work)
void mr_table_free(struct mr_table *mrt)
{
- queue_rcu_work(system_unbound_wq, &mrt->work);
+ queue_rcu_work(system_dfl_wq, &mrt->work);
}
struct mr_table *
--
2.54.0
* [PATCH net-next 2/2] ipvs: Replace use of system_unbound_wq with system_dfl_wq
2026-05-11 13:47 [PATCH net-next 0/2] net: Replace system_unbound_wq with system_dfl_wq Marco Crivellari
2026-05-11 13:47 ` [PATCH net-next 1/2] ipmr: Replace use of " Marco Crivellari
@ 2026-05-11 13:47 ` Marco Crivellari
1 sibling, 0 replies; 3+ messages in thread
From: Marco Crivellari @ 2026-05-11 13:47 UTC (permalink / raw)
To: linux-kernel, netdev
Cc: Tejun Heo, Lai Jiangshan, Frederic Weisbecker,
Sebastian Andrzej Siewior, Marco Crivellari, Michal Hocko,
Simon Horman, Eric Dumazet, David S . Miller, Jakub Kicinski,
Paolo Abeni, Julian Anastasov, Pablo Neira Ayuso,
Florian Westphal, Phil Sutter, lvs-devel, netfilter-devel,
coreteam
This patch continues the effort to refactor workqueue APIs, which began
with the changes that introduced new workqueues and a new alloc_workqueue flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
The point of the refactoring is to eventually make workqueues unbound by
default so that their workload placement is optimized by the scheduler.
Before that can happen, workqueue users must be converted to the new,
better-named workqueues, with no intended behaviour change:
system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq
This way, the obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.
Cc: Julian Anastasov <ja@ssi.bg>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Florian Westphal <fw@strlen.de>
Cc: Phil Sutter <phil@nwl.cc>
Cc: lvs-devel@vger.kernel.org
Cc: netfilter-devel@vger.kernel.org
Cc: coreteam@netfilter.org
Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
net/netfilter/ipvs/ip_vs_conn.c | 4 ++--
net/netfilter/ipvs/ip_vs_ctl.c | 10 +++++-----
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
index 9ea6b4fa78bf..2625c0379556 100644
--- a/net/netfilter/ipvs/ip_vs_conn.c
+++ b/net/netfilter/ipvs/ip_vs_conn.c
@@ -285,7 +285,7 @@ static inline int ip_vs_conn_hash(struct ip_vs_conn *cp)
/* Schedule resizing if load increases */
if (atomic_read(&ipvs->conn_count) > t->u_thresh &&
!test_and_set_bit(IP_VS_WORK_CONN_RESIZE, &ipvs->work_flags))
- mod_delayed_work(system_unbound_wq, &ipvs->conn_resize_work, 0);
+ mod_delayed_work(system_dfl_wq, &ipvs->conn_resize_work, 0);
return ret;
}
@@ -916,7 +916,7 @@ static void conn_resize_work_handler(struct work_struct *work)
out:
/* Monitor if we need to shrink table */
- queue_delayed_work(system_unbound_wq, &ipvs->conn_resize_work,
+ queue_delayed_work(system_dfl_wq, &ipvs->conn_resize_work,
more_work ? 1 : 2 * HZ);
}
diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
index c7c7f6a7a9f6..f8fe1c8981d8 100644
--- a/net/netfilter/ipvs/ip_vs_ctl.c
+++ b/net/netfilter/ipvs/ip_vs_ctl.c
@@ -800,7 +800,7 @@ static void svc_resize_work_handler(struct work_struct *work)
if (!READ_ONCE(ipvs->enable) || !more_work ||
test_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags))
return;
- queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work, 1);
+ queue_delayed_work(system_dfl_wq, &ipvs->svc_resize_work, 1);
}
static inline void
@@ -1833,7 +1833,7 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
/* Schedule resize work */
if (t && ip_vs_get_num_services(ipvs) > t->u_thresh &&
!test_and_set_bit(IP_VS_WORK_SVC_RESIZE, &ipvs->work_flags))
- queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work,
+ queue_delayed_work(system_dfl_wq, &ipvs->svc_resize_work,
1);
*svc_p = svc;
@@ -2078,7 +2078,7 @@ static int ip_vs_del_service(struct ip_vs_service *svc)
} else if (ns <= t->l_thresh &&
!test_and_set_bit(IP_VS_WORK_SVC_RESIZE,
&ipvs->work_flags)) {
- queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work,
+ queue_delayed_work(system_dfl_wq, &ipvs->svc_resize_work,
1);
}
return 0;
@@ -2511,7 +2511,7 @@ static int ipvs_proc_conn_lfactor(const struct ctl_table *table, int write,
} else {
WRITE_ONCE(*valp, val);
if (rcu_access_pointer(ipvs->conn_tab))
- mod_delayed_work(system_unbound_wq,
+ mod_delayed_work(system_dfl_wq,
&ipvs->conn_resize_work, 0);
}
}
@@ -2543,7 +2543,7 @@ static int ipvs_proc_svc_lfactor(const struct ctl_table *table, int write,
READ_ONCE(ipvs->enable) &&
!test_bit(IP_VS_WORK_SVC_NORESIZE,
&ipvs->work_flags))
- mod_delayed_work(system_unbound_wq,
+ mod_delayed_work(system_dfl_wq,
&ipvs->svc_resize_work, 0);
mutex_unlock(&ipvs->service_mutex);
}
--
2.54.0