From: Julian Anastasov <ja@ssi.bg>
To: Marco Crivellari <marco.crivellari@suse.com>
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
Tejun Heo <tj@kernel.org>, Lai Jiangshan <jiangshanlai@gmail.com>,
Frederic Weisbecker <frederic@kernel.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Michal Hocko <mhocko@suse.com>, Simon Horman <horms@verge.net.au>,
Eric Dumazet <edumazet@google.com>,
"David S . Miller" <davem@davemloft.net>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Pablo Neira Ayuso <pablo@netfilter.org>,
Florian Westphal <fw@strlen.de>, Phil Sutter <phil@nwl.cc>,
lvs-devel@vger.kernel.org, netfilter-devel@vger.kernel.org,
coreteam@netfilter.org
Subject: Re: [PATCH net-next 2/2] ipvs: Replace use of system_unbound_wq with system_dfl_wq
Date: Tue, 12 May 2026 07:22:26 +0300 (EEST)
Message-ID: <734b9aa0-3af4-819a-49fe-8bba7035856f@ssi.bg>
In-Reply-To: <20260511134744.277032-3-marco.crivellari@suse.com>
Hello,
On Mon, 11 May 2026, Marco Crivellari wrote:
> This patch continues the effort to refactor the workqueue APIs, which began
> with the changes introducing new workqueues and a new alloc_workqueue flag:
>
> commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
> commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
>
> The goal of the refactoring is to eventually make workqueues unbound by
> default, so that the scheduler can optimize workload placement.
>
> Before that can happen, workqueue users must be converted to the better-named
> new workqueues, with no intended behaviour change:
>
> system_wq -> system_percpu_wq
> system_unbound_wq -> system_dfl_wq
>
> This way the old obsolete workqueues (system_wq, system_unbound_wq) can be
> removed in the future.
>
> Cc: Julian Anastasov <ja@ssi.bg>
> Cc: Pablo Neira Ayuso <pablo@netfilter.org>
> Cc: Florian Westphal <fw@strlen.de>
> Cc: Phil Sutter <phil@nwl.cc>
> Cc: lvs-devel@vger.kernel.org
> Cc: netfilter-devel@vger.kernel.org
> Cc: coreteam@netfilter.org
> Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
	Sorry that this change was delayed, but there have been many
changes in IPVS over the last month. The last one that may delay this
patch is:

v3 of "ipvs: avoid possible loop in ip_vs_dst_event on resizing"
https://lore.kernel.org/lvs-devel/20260510104605.24218-1-ja@ssi.bg/T/#u

	We may have to wait for that change to reach net and net-next.
Also, we can reconsider which queue to use: these works resize hash
tables and call synchronize_rcu(), so should we switch to
system_dfl_long_wq if such a job is considered "long"?
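	If the resize job does count as "long", the switch would be a
one-line change per call site. A hedged sketch follows; note that
system_dfl_long_wq is only a suggestion in this thread, not a confirmed
API, and the handler body is abbreviated:

```c
#include <linux/workqueue.h>
#include <linux/rcupdate.h>

/* Simplified resize handler: synchronize_rcu() can sleep for one or
 * more grace periods, which is why a long-running queue is debatable.
 */
static void resize_work_handler(struct work_struct *work)
{
	/* ... rehash a chunk of the table ... */
	synchronize_rcu();	/* may block the worker for a while */
	/* ... requeue if more chunks remain ... */
}

static DECLARE_DELAYED_WORK(resize_work, resize_work_handler);

static void schedule_resize(void)
{
	/* Hypothetical: a long-running unbound queue instead of
	 * system_dfl_wq, if such a queue is the agreed destination.
	 */
	mod_delayed_work(system_dfl_long_wq, &resize_work, 0);
}
```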
> ---
> net/netfilter/ipvs/ip_vs_conn.c | 4 ++--
> net/netfilter/ipvs/ip_vs_ctl.c | 10 +++++-----
> 2 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
> index 9ea6b4fa78bf..2625c0379556 100644
> --- a/net/netfilter/ipvs/ip_vs_conn.c
> +++ b/net/netfilter/ipvs/ip_vs_conn.c
> @@ -285,7 +285,7 @@ static inline int ip_vs_conn_hash(struct ip_vs_conn *cp)
> /* Schedule resizing if load increases */
> if (atomic_read(&ipvs->conn_count) > t->u_thresh &&
> !test_and_set_bit(IP_VS_WORK_CONN_RESIZE, &ipvs->work_flags))
> - mod_delayed_work(system_unbound_wq, &ipvs->conn_resize_work, 0);
> + mod_delayed_work(system_dfl_wq, &ipvs->conn_resize_work, 0);
>
> return ret;
> }
> @@ -916,7 +916,7 @@ static void conn_resize_work_handler(struct work_struct *work)
>
> out:
> /* Monitor if we need to shrink table */
> - queue_delayed_work(system_unbound_wq, &ipvs->conn_resize_work,
> + queue_delayed_work(system_dfl_wq, &ipvs->conn_resize_work,
> more_work ? 1 : 2 * HZ);
> }
>
> diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
> index c7c7f6a7a9f6..f8fe1c8981d8 100644
> --- a/net/netfilter/ipvs/ip_vs_ctl.c
> +++ b/net/netfilter/ipvs/ip_vs_ctl.c
> @@ -800,7 +800,7 @@ static void svc_resize_work_handler(struct work_struct *work)
> if (!READ_ONCE(ipvs->enable) || !more_work ||
> test_bit(IP_VS_WORK_SVC_NORESIZE, &ipvs->work_flags))
> return;
> - queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work, 1);
> + queue_delayed_work(system_dfl_wq, &ipvs->svc_resize_work, 1);
> }
>
> static inline void
> @@ -1833,7 +1833,7 @@ ip_vs_add_service(struct netns_ipvs *ipvs, struct ip_vs_service_user_kern *u,
> /* Schedule resize work */
> if (t && ip_vs_get_num_services(ipvs) > t->u_thresh &&
> !test_and_set_bit(IP_VS_WORK_SVC_RESIZE, &ipvs->work_flags))
> - queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work,
> + queue_delayed_work(system_dfl_wq, &ipvs->svc_resize_work,
> 1);
>
> *svc_p = svc;
> @@ -2078,7 +2078,7 @@ static int ip_vs_del_service(struct ip_vs_service *svc)
> } else if (ns <= t->l_thresh &&
> !test_and_set_bit(IP_VS_WORK_SVC_RESIZE,
> &ipvs->work_flags)) {
> - queue_delayed_work(system_unbound_wq, &ipvs->svc_resize_work,
> + queue_delayed_work(system_dfl_wq, &ipvs->svc_resize_work,
> 1);
> }
> return 0;
> @@ -2511,7 +2511,7 @@ static int ipvs_proc_conn_lfactor(const struct ctl_table *table, int write,
> } else {
> WRITE_ONCE(*valp, val);
> if (rcu_access_pointer(ipvs->conn_tab))
> - mod_delayed_work(system_unbound_wq,
> + mod_delayed_work(system_dfl_wq,
> &ipvs->conn_resize_work, 0);
> }
> }
> @@ -2543,7 +2543,7 @@ static int ipvs_proc_svc_lfactor(const struct ctl_table *table, int write,
> READ_ONCE(ipvs->enable) &&
> !test_bit(IP_VS_WORK_SVC_NORESIZE,
> &ipvs->work_flags))
> - mod_delayed_work(system_unbound_wq,
> + mod_delayed_work(system_dfl_wq,
> &ipvs->svc_resize_work, 0);
> mutex_unlock(&ipvs->service_mutex);
> }
> --
> 2.54.0
Regards
--
Julian Anastasov <ja@ssi.bg>
2026-05-11 13:47 [PATCH net-next 0/2] net: Replace system_unbound_wq with system_dfl_wq Marco Crivellari
2026-05-11 13:47 ` [PATCH net-next 1/2] ipmr: Replace use of " Marco Crivellari
2026-05-11 13:47 ` [PATCH net-next 2/2] ipvs: " Marco Crivellari
2026-05-12 4:22 ` Julian Anastasov [this message]
2026-05-12 7:36 ` Marco Crivellari