From: Dave Jiang <dave.jiang@intel.com>
To: Marco Crivellari <marco.crivellari@suse.com>,
linux-kernel@vger.kernel.org, linux-cxl@vger.kernel.org
Cc: Tejun Heo <tj@kernel.org>, Lai Jiangshan <jiangshanlai@gmail.com>,
Frederic Weisbecker <frederic@kernel.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Michal Hocko <mhocko@suse.com>,
Davidlohr Bueso <dave@stgolabs.net>,
Jonathan Cameron <jonathan.cameron@huawei.com>,
Alison Schofield <alison.schofield@intel.com>,
Vishal Verma <vishal.l.verma@intel.com>,
Ira Weiny <ira.weiny@intel.com>,
Dan Williams <dan.j.williams@intel.com>
Subject: Re: [PATCH] cxl/pci: replace use of system_wq with system_percpu_wq
Date: Mon, 3 Nov 2025 16:44:14 -0700 [thread overview]
Message-ID: <ce2f5f34-8855-41eb-9f4e-6bdaaaae90b4@intel.com> (raw)
In-Reply-To: <20251030163839.307752-1-marco.crivellari@suse.com>
On 10/30/25 9:38 AM, Marco Crivellari wrote:
> Currently, if a user enqueues a work item using schedule_delayed_work(), the
> wq used is "system_wq" (a per-cpu wq), while queue_delayed_work() uses
> WORK_CPU_UNBOUND (used when a CPU is not specified). The same applies to
> schedule_work(), which uses system_wq, and queue_work(), which again makes
> use of WORK_CPU_UNBOUND.
>
> This lack of consistency cannot be addressed without refactoring the API.
>
> system_wq should be the per-cpu workqueue, yet nothing in its name makes
> that clear, so replace system_wq with system_percpu_wq.
>
> The old wq (system_wq) will be kept for a few release cycles.
>
> Suggested-by: Tejun Heo <tj@kernel.org>
> Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
Applied to cxl/next
952e9057e66c17a9718232664368ffdaca468f93
> ---
> drivers/cxl/pci.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
> index bd100ac31672..0be4e508affe 100644
> --- a/drivers/cxl/pci.c
> +++ b/drivers/cxl/pci.c
> @@ -136,7 +136,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
> if (opcode == CXL_MBOX_OP_SANITIZE) {
> mutex_lock(&cxl_mbox->mbox_mutex);
> if (mds->security.sanitize_node)
> - mod_delayed_work(system_wq, &mds->security.poll_dwork, 0);
> + mod_delayed_work(system_percpu_wq, &mds->security.poll_dwork, 0);
> mutex_unlock(&cxl_mbox->mbox_mutex);
> } else {
> /* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
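For context, the inconsistency the commit message describes can be sketched as follows. This is a simplified rendering of the helpers in include/linux/workqueue.h; the exact definitions vary by kernel version, so treat it as illustrative rather than authoritative:

```c
/* schedule_work() hard-codes system_wq, the per-cpu workqueue, but
 * nothing in the name "system_wq" says per-cpu: */
static inline bool schedule_work(struct work_struct *work)
{
	return queue_work(system_wq, work);
}

/* ...while queue_work() with no explicit CPU passes WORK_CPU_UNBOUND,
 * which only means "no CPU was specified", not "unbound workqueue": */
static inline bool queue_work(struct workqueue_struct *wq,
			      struct work_struct *work)
{
	return queue_work_on(WORK_CPU_UNBOUND, wq, work);
}
```

Renaming the queue to system_percpu_wq makes the per-cpu semantics explicit at every call site, which is all this one-line change in cxl/pci.c does.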
Thread overview: 7+ messages
2025-10-30 16:38 [PATCH] cxl/pci: replace use of system_wq with system_percpu_wq Marco Crivellari
2025-10-30 16:56 ` Dave Jiang
2025-10-31 14:49 ` Ira Weiny
2025-10-31 16:50 ` Marco Crivellari
2025-10-31 16:16 ` Davidlohr Bueso
2025-11-03 23:44 ` Dave Jiang [this message]
2025-11-04 8:59 ` Marco Crivellari