From: Marco Crivellari <marco.crivellari@suse.com>
To: linux-kernel@vger.kernel.org, intel-xe@lists.freedesktop.org,
dri-devel@lists.freedesktop.org
Cc: Tejun Heo <tj@kernel.org>, Lai Jiangshan <jiangshanlai@gmail.com>,
Frederic Weisbecker <frederic@kernel.org>,
Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Marco Crivellari <marco.crivellari@suse.com>,
Michal Hocko <mhocko@suse.com>,
Thomas Hellstrom <thomas.hellstrom@linux.intel.com>,
Rodrigo Vivi <rodrigo.vivi@intel.com>,
David Airlie <airlied@gmail.com>, Simona Vetter <simona@ffwll.ch>
Subject: [PATCH v4 1/2] drm/xe: replace use of system_unbound_wq with system_dfl_wq
Date: Mon, 2 Feb 2026 11:37:55 +0100
Message-ID: <20260202103756.62138-2-marco.crivellari@suse.com>
In-Reply-To: <20260202103756.62138-1-marco.crivellari@suse.com>
This patch continues the effort to refactor the workqueue APIs, which began
with the changes introducing new workqueues and a new alloc_workqueue flag:
commit 128ea9f6ccfb ("workqueue: Add system_percpu_wq and system_dfl_wq")
commit 930c2ea566af ("workqueue: Add new WQ_PERCPU flag")
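For reference, a minimal sketch of how the new flag is used when allocating a
dedicated workqueue (the queue and work item names below are made up for
illustration and are not part of this series):

	/* explicitly request per-CPU placement instead of relying on the default */
	wq = alloc_workqueue("example_wq", WQ_PERCPU, 0);
	if (!wq)
		return -ENOMEM;

	queue_work(wq, &example_work);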
The goal of the refactoring is to eventually make workqueues unbound by
default, so that their workload placement can be optimized by the scheduler.
Before that can happen, workqueue users must be converted to the better-named
new workqueues, with no intended behaviour change:
system_wq -> system_percpu_wq
system_unbound_wq -> system_dfl_wq
This way, the obsolete workqueues (system_wq, system_unbound_wq) can be
removed in the future.
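As an illustration only (not part of the diff below), each caller of the form

	queue_work(system_unbound_wq, &work);

simply becomes

	queue_work(system_dfl_wq, &work);

with no intended behavioural difference, since system_dfl_wq is the unbound
default workqueue under a clearer name.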
Link: https://lore.kernel.org/all/20250221112003.1dSuoGyc@linutronix.de/
Suggested-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Marco Crivellari <marco.crivellari@suse.com>
---
drivers/gpu/drm/xe/xe_devcoredump.c | 2 +-
drivers/gpu/drm/xe/xe_execlist.c | 2 +-
drivers/gpu/drm/xe/xe_guc_ct.c | 4 ++--
drivers/gpu/drm/xe/xe_oa.c | 2 +-
drivers/gpu/drm/xe/xe_vm.c | 4 ++--
5 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index cf41bb6d2172..558a1a9841a0 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -356,7 +356,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
xe_engine_snapshot_capture_for_queue(q);
- queue_work(system_unbound_wq, &ss->work);
+ queue_work(system_dfl_wq, &ss->work);
dma_fence_end_signalling(cookie);
}
diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
index 005a5b2c36fe..dc25caf47813 100644
--- a/drivers/gpu/drm/xe/xe_execlist.c
+++ b/drivers/gpu/drm/xe/xe_execlist.c
@@ -421,7 +421,7 @@ static void execlist_exec_queue_kill(struct xe_exec_queue *q)
static void execlist_exec_queue_destroy(struct xe_exec_queue *q)
{
INIT_WORK(&q->execlist->destroy_async, execlist_exec_queue_destroy_async);
- queue_work(system_unbound_wq, &q->execlist->destroy_async);
+ queue_work(system_dfl_wq, &q->execlist->destroy_async);
}
static int execlist_exec_queue_set_priority(struct xe_exec_queue *q,
diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index dfbf76037b04..a0498f36bd74 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -643,7 +643,7 @@ static int __xe_guc_ct_start(struct xe_guc_ct *ct, bool needs_register)
spin_lock_irq(&ct->dead.lock);
if (ct->dead.reason) {
ct->dead.reason |= (1 << CT_DEAD_STATE_REARM);
- queue_work(system_unbound_wq, &ct->dead.worker);
+ queue_work(system_dfl_wq, &ct->dead.worker);
}
spin_unlock_irq(&ct->dead.lock);
#endif
@@ -2165,7 +2165,7 @@ static void ct_dead_capture(struct xe_guc_ct *ct, struct guc_ctb *ctb, u32 reaso
spin_unlock_irqrestore(&ct->dead.lock, flags);
- queue_work(system_unbound_wq, &(ct)->dead.worker);
+ queue_work(system_dfl_wq, &(ct)->dead.worker);
}
static void ct_dead_print(struct xe_dead_ct *dead)
diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
index abf87fe0b345..8b37e49f639f 100644
--- a/drivers/gpu/drm/xe/xe_oa.c
+++ b/drivers/gpu/drm/xe/xe_oa.c
@@ -969,7 +969,7 @@ static void xe_oa_config_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
struct xe_oa_fence *ofence = container_of(cb, typeof(*ofence), cb);
INIT_DELAYED_WORK(&ofence->work, xe_oa_fence_work_fn);
- queue_delayed_work(system_unbound_wq, &ofence->work,
+ queue_delayed_work(system_dfl_wq, &ofence->work,
usecs_to_jiffies(NOA_PROGRAM_ADDITIONAL_DELAY_US));
dma_fence_put(fence);
}
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 293b92ed2fdd..e6cfa5dc7f62 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -1112,7 +1112,7 @@ static void vma_destroy_cb(struct dma_fence *fence,
struct xe_vma *vma = container_of(cb, struct xe_vma, destroy_cb);
INIT_WORK(&vma->destroy_work, vma_destroy_work_func);
- queue_work(system_unbound_wq, &vma->destroy_work);
+ queue_work(system_dfl_wq, &vma->destroy_work);
}
static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
@@ -1894,7 +1894,7 @@ static void xe_vm_free(struct drm_gpuvm *gpuvm)
struct xe_vm *vm = container_of(gpuvm, struct xe_vm, gpuvm);
/* To destroy the VM we need to be able to sleep */
- queue_work(system_unbound_wq, &vm->destroy_work);
+ queue_work(system_dfl_wq, &vm->destroy_work);
}
struct xe_vm *xe_vm_lookup(struct xe_file *xef, u32 id)
--
2.52.0