Subject: [PATCH] workqueue: Add pool_workqueue to pending_pwqs list when unplugging multiple inactive works
From: Matthew Brost @ 2026-03-31 22:18 UTC
  To: intel-xe, dri-devel, linux-kernel
  Cc: Carlos Santa, Ryan Neph, stable, Tejun Heo, Lai Jiangshan,
	Waiman Long

In unplug_oldest_pwq(), the first inactive work on the oldest
pool_workqueue is activated correctly. However, if that pool_workqueue
holds multiple inactive works, the remaining works never activate:
wq_node_nr_active.pending_pwqs does not contain the pool_workqueue,
because the list insertion is skipped while a pool_workqueue is
plugged.

Fix this by checking in unplug_oldest_pwq() whether inactive works
remain after the first activation and, if so, adding the
pool_workqueue to wq_node_nr_active.pending_pwqs so the remaining
works are activated as the active ones complete.

Cc: Carlos Santa <carlos.santa@intel.com>
Cc: Ryan Neph <ryanneph@google.com>
Cc: stable@vger.kernel.org
Cc: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Waiman Long <longman@redhat.com>
Cc: linux-kernel@vger.kernel.org
Fixes: 4c065dbce1e8 ("workqueue: Enable unbound cpumask update on ordered workqueues")
Signed-off-by: Matthew Brost <matthew.brost@intel.com>

---

This bug was first reported by Google, where the Xe driver appeared to
hang because a dma-fence never signaled. We traced the issue to work
items never being scheduled. It can be trivially reproduced on drm-tip
by racing an Xe rebind stress test against workqueue cpumask updates,
which force ordered workqueues through the plug/unplug path, using the
following commands:

shell0:
for i in {1..100}; do echo "Run $i"; xe_exec_threads --r \
threads-rebind-bindexecqueue; done

shell1:
for i in {1..1000}; do echo "toggle $i"; echo f > \
/sys/devices/virtual/workqueue/cpumask; echo ff > \
/sys/devices/virtual/workqueue/cpumask; echo fff > \
/sys/devices/virtual/workqueue/cpumask; echo ffff > \
/sys/devices/virtual/workqueue/cpumask; sleep .1; done
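
If it helps review, below is a toy userspace model of the failure. This
is not kernel code: the struct and function names mirror
kernel/workqueue.c, but the bodies are deliberately simplified
(counters instead of list_heads, a single node, and the drain loop
standing in for node_activate_pending_pwq()). It only illustrates the
shape of the bug: once the plugged pool_workqueue is unplugged with
several inactive works queued, nothing re-inserts it into pending_pwqs,
so only the first work ever runs.

/* Toy model of the plugged-pwq activation bug (NOT kernel code). */
#include <stdbool.h>
#include <stdio.h>

struct wq_node_nr_active {
	int max;		/* node-level max_active */
	int nr;			/* currently active works */
	bool pwq_pending;	/* stands in for !list_empty(&nna->pending_pwqs) */
};

struct pool_workqueue {
	bool plugged;
	int inactive_works;	/* stands in for pwq->inactive_works */
};

/* Simplified pwq_activate_first_inactive(): activate one work if allowed. */
static bool activate_first_inactive(struct pool_workqueue *pwq,
				    struct wq_node_nr_active *nna, bool fill)
{
	if (!pwq->inactive_works || pwq->plugged)
		return false;
	if (!fill && nna->nr >= nna->max)
		return false;
	pwq->inactive_works--;
	nna->nr++;
	return true;
}

static void unplug_oldest_pwq(struct pool_workqueue *pwq,
			      struct wq_node_nr_active *nna, bool fixed)
{
	pwq->plugged = false;
	if (activate_first_inactive(pwq, nna, true)) {
		/* The fix: remember that this pwq still has inactive works. */
		if (fixed && pwq->inactive_works)
			nna->pwq_pending = true;
	}
}

/* Stand-in for node_activate_pending_pwq() when an active work finishes. */
static void work_finished(struct pool_workqueue *pwq,
			  struct wq_node_nr_active *nna)
{
	nna->nr--;
	if (nna->pwq_pending && activate_first_inactive(pwq, nna, false) &&
	    !pwq->inactive_works)
		nna->pwq_pending = false;
}

static void run(bool fixed)
{
	/* Node already at max_active, plugged pwq with 3 inactive works. */
	struct wq_node_nr_active nna = { .max = 1, .nr = 1 };
	struct pool_workqueue pwq = { .plugged = true, .inactive_works = 3 };
	int i;

	unplug_oldest_pwq(&pwq, &nna, fixed);
	for (i = 0; i < 8 && nna.nr; i++)
		work_finished(&pwq, &nna);
	printf("%s: %d inactive work(s) stranded\n",
	       fixed ? "fixed " : "broken", pwq.inactive_works);
}

int main(void)
{
	run(false);	/* broken: 2 */
	run(true);	/* fixed : 0 */
	return 0;
}

With the current code the model leaves 2 of the 3 inactive works
stranded; with the fix applied it drains all of them.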
---
 kernel/workqueue.c | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index b77119d71641..b2cdb44ccb56 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1849,8 +1849,20 @@ static void unplug_oldest_pwq(struct workqueue_struct *wq)
 	raw_spin_lock_irq(&pwq->pool->lock);
 	if (pwq->plugged) {
 		pwq->plugged = false;
-		if (pwq_activate_first_inactive(pwq, true))
+		if (pwq_activate_first_inactive(pwq, true)) {
+			if (!list_empty(&pwq->inactive_works)) {
+				struct worker_pool *pool = pwq->pool;
+				struct wq_node_nr_active *nna =
+					wq_node_nr_active(wq, pool->node);
+
+				raw_spin_lock(&nna->lock);
+				if (list_empty(&pwq->pending_node))
+					list_add_tail(&pwq->pending_node,
+						      &nna->pending_pwqs);
+				raw_spin_unlock(&nna->lock);
+			}
 			kick_pool(pwq->pool);
+		}
 	}
 	raw_spin_unlock_irq(&pwq->pool->lock);
 }
-- 
2.34.1

