public inbox for intel-xe@lists.freedesktop.org
From: Matthew Brost <matthew.brost@intel.com>
To: Tejun Heo <tj@kernel.org>
Cc: <intel-xe@lists.freedesktop.org>,
	<dri-devel@lists.freedesktop.org>, <linux-kernel@vger.kernel.org>,
	Carlos Santa <carlos.santa@intel.com>,
	"Ryan Neph" <ryanneph@google.com>, <stable@vger.kernel.org>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Waiman Long <longman@redhat.com>
Subject: Re: [PATCH] workqueue: Add pool_workqueue to pending_pwqs list when unplugging multiple inactive works
Date: Tue, 31 Mar 2026 17:22:08 -0700	[thread overview]
Message-ID: <acxlMHWd/Vri7to6@gsse-cloud1.jf.intel.com> (raw)
In-Reply-To: <acxhVZK_zlv1orIX@slm.duckdns.org>

On Tue, Mar 31, 2026 at 02:05:41PM -1000, Tejun Heo wrote:
> Hello,
> 
> On Tue, Mar 31, 2026 at 03:18:39PM -0700, Matthew Brost wrote:
> > @@ -1849,8 +1849,20 @@ static void unplug_oldest_pwq(struct workqueue_struct *wq)
> >  	raw_spin_lock_irq(&pwq->pool->lock);
> >  	if (pwq->plugged) {
> >  		pwq->plugged = false;
> > -		if (pwq_activate_first_inactive(pwq, true))
> > +		if (pwq_activate_first_inactive(pwq, true)) {
> > +			if (!list_empty(&pwq->inactive_works)) {
> > +				struct worker_pool *pool = pwq->pool;
> > +				struct wq_node_nr_active *nna =
> > +					wq_node_nr_active(wq, pool->node);
> > +
> > +				raw_spin_lock(&nna->lock);
> > +				if (list_empty(&pwq->pending_node))
> > +					list_add_tail(&pwq->pending_node,
> > +						      &nna->pending_pwqs);
> > +				raw_spin_unlock(&nna->lock);
> > +			}
> 
> It's a bit gnarly to open code locking and list operation. Would just
> calling pwq_activate_first_inactive(pwq, false) one more time work here?
> That'd trigger tryinc_node_nr_active() failure in pwq_tryinc_nr_active() and
> the addition to the pending list. As this is quite subtle, it'd be nice to
> have some comment - it's compensating for the missed pwq_tryinc_nr_active()
> call due to plugging, right?

Yeah, I think that will work. Let me verify with my reproducer and
adjust the patch accordingly.
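
For the archive, my reading of your suggestion is roughly the hunk
below (untested sketch against my understanding of the current
unplug_oldest_pwq(); I still need to run it through the reproducer,
and the surrounding context may differ from what I show here):

```c
	raw_spin_lock_irq(&pwq->pool->lock);
	if (pwq->plugged) {
		pwq->plugged = false;
		if (pwq_activate_first_inactive(pwq, true))
			/*
			 * Compensate for the pwq_tryinc_nr_active() call
			 * that was skipped while the pwq was plugged:
			 * retrying activation with fill=false lets the
			 * tryinc_node_nr_active() failure path in
			 * pwq_tryinc_nr_active() add the pwq to
			 * nna->pending_pwqs for us, instead of
			 * open-coding the nna->lock / list manipulation.
			 */
			pwq_activate_first_inactive(pwq, false);
	}
	raw_spin_unlock_irq(&pwq->pool->lock);
```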

+1 on the comment as well; it's very subtle. It took a few days of
reverse-engineering workqueues to track this down.

Matt

> 
> Thanks.
> 
> -- 
> tejun


Thread overview: 6+ messages
2026-03-31 22:18 [PATCH] workqueue: Add pool_workqueue to pending_pwqs list when unplugging multiple inactive works Matthew Brost
2026-03-31 22:25 ` ✓ CI.KUnit: success for " Patchwork
2026-03-31 23:14 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-01  0:05 ` [PATCH] " Tejun Heo
2026-04-01  0:22   ` Matthew Brost [this message]
2026-04-01  0:48     ` Matthew Brost
