From: Philipp Stanner <phasta@mailbox.org>
To: Alice Ryhl <aliceryhl@google.com>, Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>,
Danilo Krummrich <dakr@kernel.org>,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH] workqueue: flush all pending jobs in destroy_workqueue()
Date: Fri, 25 Apr 2025 11:57:18 +0200 [thread overview]
Message-ID: <6b6267872fcc5e75883144f241c79c93c03fcead.camel@mailbox.org> (raw)
In-Reply-To: <aAtXApA8ggJa6sQg@google.com>
On Fri, 2025-04-25 at 09:33 +0000, Alice Ryhl wrote:
> On Thu, Apr 24, 2025 at 09:57:55AM -1000, Tejun Heo wrote:
> > Hello, Alice.
> >
> > On Wed, Apr 23, 2025 at 05:51:27PM +0000, Alice Ryhl wrote:
> > ...
> > > @@ -367,6 +367,8 @@ struct workqueue_struct {
> > > struct lockdep_map __lockdep_map;
> > > struct lockdep_map *lockdep_map;
> > > #endif
> > > + raw_spinlock_t delayed_lock; /* protects
> > > pending_list */
> > > + struct list_head delayed_list; /* list of
> > > pending delayed jobs */
> >
> > I think we'll have to make this per-CPU or per-pwq. There can be a
> > lot of
> > delayed work items being queued on, e.g., system_wq. Imagine that
> > happening
> > on a multi-socket NUMA system. That cacheline is going to be
> > bounced around
> > pretty hard.
>
> Hmm. I think we would need to add a new field to delayed_work to keep
> track of which list it has been added to.
>
> Another option could be to add a boolean that disables the list.
> After
> all, we never call destroy_workqueue() on system_wq so we don't need
> the
> list for that workqueue.
>
> Thoughts?
I for my part was astonished that I actually found this half-bug in the
WQ implementation, because WQs are a) very important and b) very
intensively used, so I had expected that the bug *must* be on my side.
The fact that it wasn't is a hint for me that there are not that many
parties in the kernel that tear down with non-canceled DW.
You also have to race a bit to run into the problem.
I'm not sure how relevant that is for the synchronization overhead
Tejun describes; but take it for what it's worth.
P.
>
> Alice
next prev parent reply other threads:[~2025-04-25 9:57 UTC|newest]
Thread overview: 7+ messages / expand[flat|nested] mbox.gz Atom feed top
2025-04-23 17:51 [PATCH] workqueue: flush all pending jobs in destroy_workqueue() Alice Ryhl
2025-04-24 19:57 ` Tejun Heo
2025-04-25 9:33 ` Alice Ryhl
2025-04-25 9:57 ` Philipp Stanner [this message]
2025-04-25 19:25 ` Tejun Heo
2025-04-28 9:32 ` Alice Ryhl
2025-04-28 18:30 ` Tejun Heo
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=6b6267872fcc5e75883144f241c79c93c03fcead.camel@mailbox.org \
--to=phasta@mailbox.org \
--cc=aliceryhl@google.com \
--cc=dakr@kernel.org \
--cc=jiangshanlai@gmail.com \
--cc=linux-kernel@vger.kernel.org \
--cc=phasta@kernel.org \
--cc=tj@kernel.org \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).