From: Tejun Heo <tj@kernel.org>
To: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Frederic Weisbecker <frederic@kernel.org>,
Juri Lelli <juri.lelli@redhat.com>, Phil Auld <pauld@redhat.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
Lai Jiangshan <jiangshan.ljs@antgroup.com>,
Zqiang <qiang1.zhang@intel.com>
Subject: Re: [PATCH] workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex
Date: Sun, 4 Sep 2022 10:23:26 -1000 [thread overview]
Message-ID: <YxUJPrRGmUTQu5VS@slm.duckdns.org> (raw)
In-Reply-To: <CAJhGHyB69M7uSu6Ot5JQ=Uc_svRCKqXbvUvwFK1xCm=FcS9Zmw@mail.gmail.com>
Hello,
On Tue, Aug 30, 2022 at 05:32:17PM +0800, Lai Jiangshan wrote:
> > Is this enough? Shouldn't the lock be protecting a wider scope? If there's
> > someone reading the flag with just pool_attach_mutex, what prevents them
> > reading it right before the new value is committed and keeps using the stale
> > value?
>
> Which "flag"? wq_unbound_cpumask?
Oh, yeah, sorry.
> This code is adding protection for wq_unbound_cpumask and makes
> unbind_workers() use a stable version of wq_unbound_cpumask during
> operation.
>
> It doesn't really matter if pool's mask becomes stale later again
> with respect to wq_unbound_cpumask.
>
> No code has ensured the disassociated pool's mask stays in sync with the
> newest wq_unbound_cpumask since 10a5a651e3af ("workqueue: Restrict kworker
> in the offline CPU pool running on housekeeping CPUs") first used
> wq_unbound_cpumask for the disassociated pools.
>
> What matters is that the pool's mask should be the wq_unbound_cpumask
> at the time it becomes disassociated, which has no isolated CPUs.
>
> I don't like 10a5a651e3af because it doesn't sync the pool's mask
> with wq_unbound_cpumask, but I think it works anyway.
Hmm... I see. Can you add a comment explaining why we're grabbing
wq_pool_attach_mutex there?
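For readers following the thread, the locking pattern under discussion boils down to something like the sketch below. This is a purely illustrative userspace analogue, not the actual kernel code: the real kernel uses `struct cpumask` and `mutex`, and the function names here are stand-ins. The point is that both the writer and the reader take the same mutex, so the reader gets a snapshot that was consistent at the moment the pool became disassociated, even if it goes stale later.

```c
/* Illustrative userspace sketch of the wq_unbound_cpumask locking pattern.
 * NOT actual kernel code; cpumask_t here is a stand-in bitmask type and
 * the function names are hypothetical. */
#include <pthread.h>

#define NR_CPUS 8
typedef unsigned long cpumask_t;            /* stand-in for struct cpumask */

static pthread_mutex_t wq_pool_attach_mutex = PTHREAD_MUTEX_INITIALIZER;
static cpumask_t wq_unbound_cpumask = 0xff; /* all CPUs allowed initially */

/* Writer side: update wq_unbound_cpumask under wq_pool_attach_mutex so
 * readers can never observe a half-updated value. */
void set_unbound_cpumask(cpumask_t new_mask)
{
	pthread_mutex_lock(&wq_pool_attach_mutex);
	wq_unbound_cpumask = new_mask;
	pthread_mutex_unlock(&wq_pool_attach_mutex);
}

/* Reader side (cf. unbind_workers()): take the same mutex to get a
 * stable snapshot.  The snapshot may become stale afterwards, which is
 * fine -- what matters is that it was consistent at the time the pool
 * became disassociated. */
cpumask_t snapshot_unbound_cpumask(void)
{
	cpumask_t mask;

	pthread_mutex_lock(&wq_pool_attach_mutex);
	mask = wq_unbound_cpumask;
	pthread_mutex_unlock(&wq_pool_attach_mutex);
	return mask;
}
```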
Thanks.
--
tejun
Thread overview:
2022-08-02 8:41 [RFC PATCH v3 0/3] workqueue: destroy_worker() vs isolated CPUs Valentin Schneider
2022-08-02 8:41 ` [RFC PATCH v3 1/3] workqueue: Hold wq_pool_mutex while affining tasks to wq_unbound_cpumask Valentin Schneider
2022-08-03 3:40 ` Lai Jiangshan
2022-08-04 11:40 ` Valentin Schneider
2022-08-05 2:43 ` Lai Jiangshan
2022-08-15 23:50 ` Tejun Heo
2022-08-18 14:33 ` [PATCH] workqueue: Protects wq_unbound_cpumask with wq_pool_attach_mutex Lai Jiangshan
2022-08-27 0:33 ` Tejun Heo
2022-08-30 9:32 ` Lai Jiangshan
2022-09-04 20:23 ` Tejun Heo [this message]
2022-08-30 14:16 ` [RFC PATCH v3 1/3] workqueue: Hold wq_pool_mutex while affining tasks to wq_unbound_cpumask Lai Jiangshan
2022-08-02 8:41 ` [RFC PATCH v3 2/3] workqueue: Unbind workers before sending them to exit() Valentin Schneider
2022-08-05 3:16 ` Lai Jiangshan
2022-08-05 16:47 ` Valentin Schneider
2022-08-02 8:41 ` [RFC PATCH v3 3/3] DEBUG-DO-NOT-MERGE: workqueue: kworker spawner Valentin Schneider