From: Andrew Morton <akpm@linux-foundation.org>
To: Tejun Heo <tj@kernel.org>
Cc: Mikulas Patocka <mpatocka@redhat.com>,
Mandeep Singh Baines <msb@chromium.org>,
linux-kernel@vger.kernel.org, dm-devel@redhat.com,
Alasdair G Kergon <agk@redhat.com>,
Will Drewry <wad@chromium.org>,
Elly Jones <ellyjones@chromium.org>,
Milan Broz <mbroz@redhat.com>,
Olof Johansson <olofj@chromium.org>,
Steffen Klassert <steffen.klassert@secunet.com>,
Rusty Russell <rusty@rustcorp.com.au>
Subject: Re: workqueues and percpu (was: [PATCH] dm: remake of the verity target)
Date: Thu, 8 Mar 2012 15:30:48 -0800
Message-ID: <20120308153048.4a80de34.akpm@linux-foundation.org>
In-Reply-To: <20120308231521.GA2968@htj.dyndns.org>

On Thu, 8 Mar 2012 15:15:21 -0800
Tejun Heo <tj@kernel.org> wrote:
> > I'm not sure what we can do about it really, apart from blocking unplug
> > until all the target CPU's workqueues have been cleared. And/or refusing
> > to unplug a CPU until all pinned-to-that-cpu kernel threads have been
> > shut down or pinned elsewhere (which is the same thing, only more
> > general).
> >
> > Tejun, is this new behaviour? I do recall that a long time ago we
> > wrestled with unplug-vs-worker-threads and I ended up OK with the
> > result, but I forget what it was. IIRC Rusty was involved.
>
> Unfortunately, yes, this is a new behavior. Before, we could have
> unbound delays during unplug from work items. Now, we have CPU
> affinity assumption breakage.
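
To make the breakage concrete, here is an illustrative kernel-style sketch (names invented for this example, not code from this thread): a work item pinned with queue_work_on() that dereferences per-CPU data. Before the behavior change it always ran on the queueing CPU; now, if that CPU is unplugged, the item can execute elsewhere and this_cpu_ptr() resolves to the wrong CPU's state.

```c
/* Illustrative sketch; identifiers are hypothetical. */
struct my_pcpu_state {
	struct work_struct work;
	int counter;
};

static DEFINE_PER_CPU(struct my_pcpu_state, my_state);

static void my_work_fn(struct work_struct *work)
{
	/* Assumes we run on the CPU we were queued on.  After the
	 * target CPU is offlined, this work item may be executed on
	 * another CPU, so this resolves to a *different* CPU's state
	 * than the submitter intended: */
	struct my_pcpu_state *s = this_cpu_ptr(&my_state);

	s->counter++;
}

static void submit_on(int cpu)
{
	/* Caller explicitly pins the work item to @cpu: */
	queue_work_on(cpu, system_wq, &per_cpu(my_state, cpu).work);
}
```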
Ow, didn't know that.
> The behavior change was primarily to
> allow long running work items to use regular workqueues without
> worrying about inducing delay across cpu hotplug operations, which is
> important as it's also used on suspend / hibernation, especially on
> mobile platforms.
Well.. why did we want to support these long-running work items?
They're abusive, aren't they? Where are they?
> During the cmwq conversion, I ended up auditing a lot of (I think I
> went through most of them) workqueue users and IIRC there weren't too
> many which required stable affinity.
>
> > That being said, I don't think it's worth compromising the DM code
> > because of this workqueue wart: lots of other code has the same wart,
> > and we should find a centralised fix for it.
>
> Probably the best way to solve this is introducing pinned attribute to
> workqueues and have them drained automatically on cpu hotplug events.
> It'll require auditing workqueue users but I guess we'll just have to
> do it given that we need to actually distinguish the ones need to be
> pinned.
That will make future use of workqueues more complex and people will
screw it up.
> Or maybe we can use explicit queue_work_on() to distinguish
> the ones which require pinning.
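
The call-site distinction Tejun suggests could look like this (a sketch with made-up work items, assuming the existing API):

```c
/* Placement-agnostic: no affinity requirement, free to migrate
 * across CPU hotplug: */
queue_work(system_wq, &stats_flush_work);

/* Explicitly pinned to CPU @cpu: under the proposed scheme, items
 * queued this way would be drained before that CPU goes down: */
queue_work_on(cpu, system_wq, &percpu_cache_drain_work);
```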
>
> Another approach would be requiring all workqueues to be drained on
> cpu offlining and requiring any work item which may stall to use
> unbound wq. IMHO, picking out the ones which may stall would be much
> less obvious than the ones which require cpu pinning.
I'd be surprised if it's *that* hard to find and fix the long-running
work items. Hopefully most of them are already using
create_freezable_workqueue() or create_singlethread_workqueue().
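
For reference, the conventional use of the two helpers mentioned above (an illustrative sketch; workqueue name and error handling are placeholders):

```c
static struct workqueue_struct *my_wq;

static int __init my_init(void)
{
	/* Unbound, max_active == 1, and frozen across suspend /
	 * hibernation -- so long-running items here don't stall
	 * hotplug or suspend the way pinned per-cpu items can: */
	my_wq = create_freezable_workqueue("my_wq");
	if (!my_wq)
		return -ENOMEM;
	return 0;
}
```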
I wonder if there's some debug code we can put in workqueue.c to detect
when a pinned work item takes "too long".
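
One possible shape for such a check (hypothetical, not actual workqueue.c code): timestamp the callback invocation in process_one_work() and warn when a per-cpu (non-WQ_UNBOUND) item exceeds some threshold. The 100ms threshold below is made up for illustration.

```c
	/* Hypothetical debug instrumentation around the callback: */
	work_func_t func = work->func;	/* save; work may be freed */
	unsigned long start = jiffies;

	func(work);

	if (!(wq->flags & WQ_UNBOUND) &&
	    time_after(jiffies, start + msecs_to_jiffies(100)))
		pr_warn("workqueue: %pf ran for %ums on CPU%d\n",
			func, jiffies_to_msecs(jiffies - start),
			raw_smp_processor_id());
```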