From: Tejun Heo <tj@kernel.org>
To: Michal Hocko <mhocko@kernel.org>
Cc: Petr Mladek <pmladek@suse.com>,
cgroups@vger.kernel.org, Cyril Hrubis <chrubis@suse.cz>,
linux-kernel@vger.kernel.org,
Johannes Weiner <hannes@cmpxchg.org>
Subject: Re: [BUG] cgroup/workques/fork: deadlock when moving cgroups
Date: Thu, 14 Apr 2016 11:32:27 -0400
Message-ID: <20160414153227.GA12583@htj.duckdns.org>
In-Reply-To: <20160414070623.GC2850@dhcp22.suse.cz>

Hello,

On Thu, Apr 14, 2016 at 09:06:23AM +0200, Michal Hocko wrote:
> On Wed 13-04-16 21:48:20, Michal Hocko wrote:
> [...]
> > I was thinking about something like flush_per_cpu_work() which would
> > assert on cgroup_threadgroup_rwsem held for write.
>
> I have thought about this some more and I guess this is not limited to
> per-cpu workers. Basically any flush_work() with cgroup_threadgroup_rwsem
> held for write is dangerous, right?

Whether the work item is per-cpu or not doesn't matter. What matters is
whether the workqueue has WQ_MEM_RECLAIM or not. That said, I think what
we want to do is avoid performing heavy operations in the migration
path. It's where the core and all controllers have to synchronize, so
performing operations with many external dependencies is bound to get
messy. I wonder whether memory charge moving can be restructured in a
similar fashion to how cpuset node migration is made async. However,
given that charge moving has always been a best-effort thing, for now I
think it'd be best to drop lru_add_drain_all().
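
FWIW, the shape of the deadlock looks roughly like this (simplified
kernel-style pseudocode, not actual mainline code; the call chain is
condensed for illustration):

```c
/* Migration path: holds cgroup_threadgroup_rwsem for write. */
percpu_down_write(&cgroup_threadgroup_rwsem);
lru_add_drain_all();            /* flush_work() on the per-cpu drain works */
/* ... charge moving, etc. ... */
percpu_up_write(&cgroup_threadgroup_rwsem);

/*
 * If the flushed workqueue lacks WQ_MEM_RECLAIM, completing the work
 * item may require spawning a new kworker.  Kthread creation goes
 * through fork, and fork takes cgroup_threadgroup_rwsem for read:
 *
 *   copy_process()
 *     -> cgroup_threadgroup_change_begin()   // down_read
 *        // blocks behind the writer above, so the flush never
 *        // completes and the write lock is never released
 *
 * WQ_MEM_RECLAIM workqueues keep a pre-created rescuer thread and can
 * make forward progress without forking, which is why only flushing
 * non-WQ_MEM_RECLAIM workqueues under the rwsem is dangerous.
 */
```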

Thanks.

-- 
tejun
Thread overview: 32+ messages
2016-04-13 9:42 [BUG] cgroup/workques/fork: deadlock when moving cgroups Petr Mladek
2016-04-13 18:33 ` Tejun Heo
2016-04-13 18:57 ` Tejun Heo
2016-04-13 19:23 ` Michal Hocko
2016-04-13 19:28 ` Michal Hocko
2016-04-13 19:37 ` Tejun Heo
2016-04-13 19:48 ` Michal Hocko
2016-04-14 7:06 ` Michal Hocko
2016-04-14 15:32 ` Tejun Heo [this message]
2016-04-14 17:50 ` Johannes Weiner
2016-04-15 7:06 ` Michal Hocko
2016-04-15 14:38 ` Tejun Heo
2016-04-15 15:08 ` Michal Hocko
2016-04-15 15:25 ` Tejun Heo
2016-04-17 12:00 ` Michal Hocko
2016-04-18 14:40 ` Petr Mladek
2016-04-19 14:01 ` Michal Hocko
2016-04-19 15:39 ` Petr Mladek
2016-04-15 19:17 ` [PATCH for-4.6-fixes] memcg: remove lru_add_drain_all() invocation from mem_cgroup_move_charge() Tejun Heo
2016-04-17 12:07 ` Michal Hocko
2016-04-20 21:29 ` Tejun Heo
2016-04-21 3:27 ` Michal Hocko
2016-04-21 15:00 ` Petr Mladek
2016-04-21 15:51 ` Tejun Heo
2016-04-21 23:06 ` [PATCH 1/2] cgroup, cpuset: replace cpuset_post_attach_flush() with cgroup_subsys->post_attach callback Tejun Heo
2016-04-21 23:09 ` [PATCH 2/2] memcg: relocate charge moving from ->attach to ->post_attach Tejun Heo
2016-04-22 13:57 ` Petr Mladek
2016-04-25 8:25 ` Michal Hocko
2016-04-25 19:42 ` Tejun Heo
2016-04-25 19:44 ` Tejun Heo
2016-04-21 23:11 ` [PATCH 1/2] cgroup, cpuset: replace cpuset_post_attach_flush() with cgroup_subsys->post_attach callback Tejun Heo
2016-04-21 15:56 ` [PATCH for-4.6-fixes] memcg: remove lru_add_drain_all() invocation from mem_cgroup_move_charge() Tejun Heo