From: Mel Gorman <mgorman@techsingularity.net>
To: Michal Hocko <mhocko@kernel.org>
Cc: Dmitry Vyukov <dvyukov@google.com>,
Vlastimil Babka <vbabka@suse.cz>, Tejun Heo <tj@kernel.org>,
Christoph Lameter <cl@linux.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
LKML <linux-kernel@vger.kernel.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
syzkaller <syzkaller@googlegroups.com>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: mm: deadlock between get_online_cpus/pcpu_alloc
Date: Tue, 7 Feb 2017 09:43:00 +0000
Message-ID: <20170207094300.cuxfqi35wflk5nr5@techsingularity.net>
In-Reply-To: <20170207084855.GC5065@dhcp22.suse.cz>
On Tue, Feb 07, 2017 at 09:48:56AM +0100, Michal Hocko wrote:
> > +
> > + /*
> > + * Only drain from contexts allocating on behalf of userspace.
> > + * Kernel allocations could be holding a CPU hotplug-related
> > + * mutex, particularly hot-add allocating per-cpu structures
> > + * while hotplug-related mutexes are held, which would prevent
> > + * get_online_cpus from ever returning.
> > + */
> > + if (gfp_mask & __GFP_HARDWALL)
> > + drain_all_pages(NULL);
> > +
>
> This wouldn't work AFAICS. If you look at the lockdep splat, the path
> which reverses the locking order (takes pcpu_alloc_mutex prior to
> cpu_hotplug.lock) is bpf_array_alloc_percpu, which uses GFP_USER and
> thus __GFP_HARDWALL.
>
You're right, I looked at the wrong caller.
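(For anyone following along: GFP_USER is GFP_KERNEL plus __GFP_HARDWALL, so
the check above cannot exclude that path. A rough userspace illustration of
the flag logic -- the bit values below are made up, the real masks live in
include/linux/gfp.h:)

#include <stdio.h>

/* Illustrative bit values only; see include/linux/gfp.h for the real ones. */
#define __GFP_IO        0x01u
#define __GFP_FS        0x02u
#define __GFP_RECLAIM   0x04u
#define __GFP_HARDWALL  0x08u

#define GFP_KERNEL      (__GFP_RECLAIM | __GFP_IO | __GFP_FS)
#define GFP_USER        (GFP_KERNEL | __GFP_HARDWALL)

int main(void)
{
        /* bpf_array_alloc_percpu() allocates with GFP_USER... */
        unsigned int bpf_mask = GFP_USER;

        /* ...so the proposed __GFP_HARDWALL gate still lets it drain. */
        if (bpf_mask & __GFP_HARDWALL)
                printf("GFP_USER allocation would still call drain_all_pages()\n");

        return 0;
}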
> I believe we shouldn't pull any dependency on the hotplug locks inside
> the allocator. This is just too fragile! Can we simply drop the
> get_online_cpus()? Why do we need it, anyway?
To stop the CPU the drain is queued on from going offline. Another
alternative is bringing back try_get_online_cpus(), which Peter pointed out
was removed in commit 02ef3c4a2aae65a1632b27770bfea3f83ca06772, although it
would be a shame to resurrect it just for this case.
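As a rough sketch of what that could look like if it were resurrected
(hypothetical: try_get_online_cpus() no longer exists and the wrapper name
below is made up):

/*
 * Hypothetical sketch only: try_get_online_cpus() is gone and
 * drain_all_pages_if_possible() is an invented name. The idea is to pin
 * CPUs opportunistically instead of sleeping on cpu_hotplug.lock from
 * inside the allocator.
 */
static void drain_all_pages_if_possible(struct zone *zone)
{
        if (!try_get_online_cpus()) {
                /*
                 * A hotplug operation is in progress. Skip the drain;
                 * the hotplug path flushes the departing CPU's pcp
                 * lists itself, so nothing is leaked.
                 */
                return;
        }

        drain_all_pages(zone);
        put_online_cpus();
}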
> Say we are racing with the
> cpu offlining. I have to check the code but my impression was that WQ
> code will ignore the cpu requested by the work item when the cpu is
> going offline. If the offline happens while the worker function already
> executes then it has to wait as we run with preemption disabled so we
> should be safe here. Or am I missing something obvious?
It may be safe, but I'd prefer to get confirmation from Tejun. If a drain on
a particular CPU gets missed in this path, it's not the end of the world, as
the CPU offlining drains the pages in another path, so nothing gets lost.
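(That other path is the page allocator's CPU hotplug teardown; from memory it
amounts to something like the sketch below, so the departing CPU's pcp lists
get flushed regardless -- names are approximate, not the exact code:)

/*
 * Sketch from memory of the page allocator's CPU-offline handling; the
 * exact hook (old notifier vs. hotplug state machine) depends on the
 * kernel version, so treat the names here as approximate.
 */
static int page_alloc_cpu_dead(unsigned int cpu)
{
        lru_add_drain_cpu(cpu);         /* flush the dead CPU's LRU pagevecs */
        drain_pages(cpu);               /* flush its per-cpu page lists */
        /* per-cpu vm event/stat folding omitted */
        return 0;
}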
It would also be acceptable if one CPU were drained twice because the work
item was unbound from a CPU being hot-removed and migrated during the
hotplug operation.
If I'm reading this right, a hot-remove marks the pool POOL_DISASSOCIATED
and unbinds it. A work item queued for draining gets migrated during
hot-remove, and the drain operation then executes twice on a CPU -- once for
what was queued on it and a second time for the CPU the item was migrated
from. It should still work with flush_work(), which doesn't appear to block
forever if an item got migrated to another workqueue. The actual drain
workqueue function uses the CPU ID it's currently running on, so it
shouldn't get confused.
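For reference, the scheme being discussed is roughly the following -- a
simplified sketch, not the exact patch, with the workqueue argument and
per-cpu work variable names invented for illustration:

/*
 * Simplified sketch of a workqueue-based drain_all_pages(); names are
 * illustrative and error handling/zone filtering are omitted.
 */
static DEFINE_PER_CPU(struct work_struct, pcpu_drain);

static void drain_local_pages_wq(struct work_struct *work)
{
        /*
         * drain_local_pages() operates on whatever CPU the worker is
         * currently running on, so even if the item was unbound and
         * migrated during a hot-remove it still drains a valid pcp list.
         */
        drain_local_pages(NULL);
}

static void drain_all_pages_wq(struct workqueue_struct *wq)
{
        int cpu;

        /* Queue a drain on every online CPU... */
        for_each_online_cpu(cpu) {
                struct work_struct *work = &per_cpu(pcpu_drain, cpu);

                INIT_WORK(work, drain_local_pages_wq);
                queue_work_on(cpu, wq, work);
        }

        /*
         * ...and wait for each to complete. flush_work() returns once
         * the item has run, even if it was migrated off a CPU that went
         * offline in the meantime.
         */
        for_each_online_cpu(cpu)
                flush_work(&per_cpu(pcpu_drain, cpu));
}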
Tejun, did I miss anything? Does a work item queued on a CPU that is going
offline get unbound, and can the caller still flush it safely? In this
specific case, it's ok if the work item does not run on the CPU it was
queued on.
--
Mel Gorman
SUSE Labs