From: Dave Chinner <david@fromorbit.com>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Brian Foster <bfoster@redhat.com>,
Alexander Polakov <apolyakov@beget.ru>,
linux-mm@kvack.org, linux-xfs@vger.kernel.org,
bugzilla-daemon@bugzilla.kernel.org
Subject: Re: [Bug 192981] New: page allocation stalls
Date: Sat, 18 Feb 2017 10:58:06 +1100 [thread overview]
Message-ID: <20170217235806.GF15349@dastard> (raw)
In-Reply-To: <077aa22b-7d84-c1cc-3ae6-1d67f762d291@I-love.SAKURA.ne.jp>
On Fri, Feb 17, 2017 at 08:11:09PM +0900, Tetsuo Handa wrote:
> On 2017/02/17 7:21, Dave Chinner wrote:
> > FWIW, the major problem with removing the blocking in inode reclaim
> > is the ease with which you can then trigger the OOM killer from
> > userspace. The high level memory reclaim algorithms break down when
> > there are hundreds of direct reclaim processes hammering on reclaim
> > and reclaim stops making progress because it's skipping dirty
> > objects. Direct reclaim ends up insufficiently throttled, so rather
> > than blocking it winds up reclaim priority and then declares OOM
> > because reclaim runs out of retries before sufficient memory has
> > been freed.
> >
> > That, right now, looks to be an unsolvable problem without a major
> > rework of direct reclaim. I've pretty much given up on ever getting
> > the unbound direct reclaim concurrency problem that is causing us
> > these problems fixed, so we are left to handle it in the subsystem
> > shrinkers as best we can. That leaves us with an unfortunate choice:
> >
> > a) throttle excessive concurrency in the shrinker to prevent
> > IO breakdown, thereby causing reclaim latency bubbles
> > under load but having a stable, reliable system; or
> > b) optimise for minimal reclaim latency and risk userspace
> > memory demand triggering the OOM killer whenever there
> > are lots of dirty inodes in the system.
> >
> > Quite frankly, there's only one choice we can make in this
> > situation: reliability is always more important than performance.
>
> Is it possible to get rid of direct reclaim and instead let the
> allocating thread wait on a queue? I wished for such a change in the
> context of __GFP_KILLABLE at
> http://lkml.kernel.org/r/201702012049.BAG95379.VJFFOHMStLQFOO@I-love.SAKURA.ne.jp .
Yup, that's similar to what I've been suggesting - offloading the
direct reclaim slowpath to a limited set of kswapd-like workers
and blocking the allocating processes until there is either memory
for them or OOM is declared...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com