public inbox for linux-mm@kvack.org
From: Andrew Morton <akpm@linux-foundation.org>
To: David Rientjes <rientjes@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [patch] mm, oom: stop reclaiming if GFP_ATOMIC will start failing soon
Date: Mon, 27 Apr 2020 13:30:51 -0700	[thread overview]
Message-ID: <20200427133051.b71f961c1bc53a8e72c4f003@linux-foundation.org> (raw)
In-Reply-To: <alpine.DEB.2.22.394.2004261959310.80211@chino.kir.corp.google.com>

On Sun, 26 Apr 2020 20:12:58 -0700 (PDT) David Rientjes <rientjes@google.com> wrote:

> > > blockable allocations and then queue a worker to asynchronously oom kill
> > > if it finds watermarks to be sufficiently low as well.
> > > 
> > 
> > Well, what's really going on here?
> > 
> > Is networking potentially consuming an unbounded amount of memory?  If
> > so, then killing a process will just cause networking to consume more
> > memory and then hit against the same thing.  So presumably the answer
> > is "no, the watermarks are inappropriately set for this workload".
> > 
> > So would it not be sensible to dynamically adjust the watermarks in
> > response to this condition?  Maintain a larger pool of memory for these
> > allocations?  Or possibly push back on networking and tell it to reduce
> > its queue sizes?  So that stuff doesn't keep on getting oom-killed?
> > 
> 
> No - that would actually make the problem worse.
> 
> Today, per-zone min watermarks dictate when user allocations will loop or 
> oom kill.  should_reclaim_retry() currently loops if reclaim has succeeded 
> in the past few tries and we should be able to allocate if we are able to 
> reclaim the amount of memory that we think we can.
> 
> The issue is that this supposes that looping to reclaim more will result 
> in more free memory.  That doesn't always happen if there are concurrent 
> memory allocators.
> 
> GFP_ATOMIC allocators can access below these per-zone watermarks.  So the 
> issue is that per-zone free pages stays between ALLOC_HIGH watermarks 
> (the watermark that GFP_ATOMIC allocators can allocate to) and min 
> watermarks.  We never reclaim enough memory to get back to min watermarks 
> because reclaim cannot keep up with the amount of GFP_ATOMIC allocations.

But there should be an upper bound on the total amount of in-flight
GFP_ATOMIC memory at any point in time?  These aren't like pagecache
which will take more if we give it more.  Setting the various
thresholds appropriately should ensure that blockable allocations don't
get their memory stolen by GFP_ATOMIC allocations?

I took a look at doing a quick-fix for the
direct-reclaimers-get-their-stuff-stolen issue about a million years
ago.  I don't recall where it ended up.  It's pretty trivial for the
direct reclaimer to free pages into current->reclaimed_pages and to
take a look in there on the allocation path, etc.  But it's only
practical for order-0 pages.




Thread overview: 21+ messages
2020-04-24 20:48 [patch] mm, oom: stop reclaiming if GFP_ATOMIC will start failing soon David Rientjes
2020-04-25  0:32 ` Tetsuo Handa
2020-04-26  0:27 ` Andrew Morton
2020-04-26  3:04   ` Tetsuo Handa
2020-04-27  3:12   ` David Rientjes
2020-04-27  5:03     ` Tetsuo Handa
2020-04-27 20:30     ` Andrew Morton [this message]
2020-04-27 23:03       ` David Rientjes
2020-04-27 23:35         ` Andrew Morton
2020-04-28  7:43           ` Michal Hocko
2020-04-29  8:31             ` peter enderborg
2020-04-29  9:00               ` Michal Hocko
2020-04-28  9:38       ` Vlastimil Babka
2020-04-28 21:48         ` David Rientjes
2020-04-28 23:37           ` Tetsuo Handa
2020-04-29  7:51           ` Vlastimil Babka
2020-04-29  9:04             ` Michal Hocko
2020-04-29 10:45               ` Tetsuo Handa
2020-04-29 11:43                 ` Michal Hocko
2020-04-27  8:20   ` peter enderborg
2020-04-27 15:01 ` Michal Hocko
