From: Dave Chinner <david@fromorbit.com>
To: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@suse.de>,
Andrew Morton <akpm@linux-foundation.org>,
Michal Hocko <mhocko@suse.cz>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [patch] mm, page_alloc: allow __GFP_NOFAIL to allocate below watermarks after reclaim
Date: Thu, 12 Dec 2013 12:10:38 +1100 [thread overview]
Message-ID: <20131212011038.GF31386@dastard> (raw)
In-Reply-To: <alpine.DEB.2.02.1312101453020.22701@chino.kir.corp.google.com>
On Tue, Dec 10, 2013 at 03:03:39PM -0800, David Rientjes wrote:
> On Tue, 10 Dec 2013, Mel Gorman wrote:
>
> > > If direct reclaim has failed to free memory, __GFP_NOFAIL allocations
> > > can potentially loop forever in the page allocator. In this case, it's
> > > better to give them the ability to allocate below the watermarks,
> > > a privilege similar to the one given to GFP_ATOMIC allocations.
> > >
> > > We're careful to ensure this is only done after direct reclaim has had
> > > the chance to free memory, however.
> > >
> > > Signed-off-by: David Rientjes <rientjes@google.com>
> >
> > The main problem with doing something like this is that it just smacks
> > into the adjusted watermark if there are a number of __GFP_NOFAIL
> > allocations. Who
> > was the user of __GFP_NOFAIL that was fixed by this patch?
> >
>
> Nobody, it comes out of a memcg discussion where __GFP_NOFAIL allocations
> were recently given the ability to bypass charges to the root memcg when
> the memcg has hit its limit, since we don't allow the oom killer to kill a
> process
> (for the same reason that the vast majority of __GFP_NOFAIL users, those
> that do GFP_NOFS | __GFP_NOFAIL, disallow the oom killer in the page
> allocator).
>
> Without some other thread freeing memory, these allocations simply loop
> forever.
So what is kswapd doing in this situation?
> Since there are comments in both gfp.h and page_alloc.c that say no new
> users will be added, it seems legitimate to ensure that the allocation
> will at least have a chance of succeeding, but not to the point of depleting
> memory reserves entirely.
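[For reference, a minimal sketch of the idea as the changelog above describes
it. This is only an illustration against the allocator slowpath of that era
(did_some_progress, alloc_flags, ALLOC_HARDER), not the actual hunk from the
patch under discussion:

	/*
	 * After direct reclaim has run and made no progress, let a
	 * __GFP_NOFAIL allocation dip below the min watermark the way
	 * ALLOC_HARDER (GFP_ATOMIC-style) allocations do, rather than
	 * granting ALLOC_NO_WATERMARKS, so the reserves are not drained
	 * completely.
	 */
	if (!did_some_progress && (gfp_mask & __GFP_NOFAIL))
		alloc_flags |= ALLOC_HARDER;
]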
As I said before, the filesystem will then simply keep allocating
memory until it hits the next limit, and then you're back in the
same situation. Moving the limit at which it fails does not solve
the problem at all.
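[To make that objection concrete, here is a simplified model of the
__GFP_NOFAIL retry behaviour in the slowpath of that era; it is not a
verbatim quote of mm/page_alloc.c and most arguments are elided:

	for (;;) {
		/* try the freelists at the current watermark/alloc_flags */
		page = get_page_from_freelist(gfp_mask, order, alloc_flags, ...);
		if (page)
			break;

		/* run direct reclaim, then try again */
		page = __alloc_pages_direct_reclaim(gfp_mask, order, ...,
						    &did_some_progress);
		if (page)
			break;

		/*
		 * __GFP_NOFAIL means "never return NULL": with no reclaim
		 * progress the loop spins until someone else frees memory.
		 * Letting it dip further into the reserves only moves the
		 * point at which it starts spinning.
		 */
		if (!(gfp_mask & __GFP_NOFAIL))
			break;	/* ordinary allocations give up eventually */

		wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/50);
	}
]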
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 5+ messages
2013-12-09 22:03 [patch] mm, page_alloc: allow __GFP_NOFAIL to allocate below watermarks after reclaim David Rientjes
2013-12-10 7:50 ` Mel Gorman
2013-12-10 23:03 ` David Rientjes
2013-12-11 9:26 ` Mel Gorman
2013-12-12 1:10 ` Dave Chinner [this message]