public inbox for linux-kernel@vger.kernel.org
From: Johannes Weiner <hannes@cmpxchg.org>
To: Fengguang Wu <fengguang.wu@intel.com>
Cc: Michal Hocko <mhocko@suse.cz>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	Mel Gorman <mgorman@suse.de>, Minchan Kim <minchan@kernel.org>,
	Rik van Riel <riel@redhat.com>, Ying Han <yinghan@google.com>,
	Greg Thelen <gthelen@google.com>, Hugh Dickins <hughd@google.com>
Subject: Re: [RFC -mm] memcg: prevent from OOM with too many dirty pages
Date: Tue, 29 May 2012 09:28:53 +0200	[thread overview]
Message-ID: <20120529072853.GD1734@cmpxchg.org> (raw)
In-Reply-To: <20120529030857.GA7762@localhost>

On Tue, May 29, 2012 at 11:08:57AM +0800, Fengguang Wu wrote:
> Hi Michal,
> 
> On Mon, May 28, 2012 at 05:38:55PM +0200, Michal Hocko wrote:
> > The current implementation of dirty page throttling is not memcg aware,
> > which makes it easy to fill the LRUs with dirty pages.  This can lead to
> > a memcg OOM when the hard limit is small, because the lists are then
> > scanned faster than pages can be written back.
> > 
> > This patch fixes the problem by throttling the allocating process
> > (possibly a writer) during hard limit reclaim, waiting on PageReclaim
> > pages.  We wait only for PageReclaim pages because those have made one
> > full round over the LRU, which means that writeback is much slower
> > than scanning.
> > The solution is far from ideal - the long-term solution is memcg-aware
> > dirty throttling - but it is meant as a band-aid until we have a real
> > fix.
> 
> IMHO it's still an important "band-aid" -- perhaps worth sending to
> Greg's stable trees.  It fixes a really important use case: it enables
> users to run backups inside a small memcg.
> 
> The user-visible change is:
> 
>         the backup program gets OOM killed
> =>
>         it now runs, albeit a bit slowly and bumpily

The problem is workloads that /don't/ have excessive dirty pages, but
instantiate clean page cache at a much faster rate than writeback can
clean the few dirty ones.  The dirty/writeback pages reach the end of
the LRU several times while there are always easily reclaimable pages
around.

This was the rationale for introducing the backoff function that
considers the dirty-page percentage of all pages looked at (bottom of
shrink_active_list) and for removing all the other sleeps that didn't
look at the bigger picture and caused problems.  I'd hate to see them
come back.

On the other hand, is there a chance to make this backoff function
work for memcgs?  Right now it only applies to the global case, so as
not to mark a whole zone congested because of some dirty pages on a
single memcg LRU.  But maybe it could work by tracking congestion on a
per-lruvec basis rather than per-zone?


Thread overview: 15+ messages
2012-05-28 15:38 [RFC -mm] memcg: prevent from OOM with too many dirty pages Michal Hocko
2012-05-29  3:08 ` Fengguang Wu
2012-05-29  7:28   ` Johannes Weiner [this message]
2012-05-29  8:48     ` Fengguang Wu
2012-05-29  9:35       ` Johannes Weiner
2012-05-29 10:21         ` Fengguang Wu
2012-05-29 13:32         ` Mel Gorman
2012-05-29 13:51         ` Michal Hocko
2012-05-31  9:09           ` Michal Hocko
2012-06-01  8:37             ` Michal Hocko
2012-06-07 14:45               ` Michal Hocko
2012-06-14  7:27                 ` Johannes Weiner
2012-06-14 10:13                   ` Michal Hocko
2012-05-31 15:18           ` Fengguang Wu
     [not found]             ` <20120531153249.GD12809@tiehlicka.suse.cz>
     [not found]               ` <20120531154248.GA32734@localhost>
     [not found]                 ` <20120531154859.GA20546@tiehlicka.suse.cz>
     [not found]                   ` <20120531160129.GA439@localhost>
     [not found]                     ` <20120531182509.GA22539@tiehlicka.suse.cz>
2012-06-01  1:33                       ` Fengguang Wu
