From: Mel Gorman <mel@csn.ul.ie>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Simon Kirby <sim@hostway.ca>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Rik van Riel <riel@redhat.com>,
	linux-mm@kvack.org
Subject: Re: [patch] mm: skip rebalance of hopeless zones
Date: Thu, 9 Dec 2010 14:44:12 +0000
Message-ID: <20101209144412.GE20133@csn.ul.ie>
In-Reply-To: <20101208172324.d45911f4.akpm@linux-foundation.org>

On Wed, Dec 08, 2010 at 05:23:24PM -0800, Andrew Morton wrote:
> On Wed, 8 Dec 2010 16:36:21 -0800 Simon Kirby <sim@hostway.ca> wrote:
> 
> > On Wed, Dec 08, 2010 at 04:16:59PM +0100, Johannes Weiner wrote:
> > 
> > > Kswapd tries to rebalance zones persistently until their high
> > > watermarks are restored.
> > > 
> > > If the amount of unreclaimable pages in a zone makes this impossible
> > > for reclaim, though, kswapd will end up in a busy loop without a
> > > chance of reaching its goal.
> > > 
> > > This behaviour was observed on a virtual machine with a tiny
> > > Normal-zone that filled up with unreclaimable slab objects.
> > > 
> > > This patch makes kswapd skip rebalancing on such 'hopeless' zones and
> > > leaves them to direct reclaim.
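
To spell out the arithmetic for anyone following along: a zone can only
be balanced if its free pages plus everything reclaim could ever free
add up to the high watermark. A minimal sketch of such a check
("zone_hopeless" is my name for it, not necessarily what the patch
uses):

    /*
     * A zone is hopeless when freeing every reclaimable page would
     * still leave it under its high watermark; kswapd can never
     * balance it, so it should not busy-loop trying.
     */
    static bool zone_hopeless(struct zone *zone)
    {
            return zone_page_state(zone, NR_FREE_PAGES) +
                   zone_reclaimable_pages(zone) <
                   high_wmark_pages(zone);
    }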
> > 
> > Hi!
> > 
> > We are experiencing a similar issue, though with a 757 MB Normal zone,
> > where kswapd tries to rebalance Normal after an order-3 allocation while
> > page cache allocations (order-0) keep splitting it back up again.  It can
> > run the whole day like this (SSD storage) without sleeping.
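
(For reference, this is just the buddy allocator doing its job: when
the low-order free lists are empty, an order-0 request takes the next
block up -- possibly an order-3 one kswapd just assembled -- and splits
it. A simplified shape of __rmqueue_smallest(), with the migratetype
handling omitted:

    for (current_order = order; current_order < MAX_ORDER; current_order++) {
            struct free_area *area = &zone->free_area[current_order];
            struct page *page;

            if (list_empty(&area->free_list))
                    continue;

            page = list_entry(area->free_list.next, struct page, lru);
            list_del(&page->lru);
            area->nr_free--;
            /* Return the unused halves to the lower free lists. */
            expand(zone, page, order, current_order, area);
            return page;
    }

So order-0 allocations refilling the per-cpu lists can steadily undo
kswapd's progress towards an order-3 block.)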
> 
> People at google have told me they've seen the same thing.  A fork is
> taking 15 minutes when someone else is doing a dd, because the fork
> enters direct-reclaim trying for an order-one page.  It successfully
> frees some order-one pages but before it gets back to allocate one, dd
> has gone and stolen them, or split them apart.
> 

Is there a known test case for this, or should I look at doing a
streaming-IO test with a basic workload constantly forking in the
background to measure the fork latency?
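
Something like this is what I have in mind for the fork side -- a toy
latency sampler to run next to a streaming dd (a rough sketch, not a
tuned benchmark):

    /* fork-latency sampler: run beside e.g. "dd if=/dev/zero of=big bs=1M" */
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            for (;;) {
                    struct timeval start, end;
                    pid_t pid;

                    gettimeofday(&start, NULL);
                    pid = fork();
                    if (pid == 0)
                            _exit(0);       /* child: exit immediately */
                    gettimeofday(&end, NULL);

                    if (pid < 0) {
                            perror("fork");
                            return 1;
                    }
                    waitpid(pid, NULL, 0);

                    printf("fork latency: %ld us\n",
                           (long)((end.tv_sec - start.tv_sec) * 1000000L +
                                  (end.tv_usec - start.tv_usec)));
                    sleep(1);
            }
    }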

> This problem would have got worse when slub came along doing its stupid
> unnecessary high-order allocations.
> 
> Billions of years ago a direct-reclaimer had a one-deep cache in the
> task_struct into which it freed the page to prevent it from getting
> stolen.
> 
> Later, we took that out because pages were being freed into the
> per-cpu-pages magazine, which is effectively task-local anyway.  But
> per-cpu-pages are only for order-0 pages.  See slub stupidity, above.
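
For reference, roughly how I remember that one-deep cache working --
field names here are illustrative, this is a reconstruction rather than
the old code:

    /* in struct task_struct: a private stash for the direct reclaimer */
    struct page *reclaimed_page;

    /* free path: a reclaiming task keeps the page for itself */
    if ((current->flags & PF_MEMALLOC) && !current->reclaimed_page) {
            current->reclaimed_page = page;
            return;
    }
    __free_pages_ok(page, order);

    /* allocation retry: consume the stash before the free lists */
    if (current->reclaimed_page) {
            page = current->reclaimed_page;
            current->reclaimed_page = NULL;
            return page;
    }

The point being that the page never touched the global free lists, so
nobody else could steal it between the reclaim and the retry.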
> 
> I expect that this is happening so repeatably because the
> direct-reclaimer is doing a sleep somewhere after freeing the pages it
> needs - if it wasn't doing that then surely the window wouldn't be wide
> enough for it to happen so often.  But I didn't look.
> 
> Suitable fixes might be
> 
> a) don't go to sleep after the successful direct-reclaim.
> 

I submitted a patch for this a long time ago, but at the time we didn't
have a test case where it made a measurable difference, so it might be
worth revisiting. I can't find the patch any more, but it was fairly
trivial.
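
From memory, the gist was along these lines in the allocator slow path
(an untested reconstruction, not the original diff):

    /*
     * Only throttle on congestion when reclaim made no progress;
     * if we freed pages, retry the allocation immediately before
     * another task steals or splits them.
     */
    if (should_alloc_retry(gfp_mask, order, pages_reclaimed)) {
            if (!did_some_progress)
                    congestion_wait(BLK_RW_ASYNC, HZ/50);
            goto rebalance;
    }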

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
