From: Wu Fengguang <fengguang.wu@intel.com>
To: Mel Gorman <mel@csn.ul.ie>
Cc: Christoph Hellwig <hch@infradead.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Dave Chinner <david@fromorbit.com>,
	Chris Mason <chris.mason@oracle.com>,
	Nick Piggin <npiggin@suse.de>, Rik van Riel <riel@redhat.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Minchan Kim <minchan.kim@gmail.com>
Subject: Re: [PATCH 7/8] writeback: sync old inodes first in background writeback
Date: Fri, 23 Jul 2010 17:45:15 +0800	[thread overview]
Message-ID: <20100723094515.GD5043@localhost> (raw)
In-Reply-To: <20100722104823.GF13117@csn.ul.ie>

On Thu, Jul 22, 2010 at 06:48:23PM +0800, Mel Gorman wrote:
> On Thu, Jul 22, 2010 at 05:21:55PM +0800, Wu Fengguang wrote:
> > > I guess this new patch is more problem oriented and acceptable:
> > > 
> > > --- linux-next.orig/mm/vmscan.c	2010-07-22 16:36:58.000000000 +0800
> > > +++ linux-next/mm/vmscan.c	2010-07-22 16:39:57.000000000 +0800
> > > @@ -1217,7 +1217,8 @@ static unsigned long shrink_inactive_lis
> > >  			count_vm_events(PGDEACTIVATE, nr_active);
> > >  
> > >  			nr_freed += shrink_page_list(&page_list, sc,
> > > -							PAGEOUT_IO_SYNC);
> > > +					priority < DEF_PRIORITY / 3 ?
> > > +					PAGEOUT_IO_SYNC : PAGEOUT_IO_ASYNC);
> > >  		}
> > >  
> > >  		nr_reclaimed += nr_freed;
> > 
> > This one looks better:
> > ---
> > vmscan: raise the bar to PAGEOUT_IO_SYNC stalls
> > 
> > Fix "system goes totally unresponsive with many dirty/writeback pages"
> > problem:
> > 
> > 	http://lkml.org/lkml/2010/4/4/86
> > 
> > The root cause is that wait_on_page_writeback() is called too early in
> > the direct reclaim path, which blocks many random/unrelated processes
> > while some slow (USB stick) writeback is under way.
> > 
> 
> So, what's the bet if lumpy reclaim is a factor that it's
> high-order-but-low-cost such as fork() that are getting caught by this since
> [78dc583d: vmscan: low order lumpy reclaim also should use PAGEOUT_IO_SYNC]
> was introduced?

Sorry, I'm a bit confused by your wording.

> That could manifest to the user as stalls when creating new processes under
> heavy IO. I would be surprised if it froze the entire system, but certainly
> any new work would feel very slow.
> 
> > A simple dd can easily create a big range of dirty pages in the LRU
> > list. Therefore priority can easily go below (DEF_PRIORITY - 2) in a
> > typical desktop, which triggers the lumpy reclaim mode and hence
> > wait_on_page_writeback().
> > 
> 
> which triggers the lumpy reclaim mode for high-order allocations.

Exactly. Changelog updated.

> lumpy reclaim mode is not something that is triggered just because priority
> is high.

Right.

> I think there is a second possibility for causing stalls as well that is
> unrelated to lumpy reclaim. Once dirty_limit is reached, new page faults may
> also result in stalls. If it is taking a long time to writeback dirty data,
> random processes could be getting stalled just because they happened to dirty
> data at the wrong time.  This would be the case if the main dirtying process
> (e.g. dd) is not calling sync and dropping pages it's no longer using.

The dirty_limit throttling will slow the dirtying process down to the
writeback throughput. If a process is dirtying files on sda (HDD),
it will be throttled at 80MB/s. If another process is dirtying files
on sdb (USB 1.1), it will be throttled at 1MB/s.

So dirty throttling will slow things down. However, the slowdown
should be smooth (a series of 100ms stalls instead of one sudden 10s
stall), and it won't impact random processes (those doing no read/write
IO at all).

> > In Andreas' case, 512MB/1024 = 512KB, which is way too low compared to
> > the 22MB writeback and 190MB dirty pages. There can easily be a
> > continuous range of 512KB dirty/writeback pages in the LRU, which will
> > trigger the wait logic.
> > 
> > To make it worse, when there are 50MB of writeback pages and USB 1.1 is
> > writing them at 1MB/s, wait_on_page_writeback() may get stuck for up to
> > 50 seconds.
> > 
> > So only enter sync write&wait when priority goes below DEF_PRIORITY/3,
> > or 6.25% LRU. As the default dirty throttle ratio is 20%, sync write&wait
> > will hardly be triggered by pure dirty pages.
> > 
> > Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
> > ---
> >  mm/vmscan.c |    4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > --- linux-next.orig/mm/vmscan.c	2010-07-22 16:36:58.000000000 +0800
> > +++ linux-next/mm/vmscan.c	2010-07-22 17:03:47.000000000 +0800
> > @@ -1206,7 +1206,7 @@ static unsigned long shrink_inactive_lis
> >  		 * but that should be acceptable to the caller
> >  		 */
> >  		if (nr_freed < nr_taken && !current_is_kswapd() &&
> > -		    sc->lumpy_reclaim_mode) {
> > +		    sc->lumpy_reclaim_mode && priority < DEF_PRIORITY / 3) {
> >  			congestion_wait(BLK_RW_ASYNC, HZ/10);
> >  
> 
> This will also delay waiting on congestion for really high-order
> allocations such as huge pages, some video decoder and the like which
> really should be stalling.

I absolutely agree that high-order allocators should be throttled somehow.

However, given that one can easily create a large _continuous_ range of
dirty LRU pages, forcing an allocator to wait its way through the whole
range sounds a bit cruel..

> How about the following compile-tested diff?
> It takes the cost of the high-order allocation into account and the
> priority when deciding whether to synchronously wait or not.

Very nice patch. Thanks!

Cheers,
Fengguang

> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9c7e57c..d652e0c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1110,6 +1110,48 @@ static int too_many_isolated(struct zone *zone, int file,
>  }
>  
>  /*
> + * Returns true if the caller should stall on congestion and retry to clean
> + * the list of pages synchronously.
> + *
> + * If we are direct reclaiming for contiguous pages and we do not reclaim
> + * everything in the list, try again and wait for IO to complete. This
> + * will stall high-order allocations but that should be acceptable to
> + * the caller
> + */
> +static inline bool should_reclaim_stall(unsigned long nr_taken,
> +				unsigned long nr_freed,
> +				int priority,
> +				struct scan_control *sc)
> +{
> +	int lumpy_stall_priority;
> +
> +	/* kswapd should not stall on sync IO */
> +	if (current_is_kswapd())
> +		return false;
> +
> +	/* Only stall on lumpy reclaim */
> +	if (!sc->lumpy_reclaim_mode)
> +		return false;
> +
> +	/* If we have reclaimed everything on the isolated list, no stall */
> +	if (nr_freed == nr_taken)
> +		return false;
> +
> +	/*
> +	 * For high-order allocations, there are two stall thresholds.
> +	 * High-cost allocations stall immediately, whereas lower-
> +	 * order allocations such as stacks require the scanning
> +	 * priority to be much higher before stalling
> +	 */
> +	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
> +		lumpy_stall_priority = DEF_PRIORITY;
> +	else
> +		lumpy_stall_priority = DEF_PRIORITY / 3;
> +
> +	return priority <= lumpy_stall_priority;
> +}
> +
> +/*
>   * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
>   * of reclaimed pages
>   */
> @@ -1199,14 +1241,8 @@ static unsigned long shrink_inactive_list(unsigned long max_scan,
>  		nr_scanned += nr_scan;
>  		nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
>  
> -		/*
> -		 * If we are direct reclaiming for contiguous pages and we do
> -		 * not reclaim everything in the list, try again and wait
> -		 * for IO to complete. This will stall high-order allocations
> -		 * but that should be acceptable to the caller
> -		 */
> -		if (nr_freed < nr_taken && !current_is_kswapd() &&
> -		    sc->lumpy_reclaim_mode) {
> +		/* Check if we should synchronously wait for writeback */
> +		if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
>  			congestion_wait(BLK_RW_ASYNC, HZ/10);
>  
>  			/*
> 
> 
