From: Wu Fengguang <fengguang.wu@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Minchan Kim <minchan.kim@gmail.com>,
	Dave Young <hidave.darkstar@gmail.com>,
	linux-mm <linux-mm@kvack.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Mel Gorman <mel@linux.vnet.ibm.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	Christoph Lameter <cl@linux.com>,
	Dave Chinner <david@fromorbit.com>,
	David Rientjes <rientjes@google.com>
Subject: Re: [RFC][PATCH] mm: cut down __GFP_NORETRY page allocation failures
Date: Fri, 29 Apr 2011 10:28:24 +0800
Message-ID: <20110429022824.GA8061@localhost>
In-Reply-To: <20110428133644.GA12400@localhost>

> Test results:
> 
> - the failure rate is quite sensitive to the page reclaim size,
>   from 282 (WMARK_HIGH) to 704 (WMARK_MIN) to 10496 (SWAP_CLUSTER_MAX)
> 
> - the IPIs are reduced by over 100 times

They are indeed reduced by about 500 times:

CAL:     220449     220246     220372     220558     220251     219740     220043     219968   Function call interrupts
CAL:         93        463        410        540        298        282        272        306   Function call interrupts
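
The CAL counts above are read from /proc/interrupts; they can be
sampled before and after a test run with:

grep CAL /proc/interrupts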

> base kernel: vanilla 2.6.39-rc3 + __GFP_NORETRY readahead page allocation patch
> -------------------------------------------------------------------------------
> nr_alloc_fail 10496
> allocstall 1576602

> patched (WMARK_MIN)
> -------------------
> nr_alloc_fail 704
> allocstall 105551

> patched (WMARK_HIGH)
> --------------------
> nr_alloc_fail 282
> allocstall 53860

> this patch (WMARK_HIGH, limited scan)
> -------------------------------------
> nr_alloc_fail 276
> allocstall 54034

There is a bad side effect though: the much reduced "allocstall" count
means that each direct reclaim invocation takes much longer to
complete. A simple remedy is to bail out of direct reclaim after a
time threshold; the patch at the end of this mail uses ~10ms (HZ/100).
In the tests below, a 100ms threshold reduced the average reclaim
latency from 621ms to 358ms; lowering the threshold further to 20ms
did not reduce the real latencies any more.

However, the (admittedly subjective) perception is that under such a
heavy 1000-dd workload, the reduced reclaim latency hardly improves
the overall responsiveness.

base kernel
-----------

start time: 243
total time: 529

wfg@fat ~% getdelays -dip 3971
print delayacct stats ON
printing IO accounting
PID     3971


CPU             count     real total  virtual total    delay total
                  961     3176517096     3158468847   313952766099
IO              count    delay total  delay average
                    2      181251847             60ms
SWAP            count    delay total  delay average
                    0              0              0ms
RECLAIM         count    delay total  delay average
                 1205    38120615476             31ms
dd: read=16384, write=0, cancelled_write=0
wfg@fat ~% getdelays -dip 3383
print delayacct stats ON
printing IO accounting
PID     3383


CPU             count     real total  virtual total    delay total
                 1270     4206360536     4181445838   358641985177
IO              count    delay total  delay average
                    0              0              0ms
SWAP            count    delay total  delay average
                    0              0              0ms
RECLAIM         count    delay total  delay average
                 1606    39897314399             24ms
dd: read=0, write=0, cancelled_write=0

no time limit
-------------
wfg@fat ~% getdelays -dip `pidof dd`
print delayacct stats ON
printing IO accounting
PID     9609


CPU             count     real total  virtual total    delay total
                  865     2792575464     2779071029   235345541230
IO              count    delay total  delay average
                    4      300247552             60ms
SWAP            count    delay total  delay average
                    0              0              0ms
RECLAIM         count    delay total  delay average
                   32    20504634169            621ms
dd: read=106496, write=0, cancelled_write=0

100ms limit
-----------

start time: 288
total time: 514
nr_alloc_fail 1269
allocstall 128915

wfg@fat ~% getdelays -dip `pidof dd`
print delayacct stats ON
printing IO accounting
PID     5077


CPU             count     real total  virtual total    delay total
                  937     2949551600     2935087806   207877301298
IO              count    delay total  delay average
                    1      151891691            151ms
SWAP            count    delay total  delay average
                    0              0              0ms
RECLAIM         count    delay total  delay average
                   71    25475514278            358ms
dd: read=507904, write=0, cancelled_write=0

PID     5101


CPU             count     real total  virtual total    delay total
                 1201     3827418144     3805399187   221075772599
IO              count    delay total  delay average
                    4      300331997             60ms
SWAP            count    delay total  delay average
                    0              0              0ms
RECLAIM         count    delay total  delay average
                   94    31996779648            336ms
dd: read=618496, write=0, cancelled_write=0

nr_alloc_fail 937
allocstall 128684

slabs_scanned 63616
kswapd_steal 4616011
kswapd_inodesteal 5
kswapd_low_wmark_hit_quickly 5394
kswapd_high_wmark_hit_quickly 2826
kswapd_skip_congestion_wait 0
pageoutrun 36679

20ms limit
----------

start time: 294
total time: 516
nr_alloc_fail 1662
allocstall 132101

CPU             count     real total  virtual total    delay total
                  839     2750581848     2734464704   198489159459
IO              count    delay total  delay average
                    1       43566814             43ms
SWAP            count    delay total  delay average
                    0              0              0ms
RECLAIM         count    delay total  delay average
                   95    35234061367            370ms
dd: read=20480, write=0, cancelled_write=0

test script
-----------
tic=$(date +'%s')

# create 1000 sparse 1GB files and kick off a background dd reader on each
for i in `seq 1000`
do
        truncate -s 1G /fs/sparse-$i
        dd if=/fs/sparse-$i of=/dev/null &>/dev/null &
done

tac=$(date +'%s')
echo start time: $((tac-tic))

wait

tac=$(date +'%s')
echo total time: $((tac-tic))

egrep '(nr_alloc_fail|allocstall)' /proc/vmstat
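
For reference, the per-task delay figures above come from the
getdelays tool shipped in the kernel source tree. A minimal build/run
sketch (assuming CONFIG_TASKSTATS and CONFIG_TASK_DELAY_ACCT are
enabled; include paths may need adjusting for your tree):

# getdelays.c ships in the kernel source
cd Documentation/accounting
gcc -o getdelays getdelays.c
# -d: print delay accounting stats, -i: print IO accounting
# -p: target PID
./getdelays -dip `pidof dd`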

Thanks,
Fengguang
---
Subject: mm: limit direct reclaim delays
Date: Fri Apr 29 09:04:11 CST 2011

Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 mm/vmscan.c |   14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

--- linux-next.orig/mm/vmscan.c	2011-04-29 09:02:42.000000000 +0800
+++ linux-next/mm/vmscan.c	2011-04-29 09:04:10.000000000 +0800
@@ -2037,6 +2037,7 @@ static unsigned long do_try_to_free_page
 	struct zone *zone;
 	unsigned long writeback_threshold;
 	unsigned long min_reclaim = sc->nr_to_reclaim;
+	unsigned long start_time = jiffies;
 
 	get_mems_allowed();
 	delayacct_freepages_start();
@@ -2070,11 +2071,14 @@ static unsigned long do_try_to_free_page
 			}
 		}
 		total_scanned += sc->nr_scanned;
-		if (sc->nr_reclaimed >= min_reclaim &&
-		    total_scanned > 2 * sc->nr_to_reclaim)
-			goto out;
-		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
-			goto out;
+		if (sc->nr_reclaimed >= min_reclaim) {
+			if (sc->nr_reclaimed >= sc->nr_to_reclaim)
+				goto out;
+			if (total_scanned > 2 * sc->nr_to_reclaim)
+				goto out;
+			if (jiffies - start_time > HZ / 100)
+				goto out;
+		}
 
 		/*
 		 * Try to write back as many pages as we just scanned.  This

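Side note on the bailout test: the open-coded
"jiffies - start_time > HZ / 100" works, but such checks are
conventionally written with the time_after() helper from
<linux/jiffies.h>. A minimal sketch of an equivalent check (the
function name is illustrative only, not from the patch):

#include <linux/jiffies.h>	/* jiffies, HZ, time_after() */

/*
 * Has direct reclaim been running for more than ~10ms?
 * HZ/100 jiffies corresponds to 10ms independent of the configured
 * HZ value (modulo integer rounding, e.g. at HZ=250).
 */
static bool reclaim_time_exceeded(unsigned long start_time)
{
	return time_after(jiffies, start_time + HZ / 100);
}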