* [PATCH] readahead: make context readahead more conservative
@ 2011-12-14  6:41 Wu Fengguang
From: Wu Fengguang @ 2011-12-14  6:41 UTC (permalink / raw)
  To: Tao Ma
  Cc: Jan Kara, Andrew Morton, Andi Kleen, Ingo Molnar, Jens Axboe,
	Peter Zijlstra, Rik van Riel, Linux Memory Management List,
	linux-fsdevel@vger.kernel.org, LKML

Try to avoid negatively impacting moderately dense random reads on SSD.

Transaction-Per-Second numbers provided by Taobao:

		QPS	case
		-------------------------------------------------------
		7536	disable context readahead totally
w/ patch:	7129	slower size rampup and start RA on the 3rd read
		6717	slower size rampup
w/o patch:	5581	unmodified context readahead

Before the patch, readahead was started whenever page N+1 was read and page
N happened to have been read recently. After the patch, readahead is only
started when *three* random reads happen to access pages N, N+1 and N+2. The
probability of that happening is extremely low for pure random reads, unless
they are very dense, in which case some readahead is actually deserved.
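
For illustration only, here is a small userspace sketch of the trigger change
(the cached[] array and count_history() below are toy stand-ins for the page
cache and count_history_pages(); this is not the kernel code itself):

	#include <stdbool.h>
	#include <stdio.h>

	#define NR_PAGES	64
	static bool cached[NR_PAGES];	/* toy page cache for one file */

	/* count cached pages immediately before @offset,
	 * in the spirit of count_history_pages() */
	static unsigned int count_history(unsigned int offset)
	{
		unsigned int n = 0;

		while (n < offset && cached[offset - 1 - n])
			n++;
		return n;
	}

	/* old rule: any history page at all starts context readahead */
	static bool trigger_old(unsigned int offset, unsigned int req_size)
	{
		(void)req_size;
		return count_history(offset) > 0;
	}

	/* new rule: history must exceed the request size, so a 1-page
	 * read at N+2 triggers only after both N and N+1 were read */
	static bool trigger_new(unsigned int offset, unsigned int req_size)
	{
		return count_history(offset) > req_size;
	}

	int main(void)
	{
		cached[10] = true;	/* page N read earlier */
		printf("read N+1: old=%d new=%d\n",
		       trigger_old(11, 1), trigger_new(11, 1));

		cached[11] = true;	/* page N+1 read as well */
		printf("read N+2: old=%d new=%d\n",
		       trigger_old(12, 1), trigger_new(12, 1));
		return 0;
	}

It prints old=1 new=0 for the N+1 read and old=1 new=1 for the N+2 read,
matching the three-reads rule described above.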

Also start with a smaller readahead window. The impact on interleaved
sequential reads should be small, because for a long-running stream the
small-window ramp-up phase is negligible.
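
For a concrete example (a rough illustration, assuming the default 128 KB /
32 page maximum window and going by get_init_ra_size()'s ramp-up rules): with
two history pages and a single-page read, the old code started with roughly
an 8-page window via get_init_ra_size(2 + 1, max) and marked the whole window
async, so the next readahead was triggered as soon as the application read
into it; the new code starts with min(2 + 1, max) = 3 pages and
async_size = 1, so the window only keeps growing if the reads really continue
as a stream.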

Context readahead actually benefits clustered random reads on HDDs, whose
seek cost is pretty high. However, as SSDs are increasingly used for random
read workloads, it's better for context readahead to concentrate on
interleaved sequential reads.

Tested-by: Tao Ma <tm@tao.ma>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 mm/readahead.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Posted for review first; this will be included in the next readahead series.

--- linux-next.orig/mm/readahead.c	2011-12-14 08:57:29.000000000 +0800
+++ linux-next/mm/readahead.c	2011-12-14 08:59:24.000000000 +0800
@@ -594,10 +594,10 @@ static int try_context_readahead(struct 
 	size = count_history_pages(mapping, ra, offset, max);
 
 	/*
-	 * no history pages:
+	 * not enough history pages:
 	 * it could be a random read
 	 */
-	if (!size)
+	if (size <= req_size)
 		return 0;
 
 	/*
@@ -609,8 +609,8 @@ static int try_context_readahead(struct 
 
 	ra->pattern = RA_PATTERN_CONTEXT;
 	ra->start = offset;
-	ra->size = get_init_ra_size(size + req_size, max);
-	ra->async_size = ra->size;
+	ra->size = min(size + req_size, max);
+	ra->async_size = 1;
 
 	return 1;
 }
