From: Wu Fengguang <fengguang.wu@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>, Theodore Tso <tytso@mit.edu>,
	Dave Chinner <david@fromorbit.com>,
	Chris Mason <chris.mason@oracle.com>,
	Peter Zijlstra <a.p.zijlstra@chello.nl>,
	Wu Fengguang <fengguang.wu@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: <linux-fsdevel@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH 35/47] writeback: scale IO chunk size up to device bandwidth
Date: Mon, 13 Dec 2010 14:43:24 +0800	[thread overview]
Message-ID: <20101213064841.322569223@intel.com> (raw)
In-Reply-To: <20101213064249.648862451@intel.com>

[-- Attachment #1: writeback-128M-MAX_WRITEBACK_PAGES.patch --]
[-- Type: text/plain, Size: 5257 bytes --]

Originally, MAX_WRITEBACK_PAGES was hard-coded to 1024 out of a concern
about holding I_SYNC for too long.  (At least, that was the
comment previously.)  This doesn't make sense now because the only
time we wait for I_SYNC is if we are calling sync or fsync, and in
that case we need to write out all of the data anyway.  Previously
there may have been other code paths that waited on I_SYNC, but not
any more.					    -- Theodore Ts'o

According to Christoph, the current writeback size is way too small,
and XFS had a hack that bumped up nr_to_write to four times the value
sent by the VM in order to saturate medium-sized RAID arrays.  The old
value was also problematic for ext4, as it caused large files to become
interleaved on disk in 8 megabyte chunks (we bumped up nr_to_write by a
factor of two).

So remove the MAX_WRITEBACK_PAGES constraint entirely.  The per-inode
write chunk will now scale up to as many pages as the storage device
can write within 1 second.

For a typical hard disk, the resulting chunk size will be 32MB or 64MB.
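
To illustrate the numbers, here is a minimal user-space sketch of the
above computation (not the kernel code; it assumes 4KB pages, i.e.
PAGE_CACHE_SHIFT == 12, and open-codes rounddown_pow_of_two()):

    #include <stdio.h>

    #define PAGE_CACHE_SHIFT    12
    /* 4MB minimal write chunk size in pages (= 1024 with 4KB pages) */
    #define MIN_WRITEBACK_PAGES (4096 >> (PAGE_CACHE_SHIFT - 10))

    /* stand-in for the kernel's rounddown_pow_of_two() helper */
    static unsigned long rounddown_pow_of_two(unsigned long n)
    {
        unsigned long p = 1;

        while (p * 2 <= n)
            p *= 2;
        return p;
    }

    /* bdi->write_bandwidth is maintained in pages per second */
    static unsigned long chunk_size(unsigned long write_bandwidth)
    {
        if (write_bandwidth < MIN_WRITEBACK_PAGES)
            return MIN_WRITEBACK_PAGES;
        return rounddown_pow_of_two(write_bandwidth);
    }

    int main(void)
    {
        /* a disk writing ~60MB/s covers ~15360 pages per second */
        unsigned long bw = 60 * 1024 * 1024 / 4096;

        /* prints 8192 pages (32MB); a ~120MB/s disk would get 64MB */
        printf("%lu pages (%luMB)\n", chunk_size(bw),
               chunk_size(bw) >> (20 - PAGE_CACHE_SHIFT));
        return 0;
    }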

http://bugzilla.kernel.org/show_bug.cgi?id=13930

CC: Theodore Ts'o <tytso@mit.edu>
CC: Dave Chinner <david@fromorbit.com>
CC: Chris Mason <chris.mason@oracle.com>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c         |   60 +++++++++++++++++++-----------------
 include/linux/writeback.h |    5 +++
 2 files changed, 38 insertions(+), 27 deletions(-)

--- linux-next.orig/fs/fs-writeback.c	2010-12-09 12:24:57.000000000 +0800
+++ linux-next/fs/fs-writeback.c	2010-12-09 12:24:58.000000000 +0800
@@ -602,15 +602,6 @@ static void __writeback_inodes_sb(struct
 	spin_unlock(&inode_lock);
 }
 
-/*
- * The maximum number of pages to writeout in a single bdi flush/kupdate
- * operation.  We do this so we don't hold I_SYNC against an inode for
- * enormous amounts of time, which would block a userspace task which has
- * been forced to throttle against that inode.  Also, the code reevaluates
- * the dirty each time it has written this many pages.
- */
-#define MAX_WRITEBACK_PAGES     1024
-
 static inline bool over_bground_thresh(void)
 {
 	unsigned long background_thresh, dirty_thresh;
@@ -622,6 +613,38 @@ static inline bool over_bground_thresh(v
 }
 
 /*
+ * Give each inode a nr_to_write that can complete within 1 second.
+ */
+static unsigned long writeback_chunk_size(struct backing_dev_info *bdi,
+					  int sync_mode)
+{
+	unsigned long pages;
+
+	/*
+	 * WB_SYNC_ALL mode does livelock avoidance by syncing dirty
+	 * inodes/pages in one big loop. Setting wbc.nr_to_write=LONG_MAX
+	 * here avoids calling into writeback_inodes_wb() more than once.
+	 *
+	 * The intended call sequence for WB_SYNC_ALL writeback is:
+	 *
+	 *      wb_writeback()
+	 *          __writeback_inodes_sb()     <== called only once
+	 *              write_cache_pages()     <== called once for each inode
+	 *                  (quickly) tag currently dirty pages
+	 *                  (maybe slowly) sync all tagged pages
+	 */
+	if (sync_mode == WB_SYNC_ALL)
+		return LONG_MAX;
+
+	pages = bdi->write_bandwidth;
+
+	if (pages < MIN_WRITEBACK_PAGES)
+		return MIN_WRITEBACK_PAGES;
+
+	return rounddown_pow_of_two(pages);
+}
+
+/*
  * Explicit flushing or periodic writeback of "old" data.
  *
  * Define "old": the first time one of an inode's pages is dirtied, we mark the
@@ -661,24 +684,6 @@ static long wb_writeback(struct bdi_writ
 		wbc.range_end = LLONG_MAX;
 	}
 
-	/*
-	 * WB_SYNC_ALL mode does livelock avoidance by syncing dirty
-	 * inodes/pages in one big loop. Setting wbc.nr_to_write=LONG_MAX
-	 * here avoids calling into writeback_inodes_wb() more than once.
-	 *
-	 * The intended call sequence for WB_SYNC_ALL writeback is:
-	 *
-	 *      wb_writeback()
-	 *          __writeback_inodes_sb()     <== called only once
-	 *              write_cache_pages()     <== called once for each inode
-	 *                   (quickly) tag currently dirty pages
-	 *                   (maybe slowly) sync all tagged pages
-	 */
-	if (wbc.sync_mode == WB_SYNC_NONE)
-		write_chunk = MAX_WRITEBACK_PAGES;
-	else
-		write_chunk = LONG_MAX;
-
 	wbc.wb_start = jiffies; /* livelock avoidance */
 	bdi_update_write_bandwidth(wb->bdi, wbc.wb_start);
 
@@ -707,6 +712,7 @@ static long wb_writeback(struct bdi_writ
 			break;
 
 		wbc.more_io = 0;
+		write_chunk = writeback_chunk_size(wb->bdi, wbc.sync_mode);
 		wbc.nr_to_write = write_chunk;
 		wbc.per_file_limit = write_chunk;
 		wbc.pages_skipped = 0;
--- linux-next.orig/include/linux/writeback.h	2010-12-09 12:21:03.000000000 +0800
+++ linux-next/include/linux/writeback.h	2010-12-09 12:24:58.000000000 +0800
@@ -22,6 +22,11 @@ extern spinlock_t inode_lock;
 #define TASK_SOFT_DIRTY_LIMIT	(BDI_SOFT_DIRTY_LIMIT * 2)
 
 /*
+ * 4MB minimal write chunk size
+ */
+#define MIN_WRITEBACK_PAGES     (4096 >> (PAGE_CACHE_SHIFT - 10))
+
+/*
  * fs/fs-writeback.c
  */
 enum writeback_sync_modes {


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom policy in Canada: sign http://dissolvethecrtc.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

Thread overview: 51+ messages
2010-12-13  6:42 [PATCH 00/47] IO-less dirty throttling v3 Wu Fengguang
2010-12-13  6:42 ` [PATCH 01/47] writeback: enabling gate limit for light dirtied bdi Wu Fengguang
2010-12-13  6:42 ` [PATCH 02/47] writeback: safety margin for bdi stat error Wu Fengguang
2010-12-13  6:42 ` [PATCH 03/47] writeback: IO-less balance_dirty_pages() Wu Fengguang
2010-12-13  6:42 ` [PATCH 04/47] writeback: consolidate variable names in balance_dirty_pages() Wu Fengguang
2010-12-13  6:42 ` [PATCH 05/47] writeback: per-task rate limit on balance_dirty_pages() Wu Fengguang
2010-12-13  6:42 ` [PATCH 06/47] writeback: prevent duplicate balance_dirty_pages_ratelimited() calls Wu Fengguang
2010-12-13  6:42 ` [PATCH 07/47] writeback: account per-bdi accumulated written pages Wu Fengguang
2010-12-13  6:42 ` [PATCH 08/47] writeback: bdi write bandwidth estimation Wu Fengguang
2010-12-13  6:42 ` [PATCH 09/47] writeback: show bdi write bandwidth in debugfs Wu Fengguang
2010-12-13  6:42 ` [PATCH 10/47] writeback: quit throttling when bdi dirty pages dropped low Wu Fengguang
2010-12-13  6:43 ` [PATCH 11/47] writeback: reduce per-bdi dirty threshold ramp up time Wu Fengguang
2010-12-13  6:43 ` [PATCH 12/47] writeback: make reasonable gap between the dirty/background thresholds Wu Fengguang
2010-12-13  6:43 ` [PATCH 13/47] writeback: scale down max throttle bandwidth on concurrent dirtiers Wu Fengguang
2010-12-13  6:43 ` [PATCH 14/47] writeback: add trace event for balance_dirty_pages() Wu Fengguang
2010-12-13  6:43 ` [PATCH 15/47] writeback: make nr_to_write a per-file limit Wu Fengguang
2010-12-13  6:43 ` [PATCH 16/47] writeback: make-nr_to_write-a-per-file-limit fix Wu Fengguang
2010-12-13  6:43 ` [PATCH 17/47] writeback: do uninterruptible sleep in balance_dirty_pages() Wu Fengguang
2010-12-13  6:43 ` [PATCH 18/47] writeback: move BDI_WRITTEN accounting into __bdi_writeout_inc() Wu Fengguang
2010-12-13  6:43 ` [PATCH 19/47] writeback: fix increasement of nr_dirtied_pause Wu Fengguang
2010-12-13  6:43 ` [PATCH 20/47] writeback: use do_div in bw calculation Wu Fengguang
2010-12-13  6:43 ` [PATCH 21/47] writeback: prevent divide error on tiny HZ Wu Fengguang
2010-12-13  6:43 ` [PATCH 22/47] writeback: prevent bandwidth calculation overflow Wu Fengguang
2010-12-13  6:43 ` [PATCH 23/47] writeback: spinlock protected bdi bandwidth update Wu Fengguang
2010-12-13  6:43 ` [PATCH 24/47] writeback: increase pause time on concurrent dirtiers Wu Fengguang
2010-12-13  6:43 ` [PATCH 25/47] writeback: make it easier to break from a dirty exceeded bdi Wu Fengguang
2010-12-13  6:43 ` [PATCH 26/47] writeback: start background writeback earlier Wu Fengguang
2010-12-13  6:43 ` [PATCH 27/47] writeback: user space think time compensation Wu Fengguang
2010-12-13  6:43 ` [PATCH 28/47] writeback: bdi base throttle bandwidth Wu Fengguang
2010-12-13  6:43 ` [PATCH 29/47] writeback: smoothed bdi dirty pages Wu Fengguang
2010-12-13  6:43 ` [PATCH 30/47] writeback: adapt max balance pause time to memory size Wu Fengguang
2010-12-13  6:43 ` [PATCH 31/47] writeback: increase min pause time on concurrent dirtiers Wu Fengguang
2010-12-13  6:43 ` [PATCH 32/47] writeback: extend balance_dirty_pages() trace event Wu Fengguang
2010-12-13  6:43 ` [PATCH 33/47] writeback: trace global dirty page states Wu Fengguang
2010-12-13  6:43 ` [PATCH 34/47] writeback: trace writeback_single_inode() Wu Fengguang
2010-12-13  6:43 ` [PATCH 35/47] writeback: scale IO chunk size up to device bandwidth Wu Fengguang  [this message]
2010-12-13  6:43 ` [PATCH 36/47] btrfs: dont call balance_dirty_pages_ratelimited() on already dirty pages Wu Fengguang
2010-12-13  6:43 ` [PATCH 37/47] btrfs: lower the dirty balacing rate limit Wu Fengguang
2010-12-13  6:43 ` [PATCH 38/47] btrfs: wait on too many nr_async_bios Wu Fengguang
2010-12-13  6:43 ` [PATCH 39/47] nfs: livelock prevention is now done in VFS Wu Fengguang
2010-12-13  6:43 ` [PATCH 40/47] nfs: writeback pages wait queue Wu Fengguang
2010-12-13  6:43 ` [PATCH 41/47] nfs: in-commit pages accounting and " Wu Fengguang
2010-12-13  6:43 ` [PATCH 42/47] nfs: heuristics to avoid commit Wu Fengguang
2010-12-13  6:43 ` [PATCH 43/47] nfs: dont change wbc->nr_to_write in write_inode() Wu Fengguang
2010-12-13  6:43 ` [PATCH 44/47] nfs: limit the range of commits Wu Fengguang
2010-12-13  6:43 ` [PATCH 45/47] nfs: adapt congestion threshold to dirty threshold Wu Fengguang
2010-12-13  6:43 ` [PATCH 46/47] nfs: trace nfs_commit_unstable_pages() Wu Fengguang
2010-12-13  6:43 ` [PATCH 47/47] nfs: trace nfs_commit_release() Wu Fengguang
2010-12-13 11:27 ` [PATCH 00/47] IO-less dirty throttling v3 Peter Zijlstra
2010-12-13 11:49   ` Wu Fengguang
2010-12-13 12:38     ` Peter Zijlstra
