From: Wu Fengguang <fengguang.wu@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>, Li Shaohua <shaohua.li@intel.com>,
	Wu Fengguang <fengguang.wu@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: <linux-fsdevel@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH 10/35] writeback: bdi write bandwidth estimation
Date: Mon, 13 Dec 2010 22:46:56 +0800	[thread overview]
Message-ID: <20101213150327.573549251@intel.com> (raw)
In-Reply-To: <20101213144646.341970461@intel.com>

[-- Attachment #1: writeback-bandwidth-estimation-in-flusher.patch --]
[-- Type: text/plain, Size: 6828 bytes --]

The estimated value starts at 100MB/s and adapts to the real bandwidth
within seconds.  It's pretty accurate for common filesystems.
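
For reference, write_bandwidth is kept in pages per second rather than
bytes.  A minimal user-space sketch of the initialization arithmetic,
assuming the common 4KB page size (PAGE_SHIFT == 12 is an assumption
for this example, not part of the patch):

```c
/* 100 MB/s expressed in pages per second, mirroring the
 * bdi_init() initialization in this patch.  EXAMPLE_PAGE_SHIFT
 * stands in for the kernel's PAGE_SHIFT. */
#define EXAMPLE_PAGE_SHIFT 12

unsigned long initial_write_bandwidth(void)
{
	/* 100 << 20 bytes/s, shifted down to pages/s */
	return 100UL << (20 - EXAMPLE_PAGE_SHIFT);
}
```

With 4KB pages this yields 25600 pages/s, i.e. exactly 100MB/s.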

As the first use case, it replaces the fixed 100MB/s value used for
throttle bandwidth calculation in balance_dirty_pages().

The overhead won't be high because the bdi bandwidth is updated at most
once every 100ms.

Initially it's only estimated in balance_dirty_pages() because this is
the most reliable place to observe a reasonably large bandwidth -- the
bdi is normally fully utilized when bdi_thresh is reached.

Shaohua then recommended also doing it in the flusher thread, to keep
the value updated when there is only periodic/background writeback and
no task is being throttled.

The original plan was to use per-CPU variables for bdi->write_bandwidth.
However, Peter pointed out that this opens a window in which some CPUs
may see outdated values, so it was switched to spinlock-protected global
variables.

The bandwidth is only updated when the disk is fully utilized; any
inactive period of more than 500ms is skipped.
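
The skip/update policy can be sketched in user space as follows (HZ ==
1000 and the names here are illustrative assumptions, not from the
patch):

```c
#define HZ 1000   /* assumed tick rate for this sketch */

enum bw_action { BW_SNAPSHOT_ONLY, BW_TOO_SOON, BW_UPDATE };

/* elapsed: jiffies since the last bandwidth snapshot;
 * since_start: jiffies since this writeback round started. */
enum bw_action bw_decide(unsigned long elapsed, unsigned long since_start)
{
	/* Quiet period: more than 500ms passed and most of it predates
	 * this writeback round, so the disk was likely idle.  Just
	 * re-snapshot the counters rather than folding the idle time
	 * into the average. */
	if (elapsed > HZ / 2 && elapsed > since_start)
		return BW_SNAPSHOT_ONLY;
	/* Rate limit: update at most once every 100ms. */
	if (elapsed <= HZ / 10)
		return BW_TOO_SOON;
	return BW_UPDATE;
}
```

Re-snapshotting on quiet periods matters because averaging idle time
into the estimate would unfairly drag the bandwidth down.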

The estimation is not done purely in the flusher thread because slow
devices may take dozens of seconds to write the initial 64MB chunk
(write_bandwidth starts at 100MB/s, which translates to a 64MB
nr_to_write).  So it could take more than a minute to adapt to the
device's small bandwidth if the value were only updated in the flusher
thread.
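
The smoothing in __bdi_update_write_bandwidth() below is a fixed-point
weighted average over a power-of-two period.  A user-space sketch of
the same arithmetic (HZ == 1000 assumed, so period == 1024):

```c
#include <stdint.h>

#define HZ            1000   /* assumed tick rate */
#define BW_PERIOD     1024   /* roundup_pow_of_two(HZ) */
#define BW_PERIOD_LOG 10     /* ilog2(BW_PERIOD) */

/* elapsed is in jiffies; written is pages completed since the last
 * snapshot; old_bw and the result are in pages per second. */
unsigned long update_bw(unsigned long old_bw,
			unsigned long elapsed,
			unsigned long written)
{
	uint64_t bw = (uint64_t)written * HZ;

	if (elapsed > BW_PERIOD / 2) {
		bw /= elapsed;            /* instantaneous pages/s */
		elapsed = BW_PERIOD / 2;  /* cap the sample's weight */
		bw *= elapsed;
	}
	/* (sample * elapsed + old * (period - elapsed)) / period */
	bw += (uint64_t)old_bw * (BW_PERIOD - elapsed);
	return bw >> BW_PERIOD_LOG;
}
```

Capping elapsed at period/2 limits any single sample to half the total
weight, so one unusually fast or slow interval cannot swing the
estimate too far.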

CC: Li Shaohua <shaohua.li@intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
 fs/fs-writeback.c           |    4 ++
 include/linux/backing-dev.h |    5 ++
 include/linux/writeback.h   |   10 +++++
 mm/backing-dev.c            |    3 +
 mm/page-writeback.c         |   59 ++++++++++++++++++++++++++++++++--
 5 files changed, 78 insertions(+), 3 deletions(-)

--- linux-next.orig/include/linux/backing-dev.h	2010-12-13 21:46:13.000000000 +0800
+++ linux-next/include/linux/backing-dev.h	2010-12-13 21:46:14.000000000 +0800
@@ -74,6 +74,11 @@ struct backing_dev_info {
 
 	struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
 
+	spinlock_t bw_lock;
+	unsigned long bw_time_stamp;
+	unsigned long written_stamp;
+	unsigned long write_bandwidth;
+
 	struct prop_local_percpu completions;
 	int dirty_exceeded;
 
--- linux-next.orig/mm/backing-dev.c	2010-12-13 21:46:13.000000000 +0800
+++ linux-next/mm/backing-dev.c	2010-12-13 21:46:14.000000000 +0800
@@ -660,6 +660,9 @@ int bdi_init(struct backing_dev_info *bd
 			goto err;
 	}
 
+	spin_lock_init(&bdi->bw_lock);
+	bdi->write_bandwidth = 100 << (20 - PAGE_SHIFT);  /* 100 MB/s */
+
 	bdi->dirty_exceeded = 0;
 	err = prop_local_init_percpu(&bdi->completions);
 
--- linux-next.orig/fs/fs-writeback.c	2010-12-13 21:46:10.000000000 +0800
+++ linux-next/fs/fs-writeback.c	2010-12-13 21:46:14.000000000 +0800
@@ -668,6 +668,8 @@ static long wb_writeback(struct bdi_writ
 		write_chunk = LONG_MAX;
 
 	wbc.wb_start = jiffies; /* livelock avoidance */
+	bdi_update_write_bandwidth(wb->bdi, wbc.wb_start);
+
 	for (;;) {
 		/*
 		 * Stop writeback when nr_pages has been consumed
@@ -703,6 +705,8 @@ static long wb_writeback(struct bdi_writ
 			writeback_inodes_wb(wb, &wbc);
 		trace_wbc_writeback_written(&wbc, wb->bdi);
 
+		bdi_update_write_bandwidth(wb->bdi, wbc.wb_start);
+
 		work->nr_pages -= write_chunk - wbc.nr_to_write;
 		wrote += write_chunk - wbc.nr_to_write;
 
--- linux-next.orig/mm/page-writeback.c	2010-12-13 21:46:13.000000000 +0800
+++ linux-next/mm/page-writeback.c	2010-12-13 21:46:14.000000000 +0800
@@ -521,6 +521,56 @@ out:
 	return 1 + int_sqrt(dirty_thresh - dirty_pages);
 }
 
+static void __bdi_update_write_bandwidth(struct backing_dev_info *bdi,
+					 unsigned long elapsed,
+					 unsigned long written)
+{
+	const unsigned long period = roundup_pow_of_two(HZ);
+	u64 bw;
+
+	bw = written - bdi->written_stamp;
+	bw *= HZ;
+	if (elapsed > period / 2) {
+		do_div(bw, elapsed);
+		elapsed = period / 2;
+		bw *= elapsed;
+	}
+	bw += (u64)bdi->write_bandwidth * (period - elapsed);
+	bdi->write_bandwidth = bw >> ilog2(period);
+}
+
+void bdi_update_bandwidth(struct backing_dev_info *bdi,
+			  unsigned long start_time,
+			  unsigned long bdi_dirty,
+			  unsigned long bdi_thresh)
+{
+	unsigned long elapsed;
+	unsigned long written;
+
+	if (!spin_trylock(&bdi->bw_lock))
+		return;
+
+	elapsed = jiffies - bdi->bw_time_stamp;
+	written = percpu_counter_read(&bdi->bdi_stat[BDI_WRITTEN]);
+
+	/* skip quiet periods when disk bandwidth is under-utilized */
+	if (elapsed > HZ/2 &&
+	    elapsed > jiffies - start_time)
+		goto snapshot;
+
+	/* rate-limit, only update once every 100ms */
+	if (elapsed <= HZ/10)
+		goto unlock;
+
+	__bdi_update_write_bandwidth(bdi, elapsed, written);
+
+snapshot:
+	bdi->written_stamp = written;
+	bdi->bw_time_stamp = jiffies;
+unlock:
+	spin_unlock(&bdi->bw_lock);
+}
+
 /*
  * balance_dirty_pages() must be called by processes which are generating dirty
  * data.  It looks at the number of dirty pages in the machine and will force
@@ -537,11 +587,12 @@ static void balance_dirty_pages(struct a
 	unsigned long background_thresh;
 	unsigned long dirty_thresh;
 	unsigned long bdi_thresh;
-	unsigned long bw;
+	unsigned long long bw;
 	unsigned long period;
 	unsigned long pause = 0;
 	bool dirty_exceeded = false;
 	struct backing_dev_info *bdi = mapping->backing_dev_info;
+	unsigned long start_time = jiffies;
 
 	for (;;) {
 		/*
@@ -585,17 +636,19 @@ static void balance_dirty_pages(struct a
 				    bdi_stat(bdi, BDI_WRITEBACK);
 		}
 
+		bdi_update_bandwidth(bdi, start_time, bdi_dirty, bdi_thresh);
+
 		if (bdi_dirty >= bdi_thresh || nr_dirty > dirty_thresh) {
 			pause = MAX_PAUSE;
 			goto pause;
 		}
 
-		bw = 100 << 20; /* use static 100MB/s for the moment */
+		bw = bdi->write_bandwidth;
 
 		bw = bw * (bdi_thresh - bdi_dirty);
 		do_div(bw, bdi_thresh / TASK_SOFT_DIRTY_LIMIT + 1);
 
-		period = HZ * (pages_dirtied << PAGE_CACHE_SHIFT) / (bw + 1) + 1;
+		period = HZ * pages_dirtied / ((unsigned long)bw + 1) + 1;
 		pause = current->paused_when + period - jiffies;
 		/*
 		 * Take it as long think time if pause falls into (-10s, 0).
--- linux-next.orig/include/linux/writeback.h	2010-12-13 21:46:12.000000000 +0800
+++ linux-next/include/linux/writeback.h	2010-12-13 21:46:14.000000000 +0800
@@ -139,6 +139,16 @@ unsigned long bdi_dirty_limit(struct bac
 			       unsigned long dirty,
 			       unsigned long dirty_pages);
 
+void bdi_update_bandwidth(struct backing_dev_info *bdi,
+			  unsigned long start_time,
+			  unsigned long bdi_dirty,
+			  unsigned long bdi_thresh);
+static inline void bdi_update_write_bandwidth(struct backing_dev_info *bdi,
+					      unsigned long start_time)
+{
+	bdi_update_bandwidth(bdi, start_time, 0, 0);
+}
+
 void page_writeback_init(void);
 void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
 					unsigned long nr_pages_dirtied);
