From: Wu Fengguang <fengguang.wu@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>, Li Shaohua <shaohua.li@intel.com>,
Wu Fengguang <fengguang.wu@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: <linux-fsdevel@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH 08/47] writeback: bdi write bandwidth estimation
Date: Mon, 13 Dec 2010 14:42:57 +0800
Message-ID: <20101213064837.909758859@intel.com>
In-Reply-To: <20101213064249.648862451@intel.com>
[-- Attachment #1: writeback-bandwidth-estimation-in-flusher.patch --]
[-- Type: text/plain, Size: 6519 bytes --]
The estimated value starts at 100MB/s and adapts to the real bandwidth
within seconds. It is pretty accurate for common filesystems.

As the first use case, it replaces the fixed 100MB/s value used for the
throttle bandwidth calculation in balance_dirty_pages().

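To illustrate how the estimate is consumed (the exact change is in the
mm/page-writeback.c hunk below): the estimated bdi bandwidth is scaled down
linearly as bdi_dirty approaches bdi_thresh. A minimal user-space sketch, with
TASK_SOFT_DIRTY_LIMIT (introduced earlier in this series) assumed to be 16 and
the page counts made up purely for illustration:

#include <stdio.h>

/*
 * Sketch of the throttle bandwidth scaling done in balance_dirty_pages()
 * (see the mm/page-writeback.c hunk below).  TASK_SOFT_DIRTY_LIMIT is
 * introduced earlier in this series; 16 is only an assumed value here,
 * and the numbers in main() are made up for illustration.
 */
#define TASK_SOFT_DIRTY_LIMIT	16

static unsigned long task_throttle_bw(unsigned long write_bandwidth,
				      unsigned long bdi_thresh,
				      unsigned long bdi_dirty)
{
	unsigned long long bw = write_bandwidth;

	/* scale down linearly as bdi_dirty approaches bdi_thresh */
	bw = bw * (bdi_thresh - bdi_dirty);
	bw = bw / (bdi_thresh / TASK_SOFT_DIRTY_LIMIT + 1);
	return bw;
}

int main(void)
{
	unsigned long thresh = 40960;	/* 160MB worth of 4k pages */

	/* ~80MB/s estimate; bdi_dirty 2560, 1280 and 0 pages below the threshold */
	printf("%lu\n", task_throttle_bw(80 << 20, thresh, thresh - 2560));
	printf("%lu\n", task_throttle_bw(80 << 20, thresh, thresh - 1280));
	printf("%lu\n", task_throttle_bw(80 << 20, thresh, thresh));
	return 0;
}

So a task is allowed roughly the full estimated bandwidth while bdi_dirty
stays about bdi_thresh/TASK_SOFT_DIRTY_LIMIT pages below the threshold, and is
squeezed linearly toward zero as that gap closes (the three calls above print
roughly 80MB/s, 40MB/s and 0).
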
The overhead is low because the bdi bandwidth is only updated at intervals
of 10ms or more.

Initially the estimation is done only in balance_dirty_pages(), because that
is the most reliable place to observe a reasonably large bandwidth -- the bdi
is normally fully utilized when bdi_thresh is reached.

Shaohua then recommended also doing it in the flusher thread, to keep the
value updated when there is only periodic/background writeback and no task is
being throttled.

The estimation cannot be done purely in the flusher thread, because that is
not sufficient for NFS: NFS writeback does not block in get_request_wait(), so
the flusher-side writeout tends to complete quickly. Another problem is that
slow devices may take dozens of seconds to write the initial 64MB chunk
(write_bandwidth starts at 100MB/s, which translates to a 64MB nr_to_write).
So it may take more than a minute to adapt to the real, much smaller bandwidth
if the value is only updated in the flusher thread.

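For completeness, here is a minimal user-space sketch of the update rule
implemented by bdi_update_write_bandwidth() below -- not the kernel code
itself; HZ, the page size and the simulated workload are assumptions chosen
for illustration. Each window of at least 10ms folds its measured bandwidth
into the estimate with a weight proportional to the window length, capped at
128/1024:

#include <stdio.h>

#define HZ		1000		/* assumed tick rate */
#define PAGE_CACHE_SIZE	4096		/* assumed page size */

static unsigned long long write_bandwidth = 100 << 20;	/* initial 100MB/s */

/* @written: pages completed in this window; @elapsed: window length in jiffies */
static void update_write_bandwidth(unsigned long written, unsigned long elapsed)
{
	unsigned long long bw;
	unsigned long w;

	if (elapsed < HZ / 100)		/* ignore windows shorter than 10ms */
		return;

	/* bandwidth measured over this window, in bytes per second */
	bw = ((unsigned long long)HZ * PAGE_CACHE_SIZE * written + elapsed / 2) / elapsed;

	/* weight grows with the window length, capped at 128 out of 1024 */
	w = elapsed / (HZ / 100);
	if (w > 128)
		w = 128;

	write_bandwidth = (write_bandwidth * (1024 - w) + bw * w) >> 10;
}

int main(void)
{
	int i;

	/* a disk that completes ~512 pages (2MB) per 100ms window, i.e. ~20MB/s */
	for (i = 0; i < 500; i++)
		update_write_bandwidth(512, HZ / 10);

	printf("estimated bandwidth: %llu MB/s\n", write_bandwidth >> 20);
	return 0;
}

The weight here is 10/1024 per 100ms window; longer windows pull the estimate
proportionally harder (up to 128/1024), while windows shorter than 10ms are
ignored entirely, which keeps the update overhead low.
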
CC: Li Shaohua <shaohua.li@intel.com>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
fs/fs-writeback.c | 5 ++++
include/linux/backing-dev.h | 2 +
include/linux/writeback.h | 3 ++
mm/backing-dev.c | 1
mm/page-writeback.c | 41 +++++++++++++++++++++++++++++++++-
5 files changed, 51 insertions(+), 1 deletion(-)
--- linux-next.orig/include/linux/backing-dev.h 2010-12-08 22:44:24.000000000 +0800
+++ linux-next/include/linux/backing-dev.h 2010-12-08 22:44:24.000000000 +0800
@@ -75,6 +75,8 @@ struct backing_dev_info {
struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
struct prop_local_percpu completions;
+ unsigned long write_bandwidth_update_time;
+ int write_bandwidth;
int dirty_exceeded;
unsigned int min_ratio;
--- linux-next.orig/mm/backing-dev.c 2010-12-08 22:44:24.000000000 +0800
+++ linux-next/mm/backing-dev.c 2010-12-08 22:44:24.000000000 +0800
@@ -660,6 +660,7 @@ int bdi_init(struct backing_dev_info *bd
goto err;
}
+ bdi->write_bandwidth = 100 << 20;
bdi->dirty_exceeded = 0;
err = prop_local_init_percpu(&bdi->completions);
--- linux-next.orig/fs/fs-writeback.c 2010-12-08 22:44:22.000000000 +0800
+++ linux-next/fs/fs-writeback.c 2010-12-08 22:44:24.000000000 +0800
@@ -635,6 +635,8 @@ static long wb_writeback(struct bdi_writ
.range_cyclic = work->range_cyclic,
};
unsigned long oldest_jif;
+ unsigned long bw_time;
+ s64 bw_written = 0;
long wrote = 0;
long write_chunk;
struct inode *inode;
@@ -668,6 +670,8 @@ static long wb_writeback(struct bdi_writ
write_chunk = LONG_MAX;
wbc.wb_start = jiffies; /* livelock avoidance */
+ bdi_update_write_bandwidth(wb->bdi, &bw_time, &bw_written);
+
for (;;) {
/*
* Stop writeback when nr_pages has been consumed
@@ -702,6 +706,7 @@ static long wb_writeback(struct bdi_writ
else
writeback_inodes_wb(wb, &wbc);
trace_wbc_writeback_written(&wbc, wb->bdi);
+ bdi_update_write_bandwidth(wb->bdi, &bw_time, &bw_written);
work->nr_pages -= write_chunk - wbc.nr_to_write;
wrote += write_chunk - wbc.nr_to_write;
--- linux-next.orig/mm/page-writeback.c 2010-12-08 22:44:24.000000000 +0800
+++ linux-next/mm/page-writeback.c 2010-12-08 22:44:24.000000000 +0800
@@ -515,6 +515,41 @@ out:
return 1 + int_sqrt(dirty_thresh - dirty_pages);
}
+void bdi_update_write_bandwidth(struct backing_dev_info *bdi,
+ unsigned long *bw_time,
+ s64 *bw_written)
+{
+ unsigned long written;
+ unsigned long elapsed;
+ unsigned long bw;
+ unsigned long w;
+
+ if (*bw_written == 0)
+ goto snapshot;
+
+ elapsed = jiffies - *bw_time;
+ if (elapsed < HZ/100)
+ return;
+
+ /*
+ * When there are lots of tasks throttled in balance_dirty_pages(), they
+ * will each try to update the bandwidth for the same period, making
+ * the bandwidth drift much faster than the desired rate (as in the
+ * single dirtier case). So do some rate limiting.
+ */
+ if (jiffies - bdi->write_bandwidth_update_time < elapsed)
+ goto snapshot;
+
+ written = percpu_counter_read(&bdi->bdi_stat[BDI_WRITTEN]) - *bw_written;
+ bw = (HZ * PAGE_CACHE_SIZE * written + elapsed/2) / elapsed;
+ w = min(elapsed / (HZ/100), 128UL);
+ bdi->write_bandwidth = (bdi->write_bandwidth * (1024-w) + bw * w) >> 10;
+ bdi->write_bandwidth_update_time = jiffies;
+snapshot:
+ *bw_written = percpu_counter_read(&bdi->bdi_stat[BDI_WRITTEN]);
+ *bw_time = jiffies;
+}
+
/*
* balance_dirty_pages() must be called by processes which are generating dirty
* data. It looks at the number of dirty pages in the machine and will force
@@ -535,6 +570,8 @@ static void balance_dirty_pages(struct a
unsigned long pause = 0;
bool dirty_exceeded = false;
struct backing_dev_info *bdi = mapping->backing_dev_info;
+ unsigned long bw_time;
+ s64 bw_written = 0;
for (;;) {
/*
@@ -583,7 +620,7 @@ static void balance_dirty_pages(struct a
goto pause;
}
- bw = 100 << 20; /* use static 100MB/s for the moment */
+ bw = bdi->write_bandwidth;
bw = bw * (bdi_thresh - bdi_dirty);
bw = bw / (bdi_thresh / TASK_SOFT_DIRTY_LIMIT + 1);
@@ -592,8 +629,10 @@ static void balance_dirty_pages(struct a
pause = clamp_val(pause, 1, HZ/10);
pause:
+ bdi_update_write_bandwidth(bdi, &bw_time, &bw_written);
__set_current_state(TASK_INTERRUPTIBLE);
io_schedule_timeout(pause);
+ bdi_update_write_bandwidth(bdi, &bw_time, &bw_written);
/*
* The bdi thresh is somehow "soft" limit derived from the
--- linux-next.orig/include/linux/writeback.h 2010-12-08 22:44:22.000000000 +0800
+++ linux-next/include/linux/writeback.h 2010-12-08 22:44:24.000000000 +0800
@@ -138,6 +138,9 @@ void global_dirty_limits(unsigned long *
unsigned long bdi_dirty_limit(struct backing_dev_info *bdi,
unsigned long dirty,
unsigned long dirty_pages);
+void bdi_update_write_bandwidth(struct backing_dev_info *bdi,
+ unsigned long *bw_time,
+ s64 *bw_written);
void page_writeback_init(void);
void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
--
Thread overview: 51+ messages
2010-12-13 6:42 [PATCH 00/47] IO-less dirty throttling v3 Wu Fengguang
2010-12-13 6:42 ` [PATCH 01/47] writeback: enabling gate limit for light dirtied bdi Wu Fengguang
2010-12-13 6:42 ` [PATCH 02/47] writeback: safety margin for bdi stat error Wu Fengguang
2010-12-13 6:42 ` [PATCH 03/47] writeback: IO-less balance_dirty_pages() Wu Fengguang
2010-12-13 6:42 ` [PATCH 04/47] writeback: consolidate variable names in balance_dirty_pages() Wu Fengguang
2010-12-13 6:42 ` [PATCH 05/47] writeback: per-task rate limit on balance_dirty_pages() Wu Fengguang
2010-12-13 6:42 ` [PATCH 06/47] writeback: prevent duplicate balance_dirty_pages_ratelimited() calls Wu Fengguang
2010-12-13 6:42 ` [PATCH 07/47] writeback: account per-bdi accumulated written pages Wu Fengguang
2010-12-13 6:42 ` Wu Fengguang [this message]
2010-12-13 6:42 ` [PATCH 09/47] writeback: show bdi write bandwidth in debugfs Wu Fengguang
2010-12-13 6:42 ` [PATCH 10/47] writeback: quit throttling when bdi dirty pages dropped low Wu Fengguang
2010-12-13 6:43 ` [PATCH 11/47] writeback: reduce per-bdi dirty threshold ramp up time Wu Fengguang
2010-12-13 6:43 ` [PATCH 12/47] writeback: make reasonable gap between the dirty/background thresholds Wu Fengguang
2010-12-13 6:43 ` [PATCH 13/47] writeback: scale down max throttle bandwidth on concurrent dirtiers Wu Fengguang
2010-12-13 6:43 ` [PATCH 14/47] writeback: add trace event for balance_dirty_pages() Wu Fengguang
2010-12-13 6:43 ` [PATCH 15/47] writeback: make nr_to_write a per-file limit Wu Fengguang
2010-12-13 6:43 ` [PATCH 16/47] writeback: make-nr_to_write-a-per-file-limit fix Wu Fengguang
2010-12-13 6:43 ` [PATCH 17/47] writeback: do uninterruptible sleep in balance_dirty_pages() Wu Fengguang
2010-12-13 6:43 ` [PATCH 18/47] writeback: move BDI_WRITTEN accounting into __bdi_writeout_inc() Wu Fengguang
2010-12-13 6:43 ` [PATCH 19/47] writeback: fix increasement of nr_dirtied_pause Wu Fengguang
2010-12-13 6:43 ` [PATCH 20/47] writeback: use do_div in bw calculation Wu Fengguang
2010-12-13 6:43 ` [PATCH 21/47] writeback: prevent divide error on tiny HZ Wu Fengguang
2010-12-13 6:43 ` [PATCH 22/47] writeback: prevent bandwidth calculation overflow Wu Fengguang
2010-12-13 6:43 ` [PATCH 23/47] writeback: spinlock protected bdi bandwidth update Wu Fengguang
2010-12-13 6:43 ` [PATCH 24/47] writeback: increase pause time on concurrent dirtiers Wu Fengguang
2010-12-13 6:43 ` [PATCH 25/47] writeback: make it easier to break from a dirty exceeded bdi Wu Fengguang
2010-12-13 6:43 ` [PATCH 26/47] writeback: start background writeback earlier Wu Fengguang
2010-12-13 6:43 ` [PATCH 27/47] writeback: user space think time compensation Wu Fengguang
2010-12-13 6:43 ` [PATCH 28/47] writeback: bdi base throttle bandwidth Wu Fengguang
2010-12-13 6:43 ` [PATCH 29/47] writeback: smoothed bdi dirty pages Wu Fengguang
2010-12-13 6:43 ` [PATCH 30/47] writeback: adapt max balance pause time to memory size Wu Fengguang
2010-12-13 6:43 ` [PATCH 31/47] writeback: increase min pause time on concurrent dirtiers Wu Fengguang
2010-12-13 6:43 ` [PATCH 32/47] writeback: extend balance_dirty_pages() trace event Wu Fengguang
2010-12-13 6:43 ` [PATCH 33/47] writeback: trace global dirty page states Wu Fengguang
2010-12-13 6:43 ` [PATCH 34/47] writeback: trace writeback_single_inode() Wu Fengguang
2010-12-13 6:43 ` [PATCH 35/47] writeback: scale IO chunk size up to device bandwidth Wu Fengguang
2010-12-13 6:43 ` [PATCH 36/47] btrfs: dont call balance_dirty_pages_ratelimited() on already dirty pages Wu Fengguang
2010-12-13 6:43 ` [PATCH 37/47] btrfs: lower the dirty balacing rate limit Wu Fengguang
2010-12-13 6:43 ` [PATCH 38/47] btrfs: wait on too many nr_async_bios Wu Fengguang
2010-12-13 6:43 ` [PATCH 39/47] nfs: livelock prevention is now done in VFS Wu Fengguang
2010-12-13 6:43 ` [PATCH 40/47] nfs: writeback pages wait queue Wu Fengguang
2010-12-13 6:43 ` [PATCH 41/47] nfs: in-commit pages accounting and " Wu Fengguang
2010-12-13 6:43 ` [PATCH 42/47] nfs: heuristics to avoid commit Wu Fengguang
2010-12-13 6:43 ` [PATCH 43/47] nfs: dont change wbc->nr_to_write in write_inode() Wu Fengguang
2010-12-13 6:43 ` [PATCH 44/47] nfs: limit the range of commits Wu Fengguang
2010-12-13 6:43 ` [PATCH 45/47] nfs: adapt congestion threshold to dirty threshold Wu Fengguang
2010-12-13 6:43 ` [PATCH 46/47] nfs: trace nfs_commit_unstable_pages() Wu Fengguang
2010-12-13 6:43 ` [PATCH 47/47] nfs: trace nfs_commit_release() Wu Fengguang
2010-12-13 11:27 ` [PATCH 00/47] IO-less dirty throttling v3 Peter Zijlstra
2010-12-13 11:49 ` Wu Fengguang
2010-12-13 12:38 ` Peter Zijlstra