From: Wu Fengguang <fengguang.wu@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Jan Kara <jack@suse.cz>, Peter Zijlstra <a.p.zijlstra@chello.nl>,
Wu Fengguang <fengguang.wu@intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Chris Mason <chris.mason@oracle.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
Cc: linux-mm <linux-mm@kvack.org>
Cc: <linux-fsdevel@vger.kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Subject: [PATCH 23/47] writeback: spinlock protected bdi bandwidth update
Date: Mon, 13 Dec 2010 14:43:12 +0800
Message-ID: <20101213064839.785105287@intel.com>
In-Reply-To: <20101213064249.648862451@intel.com>
The original plan was to use per-CPU variables for bdi->write_bandwidth.
However, Peter pointed out that this opens a window in which some CPUs
may see outdated values. So switch to spinlock-protected global variables.

The bandwidth is only updated when the disk is being fully utilized:
any inactive period longer than 500ms is skipped.
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
fs/fs-writeback.c | 7 +--
include/linux/backing-dev.h | 4 +
include/linux/writeback.h | 13 ++++-
mm/backing-dev.c | 4 +
mm/page-writeback.c | 74 +++++++++++++++++++---------------
5 files changed, 62 insertions(+), 40 deletions(-)
--- linux-next.orig/mm/page-writeback.c 2010-12-08 22:44:29.000000000 +0800
+++ linux-next/mm/page-writeback.c 2010-12-08 22:44:29.000000000 +0800
@@ -523,41 +523,54 @@ out:
return 1 + int_sqrt(dirty_thresh - dirty_pages);
}
-void bdi_update_write_bandwidth(struct backing_dev_info *bdi,
- unsigned long *bw_time,
- s64 *bw_written)
+static void __bdi_update_write_bandwidth(struct backing_dev_info *bdi,
+ unsigned long elapsed,
+ unsigned long written)
+{
+ const unsigned long period = roundup_pow_of_two(HZ);
+ u64 bw;
+
+ bw = written - bdi->written_stamp;
+ bw *= HZ;
+ if (elapsed > period / 2) {
+ do_div(bw, elapsed);
+ elapsed = period / 2;
+ bw *= elapsed;
+ }
+ bw += (u64)bdi->write_bandwidth * (period - elapsed);
+ bdi->write_bandwidth = bw >> ilog2(period);
+}
+
+void bdi_update_bandwidth(struct backing_dev_info *bdi,
+ unsigned long start_time,
+ unsigned long bdi_dirty,
+ unsigned long bdi_thresh)
{
- const unsigned long unit_time = max(HZ/100, 1);
- unsigned long written;
unsigned long elapsed;
- unsigned long bw;
- unsigned long long w;
-
- if (*bw_written == 0)
- goto snapshot;
+ unsigned long written;
- elapsed = jiffies - *bw_time;
- if (elapsed < unit_time)
+ if (!spin_trylock(&bdi->bw_lock))
return;
- /*
- * When there lots of tasks throttled in balance_dirty_pages(), they
- * will each try to update the bandwidth for the same period, making
- * the bandwidth drift much faster than the desired rate (as in the
- * single dirtier case). So do some rate limiting.
- */
- if (jiffies - bdi->write_bandwidth_update_time < elapsed)
+ elapsed = jiffies - bdi->bw_time_stamp;
+ written = percpu_counter_read(&bdi->bdi_stat[BDI_WRITTEN]);
+
+ /* skip quiet periods when disk bandwidth is under-utilized */
+ if (elapsed > HZ/2 &&
+ elapsed > jiffies - start_time)
goto snapshot;
- written = percpu_counter_read(&bdi->bdi_stat[BDI_WRITTEN]) - *bw_written;
- bw = (HZ * written + elapsed / 2) / elapsed;
- w = min(elapsed / unit_time, 128UL);
- bdi->write_bandwidth = (bdi->write_bandwidth * (1024-w) +
- bw * w + 1023) >> 10;
- bdi->write_bandwidth_update_time = jiffies;
+ /* rate-limit, only update once every 100ms */
+ if (elapsed <= HZ/10)
+ goto unlock;
+
+ __bdi_update_write_bandwidth(bdi, elapsed, written);
+
snapshot:
- *bw_written = percpu_counter_read(&bdi->bdi_stat[BDI_WRITTEN]);
- *bw_time = jiffies;
+ bdi->written_stamp = written;
+ bdi->bw_time_stamp = jiffies;
+unlock:
+ spin_unlock(&bdi->bw_lock);
}
/*
@@ -582,8 +595,7 @@ static void balance_dirty_pages(struct a
unsigned long pause = 0;
bool dirty_exceeded = false;
struct backing_dev_info *bdi = mapping->backing_dev_info;
- unsigned long bw_time;
- s64 bw_written = 0;
+ unsigned long start_time = jiffies;
for (;;) {
/*
@@ -645,6 +657,8 @@ static void balance_dirty_pages(struct a
break;
bdi_prev_dirty = bdi_dirty;
+ bdi_update_bandwidth(bdi, start_time, bdi_dirty, bdi_thresh);
+
if (bdi_dirty >= task_thresh) {
pause = HZ/10;
goto pause;
@@ -674,10 +688,8 @@ pause:
task_thresh,
pages_dirtied,
pause);
- bdi_update_write_bandwidth(bdi, &bw_time, &bw_written);
__set_current_state(TASK_UNINTERRUPTIBLE);
io_schedule_timeout(pause);
- bdi_update_write_bandwidth(bdi, &bw_time, &bw_written);
/*
* The bdi thresh is somehow "soft" limit derived from the
--- linux-next.orig/include/linux/backing-dev.h 2010-12-08 22:44:29.000000000 +0800
+++ linux-next/include/linux/backing-dev.h 2010-12-08 22:44:29.000000000 +0800
@@ -74,8 +74,10 @@ struct backing_dev_info {
struct percpu_counter bdi_stat[NR_BDI_STAT_ITEMS];
+ spinlock_t bw_lock;
+ unsigned long bw_time_stamp;
+ unsigned long written_stamp;
unsigned long write_bandwidth;
- unsigned long write_bandwidth_update_time;
struct prop_local_percpu completions;
int dirty_exceeded;
--- linux-next.orig/mm/backing-dev.c 2010-12-08 22:44:29.000000000 +0800
+++ linux-next/mm/backing-dev.c 2010-12-08 22:44:29.000000000 +0800
@@ -662,7 +662,9 @@ int bdi_init(struct backing_dev_info *bd
goto err;
}
- bdi->write_bandwidth = (100 << 20) / PAGE_CACHE_SIZE;
+ spin_lock_init(&bdi->bw_lock);
+ bdi->write_bandwidth = 100 << (20 - PAGE_SHIFT); /* 100 MB/s */
+
bdi->dirty_exceeded = 0;
err = prop_local_init_percpu(&bdi->completions);
--- linux-next.orig/fs/fs-writeback.c 2010-12-08 22:44:27.000000000 +0800
+++ linux-next/fs/fs-writeback.c 2010-12-08 22:44:29.000000000 +0800
@@ -645,8 +645,6 @@ static long wb_writeback(struct bdi_writ
.range_cyclic = work->range_cyclic,
};
unsigned long oldest_jif;
- unsigned long bw_time;
- s64 bw_written = 0;
long wrote = 0;
long write_chunk;
struct inode *inode;
@@ -680,7 +678,7 @@ static long wb_writeback(struct bdi_writ
write_chunk = LONG_MAX;
wbc.wb_start = jiffies; /* livelock avoidance */
- bdi_update_write_bandwidth(wb->bdi, &bw_time, &bw_written);
+ bdi_update_write_bandwidth(wb->bdi, wbc.wb_start);
for (;;) {
/*
@@ -717,7 +715,8 @@ static long wb_writeback(struct bdi_writ
else
writeback_inodes_wb(wb, &wbc);
trace_wbc_writeback_written(&wbc, wb->bdi);
- bdi_update_write_bandwidth(wb->bdi, &bw_time, &bw_written);
+
+ bdi_update_write_bandwidth(wb->bdi, wbc.wb_start);
work->nr_pages -= write_chunk - wbc.nr_to_write;
wrote += write_chunk - wbc.nr_to_write;
--- linux-next.orig/include/linux/writeback.h 2010-12-08 22:44:26.000000000 +0800
+++ linux-next/include/linux/writeback.h 2010-12-08 22:44:29.000000000 +0800
@@ -139,9 +139,16 @@ void global_dirty_limits(unsigned long *
unsigned long bdi_dirty_limit(struct backing_dev_info *bdi,
unsigned long dirty,
unsigned long dirty_pages);
-void bdi_update_write_bandwidth(struct backing_dev_info *bdi,
- unsigned long *bw_time,
- s64 *bw_written);
+
+void bdi_update_bandwidth(struct backing_dev_info *bdi,
+ unsigned long start_time,
+ unsigned long bdi_dirty,
+ unsigned long bdi_thresh);
+static inline void bdi_update_write_bandwidth(struct backing_dev_info *bdi,
+ unsigned long start_time)
+{
+ bdi_update_bandwidth(bdi, start_time, 0, 0);
+}
void page_writeback_init(void);
void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,