* [PATCH 12/13] writeback: add trace event for balance_dirty_pages()
@ 2010-11-17 3:58 Wu Fengguang
0 siblings, 0 replies; 3+ messages in thread
From: Wu Fengguang @ 2010-11-17 3:58 UTC (permalink / raw)
To: Andrew Morton
Cc: Theodore Ts'o, Wu Fengguang, Dave Chinner, Jan Kara,
Peter Zijlstra, Mel Gorman, Rik van Riel, KOSAKI Motohiro,
Chris Mason, Christoph Hellwig, linux-mm, linux-fsdevel, LKML
References: <20101117035821.000579293@intel.com>
Content-Disposition: inline; filename=writeback-trace-balance_dirty_pages.patch
Andrew,
Here is an interesting test to verify the theory with balance_dirty_pages()
tracing. On a partition that can do ~60MB/s, a sparse file is created and
four rsync tasks with different write bandwidths are started:
dd if=/dev/zero of=/mnt/1T bs=1M count=1 seek=1024000
echo 1 > /debug/tracing/events/writeback/balance_dirty_pages/enable
rsync localhost:/mnt/1T /mnt/a --bwlimit 10000&
rsync localhost:/mnt/1T /mnt/A --bwlimit 10000&
rsync localhost:/mnt/1T /mnt/b --bwlimit 20000&
rsync localhost:/mnt/1T /mnt/c --bwlimit 30000&
Trace output within 0.1 second, grouped by task:
rsync-3824 [004] 15002.076447: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130876 gap=5340 dirtied=192 pause=20
rsync-3822 [003] 15002.091701: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130777 gap=5113 dirtied=192 pause=20
rsync-3821 [006] 15002.004667: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129570 gap=3714 dirtied=64 pause=8
rsync-3821 [006] 15002.012654: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129589 gap=3733 dirtied=64 pause=8
rsync-3821 [006] 15002.021838: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129604 gap=3748 dirtied=64 pause=8
rsync-3821 [004] 15002.091193: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129583 gap=3983 dirtied=64 pause=8
rsync-3821 [004] 15002.102729: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129594 gap=3802 dirtied=64 pause=8
rsync-3821 [000] 15002.109252: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129619 gap=3827 dirtied=64 pause=8
rsync-3823 [002] 15002.009029: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128762 gap=2842 dirtied=64 pause=12
rsync-3823 [002] 15002.021598: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128813 gap=3021 dirtied=64 pause=12
rsync-3823 [003] 15002.032973: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128805 gap=2885 dirtied=64 pause=12
rsync-3823 [003] 15002.048800: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128823 gap=2967 dirtied=64 pause=12
rsync-3823 [003] 15002.060728: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128821 gap=3221 dirtied=64 pause=12
rsync-3823 [000] 15002.073152: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128825 gap=3225 dirtied=64 pause=12
rsync-3823 [005] 15002.090111: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128782 gap=3214 dirtied=64 pause=12
rsync-3823 [004] 15002.102520: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128764 gap=3036 dirtied=64 pause=12
The data vividly show that:
- the heaviest writer is throttled a bit (weight=39%)
- the lighter writers run at full speed (weight=15%, 15%, 30%)
- rsync is smart enough to compensate for the in-kernel pause time
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
include/trace/events/writeback.h | 61 +++++++++++++++++++++++++++--
mm/page-writeback.c | 6 ++
2 files changed, 64 insertions(+), 3 deletions(-)
--- linux-next.orig/include/trace/events/writeback.h 2010-11-15 18:59:00.000000000 +0800
+++ linux-next/include/trace/events/writeback.h 2010-11-15 21:31:46.000000000 +0800
@@ -147,11 +147,66 @@ DEFINE_EVENT(wbc_class, name, \
DEFINE_WBC_EVENT(wbc_writeback_start);
DEFINE_WBC_EVENT(wbc_writeback_written);
DEFINE_WBC_EVENT(wbc_writeback_wait);
-DEFINE_WBC_EVENT(wbc_balance_dirty_start);
-DEFINE_WBC_EVENT(wbc_balance_dirty_written);
-DEFINE_WBC_EVENT(wbc_balance_dirty_wait);
DEFINE_WBC_EVENT(wbc_writepage);
+#define BDP_PERCENT(a, b, c) ((__entry->a - __entry->b) * 100 * c + \
+ __entry->bdi_limit/2) / (__entry->bdi_limit|1)
+TRACE_EVENT(balance_dirty_pages,
+
+ TP_PROTO(struct backing_dev_info *bdi,
+ long bdi_dirty,
+ long bdi_limit,
+ long task_limit,
+ long pages_dirtied,
+ long pause),
+
+ TP_ARGS(bdi, bdi_dirty, bdi_limit, task_limit,
+ pages_dirtied, pause),
+
+ TP_STRUCT__entry(
+ __array(char, bdi, 32)
+ __field(long, bdi_dirty)
+ __field(long, bdi_limit)
+ __field(long, task_limit)
+ __field(long, pages_dirtied)
+ __field(long, pause)
+ ),
+
+ TP_fast_assign(
+ strlcpy(__entry->bdi, dev_name(bdi->dev), 32);
+ __entry->bdi_dirty = bdi_dirty;
+ __entry->bdi_limit = bdi_limit;
+ __entry->task_limit = task_limit;
+ __entry->pages_dirtied = pages_dirtied;
+ __entry->pause = pause * 1000 / HZ;
+ ),
+
+
+ /*
+ * [..................soft throttling range.........]
+ * ^ |<=========== bdi_gap =========>|
+ * background_thresh |<== task_gap ==>|
+ * -------------------|-------+----------------|--------------|
+ * (bdi_limit * 7/8)^ ^bdi_dirty ^task_limit ^bdi_limit
+ *
+ * Reasonably large gaps help produce smooth pause times.
+ */
+ TP_printk("bdi=%s bdi_dirty=%lu bdi_limit=%lu task_limit=%lu "
+ "task_weight=%ld%% task_gap=%ld%% bdi_gap=%ld%% "
+ "pages_dirtied=%lu pause=%lu",
+ __entry->bdi,
+ __entry->bdi_dirty,
+ __entry->bdi_limit,
+ __entry->task_limit,
+ /* task weight: proportion of recent dirtied pages */
+ BDP_PERCENT(bdi_limit, task_limit, TASK_SOFT_DIRTY_LIMIT),
+ BDP_PERCENT(task_limit, bdi_dirty, TASK_SOFT_DIRTY_LIMIT),
+ BDP_PERCENT(bdi_limit, bdi_dirty, BDI_SOFT_DIRTY_LIMIT),
+ __entry->pages_dirtied,
+ __entry->pause
+ )
+);
+
DECLARE_EVENT_CLASS(writeback_congest_waited_template,
TP_PROTO(unsigned int usec_timeout, unsigned int usec_delayed),
--- linux-next.orig/mm/page-writeback.c 2010-11-15 21:30:45.000000000 +0800
+++ linux-next/mm/page-writeback.c 2010-11-15 21:31:46.000000000 +0800
@@ -629,6 +629,12 @@ static void balance_dirty_pages(struct a
pause = clamp_val(pause, 1, HZ/10);
pause:
+ trace_balance_dirty_pages(bdi,
+ bdi_dirty,
+ bdi_thresh,
+ task_thresh,
+ pages_dirtied,
+ pause);
bdi_update_write_bandwidth(bdi, &bw_time, &bw_written);
__set_current_state(TASK_INTERRUPTIBLE);
io_schedule_timeout(pause);
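A note on the BDP_PERCENT macro above, since the one-liner is dense. Unrolled
as a plain helper (an illustrative sketch only, not part of the patch), it
computes a rounded percentage while guarding against a zero divisor:

static long bdp_percent(long a, long b, long c, long bdi_limit)
{
	/* (a - b) scaled by c, expressed as a percentage of bdi_limit */
	long val = (a - b) * 100 * c;

	/* adding half the divisor rounds to nearest instead of truncating */
	val += bdi_limit / 2;

	/* "| 1" forces a nonzero (odd) divisor, so bdi_limit == 0 is safe */
	return val / (bdi_limit | 1);
}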
* [PATCH 00/13] IO-less dirty throttling v2
@ 2010-11-17 4:27 Wu Fengguang
2010-11-17 4:27 ` [PATCH 12/13] writeback: add trace event for balance_dirty_pages() Wu Fengguang
0 siblings, 1 reply; 3+ messages in thread
From: Wu Fengguang @ 2010-11-17 4:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Jan Kara, Christoph Hellwig, Dave Chinner, Theodore Ts'o,
Chris Mason, Peter Zijlstra, Mel Gorman, Rik van Riel,
KOSAKI Motohiro, Wu Fengguang, linux-mm, linux-fsdevel, LKML
Andrew,
This is a revised subset of "[RFC] soft and dynamic dirty throttling limits"
<http://thread.gmane.org/gmane.linux.kernel.mm/52966>.
The basic idea is to introduce a small region under the bdi dirty threshold.
A task is throttled gently when it steps into the bottom of the region, and
more and more aggressively as the bdi dirty+writeback page count climbs
toward the top of the region. At some point the application is throttled at
just the bandwidth that balances with the device's write bandwidth. (The
first patch and the documentation have more details; a sketch of the idea
follows below.)
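To make the throttling curve concrete, here is a minimal sketch (my
illustration only; soft_limit and base_pause are assumed names, and the real
code lives in the first patch). The pause scales with how deep the bdi is
into the soft region:

/*
 * Illustrative sketch: the pause grows as bdi_dirty climbs through the
 * soft region between bdi_limit * 7/8 and bdi_limit, so a heavy dirtier
 * sleeps longer and converges on the device's write bandwidth.
 */
long soft_throttle_pause(long bdi_dirty, long bdi_limit, long base_pause)
{
	long soft_limit = bdi_limit * 7 / 8;	/* bottom of the soft region */

	if (bdi_dirty <= soft_limit)
		return 0;		/* below the region: no throttling */

	/* deeper into the region => proportionally longer pause */
	return base_pause * (bdi_dirty - soft_limit) /
			    (bdi_limit - soft_limit + 1);
}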
Changes from initial RFC:
- adaptive rate limiting, to reduce overheads when under throttle threshold
- prevent overrunning dirty limit on lots of concurrent dirtiers
- add Documentation/filesystems/writeback-throttling-design.txt
- lower max pause time from 200ms to 100ms; min pause time from 10ms to 1 jiffy
- don't drop the laptop mode code
- update and comment the trace event
- benchmarks on concurrent dd and fs_mark covering both large and tiny files
- rate limit bdi->write_bandwidth updates on concurrent dirtiers, otherwise
the estimate drifts fast and fluctuates
- don't call balance_dirty_pages_ratelimited() when writing to already-dirtied
pages, otherwise the task will be throttled too much
The patches are based on 2.6.37-rc2 and Jan's sync livelock patches. For easier
access I put them in
git://git.kernel.org/pub/scm/linux/kernel/git/wfg/writeback.git dirty-throttling-v2
Wu Fengguang (12):
writeback: IO-less balance_dirty_pages()
writeback: consolidate variable names in balance_dirty_pages()
writeback: per-task rate limit on balance_dirty_pages()
writeback: prevent duplicate balance_dirty_pages_ratelimited() calls
writeback: bdi write bandwidth estimation
writeback: show bdi write bandwidth in debugfs
writeback: quit throttling when bdi dirty pages dropped
writeback: reduce per-bdi dirty threshold ramp up time
writeback: make reasonable gap between the dirty/background thresholds
writeback: scale down max throttle bandwidth on concurrent dirtiers
writeback: add trace event for balance_dirty_pages()
writeback: make nr_to_write a per-file limit
Jan Kara (1):
writeback: account per-bdi accumulated written pages
.../filesystems/writeback-throttling-design.txt | 210 +++++++++++++
fs/fs-writeback.c | 16 +
include/linux/backing-dev.h | 3 +
include/linux/sched.h | 7 +
include/linux/writeback.h | 14 +
include/trace/events/writeback.h | 61 ++++-
mm/backing-dev.c | 29 +-
mm/filemap.c | 5 +-
mm/memory_hotplug.c | 3 -
mm/page-writeback.c | 320 +++++++++++---------
10 files changed, 511 insertions(+), 157 deletions(-)
It runs smoothly on typical configurations. On small memory systems the pause
time fluctuates much more, due to the limited range available for soft
throttling. The soft dirty threshold is now lowered to (background + dirty)/2
= 15%, so applications are throttled a bit earlier, which end users may
perceive as a performance "slow down" if their application happens to dirty a
bit more than 15% (the threshold arithmetic is illustrated below). Note that
the vanilla kernel also has this limit at fresh boot: it starts checking bdi
limits once the global 15% is exceeded, and since the bdi limit ramps up
pretty slowly in common configurations, the task is throttled immediately.
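For concreteness, an illustrative calculation (assuming the common defaults
vm.dirty_ratio=20 and vm.dirty_background_ratio=10, which this series does
not itself state):

/*
 * Illustrative only: with background_thresh at 10% and dirty_thresh at
 * 20% of dirtyable memory, the soft threshold lands at
 * (10% + 20%) / 2 = 15%, matching the figure above.
 */
long soft_dirty_thresh(long background_thresh, long dirty_thresh)
{
	return (background_thresh + dirty_thresh) / 2;
}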
The task's think time is not yet considered when computing the pause time, so
an "scp" over the network is throttled much harder than a local "cp". Once we
take user-space think time into account and ensure accurate throttle
bandwidth, we will effectively have a simple write I/O bandwidth controller.
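One plausible shape for that compensation (purely a sketch of the idea, not
code from this series; compensated_pause and think_time are assumed names):

long compensated_pause(long pause, long think_time)
{
	/* a task that already slept in user space owes that much less */
	return pause > think_time ? pause - think_time : 0;
}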
On a simple test of 100 concurrent dd tasks, it reduces the CPU %system time
from 30% to 3%, and improves IO throughput from 38MB/s to 42MB/s.
The fs_mark benchmark is interesting: the CPU overheads are almost halved.
Before the patch the benchmark is actually CPU bound; after the patch it is
IO bound, but strangely the throughput becomes slightly lower.
# ./fs_mark -D 10000 -S0 -n 100000 -s 1 -L 63 -d /mnt/scratch/0 -d /mnt/scratch/1 -d /mnt/scratch/2 -d /mnt/scratch/3 -d /mnt/scratch/4 -d /mnt/scratch/5 -d /mnt/scratch/6 -d /mnt/scratch/7 -d /mnt/scratch/8 -d /mnt/scratch/9 -d /mnt/scratch/10 -d /mnt/scratch/11
# Version 3.3, 12 thread(s) starting at Thu Nov 11 21:01:36 2010
# Sync method: NO SYNC: Test does not issue sync() or fsync() calls.
# Directories: Time based hash between directories across 10000 subdirectories with 180 seconds per subdirectory.
# File names: 40 bytes long, (16 initial bytes of time stamp with 24 random bytes at end of name)
# Files info: size 1 bytes, written with an IO size of 16384 bytes per write
# App overhead is time in microseconds spent in the test not doing file writing related system calls.
#
2.6.36
FSUse% Count Size Files/sec App Overhead
0 1200000 1 1261.7 524762513
0 2400000 1 1195.3 537844546
0 3600000 1 1231.9 496441566
1 4800000 1 1175.8 552421522
1 6000000 1 1191.6 558529735
1 7200000 1 1165.3 551178395
2 8400000 1 1175.0 533209632
2 9600000 1 1200.6 534862246
2 10800000 1 1181.2 540616486
2 12000000 1 1137.4 554551797
3 13200000 1 1143.7 563319651
3 14400000 1 1169.0 519527533
3 15600000 1 1184.0 533550370
4 16800000 1 1161.3 534358727
4 18000000 1 1193.4 521610050
4 19200000 1 1177.6 524117437
5 20400000 1 1172.6 506166634
5 21600000 1 1172.3 515725633
avg 1182.761 533488581.833
2.6.36+
FSUse% Count Size Files/sec App Overhead
0 1200000 1 1125.0 357885976
0 2400000 1 1155.6 288103795
0 3600000 1 1172.4 296521755
1 4800000 1 1136.0 301718887
1 6000000 1 1156.7 303605077
1 7200000 1 1102.9 288852150
2 8400000 1 1140.9 294894485
2 9600000 1 1148.0 314394450
2 10800000 1 1099.7 296365560
2 12000000 1 1153.6 316283083
3 13200000 1 1087.9 339988006
3 14400000 1 1183.9 270836344
3 15600000 1 1122.7 276400918
4 16800000 1 1132.1 285272223
4 18000000 1 1154.8 283424055
4 19200000 1 1202.5 294558877
5 20400000 1 1158.1 293971332
5 21600000 1 1159.4 287720335
5 22800000 1 1150.1 282987509
5 24000000 1 1150.7 283870613
6 25200000 1 1123.8 288094185
6 26400000 1 1152.1 296984323
6 27600000 1 1190.7 282403174
7 28800000 1 1088.6 290493643
7 30000000 1 1144.1 290311419
7 31200000 1 1186.0 290021271
7 32400000 1 1213.9 279465138
8 33600000 1 1117.3 275745401
avg 1146.768 294684785.143
I noticed that:
1) BdiWriteback can grow very large. For example, bdi 8:16 has 72960KB of
writeback pages, while the disk IO queue can hold at most
nr_requests * max_sectors_kb = 128 * 512KB = 64MB of writeback pages. Maybe
xfs manages to create perfect sequential layouts and writes, with the other
8MB of writeback pages in flight inside the disk?
root@wfg-ne02 /cc/fs_mark-3.3/ne02-2.6.36+# g BdiWriteback /debug/bdi/8:*/*
/debug/bdi/8:0/stats:BdiWriteback: 0 kB
/debug/bdi/8:112/stats:BdiWriteback: 68352 kB
/debug/bdi/8:128/stats:BdiWriteback: 62336 kB
/debug/bdi/8:144/stats:BdiWriteback: 61824 kB
/debug/bdi/8:160/stats:BdiWriteback: 67328 kB
/debug/bdi/8:16/stats:BdiWriteback: 72960 kB
/debug/bdi/8:176/stats:BdiWriteback: 57984 kB
/debug/bdi/8:192/stats:BdiWriteback: 71936 kB
/debug/bdi/8:32/stats:BdiWriteback: 68352 kB
/debug/bdi/8:48/stats:BdiWriteback: 56704 kB
/debug/bdi/8:64/stats:BdiWriteback: 50304 kB
/debug/bdi/8:80/stats:BdiWriteback: 68864 kB
/debug/bdi/8:96/stats:BdiWriteback: 2816 kB
2) the 12 disks are not all 100% utilized. Not even close: sdd, sdf, sdh and
sdj are almost idle at the moment. Dozens of seconds later, some other disks
become idle. This happens both before and after the patch. There may be some
hidden bugs (unrelated to this patchset).
avg-cpu: %user %nice %system %iowait %steal %idle
0.17 0.00 97.87 1.08 0.00 0.88
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 63.00 0.00 125.00 0.00 1909.33 30.55 3.88 31.65 6.57 82.13
sdd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sde 0.00 19.00 0.00 112.00 0.00 1517.17 27.09 3.95 35.33 8.00 89.60
sdg 0.00 92.67 0.33 126.00 2.67 1773.33 28.12 14.83 120.78 7.73 97.60
sdf 0.00 32.33 0.00 91.67 0.00 1408.17 30.72 4.84 52.97 7.72 70.80
sdh 0.00 17.67 0.00 5.00 0.00 124.00 49.60 0.07 13.33 9.60 4.80
sdi 0.00 44.67 0.00 5.00 0.00 253.33 101.33 0.15 29.33 10.93 5.47
sdl 0.00 168.00 0.00 135.67 0.00 2216.33 32.67 6.41 45.42 5.75 78.00
sdk 0.00 225.00 0.00 123.00 0.00 2355.83 38.31 9.50 73.03 6.94 85.33
sdj 0.00 1.00 0.00 2.33 0.00 26.67 22.86 0.01 2.29 1.71 0.40
sdb 0.00 14.33 0.00 101.67 0.00 1278.00 25.14 2.02 19.95 7.16 72.80
sdm 0.00 150.33 0.00 144.33 0.00 2344.50 32.49 5.43 33.94 5.39 77.73
avg-cpu: %user %nice %system %iowait %steal %idle
0.12 0.00 98.63 0.83 0.00 0.42
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdc 0.00 105.67 0.00 127.33 0.00 1810.17 28.43 4.39 32.43 6.67 84.93
sdd 0.00 5.33 0.00 10.67 0.00 128.00 24.00 0.03 2.50 1.25 1.33
sde 0.00 180.33 0.33 107.67 2.67 2109.33 39.11 8.11 73.93 8.99 97.07
sdg 0.00 7.67 0.00 63.67 0.00 1387.50 43.59 1.45 24.29 11.08 70.53
sdf 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdh 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdi 0.00 62.67 0.00 94.67 0.00 1743.50 36.83 3.28 34.68 8.52 80.67
sdl 0.00 162.00 0.00 141.67 0.00 2295.83 32.41 7.09 51.79 6.14 86.93
sdk 0.00 34.33 0.00 143.67 0.00 1910.17 26.59 5.07 38.90 6.26 90.00
sdj 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
sdb 0.00 195.00 0.00 96.67 0.00 1949.50 40.33 5.54 57.23 8.39 81.07
sdm 0.00 155.00 0.00 143.00 0.00 2357.50 32.97 5.21 39.98 5.71 81.60
Thanks,
Fengguang
* [PATCH 12/13] writeback: add trace event for balance_dirty_pages()
2010-11-17 4:27 [PATCH 00/13] IO-less dirty throttling v2 Wu Fengguang
@ 2010-11-17 4:27 ` Wu Fengguang
2010-11-17 4:41 ` Wu Fengguang
0 siblings, 1 reply; 3+ messages in thread
From: Wu Fengguang @ 2010-11-17 4:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Jan Kara, Wu Fengguang, Christoph Hellwig, Dave Chinner,
Theodore Ts'o, Chris Mason, Peter Zijlstra, Mel Gorman,
Rik van Riel, KOSAKI Motohiro, linux-mm, linux-fsdevel, LKML
Here is an interesting test to verify the theory with balance_dirty_pages()
tracing. On a partition that can do ~60MB/s, a sparse file is created and
four rsync tasks with different write bandwidths are started:
dd if=/dev/zero of=/mnt/1T bs=1M count=1 seek=1024000
echo 1 > /debug/tracing/events/writeback/balance_dirty_pages/enable
rsync localhost:/mnt/1T /mnt/a --bwlimit 10000&
rsync localhost:/mnt/1T /mnt/A --bwlimit 10000&
rsync localhost:/mnt/1T /mnt/b --bwlimit 20000&
rsync localhost:/mnt/1T /mnt/c --bwlimit 30000&
Trace output within 0.1 second, grouped by task:
rsync-3824 [004] 15002.076447: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130876 gap=5340 dirtied=192 pause=20
rsync-3822 [003] 15002.091701: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130777 gap=5113 dirtied=192 pause=20
rsync-3821 [006] 15002.004667: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129570 gap=3714 dirtied=64 pause=8
rsync-3821 [006] 15002.012654: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129589 gap=3733 dirtied=64 pause=8
rsync-3821 [006] 15002.021838: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129604 gap=3748 dirtied=64 pause=8
rsync-3821 [004] 15002.091193: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129583 gap=3983 dirtied=64 pause=8
rsync-3821 [004] 15002.102729: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129594 gap=3802 dirtied=64 pause=8
rsync-3821 [000] 15002.109252: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129619 gap=3827 dirtied=64 pause=8
rsync-3823 [002] 15002.009029: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128762 gap=2842 dirtied=64 pause=12
rsync-3823 [002] 15002.021598: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128813 gap=3021 dirtied=64 pause=12
rsync-3823 [003] 15002.032973: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128805 gap=2885 dirtied=64 pause=12
rsync-3823 [003] 15002.048800: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128823 gap=2967 dirtied=64 pause=12
rsync-3823 [003] 15002.060728: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128821 gap=3221 dirtied=64 pause=12
rsync-3823 [000] 15002.073152: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128825 gap=3225 dirtied=64 pause=12
rsync-3823 [005] 15002.090111: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128782 gap=3214 dirtied=64 pause=12
rsync-3823 [004] 15002.102520: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128764 gap=3036 dirtied=64 pause=12
The data vividly show that:
- the heaviest writer is throttled a bit (weight=39%)
- the lighter writers run at full speed (weight=15%, 15%, 30%)
- rsync is smart enough to compensate for the in-kernel pause time
Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
---
include/trace/events/writeback.h | 61 +++++++++++++++++++++++++++--
mm/page-writeback.c | 6 ++
2 files changed, 64 insertions(+), 3 deletions(-)
--- linux-next.orig/include/trace/events/writeback.h 2010-11-15 18:59:00.000000000 +0800
+++ linux-next/include/trace/events/writeback.h 2010-11-15 21:31:46.000000000 +0800
@@ -147,11 +147,66 @@ DEFINE_EVENT(wbc_class, name, \
DEFINE_WBC_EVENT(wbc_writeback_start);
DEFINE_WBC_EVENT(wbc_writeback_written);
DEFINE_WBC_EVENT(wbc_writeback_wait);
-DEFINE_WBC_EVENT(wbc_balance_dirty_start);
-DEFINE_WBC_EVENT(wbc_balance_dirty_written);
-DEFINE_WBC_EVENT(wbc_balance_dirty_wait);
DEFINE_WBC_EVENT(wbc_writepage);
+#define BDP_PERCENT(a, b, c) ((__entry->a - __entry->b) * 100 * c + \
+ __entry->bdi_limit/2) / (__entry->bdi_limit|1)
+TRACE_EVENT(balance_dirty_pages,
+
+ TP_PROTO(struct backing_dev_info *bdi,
+ long bdi_dirty,
+ long bdi_limit,
+ long task_limit,
+ long pages_dirtied,
+ long pause),
+
+ TP_ARGS(bdi, bdi_dirty, bdi_limit, task_limit,
+ pages_dirtied, pause),
+
+ TP_STRUCT__entry(
+ __array(char, bdi, 32)
+ __field(long, bdi_dirty)
+ __field(long, bdi_limit)
+ __field(long, task_limit)
+ __field(long, pages_dirtied)
+ __field(long, pause)
+ ),
+
+ TP_fast_assign(
+ strlcpy(__entry->bdi, dev_name(bdi->dev), 32);
+ __entry->bdi_dirty = bdi_dirty;
+ __entry->bdi_limit = bdi_limit;
+ __entry->task_limit = task_limit;
+ __entry->pages_dirtied = pages_dirtied;
+ __entry->pause = pause * 1000 / HZ;
+ ),
+
+
+ /*
+ * [..................soft throttling range.........]
+ * ^ |<=========== bdi_gap =========>|
+ * background_thresh |<== task_gap ==>|
+ * -------------------|-------+----------------|--------------|
+ * (bdi_limit * 7/8)^ ^bdi_dirty ^task_limit ^bdi_limit
+ *
+ * Reasonably large gaps help produce smooth pause times.
+ */
+ TP_printk("bdi=%s bdi_dirty=%lu bdi_limit=%lu task_limit=%lu "
+ "task_weight=%ld%% task_gap=%ld%% bdi_gap=%ld%% "
+ "pages_dirtied=%lu pause=%lu",
+ __entry->bdi,
+ __entry->bdi_dirty,
+ __entry->bdi_limit,
+ __entry->task_limit,
+ /* task weight: proportion of recent dirtied pages */
+ BDP_PERCENT(bdi_limit, task_limit, TASK_SOFT_DIRTY_LIMIT),
+ BDP_PERCENT(task_limit, bdi_dirty, TASK_SOFT_DIRTY_LIMIT),
+ BDP_PERCENT(bdi_limit, bdi_dirty, BDI_SOFT_DIRTY_LIMIT),
+ __entry->pages_dirtied,
+ __entry->pause
+ )
+);
+
DECLARE_EVENT_CLASS(writeback_congest_waited_template,
TP_PROTO(unsigned int usec_timeout, unsigned int usec_delayed),
--- linux-next.orig/mm/page-writeback.c 2010-11-15 21:30:45.000000000 +0800
+++ linux-next/mm/page-writeback.c 2010-11-15 21:31:46.000000000 +0800
@@ -629,6 +629,12 @@ static void balance_dirty_pages(struct a
pause = clamp_val(pause, 1, HZ/10);
pause:
+ trace_balance_dirty_pages(bdi,
+ bdi_dirty,
+ bdi_thresh,
+ task_thresh,
+ pages_dirtied,
+ pause);
bdi_update_write_bandwidth(bdi, &bw_time, &bw_written);
__set_current_state(TASK_INTERRUPTIBLE);
io_schedule_timeout(pause);
* Re: [PATCH 12/13] writeback: add trace event for balance_dirty_pages()
2010-11-17 4:27 ` [PATCH 12/13] writeback: add trace event for balance_dirty_pages() Wu Fengguang
@ 2010-11-17 4:41 ` Wu Fengguang
0 siblings, 0 replies; 3+ messages in thread
From: Wu Fengguang @ 2010-11-17 4:41 UTC (permalink / raw)
To: Andrew Morton
Cc: Jan Kara, Christoph Hellwig, Dave Chinner, Theodore Ts'o,
Chris Mason, Peter Zijlstra, Mel Gorman, Rik van Riel,
KOSAKI Motohiro, linux-mm, linux-fsdevel@vger.kernel.org, LKML
Sorry, the documentation lags behind the code in a few places.
On Wed, Nov 17, 2010 at 12:27:32PM +0800, Wu, Fengguang wrote:
> Here is an interesting test to verify the theory with balance_dirty_pages()
> tracing. On a partition that can do ~60MB/s, a sparse file is created and
> 4 rsync tasks with different write bandwidth started:
>
> dd if=/dev/zero of=/mnt/1T bs=1M count=1 seek=1024000
> echo 1 > /debug/tracing/events/writeback/balance_dirty_pages/enable
>
> rsync localhost:/mnt/1T /mnt/a --bwlimit 10000&
> rsync localhost:/mnt/1T /mnt/A --bwlimit 10000&
> rsync localhost:/mnt/1T /mnt/b --bwlimit 20000&
> rsync localhost:/mnt/1T /mnt/c --bwlimit 30000&
>
> Trace outputs within 0.1 second, grouped by tasks:
>
> rsync-3824 [004] 15002.076447: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130876 gap=5340 dirtied=192 pause=20
>
> rsync-3822 [003] 15002.091701: balance_dirty_pages: bdi=btrfs-2 weight=15% limit=130777 gap=5113 dirtied=192 pause=20
>
> rsync-3821 [006] 15002.004667: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129570 gap=3714 dirtied=64 pause=8
> rsync-3821 [006] 15002.012654: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129589 gap=3733 dirtied=64 pause=8
> rsync-3821 [006] 15002.021838: balance_dirty_pages: bdi=btrfs-2 weight=30% limit=129604 gap=3748 dirtied=64 pause=8
> rsync-3821 [004] 15002.091193: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129583 gap=3983 dirtied=64 pause=8
> rsync-3821 [004] 15002.102729: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129594 gap=3802 dirtied=64 pause=8
> rsync-3821 [000] 15002.109252: balance_dirty_pages: bdi=btrfs-2 weight=29% limit=129619 gap=3827 dirtied=64 pause=8
>
> rsync-3823 [002] 15002.009029: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128762 gap=2842 dirtied=64 pause=12
> rsync-3823 [002] 15002.021598: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128813 gap=3021 dirtied=64 pause=12
> rsync-3823 [003] 15002.032973: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128805 gap=2885 dirtied=64 pause=12
> rsync-3823 [003] 15002.048800: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128823 gap=2967 dirtied=64 pause=12
> rsync-3823 [003] 15002.060728: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128821 gap=3221 dirtied=64 pause=12
> rsync-3823 [000] 15002.073152: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128825 gap=3225 dirtied=64 pause=12
> rsync-3823 [005] 15002.090111: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128782 gap=3214 dirtied=64 pause=12
> rsync-3823 [004] 15002.102520: balance_dirty_pages: bdi=btrfs-2 weight=39% limit=128764 gap=3036 dirtied=64 pause=12
The above lines are in the old output format, but you get the idea.
> + /*
> + * [..................soft throttling range.........]
> + * ^ |<=========== bdi_gap =========>|
> + * background_thresh |<== task_gap ==>|
That background_thresh should be the global (background + dirty)/2.
> + * -------------------|-------+----------------|--------------|
> + * (bdi_limit * 7/8)^ ^bdi_dirty ^task_limit ^bdi_limit
> + *
> + * Reasonable large gaps help produce smooth pause times.
> + */
> + TP_printk("bdi=%s bdi_dirty=%lu bdi_limit=%lu task_limit=%lu "
> + "task_weight=%ld%% task_gap=%ld%% bdi_gap=%ld%% "
> + "pages_dirtied=%lu pause=%lu",
> + __entry->bdi,
> + __entry->bdi_dirty,
> + __entry->bdi_limit,
> + __entry->task_limit,
> + /* task weight: proportion of recent dirtied pages */
> + BDP_PERCENT(bdi_limit, task_limit, TASK_SOFT_DIRTY_LIMIT),
> + BDP_PERCENT(task_limit, bdi_dirty, TASK_SOFT_DIRTY_LIMIT),
> + BDP_PERCENT(bdi_limit, bdi_dirty, BDI_SOFT_DIRTY_LIMIT),
> + __entry->pages_dirtied,
> + __entry->pause
> + )
> +);
Thanks,
Fengguang