* XFS/btrfs performance after IO-less dirty throttling
@ 2011-12-14 14:31 Wu Fengguang
2011-12-14 14:59 ` Wu Fengguang
[not found] ` <20111215133137.GA14562@localhost>
0 siblings, 2 replies; 9+ messages in thread
From: Wu Fengguang @ 2011-12-14 14:31 UTC (permalink / raw)
To: linux-fsdevel; +Cc: LKML, Dave Chinner, Christoph Hellwig
Hi,
This very basic 1-disk performance comparison shows +8.3% overall
improvements for XFS and +6.2% for btrfs.
The thresh=1M cases see big "regressions"; however, that is only because
the global dirty limit is now enforced much more strictly, so they are
not real problems.
The other big regressions happen in the XFS UKEY-thresh=100M cases.
Need to explore what's going on inside XFS...
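For reference, the thresh=A[:B] cases just tune the global dirty limits
before each run, roughly along these lines (a minimal sketch of the
setup for thresh=1000M:990M; the real test scripts differ in details):

  # sketch only: set the global dirty limit and the background limit
  echo $((1000 * 1024 * 1024)) > /proc/sys/vm/dirty_bytes
  echo $((990 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes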
wfg@bee /export/writeback% ./compare -g xfs fat/*/*-3.1.0+ fat/*/*-3.2.0-rc3
3.1.0+ 3.2.0-rc3
------------------------ ------------------------
43.24 +11.6% 48.26 fat/UKEY-HDD/xfs-100dd-1-3.1.0+
51.35 +8.3% 55.62 fat/UKEY-HDD/xfs-10dd-1-3.1.0+
58.73 +5.7% 62.09 fat/UKEY-HDD/xfs-1dd-1-3.1.0+
4.17 -37.8% 2.59 fat/UKEY-thresh=100M/xfs-100dd-1-3.1.0+
4.14 -53.3% 1.94 fat/UKEY-thresh=100M/xfs-10dd-1-3.1.0+
6.30 +0.4% 6.33 fat/UKEY-thresh=100M/xfs-1dd-1-3.1.0+
8.88 +18.0% 10.48 fat/fio/xfs-fio_fat_mmap_randwrite_4k-1-3.1.0+
36.01 +25.2% 45.07 fat/fio/xfs-fio_fat_mmap_randwrite_64k-1-3.1.0+
47.04 +9.2% 51.38 fat/fio/xfs-fio_fat_rates-1-3.1.0+
38.23 +22.7% 46.92 fat/thresh=1000M/xfs-100dd-1-3.1.0+
45.17 +22.6% 55.39 fat/thresh=1000M/xfs-10dd-1-3.1.0+
51.66 +9.2% 56.44 fat/thresh=1000M/xfs-1dd-1-3.1.0+
38.31 +23.2% 47.18 fat/thresh=1000M:990M/xfs-100dd-1-3.1.0+
43.60 +26.1% 54.99 fat/thresh=1000M:990M/xfs-10dd-1-3.1.0+
50.14 +14.4% 57.39 fat/thresh=1000M:990M/xfs-1dd-1-3.1.0+
38.06 +26.8% 48.26 fat/thresh=1000M:999M/xfs-100dd-1-3.1.0+
42.43 +28.1% 54.34 fat/thresh=1000M:999M/xfs-10dd-1-3.1.0+
50.30 +10.4% 55.53 fat/thresh=1000M:999M/xfs-1dd-1-3.1.0+
28.89 -3.9% 27.76 fat/thresh=100M/xfs-100dd-1-3.1.0+
44.94 +3.6% 46.54 fat/thresh=100M/xfs-10dd-1-3.1.0+
55.60 -3.0% 53.95 fat/thresh=100M/xfs-1dd-1-3.1.0+
37.05 -15.2% 31.43 fat/thresh=10M/xfs-10dd-1-3.1.0+
55.42 +1.1% 56.03 fat/thresh=10M/xfs-1dd-1-3.1.0+
41.64 -30.3% 29.00 fat/thresh=1M/xfs-10dd-1-3.1.0+
56.68 -4.5% 54.14 fat/thresh=1M/xfs-1dd-1-3.1.0+
977.98 +8.3% 1059.05 TOTAL write_bw
wfg@bee /export/writeback% ./compare -g btrfs fat/*/*-3.1.0+ fat/*/*-3.2.0-rc3
3.1.0+ 3.2.0-rc3
------------------------ ------------------------
53.49 +19.6% 63.98 fat/UKEY-HDD/btrfs-100dd-1-3.1.0+
56.50 +11.8% 63.19 fat/UKEY-HDD/btrfs-10dd-1-3.1.0+
60.07 +6.3% 63.83 fat/UKEY-HDD/btrfs-1dd-1-3.1.0+
4.81 +26.8% 6.10 fat/UKEY-thresh=100M/btrfs-100dd-1-3.1.0+
5.00 +23.2% 6.16 fat/UKEY-thresh=100M/btrfs-10dd-1-3.1.0+
5.99 +1.8% 6.10 fat/UKEY-thresh=100M/btrfs-1dd-1-3.1.0+
15.98 -10.6% 14.29 fat/fio/btrfs-fio_fat_mmap_randwrite_4k-1-3.1.0+
52.02 +5.4% 54.85 fat/fio/btrfs-fio_fat_mmap_randwrite_64k-1-3.1.0+
56.01 +2.6% 57.48 fat/fio/btrfs-fio_fat_rates-1-3.1.0+
52.42 +10.4% 57.85 fat/thresh=1000M/btrfs-100dd-1-3.1.0+
53.75 +7.7% 57.86 fat/thresh=1000M/btrfs-10dd-1-3.1.0+
53.23 +11.2% 59.18 fat/thresh=1000M/btrfs-1dd-1-3.1.0+
55.11 +5.3% 58.02 fat/thresh=1000M:990M/btrfs-100dd-1-3.1.0+
53.01 +5.3% 55.83 fat/thresh=1000M:990M/btrfs-10dd-1-3.1.0+
55.71 +5.9% 58.97 fat/thresh=1000M:990M/btrfs-1dd-1-3.1.0+
54.07 +6.5% 57.61 fat/thresh=1000M:999M/btrfs-100dd-1-3.1.0+
53.05 +4.1% 55.21 fat/thresh=1000M:999M/btrfs-10dd-1-3.1.0+
54.79 +5.6% 57.85 fat/thresh=1000M:999M/btrfs-1dd-1-3.1.0+
56.42 +2.9% 58.06 fat/thresh=100M/btrfs-100dd-1-3.1.0+
57.41 +1.5% 58.26 fat/thresh=100M/btrfs-10dd-1-3.1.0+
58.12 +4.1% 60.50 fat/thresh=100M/btrfs-1dd-1-3.1.0+
44.41 +29.2% 57.37 fat/thresh=10M/btrfs-10dd-1-3.1.0+
54.33 +9.3% 59.37 fat/thresh=10M/btrfs-1dd-1-3.1.0+
5.00 -47.3% 2.64 fat/thresh=1M/btrfs-10dd-1-3.1.0+
15.21 -83.6% 2.50 fat/thresh=1M/btrfs-1dd-1-3.1.0+
1085.90 +6.2% 1153.08 TOTAL write_bw
It performs roughly the same with the pending writeback changes:
wfg@bee /export/writeback% ./compare -g xfs fat/*/*-3.1.0+ fat/*/*-3.2.0-rc3-pause6+
3.1.0+ 3.2.0-rc3-pause6+
------------------------ ------------------------
43.24 +9.2% 47.21 fat/UKEY-HDD/xfs-100dd-1-3.1.0+
51.35 +8.9% 55.90 fat/UKEY-HDD/xfs-10dd-1-3.1.0+
58.73 +5.8% 62.15 fat/UKEY-HDD/xfs-1dd-1-3.1.0+
4.17 -33.1% 2.79 fat/UKEY-thresh=100M/xfs-100dd-1-3.1.0+
4.14 -23.9% 3.15 fat/UKEY-thresh=100M/xfs-10dd-1-3.1.0+
6.30 +0.2% 6.32 fat/UKEY-thresh=100M/xfs-1dd-1-3.1.0+
8.88 +10.2% 9.78 fat/fio/xfs-fio_fat_mmap_randwrite_4k-1-3.1.0+
36.01 +24.4% 44.80 fat/fio/xfs-fio_fat_mmap_randwrite_64k-1-3.1.0+
47.04 +11.9% 52.64 fat/fio/xfs-fio_fat_rates-1-3.1.0+
38.23 +27.9% 48.89 fat/thresh=1000M/xfs-100dd-1-3.1.0+
45.17 +21.0% 54.67 fat/thresh=1000M/xfs-10dd-1-3.1.0+
51.66 +11.2% 57.46 fat/thresh=1000M/xfs-1dd-1-3.1.0+
38.31 +15.1% 44.08 fat/thresh=1000M:990M/xfs-100dd-1-3.1.0+
43.60 +24.8% 54.41 fat/thresh=1000M:990M/xfs-10dd-1-3.1.0+
50.14 +13.4% 56.87 fat/thresh=1000M:990M/xfs-1dd-1-3.1.0+
38.06 +28.9% 49.05 fat/thresh=1000M:999M/xfs-100dd-1-3.1.0+
42.43 +28.3% 54.41 fat/thresh=1000M:999M/xfs-10dd-1-3.1.0+
50.30 +8.1% 54.36 fat/thresh=1000M:999M/xfs-1dd-1-3.1.0+
28.89 -1.2% 28.55 fat/thresh=100M/xfs-100dd-1-3.1.0+
44.94 +5.3% 47.33 fat/thresh=100M/xfs-10dd-1-3.1.0+
55.60 +3.4% 57.48 fat/thresh=100M/xfs-1dd-1-3.1.0+
37.05 -14.5% 31.68 fat/thresh=10M/xfs-10dd-1-3.1.0+
55.42 +0.6% 55.77 fat/thresh=10M/xfs-1dd-1-3.1.0+
41.64 -30.8% 28.82 fat/thresh=1M/xfs-10dd-1-3.1.0+
56.68 -3.0% 54.98 fat/thresh=1M/xfs-1dd-1-3.1.0+
977.98 +8.7% 1063.55 TOTAL write_bw
wfg@bee /export/writeback% ./compare -g btrfs fat/*/*-{3.1.0+,3.2.0-rc3-pause6+}
3.1.0+ 3.2.0-rc3-pause6+
------------------------ ------------------------
53.49 +19.2% 63.75 fat/UKEY-HDD/btrfs-100dd-1-3.1.0+
56.50 +14.1% 64.49 fat/UKEY-HDD/btrfs-10dd-1-3.1.0+
60.07 +6.7% 64.11 fat/UKEY-HDD/btrfs-1dd-1-3.1.0+
4.81 +27.4% 6.12 fat/UKEY-thresh=100M/btrfs-100dd-1-3.1.0+
5.00 +21.7% 6.09 fat/UKEY-thresh=100M/btrfs-10dd-1-3.1.0+
5.99 +1.9% 6.11 fat/UKEY-thresh=100M/btrfs-1dd-1-3.1.0+
15.98 -8.5% 14.62 fat/fio/btrfs-fio_fat_mmap_randwrite_4k-1-3.1.0+
52.02 +4.2% 54.20 fat/fio/btrfs-fio_fat_mmap_randwrite_64k-1-3.1.0+
56.01 +1.0% 56.57 fat/fio/btrfs-fio_fat_rates-1-3.1.0+
52.42 +11.4% 58.38 fat/thresh=1000M/btrfs-100dd-1-3.1.0+
53.75 +9.2% 58.67 fat/thresh=1000M/btrfs-10dd-1-3.1.0+
53.23 +9.6% 58.34 fat/thresh=1000M/btrfs-1dd-1-3.1.0+
55.11 +5.8% 58.33 fat/thresh=1000M:990M/btrfs-100dd-1-3.1.0+
53.01 +6.8% 56.62 fat/thresh=1000M:990M/btrfs-10dd-1-3.1.0+
55.71 +3.8% 57.84 fat/thresh=1000M:990M/btrfs-1dd-1-3.1.0+
54.07 +6.9% 57.78 fat/thresh=1000M:999M/btrfs-100dd-1-3.1.0+
53.05 +5.3% 55.87 fat/thresh=1000M:999M/btrfs-10dd-1-3.1.0+
54.79 +4.2% 57.09 fat/thresh=1000M:999M/btrfs-1dd-1-3.1.0+
56.42 +2.7% 57.97 fat/thresh=100M/btrfs-100dd-1-3.1.0+
57.41 +1.9% 58.51 fat/thresh=100M/btrfs-10dd-1-3.1.0+
58.12 +1.0% 58.71 fat/thresh=100M/btrfs-1dd-1-3.1.0+
44.41 +31.0% 58.16 fat/thresh=10M/btrfs-10dd-1-3.1.0+
54.33 +7.9% 58.60 fat/thresh=10M/btrfs-1dd-1-3.1.0+
5.00 -55.3% 2.24 fat/thresh=1M/btrfs-10dd-1-3.1.0+
15.21 -81.0% 2.89 fat/thresh=1M/btrfs-1dd-1-3.1.0+
1085.90 +6.1% 1152.06 TOTAL write_bw
Thanks,
Fengguang
* Re: XFS/btrfs performance after IO-less dirty throttling
2011-12-14 14:31 XFS/btrfs performance after IO-less dirty throttling Wu Fengguang
@ 2011-12-14 14:59 ` Wu Fengguang
[not found] ` <20111215133137.GA14562@localhost>
1 sibling, 0 replies; 9+ messages in thread
From: Wu Fengguang @ 2011-12-14 14:59 UTC (permalink / raw)
To: linux-fsdevel; +Cc: LKML, Dave Chinner, Christoph Hellwig, Chris Mason
On Wed, Dec 14, 2011 at 10:31:56PM +0800, Wu Fengguang wrote:
> Hi,
>
> This very basic 1-disk performance comparison shows +8.3% overall
> improvements for XFS and +6.2% for btrfs.
Hmm... this is better than I expected. Earlier tests actually showed
lower overall numbers:
wfg@bee /export/writeback% ./compare -g xfs fat/*/*-{3.1.0+,3.2.0-rc1-ioless-full+}
3.1.0+ 3.2.0-rc1-ioless-full+
------------------------ ------------------------
43.24 -0.6% 42.96 fat/UKEY-HDD/xfs-100dd-1-3.1.0+
42.94 +2.9% 44.19 fat/UKEY-HDD/xfs-100dd-2-3.1.0+
43.13 +2.6% 44.23 fat/UKEY-HDD/xfs-100dd-3-3.1.0+
51.35 +6.1% 54.50 fat/UKEY-HDD/xfs-10dd-1-3.1.0+
51.31 +3.4% 53.03 fat/UKEY-HDD/xfs-10dd-2-3.1.0+
52.72 +2.7% 54.12 fat/UKEY-HDD/xfs-10dd-3-3.1.0+
58.73 +8.1% 63.50 fat/UKEY-HDD/xfs-1dd-1-3.1.0+
57.74 +4.6% 60.42 fat/UKEY-HDD/xfs-1dd-2-3.1.0+
61.12 +2.0% 62.33 fat/UKEY-HDD/xfs-1dd-3-3.1.0+
2.00 +16.8% 2.34 fat/mmap_randwrite_4k/xfs-fio_fat_mmap_randwrite_4k-1-3.1.0+
2.07 +12.3% 2.33 fat/mmap_randwrite_4k/xfs-fio_fat_mmap_randwrite_4k-2-3.1.0+
44.12 +18.8% 52.41 fat/mmap_randwrite_64k/xfs-fio_fat_mmap_randwrite_64k-1-3.1.0+
44.70 +5.5% 47.14 fat/mmap_randwrite_64k/xfs-fio_fat_mmap_randwrite_64k-2-3.1.0+
48.17 +5.8% 50.97 fat/rates/xfs-fio_fat_rates-1-3.1.0+
48.79 +5.0% 51.23 fat/rates/xfs-fio_fat_rates-2-3.1.0+
38.23 +13.5% 43.40 fat/thresh=1000M/xfs-100dd-1-3.1.0+
38.91 +14.1% 44.41 fat/thresh=1000M/xfs-100dd-2-3.1.0+
39.56 +16.0% 45.88 fat/thresh=1000M/xfs-100dd-3-3.1.0+
45.17 +14.4% 51.70 fat/thresh=1000M/xfs-10dd-1-3.1.0+
45.73 +11.4% 50.93 fat/thresh=1000M/xfs-10dd-2-3.1.0+
46.04 +15.7% 53.26 fat/thresh=1000M/xfs-10dd-3-3.1.0+
51.66 +10.8% 57.22 fat/thresh=1000M/xfs-1dd-1-3.1.0+
51.42 +5.6% 54.31 fat/thresh=1000M/xfs-1dd-2-3.1.0+
52.51 +5.6% 55.45 fat/thresh=1000M/xfs-1dd-3-3.1.0+
38.31 +22.0% 46.75 fat/thresh=1000M:990M/xfs-100dd-1-3.1.0+
43.60 +24.3% 54.20 fat/thresh=1000M:990M/xfs-10dd-1-3.1.0+
50.14 +10.0% 55.17 fat/thresh=1000M:990M/xfs-1dd-1-3.1.0+
38.06 +24.1% 47.24 fat/thresh=1000M:999M/xfs-100dd-1-3.1.0+
42.43 +28.4% 54.46 fat/thresh=1000M:999M/xfs-10dd-1-3.1.0+
50.30 +10.7% 55.70 fat/thresh=1000M:999M/xfs-1dd-1-3.1.0+
28.89 -5.7% 27.24 fat/thresh=100M/xfs-100dd-1-3.1.0+
29.19 -4.9% 27.75 fat/thresh=100M/xfs-100dd-3-3.1.0+
43.47 +6.4% 46.28 fat/thresh=100M/xfs-10dd-3-3.1.0+
55.33 -0.1% 55.28 fat/thresh=100M/xfs-1dd-3-3.1.0+
37.05 -17.0% 30.76 fat/thresh=10M/xfs-10dd-1-3.1.0+
37.23 -17.5% 30.71 fat/thresh=10M/xfs-10dd-2-3.1.0+
38.83 -19.3% 31.33 fat/thresh=10M/xfs-10dd-3-3.1.0+
55.42 +1.1% 56.06 fat/thresh=10M/xfs-1dd-1-3.1.0+
55.08 -4.9% 52.39 fat/thresh=10M/xfs-1dd-2-3.1.0+
54.78 +0.5% 55.07 fat/thresh=10M/xfs-1dd-3-3.1.0+
41.64 -32.7% 28.04 fat/thresh=1M/xfs-10dd-1-3.1.0+
40.82 -31.0% 28.18 fat/thresh=1M/xfs-10dd-2-3.1.0+
40.87 -30.1% 28.59 fat/thresh=1M/xfs-10dd-3-3.1.0+
56.68 -6.4% 53.04 fat/thresh=1M/xfs-1dd-1-3.1.0+
57.48 -11.8% 50.67 fat/thresh=1M/xfs-1dd-2-3.1.0+
56.32 -7.4% 52.15 fat/thresh=1M/xfs-1dd-3-3.1.0+
2053.29 +2.7% 2109.30 TOTAL write_bw
wfg@bee /export/writeback% ./compare -g btrfs fat/*/*-{3.1.0+,3.2.0-rc1-ioless-full+}
3.1.0+ 3.2.0-rc1-ioless-full+
------------------------ ------------------------
53.49 +12.6% 60.21 fat/UKEY-HDD/btrfs-100dd-1-3.1.0+
55.42 +9.6% 60.75 fat/UKEY-HDD/btrfs-100dd-2-3.1.0+
56.54 +10.4% 62.40 fat/UKEY-HDD/btrfs-100dd-3-3.1.0+
56.50 +6.2% 60.00 fat/UKEY-HDD/btrfs-10dd-1-3.1.0+
56.63 +7.3% 60.76 fat/UKEY-HDD/btrfs-10dd-2-3.1.0+
55.64 +11.8% 62.19 fat/UKEY-HDD/btrfs-10dd-3-3.1.0+
60.07 +1.5% 60.97 fat/UKEY-HDD/btrfs-1dd-1-3.1.0+
60.18 -0.7% 59.73 fat/UKEY-HDD/btrfs-1dd-2-3.1.0+
59.36 +5.4% 62.56 fat/UKEY-HDD/btrfs-1dd-3-3.1.0+
1.54 +11.3% 1.72 fat/mmap_randwrite_4k/btrfs-fio_fat_mmap_randwrite_4k-1-3.1.0+
1.67 +3.0% 1.72 fat/mmap_randwrite_4k/btrfs-fio_fat_mmap_randwrite_4k-2-3.1.0+
50.65 +1.8% 51.58 fat/mmap_randwrite_64k/btrfs-fio_fat_mmap_randwrite_64k-1-3.1.0+
50.44 +2.3% 51.61 fat/mmap_randwrite_64k/btrfs-fio_fat_mmap_randwrite_64k-2-3.1.0+
55.25 +1.3% 55.97 fat/rates/btrfs-fio_fat_rates-1-3.1.0+
55.34 +0.7% 55.73 fat/rates/btrfs-fio_fat_rates-2-3.1.0+
52.42 +3.5% 54.26 fat/thresh=1000M/btrfs-100dd-1-3.1.0+
53.60 +2.3% 54.84 fat/thresh=1000M/btrfs-100dd-2-3.1.0+
54.62 +3.9% 56.72 fat/thresh=1000M/btrfs-100dd-3-3.1.0+
53.75 +0.5% 54.04 fat/thresh=1000M/btrfs-10dd-1-3.1.0+
53.75 +1.8% 54.73 fat/thresh=1000M/btrfs-10dd-2-3.1.0+
53.31 +5.6% 56.31 fat/thresh=1000M/btrfs-10dd-3-3.1.0+
53.23 +4.7% 55.74 fat/thresh=1000M/btrfs-1dd-1-3.1.0+
54.53 -1.3% 53.81 fat/thresh=1000M/btrfs-1dd-2-3.1.0+
55.37 +0.4% 55.60 fat/thresh=1000M/btrfs-1dd-3-3.1.0+
55.11 +2.9% 56.72 fat/thresh=1000M:990M/btrfs-100dd-1-3.1.0+
53.01 +3.1% 54.64 fat/thresh=1000M:990M/btrfs-10dd-1-3.1.0+
55.71 +2.1% 56.90 fat/thresh=1000M:990M/btrfs-1dd-1-3.1.0+
54.07 +3.3% 55.84 fat/thresh=1000M:999M/btrfs-100dd-1-3.1.0+
53.05 +2.6% 54.41 fat/thresh=1000M:999M/btrfs-10dd-1-3.1.0+
54.79 +3.1% 56.49 fat/thresh=1000M:999M/btrfs-1dd-1-3.1.0+
56.42 -2.9% 54.80 fat/thresh=100M/btrfs-100dd-1-3.1.0+
56.43 +1.4% 57.20 fat/thresh=100M/btrfs-100dd-3-3.1.0+
57.41 -5.4% 54.33 fat/thresh=100M/btrfs-10dd-1-3.1.0+
55.27 +2.4% 56.62 fat/thresh=100M/btrfs-10dd-3-3.1.0+
56.88 -1.3% 56.15 fat/thresh=100M/btrfs-1dd-3-3.1.0+
44.41 +20.5% 53.50 fat/thresh=10M/btrfs-10dd-1-3.1.0+
46.88 +14.7% 53.78 fat/thresh=10M/btrfs-10dd-2-3.1.0+
50.11 +11.5% 55.87 fat/thresh=10M/btrfs-10dd-3-3.1.0+
54.33 +2.3% 55.59 fat/thresh=10M/btrfs-1dd-1-3.1.0+
55.46 -1.3% 54.73 fat/thresh=10M/btrfs-1dd-2-3.1.0+
54.15 +3.3% 55.91 fat/thresh=10M/btrfs-1dd-3-3.1.0+
5.00 -41.4% 2.93 fat/thresh=1M/btrfs-10dd-1-3.1.0+
3.70 -30.3% 2.58 fat/thresh=1M/btrfs-10dd-2-3.1.0+
5.22 -43.9% 2.92 fat/thresh=1M/btrfs-10dd-3-3.1.0+
15.21 -86.3% 2.08 fat/thresh=1M/btrfs-1dd-1-3.1.0+
14.52 -84.7% 2.22 fat/thresh=1M/btrfs-1dd-2-3.1.0+
14.53 -85.6% 2.09 fat/thresh=1M/btrfs-1dd-3-3.1.0+
2184.97 +1.7% 2222.27 TOTAL write_bw
Thanks,
Fengguang
[parent not found: <20111215133137.GA14562@localhost>]
* Re: XFS/btrfs performance after IO-less dirty throttling
[not found] ` <20111215133137.GA14562@localhost>
@ 2011-12-16 0:31 ` Dave Chinner
2011-12-16 1:53 ` Wu Fengguang
[not found] ` <20111215135250.GB14562@localhost>
1 sibling, 1 reply; 9+ messages in thread
From: Dave Chinner @ 2011-12-16 0:31 UTC (permalink / raw)
To: Wu Fengguang; +Cc: linux-fsdevel, LKML, Christoph Hellwig
On Thu, Dec 15, 2011 at 09:31:37PM +0800, Wu Fengguang wrote:
> > The other big regressions happen in the XFS UKEY-thresh=100M cases.
>
> > 3.1.0+ 3.2.0-rc3
> > ------------------------ ------------------------
> > 4.17 -37.8% 2.59 fat/UKEY-thresh=100M/xfs-100dd-1-3.1.0+
> > 4.14 -53.3% 1.94 fat/UKEY-thresh=100M/xfs-10dd-1-3.1.0+
> > 6.30 +0.4% 6.33 fat/UKEY-thresh=100M/xfs-1dd-1-3.1.0+
>
> Here are more details for the 10dd case. The attached
> balance_dirty_pages-pause.png shows small pause times (mostly in the
> 10-50ms range) and small nr_dirtied_pause (mostly < 5), which may be the root cause.
>
> The iostat graphs show very unstable throughput, and the IO size often
> drops low.
And it's doing shitloads more allocation work. IOWs, the delayed
allocation algorithms are being strangled by writeback, causing
fragmentation and hence not allowing enough data per thread to be
written at a time to maximise throughput.
However, I'd argue that the performance of 10 concurrent writers to
a slow USB key formatted with XFS is so *completely irrelevant* that
I'd ignore it. Spend your time optimising writeback on XFS for high
throughputs (e.g. > 500MB/s), not for shitty $5 USB keys that are 2-3
orders of magnitude slower than the target market for XFS...
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS/btrfs performance after IO-less dirty throttling
2011-12-16 0:31 ` Dave Chinner
@ 2011-12-16 1:53 ` Wu Fengguang
2011-12-16 4:25 ` Dave Chinner
0 siblings, 1 reply; 9+ messages in thread
From: Wu Fengguang @ 2011-12-16 1:53 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-fsdevel@vger.kernel.org, LKML, Christoph Hellwig
On Fri, Dec 16, 2011 at 08:31:57AM +0800, Dave Chinner wrote:
> On Thu, Dec 15, 2011 at 09:31:37PM +0800, Wu Fengguang wrote:
> > > The other big regressions happen in the XFS UKEY-thresh=100M cases.
> >
> > > 3.1.0+ 3.2.0-rc3
> > > ------------------------ ------------------------
> > > 4.17 -37.8% 2.59 fat/UKEY-thresh=100M/xfs-100dd-1-3.1.0+
> > > 4.14 -53.3% 1.94 fat/UKEY-thresh=100M/xfs-10dd-1-3.1.0+
> > > 6.30 +0.4% 6.33 fat/UKEY-thresh=100M/xfs-1dd-1-3.1.0+
> >
> > Here are more details for the 10dd case. The attached
> > balance_dirty_pages-pause.png shows small pause times (mostly in the
> > 10-50ms range) and small nr_dirtied_pause (mostly < 5), which may be the root cause.
> >
> > The iostat graphs show very unstable throughput, and the IO size often
> > drops low.
>
> And it's doing shitloads more allocation work. IOWs, the delayed
> allocation algorithms are being strangled by writeback, causing
> fragmentation and hence not allowing enough data per thread to be
> written at a time to maximise throughput.
OK, good to know. For your convenience, below is a comparison between
3.2.0-rc3 and 3.2.0-rc3-pause6+, where both kernels do IO-less dirty
throttling and the latter uses longer pause times/intervals, which
makes XFS work better. Hopefully this comparison is more suitable for
understanding the impact of the pause times/intervals on XFS.
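For reference, the pause data behind those graphs can be collected from
the writeback:balance_dirty_pages trace event; a rough sketch, assuming
debugfs is mounted at /sys/kernel/debug:

  cd /sys/kernel/debug/tracing
  echo 1 > events/writeback/balance_dirty_pages/enable
  cat trace_pipe > /tmp/balance_dirty_pages.trace &
  # the pause-related fields of each record feed the pause plots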
> However, I'd argue that the performance of 10 concurrent writers to
> a slow USB key formatted with XFS is so *completely irrelevant* that
> I'd ignore it. Spend your time optimising writeback on XFS for high
> throughputs (e.g > 500MB/s), not for shitty $5 USB keys that are 2-3
> orders of magnitude slower than the target market for XFS...
That makes good sense. However it's always good to understand the root
cause (as you already did) and make sure there is no bug inside XFS
(or the other filesystems) that could somehow show up elsewhere in the
fast arrays that we do care about.
I'm indeed happy that you don't care that much about the regression
I introduced ;-)
Thanks,
Fengguang
---
Note: the 3.2.0-rc3-pause6+ case was re-run to generate the perf-stat
data that was missing from the previous email.
% ./compare-xfs fat/UKEY-thresh=100M/xfs-10dd-1-3.2.0-rc3 fat/UKEY-thresh=100M/xfs-10dd-1-3.2.0-rc3-pause6+| g TOTAL
1.94 +39.5% 2.70 TOTAL write_bw
1995.32 +45.0% 2893.64 TOTAL io_wkB_s
17.96 +42.1% 25.52 TOTAL io_w_s
0.06 -42.1% 0.04 TOTAL io_wrqm_s
0.04 +18.2% 0.04 TOTAL io_rkB_s
0.02 +11.1% 0.02 TOTAL io_r_s
0.00 0.00 TOTAL io_rrqm_s
205.05 +1.2% 207.47 TOTAL io_avgrq_sz
172.78 -14.2% 148.33 TOTAL io_avgqu_sz
8231.55 -32.3% 5570.70 TOTAL io_await
86.04 -18.4% 70.20 TOTAL io_svctm
100.00 -0.0% 100.00 TOTAL io_util
0.26 -81.7% 0.05 TOTAL cpu_user
0.00 0.00 TOTAL cpu_nice
0.10 +24.6% 0.12 TOTAL cpu_system
66.28 -0.7% 65.79 TOTAL cpu_iowait
0.00 0.00 TOTAL cpu_steal
33.36 +2.0% 34.05 TOTAL cpu_idle
1.94 +39.5% 2.70 TOTAL write_bw
0.00 0.00 TOTAL writeback:writeback_nothread
0.00 0.00 TOTAL writeback:writeback_queue
0.00 0.00 TOTAL writeback:writeback_exec
827.00 +20.7% 998.00 TOTAL writeback:writeback_start
827.00 +20.7% 998.00 TOTAL writeback:writeback_written
0.00 1.00 TOTAL writeback:writeback_wait
402.00 -30.3% 280.00 TOTAL writeback:writeback_pages_written
0.00 0.00 TOTAL writeback:writeback_nowork
349.00 -32.7% 235.00 TOTAL writeback:writeback_wake_background
80.00 +22.5% 98.00 TOTAL writeback:writeback_wake_thread
0.00 2.00 TOTAL writeback:writeback_wake_forker_thread
0.00 0.00 TOTAL writeback:writeback_bdi_register
0.00 0.00 TOTAL writeback:writeback_bdi_unregister
1.00 +100.0% 2.00 TOTAL writeback:writeback_thread_start
1.00 +100.0% 2.00 TOTAL writeback:writeback_thread_stop
1506.00 -1.0% 1491.00 TOTAL writeback:wbc_writepage
666.00 +4.5% 696.00 TOTAL writeback:writeback_queue_io
169482.00 -82.8% 29193.00 TOTAL writeback:global_dirty_state
2910.00 -11.5% 2574.00 TOTAL writeback:bdi_dirty_ratelimit
166968.00 -83.8% 27111.00 TOTAL writeback:balance_dirty_pages
0.00 0.00 TOTAL writeback:writeback_congestion_wait
0.00 0.00 TOTAL writeback:writeback_wait_iff_congested
0.00 0.00 TOTAL writeback:writeback_single_inode_requeue
534.00 +50.2% 802.00 TOTAL writeback:writeback_single_inode
0.00 0.00 TOTAL block:block_rq_abort
0.00 0.00 TOTAL block:block_rq_requeue
10789.00 +42.0% 15323.00 TOTAL block:block_rq_complete
11639.00 +38.1% 16078.00 TOTAL block:block_rq_insert
11376.00 +40.0% 15927.00 TOTAL block:block_rq_issue
0.00 0.00 TOTAL block:block_bio_bounce
0.00 0.00 TOTAL block:block_bio_complete
38.00 -42.1% 22.00 TOTAL block:block_bio_backmerge
0.00 0.00 TOTAL block:block_bio_frontmerge
11091.00 +39.8% 15500.00 TOTAL block:block_bio_queue
11639.00 +38.1% 16079.00 TOTAL block:block_getrq
280.00 +63.6% 458.00 TOTAL block:block_sleeprq
1244.00 +30.5% 1623.00 TOTAL block:block_plug
1244.00 +30.5% 1623.00 TOTAL block:block_unplug
0.00 0.00 TOTAL block:block_split
11091.00 +39.8% 15500.00 TOTAL block:block_bio_remap
0.00 0.00 TOTAL block:block_rq_remap
1.94 +39.5% 2.70 TOTAL write_bw
0.00 0.00 TOTAL xfs:xfs_attr_list_sf
0.00 0.00 TOTAL xfs:xfs_attr_list_sf_all
0.00 0.00 TOTAL xfs:xfs_attr_list_leaf
0.00 0.00 TOTAL xfs:xfs_attr_list_leaf_end
0.00 0.00 TOTAL xfs:xfs_attr_list_full
0.00 0.00 TOTAL xfs:xfs_attr_list_add
0.00 0.00 TOTAL xfs:xfs_attr_list_wrong_blk
0.00 0.00 TOTAL xfs:xfs_attr_list_notfound
102834.00 -10.2% 92301.00 TOTAL xfs:xfs_perag_get
0.00 0.00 TOTAL xfs:xfs_perag_get_tag
85219.00 -11.3% 75617.00 TOTAL xfs:xfs_perag_put
0.00 1.00 TOTAL xfs:xfs_perag_set_reclaim
0.00 0.00 TOTAL xfs:xfs_perag_clear_reclaim
0.00 0.00 TOTAL xfs:xfs_attr_list_node_descend
302.00 -13.2% 262.00 TOTAL xfs:xfs_iext_insert
93.00 +55.9% 145.00 TOTAL xfs:xfs_iext_remove
10839.00 +3.9% 11265.00 TOTAL xfs:xfs_bmap_pre_update
10839.00 +3.9% 11265.00 TOTAL xfs:xfs_bmap_post_update
0.00 0.00 TOTAL xfs:xfs_extlist
393.00 -74.8% 99.00 TOTAL xfs:xfs_buf_init
0.00 0.00 TOTAL xfs:xfs_buf_free
1121.00 -17.4% 926.00 TOTAL xfs:xfs_buf_hold
14006.00 -45.4% 7650.00 TOTAL xfs:xfs_buf_rele
652.00 -23.6% 498.00 TOTAL xfs:xfs_buf_iodone
480.00 -20.0% 384.00 TOTAL xfs:xfs_buf_iorequest
0.00 0.00 TOTAL xfs:xfs_buf_bawrite
54.00 -24.1% 41.00 TOTAL xfs:xfs_buf_lock
54.00 -24.1% 41.00 TOTAL xfs:xfs_buf_lock_done
36012.00 -55.0% 16207.00 TOTAL xfs:xfs_buf_trylock
3453.00 -0.8% 3426.00 TOTAL xfs:xfs_buf_unlock
13.00 +0.0% 13.00 TOTAL xfs:xfs_buf_iowait
15.00 +0.0% 15.00 TOTAL xfs:xfs_buf_iowait_done
2734.00 +8.7% 2971.00 TOTAL xfs:xfs_buf_delwri_queue
0.00 1.00 TOTAL xfs:xfs_buf_delwri_dequeue
169.00 -34.3% 111.00 TOTAL xfs:xfs_buf_delwri_split
0.00 0.00 TOTAL xfs:xfs_buf_get_uncached
0.00 0.00 TOTAL xfs:xfs_bdstrat_shut
321.00 +6.2% 341.00 TOTAL xfs:xfs_buf_item_relse
0.00 0.00 TOTAL xfs:xfs_buf_item_iodone
0.00 0.00 TOTAL xfs:xfs_buf_item_iodone_async
0.00 0.00 TOTAL xfs:xfs_buf_error_relse
0.00 0.00 TOTAL xfs:xfs_trans_read_buf_io
0.00 0.00 TOTAL xfs:xfs_trans_read_buf_shut
0.00 0.00 TOTAL xfs:xfs_btree_corrupt
0.00 0.00 TOTAL xfs:xfs_da_btree_corrupt
0.00 0.00 TOTAL xfs:xfs_reset_dqcounts
0.00 0.00 TOTAL xfs:xfs_inode_item_push
1661.00 +11.4% 1851.00 TOTAL xfs:xfs_buf_find
1600.00 +14.1% 1825.00 TOTAL xfs:xfs_buf_get
1597.00 +14.1% 1822.00 TOTAL xfs:xfs_buf_read
482.00 -19.7% 387.00 TOTAL xfs:xfs_buf_ioerror
1206.00 -2.7% 1173.00 TOTAL xfs:xfs_buf_item_size
0.00 1.00 TOTAL xfs:xfs_buf_item_size_stale
1206.00 -2.7% 1173.00 TOTAL xfs:xfs_buf_item_format
0.00 1.00 TOTAL xfs:xfs_buf_item_format_stale
1137.00 -11.9% 1002.00 TOTAL xfs:xfs_buf_item_pin
1131.00 -12.4% 991.00 TOTAL xfs:xfs_buf_item_unpin
0.00 0.00 TOTAL xfs:xfs_buf_item_unpin_stale
149.00 -43.6% 84.00 TOTAL xfs:xfs_buf_item_trylock
1208.00 -2.6% 1176.00 TOTAL xfs:xfs_buf_item_unlock
0.00 1.00 TOTAL xfs:xfs_buf_item_unlock_stale
1131.00 -12.4% 991.00 TOTAL xfs:xfs_buf_item_committed
0.00 0.00 TOTAL xfs:xfs_buf_item_push
149.00 -43.6% 84.00 TOTAL xfs:xfs_buf_item_pushbuf
3.00 +0.0% 3.00 TOTAL xfs:xfs_trans_get_buf
0.00 0.00 TOTAL xfs:xfs_trans_get_buf_recur
19.00 -26.3% 14.00 TOTAL xfs:xfs_trans_getsb
0.00 0.00 TOTAL xfs:xfs_trans_getsb_recur
1503.00 -0.3% 1499.00 TOTAL xfs:xfs_trans_read_buf
9.00 +922.2% 92.00 TOTAL xfs:xfs_trans_read_buf_recur
2638.00 +0.3% 2646.00 TOTAL xfs:xfs_trans_log_buf
1198.00 +4.3% 1250.00 TOTAL xfs:xfs_trans_brelse
0.00 0.00 TOTAL xfs:xfs_trans_bjoin
0.00 0.00 TOTAL xfs:xfs_trans_bhold
0.00 0.00 TOTAL xfs:xfs_trans_bhold_release
0.00 1.00 TOTAL xfs:xfs_trans_binval
1497731.00 +45.8% 2183524.00 TOTAL xfs:xfs_ilock
13167.00 -32.9% 8829.00 TOTAL xfs:xfs_ilock_nowait
0.00 0.00 TOTAL xfs:xfs_ilock_demote
1510701.00 +45.1% 2192357.00 TOTAL xfs:xfs_iunlock
0.00 0.00 TOTAL xfs:xfs_iget_skip
0.00 0.00 TOTAL xfs:xfs_iget_reclaim
0.00 0.00 TOTAL xfs:xfs_iget_reclaim_fail
0.00 0.00 TOTAL xfs:xfs_iget_hit
0.00 0.00 TOTAL xfs:xfs_iget_miss
33.00 +6.1% 35.00 TOTAL xfs:xfs_getattr
0.00 0.00 TOTAL xfs:xfs_setattr
0.00 0.00 TOTAL xfs:xfs_readlink
0.00 0.00 TOTAL xfs:xfs_alloc_file_space
0.00 0.00 TOTAL xfs:xfs_free_file_space
4.00 +50.0% 6.00 TOTAL xfs:xfs_readdir
0.00 0.00 TOTAL xfs:xfs_get_acl
0.00 0.00 TOTAL xfs:xfs_vm_bmap
0.00 0.00 TOTAL xfs:xfs_file_ioctl
0.00 0.00 TOTAL xfs:xfs_file_compat_ioctl
0.00 0.00 TOTAL xfs:xfs_ioctl_setattr
0.00 0.00 TOTAL xfs:xfs_dir_fsync
0.00 0.00 TOTAL xfs:xfs_file_fsync
0.00 1.00 TOTAL xfs:xfs_destroy_inode
246.00 +68.7% 415.00 TOTAL xfs:xfs_write_inode
1.00 +200.0% 3.00 TOTAL xfs:xfs_evict_inode
0.00 0.00 TOTAL xfs:xfs_dquot_dqalloc
0.00 0.00 TOTAL xfs:xfs_dquot_dqdetach
0.00 0.00 TOTAL xfs:xfs_ihold
193614.00 -5.3% 183377.00 TOTAL xfs:xfs_irele
310.00 -11.3% 275.00 TOTAL xfs:xfs_inode_pin
315.00 -11.7% 278.00 TOTAL xfs:xfs_inode_unpin
0.00 0.00 TOTAL xfs:xfs_inode_unpin_nowait
1.00 +200.0% 3.00 TOTAL xfs:xfs_remove
0.00 0.00 TOTAL xfs:xfs_link
0.00 0.00 TOTAL xfs:xfs_lookup
0.00 0.00 TOTAL xfs:xfs_create
0.00 0.00 TOTAL xfs:xfs_symlink
0.00 0.00 TOTAL xfs:xfs_rename
0.00 0.00 TOTAL xfs:xfs_dqadjust
0.00 0.00 TOTAL xfs:xfs_dqreclaim_want
0.00 0.00 TOTAL xfs:xfs_dqreclaim_dirty
0.00 0.00 TOTAL xfs:xfs_dqreclaim_unlink
0.00 0.00 TOTAL xfs:xfs_dqattach_found
0.00 0.00 TOTAL xfs:xfs_dqattach_get
0.00 0.00 TOTAL xfs:xfs_dqinit
0.00 0.00 TOTAL xfs:xfs_dqreuse
0.00 0.00 TOTAL xfs:xfs_dqalloc
0.00 0.00 TOTAL xfs:xfs_dqtobp_read
0.00 0.00 TOTAL xfs:xfs_dqread
0.00 0.00 TOTAL xfs:xfs_dqread_fail
0.00 0.00 TOTAL xfs:xfs_dqlookup_found
0.00 0.00 TOTAL xfs:xfs_dqlookup_want
0.00 0.00 TOTAL xfs:xfs_dqlookup_freelist
0.00 0.00 TOTAL xfs:xfs_dqlookup_done
0.00 0.00 TOTAL xfs:xfs_dqget_hit
0.00 0.00 TOTAL xfs:xfs_dqget_miss
0.00 0.00 TOTAL xfs:xfs_dqput
0.00 0.00 TOTAL xfs:xfs_dqput_wait
0.00 0.00 TOTAL xfs:xfs_dqput_free
0.00 0.00 TOTAL xfs:xfs_dqrele
0.00 0.00 TOTAL xfs:xfs_dqflush
0.00 0.00 TOTAL xfs:xfs_dqflush_force
0.00 0.00 TOTAL xfs:xfs_dqflush_done
627.00 -12.1% 551.00 TOTAL xfs:xfs_log_done_nonperm
12.00 +483.3% 70.00 TOTAL xfs:xfs_log_done_perm
339.00 +5.9% 359.00 TOTAL xfs:xfs_log_reserve
0.00 0.00 TOTAL xfs:xfs_log_umount_write
327.00 -11.6% 289.00 TOTAL xfs:xfs_log_grant_enter
327.00 -11.6% 289.00 TOTAL xfs:xfs_log_grant_exit
0.00 0.00 TOTAL xfs:xfs_log_grant_error
0.00 0.00 TOTAL xfs:xfs_log_grant_sleep1
0.00 0.00 TOTAL xfs:xfs_log_grant_wake1
0.00 0.00 TOTAL xfs:xfs_log_grant_sleep2
0.00 0.00 TOTAL xfs:xfs_log_grant_wake2
0.00 0.00 TOTAL xfs:xfs_log_grant_wake_up
6.00 +933.3% 62.00 TOTAL xfs:xfs_log_regrant_write_enter
6.00 +933.3% 62.00 TOTAL xfs:xfs_log_regrant_write_exit
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_error
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_sleep1
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_wake1
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_sleep2
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_wake2
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_wake_up
12.00 +483.3% 70.00 TOTAL xfs:xfs_log_regrant_reserve_enter
6.00 +933.3% 62.00 TOTAL xfs:xfs_log_regrant_reserve_exit
12.00 +483.3% 70.00 TOTAL xfs:xfs_log_regrant_reserve_sub
627.00 -12.1% 551.00 TOTAL xfs:xfs_log_ungrant_enter
627.00 -11.8% 553.00 TOTAL xfs:xfs_log_ungrant_exit
627.00 -11.8% 553.00 TOTAL xfs:xfs_log_ungrant_sub
261.00 -96.6% 9.00 TOTAL xfs:xfs_ail_push
10520.00 -55.2% 4715.00 TOTAL xfs:xfs_ail_pushbuf
0.00 0.00 TOTAL xfs:xfs_ail_pushbuf_pinned
152.00 -28.9% 108.00 TOTAL xfs:xfs_ail_pinned
23075.00 -66.4% 7756.00 TOTAL xfs:xfs_ail_locked
0.00 0.00 TOTAL xfs:xfs_file_read
296153.00 +46.5% 433727.00 TOTAL xfs:xfs_file_buffered_write
0.00 0.00 TOTAL xfs:xfs_file_direct_write
0.00 0.00 TOTAL xfs:xfs_file_splice_read
0.00 0.00 TOTAL xfs:xfs_file_splice_write
638.00 +13.9% 727.00 TOTAL xfs:xfs_writepage
44327.00 +400.3% 221766.00 TOTAL xfs:xfs_releasepage
44327.00 +419.4% 230216.00 TOTAL xfs:xfs_invalidatepage
337.00 +37.4% 463.00 TOTAL xfs:xfs_map_blocks_found
303.00 -12.2% 266.00 TOTAL xfs:xfs_map_blocks_alloc
275503.00 +50.2% 413675.00 TOTAL xfs:xfs_get_blocks_found
20489.00 -3.1% 19847.00 TOTAL xfs:xfs_get_blocks_alloc
10829.00 +4.3% 11296.00 TOTAL xfs:xfs_delalloc_enospc
0.00 0.00 TOTAL xfs:xfs_unwritten_convert
0.00 0.00 TOTAL xfs:xfs_get_blocks_notfound
555.00 +27.4% 707.00 TOTAL xfs:xfs_setfilesize
6.00 +66.7% 10.00 TOTAL xfs:xfs_itruncate_data_start
6.00 +66.7% 10.00 TOTAL xfs:xfs_itruncate_data_end
0.00 0.00 TOTAL xfs:xfs_pagecache_inval
6.00 +516.7% 37.00 TOTAL xfs:xfs_bunmap
6.00 +983.3% 65.00 TOTAL xfs:xfs_alloc_busy
0.00 0.00 TOTAL xfs:xfs_alloc_busy_enomem
0.00 0.00 TOTAL xfs:xfs_alloc_busy_force
0.00 0.00 TOTAL xfs:xfs_alloc_busy_reuse
0.00 0.00 TOTAL xfs:xfs_alloc_busy_clear
0.00 0.00 TOTAL xfs:xfs_alloc_busy_trim
0.00 0.00 TOTAL xfs:xfs_trans_commit_lsn
928.00 -5.0% 882.00 TOTAL xfs:xfs_agf
6.00 +983.3% 65.00 TOTAL xfs:xfs_free_extent
0.00 0.00 TOTAL xfs:xfs_alloc_exact_done
0.00 0.00 TOTAL xfs:xfs_alloc_exact_notfound
0.00 0.00 TOTAL xfs:xfs_alloc_exact_error
0.00 0.00 TOTAL xfs:xfs_alloc_near_nominleft
291.00 -11.7% 257.00 TOTAL xfs:xfs_alloc_near_first
0.00 0.00 TOTAL xfs:xfs_alloc_near_greater
0.00 0.00 TOTAL xfs:xfs_alloc_near_lesser
0.00 0.00 TOTAL xfs:xfs_alloc_near_error
0.00 0.00 TOTAL xfs:xfs_alloc_near_noentry
0.00 0.00 TOTAL xfs:xfs_alloc_near_busy
0.00 0.00 TOTAL xfs:xfs_alloc_size_neither
0.00 0.00 TOTAL xfs:xfs_alloc_size_noentry
2.00 +0.0% 2.00 TOTAL xfs:xfs_alloc_size_nominleft
14.00 -14.3% 12.00 TOTAL xfs:xfs_alloc_size_done
0.00 0.00 TOTAL xfs:xfs_alloc_size_error
0.00 0.00 TOTAL xfs:xfs_alloc_size_busy
0.00 0.00 TOTAL xfs:xfs_alloc_small_freelist
0.00 0.00 TOTAL xfs:xfs_alloc_small_notenough
5.00 +0.0% 5.00 TOTAL xfs:xfs_alloc_small_done
0.00 0.00 TOTAL xfs:xfs_alloc_small_error
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_badargs
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_nofix
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_noagbp
112.00 -14.3% 96.00 TOTAL xfs:xfs_alloc_vextent_loopfailed
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_allfailed
0.00 0.00 TOTAL xfs:xfs_dir2_sf_addname
0.00 0.00 TOTAL xfs:xfs_dir2_sf_create
0.00 0.00 TOTAL xfs:xfs_dir2_sf_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_sf_replace
1.00 +700.0% 8.00 TOTAL xfs:xfs_dir2_sf_removename
0.00 0.00 TOTAL xfs:xfs_dir2_sf_toino4
0.00 0.00 TOTAL xfs:xfs_dir2_sf_toino8
0.00 0.00 TOTAL xfs:xfs_dir2_sf_to_block
0.00 0.00 TOTAL xfs:xfs_dir2_block_addname
0.00 0.00 TOTAL xfs:xfs_dir2_block_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_block_replace
0.00 0.00 TOTAL xfs:xfs_dir2_block_removename
0.00 0.00 TOTAL xfs:xfs_dir2_block_to_sf
0.00 0.00 TOTAL xfs:xfs_dir2_block_to_leaf
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_addname
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_replace
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_removename
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_to_block
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_to_node
0.00 0.00 TOTAL xfs:xfs_dir2_node_addname
0.00 0.00 TOTAL xfs:xfs_dir2_node_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_node_replace
0.00 0.00 TOTAL xfs:xfs_dir2_node_removename
0.00 0.00 TOTAL xfs:xfs_dir2_node_to_leaf
0.00 0.00 TOTAL xfs:xfs_dir2_leafn_add
0.00 0.00 TOTAL xfs:xfs_dir2_leafn_remove
0.00 0.00 TOTAL xfs:xfs_dir2_grow_inode
0.00 0.00 TOTAL xfs:xfs_dir2_shrink_inode
0.00 0.00 TOTAL xfs:xfs_dir2_leafn_moveents
0.00 0.00 TOTAL xfs:xfs_swap_extent_before
0.00 0.00 TOTAL xfs:xfs_swap_extent_after
0.00 0.00 TOTAL xfs:xfs_log_recover_item_add
0.00 0.00 TOTAL xfs:xfs_log_recover_item_add_cont
0.00 0.00 TOTAL xfs:xfs_log_recover_item_reorder_head
0.00 0.00 TOTAL xfs:xfs_log_recover_item_reorder_tail
0.00 0.00 TOTAL xfs:xfs_log_recover_item_recover
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_not_cancel
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_cancel
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_cancel_add
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_cancel_ref_inc
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_recover
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_inode_buf
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_reg_buf
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_dquot_buf
0.00 0.00 TOTAL xfs:xfs_log_recover_inode_recover
0.00 0.00 TOTAL xfs:xfs_log_recover_inode_cancel
0.00 0.00 TOTAL xfs:xfs_log_recover_inode_skip
0.00 0.00 TOTAL xfs:xfs_discard_extent
0.00 0.00 TOTAL xfs:xfs_discard_toosmall
0.00 0.00 TOTAL xfs:xfs_discard_exclude
0.00 0.00 TOTAL xfs:xfs_discard_busy
1.94 +39.5% 2.70 TOTAL write_bw
0.00 0.00 TOTAL nr_vmscan_write
0.00 0.00 TOTAL nr_vmscan_immediate_reclaim
319739.00 +42.6% 456105.00 TOTAL nr_dirtied
297242.00 +45.8% 433527.00 TOTAL nr_written
568782.00 +24.3% 706989.00 TOTAL numa_hit
0.00 0.00 TOTAL numa_miss
0.00 0.00 TOTAL numa_foreign
8920.00 +2.8% 9171.00 TOTAL numa_interleave
568782.00 +24.3% 706989.00 TOTAL numa_local
0.00 0.00 TOTAL numa_other
13047.00 +0.0% 13051.00 TOTAL pgpgin
1238595.00 +42.6% 1766404.00 TOTAL pgpgout
0.00 0.00 TOTAL pswpin
0.00 0.00 TOTAL pswpout
0.00 0.00 TOTAL pgalloc_dma
597775.00 +23.6% 738677.00 TOTAL pgalloc_dma32
0.00 0.00 TOTAL pgalloc_normal
0.00 0.00 TOTAL pgalloc_movable
969675.00 -0.4% 965391.00 TOTAL pgfree
3887.00 +9.9% 4272.00 TOTAL pgactivate
0.00 0.00 TOTAL pgdeactivate
520365.00 -4.7% 496138.00 TOTAL pgfault
194.00 +5.7% 205.00 TOTAL pgmajfault
0.00 0.00 TOTAL pgrefill_dma
0.00 0.00 TOTAL pgrefill_dma32
0.00 0.00 TOTAL pgrefill_normal
0.00 0.00 TOTAL pgrefill_movable
0.00 0.00 TOTAL pgsteal_dma
0.00 0.00 TOTAL pgsteal_dma32
0.00 0.00 TOTAL pgsteal_normal
0.00 0.00 TOTAL pgsteal_movable
0.00 0.00 TOTAL pgscan_kswapd_dma
0.00 0.00 TOTAL pgscan_kswapd_dma32
0.00 0.00 TOTAL pgscan_kswapd_normal
0.00 0.00 TOTAL pgscan_kswapd_movable
0.00 0.00 TOTAL pgscan_direct_dma
0.00 0.00 TOTAL pgscan_direct_dma32
0.00 0.00 TOTAL pgscan_direct_normal
0.00 0.00 TOTAL pgscan_direct_movable
0.00 0.00 TOTAL zone_reclaim_failed
0.00 0.00 TOTAL pginodesteal
0.00 0.00 TOTAL slabs_scanned
0.00 0.00 TOTAL kswapd_steal
0.00 0.00 TOTAL kswapd_inodesteal
0.00 0.00 TOTAL kswapd_low_wmark_hit_quickly
0.00 0.00 TOTAL kswapd_high_wmark_hit_quickly
0.00 0.00 TOTAL kswapd_skip_congestion_wait
1.00 +0.0% 1.00 TOTAL pageoutrun
0.00 0.00 TOTAL allocstall
0.00 0.00 TOTAL pgrotated
* Re: XFS/btrfs performance after IO-less dirty throttling
2011-12-16 1:53 ` Wu Fengguang
@ 2011-12-16 4:25 ` Dave Chinner
2011-12-16 5:16 ` Wu Fengguang
0 siblings, 1 reply; 9+ messages in thread
From: Dave Chinner @ 2011-12-16 4:25 UTC (permalink / raw)
To: Wu Fengguang; +Cc: linux-fsdevel@vger.kernel.org, LKML, Christoph Hellwig
On Fri, Dec 16, 2011 at 09:53:11AM +0800, Wu Fengguang wrote:
> I'm indeed happy that you don't care that much about the regression
> I introduced ;-)
Heh.
BTW, do these tests run to ENOSPC?
> 10829.00 +4.3% 11296.00 TOTAL xfs:xfs_delalloc_enospc
This implies that it does.
If so, I'm not sure how much we can really trust these overall
results, because allocation and writeback speeds at ENOSPC are
anything but deterministic. It will certainly have a detrimental
effect on throughput as the filesystem gets close to full....
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS/btrfs performance after IO-less dirty throttling
2011-12-16 4:25 ` Dave Chinner
@ 2011-12-16 5:16 ` Wu Fengguang
2011-12-19 1:57 ` Dave Chinner
0 siblings, 1 reply; 9+ messages in thread
From: Wu Fengguang @ 2011-12-16 5:16 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-fsdevel@vger.kernel.org, LKML, Christoph Hellwig
On Fri, Dec 16, 2011 at 12:25:08PM +0800, Dave Chinner wrote:
> On Fri, Dec 16, 2011 at 09:53:11AM +0800, Wu Fengguang wrote:
> > I'm indeed happy that you don't care that much about the regression
> > I introduced ;-)
>
> Heh.
>
> BTW, do these tests run to ENOSPC?
Nope. Shall ENOSPC (performance) be tested?
> > 10829.00 +4.3% 11296.00 TOTAL xfs:xfs_delalloc_enospc
>
> This implies that it does.
Not really. The USB key partition size is 7.1GB.
Even in the fastest 1dd case, only 4GB data is written:
wfg@bee /export/writeback% cat fat/UKEY-thresh=100M/xfs-1dd-1-3.2.0-rc3/ls-files
131 -rw-rw-r-- 1 root root 4060078080 Dec 8 15:57 /fs/sdb3/zero-1
That's about 6.7MB/s of write bandwidth over the 600 second run.
Thanks,
Fengguang
* Re: XFS/btrfs performance after IO-less dirty throttling
2011-12-16 5:16 ` Wu Fengguang
@ 2011-12-19 1:57 ` Dave Chinner
2011-12-19 5:44 ` Wu Fengguang
0 siblings, 1 reply; 9+ messages in thread
From: Dave Chinner @ 2011-12-19 1:57 UTC (permalink / raw)
To: Wu Fengguang; +Cc: linux-fsdevel@vger.kernel.org, LKML, Christoph Hellwig
On Fri, Dec 16, 2011 at 01:16:09PM +0800, Wu Fengguang wrote:
> On Fri, Dec 16, 2011 at 12:25:08PM +0800, Dave Chinner wrote:
> > On Fri, Dec 16, 2011 at 09:53:11AM +0800, Wu Fengguang wrote:
> > > I'm indeed happy that you don't care that much about the regression
> > > I introduced ;-)
> >
> > Heh.
> >
> > BTW, do these tests run to ENOSPC?
>
> Nope. Shall ENOSPC (performance) be tested?
>
> > > 10829.00 +4.3% 11296.00 TOTAL xfs:xfs_delalloc_enospc
> >
> > This implies that it does.
>
> Not really. The USB key partition size is 7.1GB.
> Even in the fastest 1dd case, only 4GB data is written:
There are a couple of ways this can be tripping ENOSPC during delayed
allocation. Speculative preallocation is the most likely cause, given
that for a 4GB file being written sequentially XFS will try to
preallocate a 4GB chunk for the next delalloc extent....
And by triggering this path, it will force data writeback to occur
through the xfssyncd workqueue. i.e. the writeback behaviour that is
occurring is not what you are expecting it to be - XFS is detecting
a potential ENOSPC problem, and taking steps to flush delalloc data
much faster than writeback is doing.
> wfg@bee /export/writeback% cat fat/UKEY-thresh=100M/xfs-1dd-1-3.2.0-rc3/ls-files
> 131 -rw-rw-r-- 1 root root 4060078080 Dec 8 15:57 /fs/sdb3/zero-1
What's the dd command line you are using?
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
* Re: XFS/btrfs performance after IO-less dirty throttling
2011-12-19 1:57 ` Dave Chinner
@ 2011-12-19 5:44 ` Wu Fengguang
0 siblings, 0 replies; 9+ messages in thread
From: Wu Fengguang @ 2011-12-19 5:44 UTC (permalink / raw)
To: Dave Chinner; +Cc: linux-fsdevel@vger.kernel.org, LKML, Christoph Hellwig
On Mon, Dec 19, 2011 at 09:57:16AM +0800, Dave Chinner wrote:
> On Fri, Dec 16, 2011 at 01:16:09PM +0800, Wu Fengguang wrote:
> > On Fri, Dec 16, 2011 at 12:25:08PM +0800, Dave Chinner wrote:
> > > On Fri, Dec 16, 2011 at 09:53:11AM +0800, Wu Fengguang wrote:
> > > > I'm indeed happy that you don't care that much about the regression
> > > > I introduced ;-)
> > >
> > > Heh.
> > >
> > > BTW, do these tests run to ENOSPC?
> >
> > Nope. Shall ENOSPC (performance) be tested?
> >
> > > > 10829.00 +4.3% 11296.00 TOTAL xfs:xfs_delalloc_enospc
> > >
> > > This implies that it does.
> >
> > Not really. The USB key partition size is 7.1GB.
> > Even in the fastest 1dd case, only 4GB data is written:
>
> There are a couple of ways this can be tripping ENOSPC during delayed
> allocation. Speculative preallocation is the most likely cause, given
> that for a 4GB file being written sequentially XFS will try to
> preallocate a 4GB chunk for the next delalloc extent....
Yeah I suspected some heuristic allocation, too.
> And by triggering this path, it will force data writeback to occur
> through the xfssyncd workqueue. i.e. the writeback behaviour that is
> occurring is not what you are expecting it to be - XFS is detecting
> a potential ENOSPC problem, and taking steps to flush delalloc data
> much faster than writeback is doing.
OK.
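One way to double check that theory could be to cap speculative
preallocation and re-run; an untested sketch (the key is mounted at
/fs/sdb3, device name assumed):

  umount /fs/sdb3
  # bound speculative preallocation to 64MB per file
  mount -t xfs -o allocsize=64m /dev/sdb3 /fs/sdb3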
> > wfg@bee /export/writeback% cat fat/UKEY-thresh=100M/xfs-1dd-1-3.2.0-rc3/ls-files
> > 131 -rw-rw-r-- 1 root root 4060078080 Dec 8 15:57 /fs/sdb3/zero-1
>
> What's the dd command line you are using?
It's a loop of
dd bs=$bs if=/dev/zero of=$mnt/zero-$i &
where bs=4k by default.
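Spelled out, the writer setup is roughly the following sketch ($mnt is
the USB key mount point and nr_dd is 1, 10 or 100 depending on the case):

  bs=4k
  mnt=/fs/sdb3
  nr_dd=10
  for i in $(seq 1 $nr_dd)
  do
          dd bs=$bs if=/dev/zero of=$mnt/zero-$i &
  done
  wait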
Thanks,
Fengguang
[parent not found: <20111215135250.GB14562@localhost>]
* Re: XFS/btrfs performance after IO-less dirty throttling
[not found] ` <20111215135250.GB14562@localhost>
@ 2011-12-16 5:27 ` Wu Fengguang
0 siblings, 0 replies; 9+ messages in thread
From: Wu Fengguang @ 2011-12-16 5:27 UTC (permalink / raw)
To: linux-fsdevel
Cc: LKML, Dave Chinner, Christoph Hellwig, Li Shaohua, Jens Axboe
On Thu, Dec 15, 2011 at 09:52:50PM +0800, Wu Fengguang wrote:
> On Thu, Dec 15, 2011 at 09:31:37PM +0800, Wu Fengguang wrote:
> > > The other big regressions happen in the XFS UKEY-thresh=100M cases.
> >
> > > 3.1.0+ 3.2.0-rc3
> > > ------------------------ ------------------------
> > > 4.17 -37.8% 2.59 fat/UKEY-thresh=100M/xfs-100dd-1-3.1.0+
> > > 4.14 -53.3% 1.94 fat/UKEY-thresh=100M/xfs-10dd-1-3.1.0+
> > > 6.30 +0.4% 6.33 fat/UKEY-thresh=100M/xfs-1dd-1-3.1.0+
> >
> > Here are more details for the 10dd case. The attached
> > balance_dirty_pages-pause.png shows small pause times (mostly in the
> > 10-50ms range) and small nr_dirtied_pause (mostly < 5), which may be the root cause.
> >
> > The iostat graphs show very unstable throughput, and the IO size often
> > drops low.
>
> With the recent patches to raise the pause time, the performance becomes
> better, but it is still not good enough:
>
> 3.1.0+ 3.2.0-rc3-pause6+
> ------------------------ ------------------------
> 4.17 -33.1% 2.79 fat/UKEY-thresh=100M/xfs-100dd-1-3.1.0+
> 4.14 -23.9% 3.15 fat/UKEY-thresh=100M/xfs-10dd-1-3.1.0+
> 6.30 +0.2% 6.32 fat/UKEY-thresh=100M/xfs-1dd-1-3.1.0+
I have more findings from comparing the cfq and deadline IO
schedulers: the deadline IO scheduler shows no such regression
(it seems to be unaffected by the writeback changes).
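For reference, the :deadline cases simply run with the deadline
elevator selected on the key beforehand, e.g. via sysfs (a sketch;
the device name is assumed):

  echo deadline > /sys/block/sdb/queue/scheduler
  cat /sys/block/sdb/queue/scheduler    # active scheduler shown in brackets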
% ./compare -g xfs fat/*/*-3.2.0-rc5-ioless-full+{,:deadline}
3.2.0-rc5-ioless-full+ 3.2.0-rc5-ioless-full+:deadline
------------------------ ------------------------
56.64 +7.3% 60.80 fat/UKEY-HDD/xfs-10dd-1-3.2.0-rc5-ioless-full+
61.43 -0.4% 61.20 fat/UKEY-HDD/xfs-1dd-1-3.2.0-rc5-ioless-full+
===> 2.41 +62.8% 3.93 fat/UKEY-thresh=100M/xfs-10dd-1-3.2.0-rc5-ioless-full+
6.34 -0.1% 6.34 fat/UKEY-thresh=100M/xfs-1dd-1-3.2.0-rc5-ioless-full+
10.40 +5.4% 10.97 fat/fio/xfs-fio_fat_mmap_randwrite_4k-1-3.2.0-rc5-ioless-full+
44.15 +9.0% 48.10 fat/fio/xfs-fio_fat_mmap_randwrite_64k-1-3.2.0-rc5-ioless-full+
53.92 +3.2% 55.66 fat/fio/xfs-fio_fat_rates-1-3.2.0-rc5-ioless-full+
55.13 +2.8% 56.70 fat/thresh=1000M/xfs-10dd-1-3.2.0-rc5-ioless-full+
56.96 -4.3% 54.51 fat/thresh=1000M/xfs-1dd-1-3.2.0-rc5-ioless-full+
56.56 +0.2% 56.67 fat/thresh=1000M:990M/xfs-10dd-1-3.2.0-rc5-ioless-full+
55.50 -4.0% 53.30 fat/thresh=1000M:990M/xfs-1dd-1-3.2.0-rc5-ioless-full+
54.55 +2.5% 55.92 fat/thresh=1000M:999M/xfs-10dd-1-3.2.0-rc5-ioless-full+
56.67 -3.5% 54.70 fat/thresh=1000M:999M/xfs-1dd-1-3.2.0-rc5-ioless-full+
56.76 -4.8% 54.06 fat/thresh=100M/xfs-1dd-1-3.2.0-rc5-ioless-full+
53.20 +1.5% 53.98 fat/thresh=10M/xfs-1dd-1-3.2.0-rc5-ioless-full+
52.78 +3.0% 54.38 fat/thresh=1M/xfs-1dd-1-3.2.0-rc5-ioless-full+
733.42 +1.1% 741.22 TOTAL write_bw
% ./compare-xfs fat/UKEY-thresh=100M/xfs-10dd-1-3.2.0-rc5-ioless-full+{,:deadline}| g TOTAL
2.41 +62.8% 3.93 TOTAL write_bw
2559.12 +57.1% 4019.64 TOTAL io_wkB_s
22.82 +52.1% 34.70 TOTAL io_w_s
0.04 +575.0% 0.24 TOTAL io_wrqm_s
0.04 +499.0% 0.26 TOTAL io_rkB_s
0.02 +343.7% 0.07 TOTAL io_r_s
0.00 0.40 TOTAL io_rrqm_s
199.06 +15.7% 230.24 TOTAL io_avgrq_sz
152.84 -4.0% 146.75 TOTAL io_avgqu_sz
6486.82 -35.1% 4212.99 TOTAL io_await
83.67 -62.9% 31.06 TOTAL io_svctm
100.00 +0.0% 100.00 TOTAL io_util
0.07 -9.0% 0.06 TOTAL cpu_user
0.00 0.00 TOTAL cpu_nice
0.11 +3.3% 0.12 TOTAL cpu_system
48.00 +13.1% 54.31 TOTAL cpu_iowait
0.00 0.00 TOTAL cpu_steal
51.82 -12.2% 45.51 TOTAL cpu_idle
2.41 +62.8% 3.93 TOTAL write_bw
0.00 0.00 TOTAL writeback:writeback_nothread
0.00 0.00 TOTAL writeback:writeback_queue
0.00 0.00 TOTAL writeback:writeback_exec
811.00 +21.6% 986.00 TOTAL writeback:writeback_start
811.00 +21.5% 985.00 TOTAL writeback:writeback_written
1.00 +200.0% 3.00 TOTAL writeback:writeback_wait
197.00 -35.0% 128.00 TOTAL writeback:writeback_pages_written
0.00 0.00 TOTAL writeback:writeback_nowork
182.00 -83.0% 31.00 TOTAL writeback:writeback_wake_background
88.00 +18.2% 104.00 TOTAL writeback:writeback_wake_thread
2.00 -50.0% 1.00 TOTAL writeback:writeback_wake_forker_thread
0.00 0.00 TOTAL writeback:writeback_bdi_register
0.00 0.00 TOTAL writeback:writeback_bdi_unregister
2.00 -50.0% 1.00 TOTAL writeback:writeback_thread_start
2.00 -50.0% 1.00 TOTAL writeback:writeback_thread_stop
1515.00 +3.6% 1570.00 TOTAL writeback:wbc_writepage
539.00 -10.8% 481.00 TOTAL writeback:writeback_queue_io
29290.00 -13.7% 25279.00 TOTAL writeback:global_dirty_state
2454.00 +6.2% 2605.00 TOTAL writeback:bdi_dirty_ratelimit
27349.00 -13.8% 23567.00 TOTAL writeback:balance_dirty_pages
0.00 0.00 TOTAL writeback:writeback_congestion_wait
0.00 0.00 TOTAL writeback:writeback_wait_iff_congested
0.00 0.00 TOTAL writeback:writeback_single_inode_requeue
724.00 +43.6% 1040.00 TOTAL writeback:writeback_single_inode
0.00 0.00 TOTAL block:block_rq_abort
0.00 0.00 TOTAL block:block_rq_requeue
13757.00 +52.0% 20904.00 TOTAL block:block_rq_complete
14505.00 +49.5% 21678.00 TOTAL block:block_rq_insert
14350.00 +49.9% 21509.00 TOTAL block:block_rq_issue
0.00 0.00 TOTAL block:block_bio_bounce
0.00 0.00 TOTAL block:block_bio_complete
20.00 +2075.0% 435.00 TOTAL block:block_bio_backmerge
1.00 +0.0% 1.00 TOTAL block:block_bio_frontmerge
13935.00 +54.4% 21513.00 TOTAL block:block_bio_queue
14505.00 +49.5% 21678.00 TOTAL block:block_getrq
403.00 +58.3% 638.00 TOTAL block:block_sleeprq
1513.00 +33.3% 2017.00 TOTAL block:block_plug
1513.00 +33.3% 2017.00 TOTAL block:block_unplug
0.00 0.00 TOTAL block:block_split
13934.00 +54.4% 21512.00 TOTAL block:block_bio_remap
0.00 0.00 TOTAL block:block_rq_remap
2.41 +62.8% 3.93 TOTAL write_bw
0.00 0.00 TOTAL xfs:xfs_attr_list_sf
0.00 0.00 TOTAL xfs:xfs_attr_list_sf_all
0.00 0.00 TOTAL xfs:xfs_attr_list_leaf
0.00 0.00 TOTAL xfs:xfs_attr_list_leaf_end
0.00 0.00 TOTAL xfs:xfs_attr_list_full
0.00 0.00 TOTAL xfs:xfs_attr_list_add
0.00 0.00 TOTAL xfs:xfs_attr_list_wrong_blk
0.00 0.00 TOTAL xfs:xfs_attr_list_notfound
102292.00 -80.2% 20294.00 TOTAL xfs:xfs_perag_get
0.00 0.00 TOTAL xfs:xfs_perag_get_tag
83956.00 -79.4% 17281.00 TOTAL xfs:xfs_perag_put
0.00 0.00 TOTAL xfs:xfs_perag_set_reclaim
0.00 0.00 TOTAL xfs:xfs_perag_clear_reclaim
0.00 0.00 TOTAL xfs:xfs_attr_list_node_descend
339.00 -34.5% 222.00 TOTAL xfs:xfs_iext_insert
101.00 -74.3% 26.00 TOTAL xfs:xfs_iext_remove
11277.00 -81.3% 2108.00 TOTAL xfs:xfs_bmap_pre_update
11277.00 -81.3% 2108.00 TOTAL xfs:xfs_bmap_post_update
0.00 0.00 TOTAL xfs:xfs_extlist
181.00 -48.1% 94.00 TOTAL xfs:xfs_buf_init
0.00 0.00 TOTAL xfs:xfs_buf_free
1010.00 -36.7% 639.00 TOTAL xfs:xfs_buf_hold
9262.00 -56.2% 4057.00 TOTAL xfs:xfs_buf_rele
568.00 -42.8% 325.00 TOTAL xfs:xfs_buf_iodone
453.00 -46.6% 242.00 TOTAL xfs:xfs_buf_iorequest
0.00 0.00 TOTAL xfs:xfs_buf_bawrite
39.00 -51.3% 19.00 TOTAL xfs:xfs_buf_lock
39.00 -51.3% 19.00 TOTAL xfs:xfs_buf_lock_done
23130.00 -68.3% 7325.00 TOTAL xfs:xfs_buf_trylock
2991.00 -22.1% 2329.00 TOTAL xfs:xfs_buf_unlock
12.00 +25.0% 15.00 TOTAL xfs:xfs_buf_iowait
14.00 +14.3% 16.00 TOTAL xfs:xfs_buf_iowait_done
3273.00 -24.8% 2462.00 TOTAL xfs:xfs_buf_delwri_queue
0.00 0.00 TOTAL xfs:xfs_buf_delwri_dequeue
111.00 -26.1% 82.00 TOTAL xfs:xfs_buf_delwri_split
0.00 0.00 TOTAL xfs:xfs_buf_get_uncached
0.00 0.00 TOTAL xfs:xfs_bdstrat_shut
358.00 -33.0% 240.00 TOTAL xfs:xfs_buf_item_relse
0.00 0.00 TOTAL xfs:xfs_buf_item_iodone
0.00 0.00 TOTAL xfs:xfs_buf_item_iodone_async
0.00 0.00 TOTAL xfs:xfs_buf_error_relse
0.00 0.00 TOTAL xfs:xfs_trans_read_buf_io
0.00 0.00 TOTAL xfs:xfs_trans_read_buf_shut
0.00 0.00 TOTAL xfs:xfs_btree_corrupt
0.00 0.00 TOTAL xfs:xfs_da_btree_corrupt
0.00 0.00 TOTAL xfs:xfs_reset_dqcounts
0.00 0.00 TOTAL xfs:xfs_inode_item_push
2024.00 -20.9% 1600.00 TOTAL xfs:xfs_buf_find
1990.00 -20.1% 1591.00 TOTAL xfs:xfs_buf_get
1987.00 -20.1% 1588.00 TOTAL xfs:xfs_buf_read
457.00 -46.8% 243.00 TOTAL xfs:xfs_buf_ioerror
1364.00 -34.5% 893.00 TOTAL xfs:xfs_buf_item_size
0.00 0.00 TOTAL xfs:xfs_buf_item_size_stale
1364.00 -34.5% 893.00 TOTAL xfs:xfs_buf_item_format
0.00 0.00 TOTAL xfs:xfs_buf_item_format_stale
1283.00 -50.9% 630.00 TOTAL xfs:xfs_buf_item_pin
1275.00 -51.5% 619.00 TOTAL xfs:xfs_buf_item_unpin
0.00 0.00 TOTAL xfs:xfs_buf_item_unpin_stale
86.00 -33.7% 57.00 TOTAL xfs:xfs_buf_item_trylock
1365.00 -34.4% 895.00 TOTAL xfs:xfs_buf_item_unlock
0.00 0.00 TOTAL xfs:xfs_buf_item_unlock_stale
1275.00 -51.5% 619.00 TOTAL xfs:xfs_buf_item_committed
0.00 0.00 TOTAL xfs:xfs_buf_item_push
86.00 -33.7% 57.00 TOTAL xfs:xfs_buf_item_pushbuf
3.00 +0.0% 3.00 TOTAL xfs:xfs_trans_get_buf
0.00 0.00 TOTAL xfs:xfs_trans_get_buf_recur
14.00 +14.3% 16.00 TOTAL xfs:xfs_trans_getsb
0.00 0.00 TOTAL xfs:xfs_trans_getsb_recur
1706.00 -34.6% 1116.00 TOTAL xfs:xfs_trans_read_buf
10.00 +0.0% 10.00 TOTAL xfs:xfs_trans_read_buf_recur
2990.00 -32.6% 2014.00 TOTAL xfs:xfs_trans_log_buf
1362.00 -33.8% 901.00 TOTAL xfs:xfs_trans_brelse
0.00 0.00 TOTAL xfs:xfs_trans_bjoin
0.00 0.00 TOTAL xfs:xfs_trans_bhold
0.00 0.00 TOTAL xfs:xfs_trans_bhold_release
0.00 0.00 TOTAL xfs:xfs_trans_binval
1940825.00 +55.6% 3020579.00 TOTAL xfs:xfs_ilock
9846.00 -16.4% 8234.00 TOTAL xfs:xfs_ilock_nowait
0.00 0.00 TOTAL xfs:xfs_ilock_demote
1950676.00 +55.3% 3028815.00 TOTAL xfs:xfs_iunlock
0.00 0.00 TOTAL xfs:xfs_iget_skip
0.00 0.00 TOTAL xfs:xfs_iget_reclaim
0.00 0.00 TOTAL xfs:xfs_iget_reclaim_fail
0.00 0.00 TOTAL xfs:xfs_iget_hit
0.00 0.00 TOTAL xfs:xfs_iget_miss
21.00 +57.1% 33.00 TOTAL xfs:xfs_getattr
0.00 0.00 TOTAL xfs:xfs_setattr
0.00 0.00 TOTAL xfs:xfs_readlink
0.00 0.00 TOTAL xfs:xfs_alloc_file_space
0.00 0.00 TOTAL xfs:xfs_free_file_space
6.00 -33.3% 4.00 TOTAL xfs:xfs_readdir
0.00 0.00 TOTAL xfs:xfs_get_acl
0.00 0.00 TOTAL xfs:xfs_vm_bmap
0.00 0.00 TOTAL xfs:xfs_file_ioctl
0.00 0.00 TOTAL xfs:xfs_file_compat_ioctl
0.00 0.00 TOTAL xfs:xfs_ioctl_setattr
0.00 0.00 TOTAL xfs:xfs_dir_fsync
0.00 0.00 TOTAL xfs:xfs_file_fsync
1.00 -100.0% 0.00 TOTAL xfs:xfs_destroy_inode
382.00 +102.4% 773.00 TOTAL xfs:xfs_write_inode
3.00 -66.7% 1.00 TOTAL xfs:xfs_evict_inode
0.00 0.00 TOTAL xfs:xfs_dquot_dqalloc
0.00 0.00 TOTAL xfs:xfs_dquot_dqdetach
0.00 0.00 TOTAL xfs:xfs_ihold
201561.00 -83.7% 32947.00 TOTAL xfs:xfs_irele
352.00 -35.5% 227.00 TOTAL xfs:xfs_inode_pin
355.00 -35.8% 228.00 TOTAL xfs:xfs_inode_unpin
0.00 0.00 TOTAL xfs:xfs_inode_unpin_nowait
4.00 -75.0% 1.00 TOTAL xfs:xfs_remove
0.00 0.00 TOTAL xfs:xfs_link
0.00 0.00 TOTAL xfs:xfs_lookup
0.00 0.00 TOTAL xfs:xfs_create
0.00 0.00 TOTAL xfs:xfs_symlink
0.00 0.00 TOTAL xfs:xfs_rename
0.00 0.00 TOTAL xfs:xfs_dqadjust
0.00 0.00 TOTAL xfs:xfs_dqreclaim_want
0.00 0.00 TOTAL xfs:xfs_dqreclaim_dirty
0.00 0.00 TOTAL xfs:xfs_dqreclaim_unlink
0.00 0.00 TOTAL xfs:xfs_dqattach_found
0.00 0.00 TOTAL xfs:xfs_dqattach_get
0.00 0.00 TOTAL xfs:xfs_dqinit
0.00 0.00 TOTAL xfs:xfs_dqreuse
0.00 0.00 TOTAL xfs:xfs_dqalloc
0.00 0.00 TOTAL xfs:xfs_dqtobp_read
0.00 0.00 TOTAL xfs:xfs_dqread
0.00 0.00 TOTAL xfs:xfs_dqread_fail
0.00 0.00 TOTAL xfs:xfs_dqlookup_found
0.00 0.00 TOTAL xfs:xfs_dqlookup_want
0.00 0.00 TOTAL xfs:xfs_dqlookup_freelist
0.00 0.00 TOTAL xfs:xfs_dqlookup_done
0.00 0.00 TOTAL xfs:xfs_dqget_hit
0.00 0.00 TOTAL xfs:xfs_dqget_miss
0.00 0.00 TOTAL xfs:xfs_dqput
0.00 0.00 TOTAL xfs:xfs_dqput_wait
0.00 0.00 TOTAL xfs:xfs_dqput_free
0.00 0.00 TOTAL xfs:xfs_dqrele
0.00 0.00 TOTAL xfs:xfs_dqflush
0.00 0.00 TOTAL xfs:xfs_dqflush_force
0.00 0.00 TOTAL xfs:xfs_dqflush_done
699.00 -43.3% 396.00 TOTAL xfs:xfs_log_done_nonperm
88.00 -84.1% 14.00 TOTAL xfs:xfs_log_done_perm
455.00 -42.4% 262.00 TOTAL xfs:xfs_log_reserve
0.00 0.00 TOTAL xfs:xfs_log_umount_write
367.00 -32.4% 248.00 TOTAL xfs:xfs_log_grant_enter
367.00 -32.4% 248.00 TOTAL xfs:xfs_log_grant_exit
0.00 0.00 TOTAL xfs:xfs_log_grant_error
0.00 0.00 TOTAL xfs:xfs_log_grant_sleep1
0.00 0.00 TOTAL xfs:xfs_log_grant_wake1
0.00 0.00 TOTAL xfs:xfs_log_grant_sleep2
0.00 0.00 TOTAL xfs:xfs_log_grant_wake2
0.00 0.00 TOTAL xfs:xfs_log_grant_wake_up
80.00 -91.2% 7.00 TOTAL xfs:xfs_log_regrant_write_enter
81.00 -91.4% 7.00 TOTAL xfs:xfs_log_regrant_write_exit
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_error
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_sleep1
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_wake1
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_sleep2
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_wake2
0.00 0.00 TOTAL xfs:xfs_log_regrant_write_wake_up
90.00 -84.4% 14.00 TOTAL xfs:xfs_log_regrant_reserve_enter
81.00 -91.4% 7.00 TOTAL xfs:xfs_log_regrant_reserve_exit
90.00 -84.4% 14.00 TOTAL xfs:xfs_log_regrant_reserve_sub
701.00 -43.5% 396.00 TOTAL xfs:xfs_log_ungrant_enter
701.00 -43.5% 396.00 TOTAL xfs:xfs_log_ungrant_exit
701.00 -43.5% 396.00 TOTAL xfs:xfs_log_ungrant_sub
91.00 -97.8% 2.00 TOTAL xfs:xfs_ail_push
5911.00 -71.9% 1659.00 TOTAL xfs:xfs_ail_pushbuf
0.00 0.00 TOTAL xfs:xfs_ail_pushbuf_pinned
106.00 -18.9% 86.00 TOTAL xfs:xfs_ail_pinned
14285.00 -72.2% 3971.00 TOTAL xfs:xfs_ail_locked
0.00 0.00 TOTAL xfs:xfs_file_read
385016.00 +56.5% 602645.00 TOTAL xfs:xfs_file_buffered_write
0.00 0.00 TOTAL xfs:xfs_file_direct_write
0.00 0.00 TOTAL xfs:xfs_file_splice_read
0.00 0.00 TOTAL xfs:xfs_file_splice_write
714.00 +16.7% 833.00 TOTAL xfs:xfs_writepage
231206.00 -62.3% 87239.00 TOTAL xfs:xfs_releasepage
231206.00 -62.3% 87239.00 TOTAL xfs:xfs_invalidatepage
372.00 +63.4% 608.00 TOTAL xfs:xfs_map_blocks_found
343.00 -34.1% 226.00 TOTAL xfs:xfs_map_blocks_alloc
365357.00 +60.8% 587469.00 TOTAL xfs:xfs_get_blocks_found
19574.00 -22.5% 15177.00 TOTAL xfs:xfs_get_blocks_alloc
11300.00 -81.4% 2098.00 TOTAL xfs:xfs_delalloc_enospc
0.00 0.00 TOTAL xfs:xfs_unwritten_convert
0.00 0.00 TOTAL xfs:xfs_get_blocks_notfound
688.00 +19.6% 823.00 TOTAL xfs:xfs_setfilesize
9.00 -22.2% 7.00 TOTAL xfs:xfs_itruncate_data_start
9.00 -22.2% 7.00 TOTAL xfs:xfs_itruncate_data_end
0.00 0.00 TOTAL xfs:xfs_pagecache_inval
45.00 -84.4% 7.00 TOTAL xfs:xfs_bunmap
82.00 -91.5% 7.00 TOTAL xfs:xfs_alloc_busy
0.00 0.00 TOTAL xfs:xfs_alloc_busy_enomem
0.00 0.00 TOTAL xfs:xfs_alloc_busy_force
0.00 0.00 TOTAL xfs:xfs_alloc_busy_reuse
0.00 0.00 TOTAL xfs:xfs_alloc_busy_clear
0.00 3.00 TOTAL xfs:xfs_alloc_busy_trim
0.00 0.00 TOTAL xfs:xfs_trans_commit_lsn
1128.00 -38.0% 699.00 TOTAL xfs:xfs_agf
82.00 -91.5% 7.00 TOTAL xfs:xfs_free_extent
0.00 0.00 TOTAL xfs:xfs_alloc_exact_done
0.00 0.00 TOTAL xfs:xfs_alloc_exact_notfound
0.00 0.00 TOTAL xfs:xfs_alloc_exact_error
0.00 0.00 TOTAL xfs:xfs_alloc_near_nominleft
334.00 -34.4% 219.00 TOTAL xfs:xfs_alloc_near_first
0.00 0.00 TOTAL xfs:xfs_alloc_near_greater
0.00 0.00 TOTAL xfs:xfs_alloc_near_lesser
0.00 0.00 TOTAL xfs:xfs_alloc_near_error
0.00 0.00 TOTAL xfs:xfs_alloc_near_noentry
0.00 0.00 TOTAL xfs:xfs_alloc_near_busy
0.00 0.00 TOTAL xfs:xfs_alloc_size_neither
0.00 0.00 TOTAL xfs:xfs_alloc_size_noentry
2.00 +50.0% 3.00 TOTAL xfs:xfs_alloc_size_nominleft
11.00 -9.1% 10.00 TOTAL xfs:xfs_alloc_size_done
0.00 0.00 TOTAL xfs:xfs_alloc_size_error
0.00 0.00 TOTAL xfs:xfs_alloc_size_busy
0.00 0.00 TOTAL xfs:xfs_alloc_small_freelist
0.00 0.00 TOTAL xfs:xfs_alloc_small_notenough
5.00 +20.0% 6.00 TOTAL xfs:xfs_alloc_small_done
0.00 0.00 TOTAL xfs:xfs_alloc_small_error
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_badargs
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_nofix
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_noagbp
106.00 -40.6% 63.00 TOTAL xfs:xfs_alloc_vextent_loopfailed
0.00 0.00 TOTAL xfs:xfs_alloc_vextent_allfailed
0.00 0.00 TOTAL xfs:xfs_dir2_sf_addname
0.00 0.00 TOTAL xfs:xfs_dir2_sf_create
0.00 0.00 TOTAL xfs:xfs_dir2_sf_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_sf_replace
6.00 -83.3% 1.00 TOTAL xfs:xfs_dir2_sf_removename
0.00 0.00 TOTAL xfs:xfs_dir2_sf_toino4
0.00 0.00 TOTAL xfs:xfs_dir2_sf_toino8
0.00 0.00 TOTAL xfs:xfs_dir2_sf_to_block
0.00 0.00 TOTAL xfs:xfs_dir2_block_addname
0.00 0.00 TOTAL xfs:xfs_dir2_block_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_block_replace
0.00 0.00 TOTAL xfs:xfs_dir2_block_removename
0.00 0.00 TOTAL xfs:xfs_dir2_block_to_sf
0.00 0.00 TOTAL xfs:xfs_dir2_block_to_leaf
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_addname
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_replace
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_removename
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_to_block
0.00 0.00 TOTAL xfs:xfs_dir2_leaf_to_node
0.00 0.00 TOTAL xfs:xfs_dir2_node_addname
0.00 0.00 TOTAL xfs:xfs_dir2_node_lookup
0.00 0.00 TOTAL xfs:xfs_dir2_node_replace
0.00 0.00 TOTAL xfs:xfs_dir2_node_removename
0.00 0.00 TOTAL xfs:xfs_dir2_node_to_leaf
0.00 0.00 TOTAL xfs:xfs_dir2_leafn_add
0.00 0.00 TOTAL xfs:xfs_dir2_leafn_remove
0.00 0.00 TOTAL xfs:xfs_dir2_grow_inode
0.00 0.00 TOTAL xfs:xfs_dir2_shrink_inode
0.00 0.00 TOTAL xfs:xfs_dir2_leafn_moveents
0.00 0.00 TOTAL xfs:xfs_swap_extent_before
0.00 0.00 TOTAL xfs:xfs_swap_extent_after
0.00 0.00 TOTAL xfs:xfs_log_recover_item_add
0.00 0.00 TOTAL xfs:xfs_log_recover_item_add_cont
0.00 0.00 TOTAL xfs:xfs_log_recover_item_reorder_head
0.00 0.00 TOTAL xfs:xfs_log_recover_item_reorder_tail
0.00 0.00 TOTAL xfs:xfs_log_recover_item_recover
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_not_cancel
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_cancel
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_cancel_add
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_cancel_ref_inc
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_recover
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_inode_buf
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_reg_buf
0.00 0.00 TOTAL xfs:xfs_log_recover_buf_dquot_buf
0.00 0.00 TOTAL xfs:xfs_log_recover_inode_recover
0.00 0.00 TOTAL xfs:xfs_log_recover_inode_cancel
0.00 0.00 TOTAL xfs:xfs_log_recover_inode_skip
0.00 0.00 TOTAL xfs:xfs_discard_extent
0.00 0.00 TOTAL xfs:xfs_discard_toosmall
0.00 0.00 TOTAL xfs:xfs_discard_exclude
0.00 0.00 TOTAL xfs:xfs_discard_busy
2.41 +62.8% 3.93 TOTAL write_bw
0.00 0.00 TOTAL nr_vmscan_write
0.00 0.00 TOTAL nr_vmscan_immediate_reclaim
407728.00 +53.1% 624195.00 TOTAL nr_dirtied
384722.00 +56.7% 602813.00 TOTAL nr_written
677531.00 +33.7% 905537.00 TOTAL numa_hit
0.00 0.00 TOTAL numa_miss
0.00 0.00 TOTAL numa_foreign
8884.00 +0.6% 8934.00 TOTAL numa_interleave
677531.00 +33.7% 905537.00 TOTAL numa_local
0.00 0.00 TOTAL numa_other
13095.00 +0.8% 13203.00 TOTAL pgpgin
1572498.00 +55.5% 2444726.00 TOTAL pgpgout
0.00 0.00 TOTAL pswpin
0.00 0.00 TOTAL pswpout
0.00 0.00 TOTAL pgalloc_dma
707945.00 +32.3% 936950.00 TOTAL pgalloc_dma32
0.00 0.00 TOTAL pgalloc_normal
0.00 0.00 TOTAL pgalloc_movable
990276.00 +0.8% 998211.00 TOTAL pgfree
3864.00 +0.5% 3883.00 TOTAL pgactivate
0.00 0.00 TOTAL pgdeactivate
576901.00 +1.6% 585908.00 TOTAL pgfault
189.00 +3.7% 196.00 TOTAL pgmajfault
0.00 0.00 TOTAL pgrefill_dma
0.00 0.00 TOTAL pgrefill_dma32
0.00 0.00 TOTAL pgrefill_normal
0.00 0.00 TOTAL pgrefill_movable
0.00 0.00 TOTAL pgsteal_dma
0.00 0.00 TOTAL pgsteal_dma32
0.00 0.00 TOTAL pgsteal_normal
0.00 0.00 TOTAL pgsteal_movable
0.00 0.00 TOTAL pgscan_kswapd_dma
0.00 0.00 TOTAL pgscan_kswapd_dma32
0.00 0.00 TOTAL pgscan_kswapd_normal
0.00 0.00 TOTAL pgscan_kswapd_movable
0.00 0.00 TOTAL pgscan_direct_dma
0.00 0.00 TOTAL pgscan_direct_dma32
0.00 0.00 TOTAL pgscan_direct_normal
0.00 0.00 TOTAL pgscan_direct_movable
0.00 0.00 TOTAL zone_reclaim_failed
0.00 0.00 TOTAL pginodesteal
0.00 0.00 TOTAL slabs_scanned
0.00 0.00 TOTAL kswapd_steal
0.00 0.00 TOTAL kswapd_inodesteal
0.00 0.00 TOTAL kswapd_low_wmark_hit_quickly
0.00 0.00 TOTAL kswapd_high_wmark_hit_quickly
0.00 0.00 TOTAL kswapd_skip_congestion_wait
1.00 +0.0% 1.00 TOTAL pageoutrun
0.00 0.00 TOTAL allocstall
0.00 0.00 TOTAL pgrotated
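
The xfs:* rows above are hit counts for the in-kernel XFS tracepoints and the
trailing rows (nr_dirtied, nr_written, pgpgout, ...) are /proc/vmstat deltas
over the test run. As a rough sketch only -- the exact collection scripts used
for these runs are not shown here, and the paths/sizes below are illustrative --
counts of this kind can be reproduced with perf and /proc/vmstat:

    # count a couple of the XFS tracepoints above, system-wide,
    # for the duration of one dd writer (illustrative workload, not
    # the actual test configuration)
    perf stat -e xfs:xfs_ilock -e xfs:xfs_file_buffered_write -a -- \
        dd if=/dev/zero of=/xfs/mnt/zero bs=1M count=1000 conv=fdatasync

    # the vmstat rows are before/after deltas of counters like these
    grep -E 'nr_dirtied|nr_written|pgpgout' /proc/vmstat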