linux-fsdevel.vger.kernel.org archive mirror
* ext4 write performance regression in 3.6-rc1
       [not found]     ` <20120816152513.GA31346@thunk.org>
@ 2012-08-17  6:09       ` Fengguang Wu
  2012-08-17 13:40         ` Theodore Ts'o
  0 siblings, 1 reply; 14+ messages in thread
From: Fengguang Wu @ 2012-08-17  6:09 UTC (permalink / raw)
  To: Theodore Ts'o, Marti Raudsepp, Kernel hackers, ext4 hackers,
	maze
  Cc: Shi, Alex, linux-fsdevel

Ted,

I find that ext4 write performance dropped by 3.3% on average in the
3.6-rc1 merge window. xfs and btrfs are fine.

Two machines were tested. The performance regression happens on the
lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02,
which is equipped with HDD drives, does not see the regression. I'll
continue to repeat the tests and report variations.

The 3.6.0-rc1+ kernel below is 3.6.0-rc1 plus the NULL dereference fix.
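
For reference, each case name encodes the storage layout, the writeback
dirty threshold and the number of concurrent dd writers. A minimal sketch
of roughly what one such case does (the sysctl value, mount point and dd
sizes below are illustrative assumptions, not the exact harness commands):

  # approximate a "thresh=1000M / ext4-10dd" case by hand
  echo $((1000*1024*1024)) > /proc/sys/vm/dirty_bytes   # dirty threshold ~1000M (assumed meaning of thresh=)
  mount /dev/md0 /mnt/test                              # filesystem under test already created on the array
  for i in $(seq 1 10); do                              # "10dd" taken to mean 10 concurrent sequential writers
          dd if=/dev/zero of=/mnt/test/file-$i bs=1M count=10000 &
  done
  wait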

wfg@bee /export/writeback% ./compare -g ext4 lkp-nex04/*/*-{3.5.0,3.6.0-rc1+}
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  720.62        -1.5%       710.16  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  706.04        -0.0%       705.86  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  702.86        -0.2%       701.74  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  779.52        +6.5%       830.11  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  646.70        +4.9%       678.59  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  704.49        +2.6%       723.00  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  705.26        -1.2%       696.61  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  703.37        +0.1%       703.76  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  701.66        -0.1%       700.83  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  675.08       -10.5%       604.29  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  676.52        -2.7%       658.38  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  512.70        +4.0%       533.22  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  709.76       -15.7%       598.44  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  681.39        -2.1%       667.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  699.77       -19.2%       565.54  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  675.79        -1.9%       663.17  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  484.84        -7.4%       448.83  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  167.97       -38.7%       103.03  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  243.67        -9.1%       221.41  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  248.98       +12.2%       279.33  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                   71.18       -34.2%        46.82  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  145.84        -7.3%       135.25  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  255.22        +6.7%       272.35  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  209.24       -23.6%       159.96  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  243.73       -10.9%       217.28  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  214.25        +5.6%       226.32  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                13286.46        -3.3%     12851.55  TOTAL write_bw

wfg@bee /export/writeback% ./compare -g xfs lkp-nex04/*/*-{3.5.0,3.6.0-rc1+}
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  687.76        +2.4%       704.52  lkp-nex04/JBOD-12HDD-thresh=1000M/xfs-100dd-1-3.5.0
                  705.09        +0.0%       705.11  lkp-nex04/JBOD-12HDD-thresh=1000M/xfs-10dd-1-3.5.0
                  702.21        -0.1%       701.72  lkp-nex04/JBOD-12HDD-thresh=1000M/xfs-1dd-1-3.5.0
                  664.86       +21.8%       809.81  lkp-nex04/JBOD-12HDD-thresh=100M/xfs-100dd-1-3.5.0
                  609.97       +13.6%       693.12  lkp-nex04/JBOD-12HDD-thresh=100M/xfs-10dd-1-3.5.0
                  708.30        +0.8%       713.68  lkp-nex04/JBOD-12HDD-thresh=100M/xfs-1dd-1-3.5.0
                  701.19        -0.0%       700.85  lkp-nex04/JBOD-12HDD-thresh=8G/xfs-10dd-1-3.5.0
                  701.69        -0.1%       701.01  lkp-nex04/JBOD-12HDD-thresh=8G/xfs-1dd-1-3.5.0
                  699.98        -0.4%       697.40  lkp-nex04/RAID0-12HDD-thresh=1000M/xfs-10dd-1-3.5.0
                  653.92        +0.3%       656.07  lkp-nex04/RAID0-12HDD-thresh=1000M/xfs-1dd-1-3.5.0
                  650.25        +0.5%       653.32  lkp-nex04/RAID0-12HDD-thresh=100M/xfs-10dd-1-3.5.0
                  612.47        -2.9%       594.93  lkp-nex04/RAID0-12HDD-thresh=100M/xfs-1dd-1-3.5.0
                  694.90        +0.0%       695.19  lkp-nex04/RAID0-12HDD-thresh=8G/xfs-10dd-1-3.5.0
                  607.37       +14.2%       693.36  lkp-nex04/RAID0-12HDD-thresh=8G/xfs-1dd-1-3.5.0
                  273.54       +27.1%       347.67  lkp-nex04/RAID5-12HDD-thresh=1000M/xfs-10dd-1-3.5.0
                  277.00       +30.6%       361.71  lkp-nex04/RAID5-12HDD-thresh=1000M/xfs-1dd-1-3.5.0
                  194.74        +6.6%       207.62  lkp-nex04/RAID5-12HDD-thresh=100M/xfs-10dd-1-3.5.0
                  288.92       +21.2%       350.05  lkp-nex04/RAID5-12HDD-thresh=100M/xfs-1dd-1-3.5.0
                  278.33       +26.4%       351.78  lkp-nex04/RAID5-12HDD-thresh=8G/xfs-10dd-1-3.5.0
                  285.64       +24.2%       354.68  lkp-nex04/RAID5-12HDD-thresh=8G/xfs-1dd-1-3.5.0
                10998.15        +6.3%     11693.60  TOTAL write_bw

wfg@bee /export/writeback% ./compare -g btrfs lkp-nex04/*/*-{3.5.0,3.6.0-rc1+}
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  703.26        -0.1%       702.57  lkp-nex04/JBOD-12HDD-thresh=1000M/btrfs-10dd-1-3.5.0
                  701.88        -0.0%       701.85  lkp-nex04/JBOD-12HDD-thresh=1000M/btrfs-1dd-1-3.5.0
                  697.67        +7.1%       747.07  lkp-nex04/JBOD-12HDD-thresh=100M/btrfs-10dd-1-3.5.0
                  712.91        -0.4%       710.36  lkp-nex04/JBOD-12HDD-thresh=100M/btrfs-1dd-1-3.5.0
                  702.02        -0.1%       701.26  lkp-nex04/JBOD-12HDD-thresh=8G/btrfs-10dd-1-3.5.0
                  702.06        -0.1%       701.66  lkp-nex04/JBOD-12HDD-thresh=8G/btrfs-1dd-1-3.5.0
                  709.01        -0.7%       703.83  lkp-nex04/RAID0-12HDD-thresh=1000M/btrfs-10dd-1-3.5.0
                  696.67        -4.2%       667.22  lkp-nex04/RAID0-12HDD-thresh=1000M/btrfs-1dd-1-3.5.0
                  822.15        +0.1%       823.01  lkp-nex04/RAID0-12HDD-thresh=100M/btrfs-10dd-1-3.5.0
                  685.14        +2.9%       705.35  lkp-nex04/RAID0-12HDD-thresh=100M/btrfs-1dd-1-3.5.0
                  702.55        -0.0%       702.23  lkp-nex04/RAID0-12HDD-thresh=8G/btrfs-10dd-1-3.5.0
                  674.09        -7.1%       626.31  lkp-nex04/RAID0-12HDD-thresh=8G/btrfs-1dd-1-3.5.0
                  270.81       +21.0%       327.76  lkp-nex04/RAID5-12HDD-thresh=1000M/btrfs-10dd-1-3.5.0
                  267.19       +15.8%       309.36  lkp-nex04/RAID5-12HDD-thresh=1000M/btrfs-1dd-1-3.5.0
                  273.89       +25.3%       343.10  lkp-nex04/RAID5-12HDD-thresh=100M/btrfs-10dd-1-3.5.0
                  276.31       +19.7%       330.87  lkp-nex04/RAID5-12HDD-thresh=100M/btrfs-1dd-1-3.5.0
                  251.25       +17.3%       294.80  lkp-nex04/RAID5-12HDD-thresh=8G/btrfs-10dd-1-3.5.0
                  267.48        +7.1%       286.47  lkp-nex04/RAID5-12HDD-thresh=8G/btrfs-1dd-1-3.5.0
                10116.34        +2.7%     10385.07  TOTAL write_bw

wfg@bee /export/writeback% ./compare -g ext4 lkp-st02-x8664/*/*-{3.5.0,3.6.0-rc1+}
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  900.62        +0.1%       901.66  lkp-st02-x8664/JBOD-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  898.13        +1.4%       910.73  lkp-st02-x8664/JBOD-12HDD-thresh=2G/ext4-1dd-1-3.5.0
                  166.95        +3.8%       173.33  lkp-st02-x8664/RAID5-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  176.14        +2.8%       181.01  lkp-st02-x8664/RAID5-12HDD-thresh=2G/ext4-1dd-1-3.5.0
                   25.84        +0.3%        25.92  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randrw_mmap_0_4k-1-3.5.0
                   92.34        -4.8%        87.88  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randrw_mmap_0_64k-1-3.5.0
                   21.20        +2.1%        21.65  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randrw_mmap_1_4k-1-3.5.0
                   90.43        +1.6%        91.90  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randrw_mmap_1_64k-1-3.5.0
                   28.69        -1.8%        28.18  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randrw_sync_0_4k-1-3.5.0
                  201.86        +0.2%       202.17  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randrw_sync_0_64k-1-3.5.0
                   28.43        -0.2%        28.37  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randwrite_mmap_0_4k-1-3.5.0
                  110.25        -0.1%       110.20  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randwrite_mmap_0_64k-1-3.5.0
                   31.20        +0.5%        31.36  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randwrite_sync_0_4k-1-3.5.0
                  289.28        +1.0%       292.08  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randwrite_sync_0_64k-1-3.5.0
                   20.50        +0.9%        20.67  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randwrite_sync_1_4k-1-3.5.0
                  294.64        +0.4%       295.94  lkp-st02-x8664/jbod_12hdd/ext4-fio_jbod_12hdd_randwrite_sync_1_64k-1-3.5.0
                 3376.51        +0.8%      3403.05  TOTAL write_bw

wfg@bee /export/writeback% ./compare -g xfs lkp-st02-x8664/*/*-{3.5.0,3.6.0-rc1+}
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  976.57        -4.8%       929.50  lkp-st02-x8664/JBOD-12HDD-thresh=100M/xfs-1dd-1-3.5.0
                 1003.33        +2.3%      1026.41  lkp-st02-x8664/JBOD-12HDD-thresh=2G/xfs-1dd-1-3.5.0
                  796.67        -2.1%       780.09  lkp-st02-x8664/RAID0-12HDD-thresh=100M/xfs-1dd-1-3.5.0
                  754.89        +0.3%       757.24  lkp-st02-x8664/RAID0-12HDD-thresh=2G/xfs-1dd-1-3.5.0
                  183.18        +7.6%       197.02  lkp-st02-x8664/RAID5-12HDD-thresh=100M/xfs-1dd-1-3.5.0
                  191.62        +9.0%       208.92  lkp-st02-x8664/RAID5-12HDD-thresh=2G/xfs-1dd-1-3.5.0
                   71.83        -1.0%        71.13  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randrw_mmap_0_4k-1-3.5.0
                  104.93        -1.3%       103.56  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randrw_mmap_0_64k-1-3.5.0
                   25.90        -0.4%        25.79  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randrw_mmap_1_4k-1-3.5.0
                   88.13        +1.1%        89.06  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randrw_mmap_1_64k-1-3.5.0
                   88.63        +0.2%        88.85  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randrw_sync_0_4k-1-3.5.0
                  291.55        +0.1%       291.70  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randrw_sync_0_64k-1-3.5.0
                   87.44        -1.5%        86.15  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randwrite_mmap_0_4k-1-3.5.0
                  122.64        -1.6%       120.69  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randwrite_mmap_0_64k-1-3.5.0
                  507.15        +0.2%       508.12  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randwrite_sync_0_64k-1-3.5.0
                   32.09        -0.8%        31.85  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randwrite_sync_1_4k-1-3.5.0
                  331.16        +0.2%       331.77  lkp-st02-x8664/jbod_12hdd/xfs-fio_jbod_12hdd_randwrite_sync_1_64k-1-3.5.0
                 5657.70        -0.2%      5647.85  TOTAL write_bw

wfg@bee /export/writeback% ./compare -g btrfs lkp-st02-x8664/*/*-{3.5.0,3.6.0-rc1+}
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  970.57        -2.9%       942.80  lkp-st02-x8664/JBOD-12HDD-thresh=100M/btrfs-1dd-1-3.5.0
                  965.95        -0.1%       964.91  lkp-st02-x8664/JBOD-12HDD-thresh=2G/btrfs-1dd-1-3.5.0
                  813.94        -2.3%       794.99  lkp-st02-x8664/RAID0-12HDD-thresh=100M/btrfs-1dd-1-3.5.0
                  860.05       -11.1%       764.50  lkp-st02-x8664/RAID0-12HDD-thresh=2G/btrfs-1dd-1-3.5.0
                  164.02       +15.3%       189.09  lkp-st02-x8664/RAID5-12HDD-thresh=100M/btrfs-1dd-1-3.5.0
                  163.78       +14.1%       186.94  lkp-st02-x8664/RAID5-12HDD-thresh=2G/btrfs-1dd-1-3.5.0
                 3938.30        -2.4%      3843.24  TOTAL write_bw

Thanks,
Fengguang


* Re: ext4 write performance regression in 3.6-rc1
  2012-08-17  6:09       ` ext4 write performance regression in 3.6-rc1 Fengguang Wu
@ 2012-08-17 13:40         ` Theodore Ts'o
  2012-08-17 14:13           ` Fengguang Wu
  2012-08-17 14:25           ` ext4 write performance regression in 3.6-rc1 on RAID0/5 Fengguang Wu
  0 siblings, 2 replies; 14+ messages in thread
From: Theodore Ts'o @ 2012-08-17 13:40 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: Marti Raudsepp, Kernel hackers, ext4 hackers, maze, Shi, Alex,
	linux-fsdevel

On Fri, Aug 17, 2012 at 02:09:15PM +0800, Fengguang Wu wrote:
> Ted,
> 
> I find that ext4 write performance dropped by 3.3% on average in the
> 3.6-rc1 merge window. xfs and btrfs are fine.
> 
> Two machines were tested. The performance regression happens on the
> lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02,
> which is equipped with HDD drives, does not see the regression. I'll
> continue to repeat the tests and report variations.

Hmm... I've checked out the commits in "git log v3.5..v3.6-rc1 --
fs/ext4 fs/jbd2" and I don't see anything that I would expect to
cause that.  There are the lock elimination changes for Direct I/O
overwrites, but those shouldn't matter for your tests, which are
measuring buffered writes, correct?

Is there any chance you could do me a favor and do a git bisect
restricted to commits involving fs/ext4 and fs/jbd2?
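
For reference, a path-restricted bisect is only a handful of commands
(a sketch; the endpoints below simply mirror the kernels under discussion):

  git bisect start -- fs/ext4 fs/jbd2   # only commits touching these paths are tested
  git bisect bad  v3.6-rc1
  git bisect good v3.5
  # build, boot, rerun the write_bw tests, then mark each step:
  git bisect good   # or: git bisect bad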

Many thanks,

						- Ted


* Re: ext4 write performance regression in 3.6-rc1
  2012-08-17 13:40         ` Theodore Ts'o
@ 2012-08-17 14:13           ` Fengguang Wu
  2012-08-17 14:25           ` ext4 write performance regression in 3.6-rc1 on RAID0/5 Fengguang Wu
  1 sibling, 0 replies; 14+ messages in thread
From: Fengguang Wu @ 2012-08-17 14:13 UTC (permalink / raw)
  To: Theodore Ts'o, Marti Raudsepp, Kernel hackers, ext4 hackers,
	maze, Shi, Alex, linux-fsdevel

On Fri, Aug 17, 2012 at 09:40:39AM -0400, Theodore Ts'o wrote:
> On Fri, Aug 17, 2012 at 02:09:15PM +0800, Fengguang Wu wrote:
> > Ted,
> > 
> > I find that ext4 write performance dropped by 3.3% on average in the
> > 3.6-rc1 merge window. xfs and btrfs are fine.
> > 
> > Two machines were tested. The performance regression happens on the
> > lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02,
> > which is equipped with HDD drives, does not see the regression. I'll
> > continue to repeat the tests and report variations.
> 
> Hmm... I've checked out the commits in "git log v3.5..v3.6-rc1 --
> fs/ext4 fs/jbd2" and I don't see anything that I would expect to
> cause that.  There are the lock elimination changes for Direct I/O
> overwrites, but those shouldn't matter for your tests, which are
> measuring buffered writes, correct?
> 
> Is there any chance you could do me a favor and do a git bisect
> restricted to commits involving fs/ext4 and fs/jbd2?

No problem :)

Thanks,
Fengguang


* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-17 13:40         ` Theodore Ts'o
  2012-08-17 14:13           ` Fengguang Wu
@ 2012-08-17 14:25           ` Fengguang Wu
       [not found]             ` <20120817151318.GA2341@localhost>
                               ` (2 more replies)
  1 sibling, 3 replies; 14+ messages in thread
From: Fengguang Wu @ 2012-08-17 14:25 UTC (permalink / raw)
  To: Theodore Ts'o, Marti Raudsepp, Kernel hackers, ext4 hackers,
	maze, Shi, Alex, linux-fsdevel
  Cc: linux RAID, NeilBrown, Li Shaohua

[CC md list]

On Fri, Aug 17, 2012 at 09:40:39AM -0400, Theodore Ts'o wrote:
> On Fri, Aug 17, 2012 at 02:09:15PM +0800, Fengguang Wu wrote:
> > Ted,
> > 
> > I find that ext4 write performance dropped by 3.3% on average in the
> > 3.6-rc1 merge window. xfs and btrfs are fine.
> > 
> > Two machines were tested. The performance regression happens on the
> > lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02,
> > which is equipped with HDD drives, does not see the regression. I'll
> > continue to repeat the tests and report variations.
> 
> Hmm... I've checked out the commits in "git log v3.5..v3.6-rc1 --
> fs/ext4 fs/jbd2" and I don't see anything that I would expect to
> cause that.  There are the lock elimination changes for Direct I/O
> overwrites, but those shouldn't matter for your tests, which are
> measuring buffered writes, correct?
> 
> Is there any chance you could do me a favor and do a git bisect
> restricted to commits involving fs/ext4 and fs/jbd2?

I noticed that the regressions all happen in the RAID0/RAID5 cases.
So it may be some interaction between the RAID and ext4 code?

I'll try to get some ext2/3 numbers, which should involve fewer changes on the fs side.
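
For completeness, the arrays and filesystems being compared are set up
along these lines (a sketch only; device names and mkfs options are
assumptions rather than the exact commands used here):

  mdadm --create /dev/md0 --level=5 --raid-devices=12 /dev/sd[b-m]   # RAID5; --level=0 for RAID0
  mkfs.ext4 /dev/md0        # one of ext2/ext3/ext4/xfs/btrfs per run
  mount /dev/md0 /mnt/test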

wfg@bee /export/writeback% ./compare -g ext4 lkp-nex04/*/*-{3.5.0,3.6.0-rc1+}     
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  720.62        -1.5%       710.16  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  706.04        -0.0%       705.86  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  702.86        -0.2%       701.74  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  702.41        -0.0%       702.06  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
                  779.52        +6.5%       830.11  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  646.70        +4.9%       678.59  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  704.49        +2.6%       723.00  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  704.21        +1.2%       712.47  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-2-3.5.0
                  705.26        -1.2%       696.61  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  703.37        +0.1%       703.76  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  701.66        -0.1%       700.83  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  701.17        +0.0%       701.36  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-2-3.5.0
                  675.08       -10.5%       604.29  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  676.52        -2.7%       658.38  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  512.70        +4.0%       533.22  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  524.61        -0.3%       522.90  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
                  709.76       -15.7%       598.44  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  681.39        -2.1%       667.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  524.16        +0.8%       528.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-1dd-2-3.5.0
                  699.77       -19.2%       565.54  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  675.79        -1.9%       663.17  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  484.84        -7.4%       448.83  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  470.40        -3.2%       455.31  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-2-3.5.0
                  167.97       -38.7%       103.03  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  243.67        -9.1%       221.41  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  248.98       +12.2%       279.33  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  208.45       +14.1%       237.86  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
                   71.18       -34.2%        46.82  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  145.84        -7.3%       135.25  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  255.22        +6.7%       272.35  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  243.09       +20.7%       293.30  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-2-3.5.0
                  209.24       -23.6%       159.96  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  243.73       -10.9%       217.28  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  214.25        +5.6%       226.32  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  207.16       +13.4%       234.98  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-1dd-2-3.5.0
                17572.12        -1.9%     17240.05  TOTAL write_bw

Thanks,
Fengguang


* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
       [not found]             ` <20120817151318.GA2341@localhost>
@ 2012-08-17 15:37               ` Theodore Ts'o
  0 siblings, 0 replies; 14+ messages in thread
From: Theodore Ts'o @ 2012-08-17 15:37 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: linux RAID, NeilBrown, Li Shaohua, Marti Raudsepp, Kernel hackers,
	ext4 hackers, maze, Shi, Alex, linux-fsdevel

On Fri, Aug 17, 2012 at 11:13:18PM +0800, Fengguang Wu wrote:
> 
> Obviously the major regressions happen in the 100dd-over-RAID cases.
> Some 10dd cases are also impacted.
> 
> The attached graphs show that everything becomes much more erratic in
> 3.6.0-rc1 for the lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-1 case.

Hmm... I'm not seeing any differences in the block allocation code, or
in ext4's buffered writeback code paths, which would be the most
likely cause of such problems.  Maybe a quick eyeball of the blktrace
to see if we're doing something pathologically stupid?

You could also try running filefrag -v on a few of the dd files to
see if there's any significant difference, although as I said, it
doesn't look like there were any significant changes in the block
allocation code between v3.5 and v3.6-rc1 --- although I suppose
changes in timing could have caused the block allocation
decisions to be different, so it's worth checking that out.
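
Both checks boil down to a couple of commands; the device and file paths
below are illustrative only:

  # trace the md device for 60s while the dd workload runs, then inspect
  blktrace -d /dev/md0 -w 60 -o trace
  blkparse -i trace | less

  # compare the on-disk layout of a few dd output files across kernels
  filefrag -v /mnt/test/file-1 /mnt/test/file-2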

Thanks, regards,

						- Ted


* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-17 14:25           ` ext4 write performance regression in 3.6-rc1 on RAID0/5 Fengguang Wu
       [not found]             ` <20120817151318.GA2341@localhost>
@ 2012-08-17 20:44             ` NeilBrown
  2012-08-21  9:42               ` Fengguang Wu
       [not found]             ` <20120822035702.GF2570@yliu-dev.sh.intel.com>
  2 siblings, 1 reply; 14+ messages in thread
From: NeilBrown @ 2012-08-17 20:44 UTC (permalink / raw)
  To: Fengguang Wu
  Cc: Theodore Ts'o, Marti Raudsepp, Kernel hackers, ext4 hackers,
	maze, Shi, Alex, linux-fsdevel, linux RAID, Li Shaohua

On Fri, 17 Aug 2012 22:25:26 +0800 Fengguang Wu <fengguang.wu@intel.com>
wrote:

> [CC md list]
> 
> On Fri, Aug 17, 2012 at 09:40:39AM -0400, Theodore Ts'o wrote:
> > On Fri, Aug 17, 2012 at 02:09:15PM +0800, Fengguang Wu wrote:
> > > Ted,
> > > 
> > > I find that ext4 write performance dropped by 3.3% on average in the
> > > 3.6-rc1 merge window. xfs and btrfs are fine.
> > > 
> > > Two machines were tested. The performance regression happens on the
> > > lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02,
> > > which is equipped with HDD drives, does not see the regression. I'll
> > > continue to repeat the tests and report variations.
> > 
> > Hmm... I've checked out the commits in "git log v3.5..v3.6-rc1 --
> > fs/ext4 fs/jbd2" and I don't see anything that I would expect to
> > cause that.  There are the lock elimination changes for Direct I/O
> > overwrites, but those shouldn't matter for your tests, which are
> > measuring buffered writes, correct?
> > 
> > Is there any chance you could do me a favor and do a git bisect
> > restricted to commits involving fs/ext4 and fs/jbd2?
> 
> I noticed that the regressions all happen in the RAID0/RAID5 cases.
> So it may be some interaction between the RAID and ext4 code?

I'm aware of some performance regression in RAID5, which I will be drilling
down into next week.  Some things are faster, but some are slower :-(

RAID0 should be unchanged though - I don't think I've changed anything there.

Looking at your numbers, JBOD  ranges from  +6.5% to  -1.5%
                         RAID0 ranges from  +4.0% to -19.2%
                         RAID5 ranges from +20.7% to -38.7%

I'm guessing + is good and - is bad?
The RAID5 numbers don't surprise me.  The RAID0 ones do.

> 
> I'll try to get some ext2/3 numbers, which should involve fewer changes on the fs side.

Thanks.  That will be useful.

NeilBrown


> 
> wfg@bee /export/writeback% ./compare -g ext4 lkp-nex04/*/*-{3.5.0,3.6.0-rc1+}     
>                    3.5.0                3.6.0-rc1+
> ------------------------  ------------------------
>                   720.62        -1.5%       710.16  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
>                   706.04        -0.0%       705.86  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
>                   702.86        -0.2%       701.74  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
>                   702.41        -0.0%       702.06  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
>                   779.52        +6.5%       830.11  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-1-3.5.0
>                   646.70        +4.9%       678.59  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-10dd-1-3.5.0
>                   704.49        +2.6%       723.00  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-1-3.5.0
>                   704.21        +1.2%       712.47  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-2-3.5.0
>                   705.26        -1.2%       696.61  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-1-3.5.0
>                   703.37        +0.1%       703.76  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-10dd-1-3.5.0
>                   701.66        -0.1%       700.83  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-1-3.5.0
>                   701.17        +0.0%       701.36  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-2-3.5.0
>                   675.08       -10.5%       604.29  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
>                   676.52        -2.7%       658.38  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
>                   512.70        +4.0%       533.22  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
>                   524.61        -0.3%       522.90  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
>                   709.76       -15.7%       598.44  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-1-3.5.0
>                   681.39        -2.1%       667.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-10dd-1-3.5.0
>                   524.16        +0.8%       528.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-1dd-2-3.5.0
>                   699.77       -19.2%       565.54  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-1-3.5.0
>                   675.79        -1.9%       663.17  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-10dd-1-3.5.0
>                   484.84        -7.4%       448.83  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-1-3.5.0
>                   470.40        -3.2%       455.31  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-2-3.5.0
>                   167.97       -38.7%       103.03  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
>                   243.67        -9.1%       221.41  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
>                   248.98       +12.2%       279.33  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
>                   208.45       +14.1%       237.86  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
>                    71.18       -34.2%        46.82  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-1-3.5.0
>                   145.84        -7.3%       135.25  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-10dd-1-3.5.0
>                   255.22        +6.7%       272.35  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-1-3.5.0
>                   243.09       +20.7%       293.30  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-2-3.5.0
>                   209.24       -23.6%       159.96  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-1-3.5.0
>                   243.73       -10.9%       217.28  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-10dd-1-3.5.0
>                   214.25        +5.6%       226.32  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-1dd-1-3.5.0
>                   207.16       +13.4%       234.98  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-1dd-2-3.5.0
>                 17572.12        -1.9%     17240.05  TOTAL write_bw
> 
> Thanks,
> Fengguang



* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-17 20:44             ` NeilBrown
@ 2012-08-21  9:42               ` Fengguang Wu
  2012-08-21 12:07                 ` Fengguang Wu
  0 siblings, 1 reply; 14+ messages in thread
From: Fengguang Wu @ 2012-08-21  9:42 UTC (permalink / raw)
  To: NeilBrown
  Cc: Theodore Ts'o, Marti Raudsepp, Kernel hackers, ext4 hackers,
	maze, Shi, Alex, linux-fsdevel, linux RAID, Li Shaohua

On Sat, Aug 18, 2012 at 06:44:57AM +1000, NeilBrown wrote:
> On Fri, 17 Aug 2012 22:25:26 +0800 Fengguang Wu <fengguang.wu@intel.com>
> wrote:
> 
> > [CC md list]
> > 
> > On Fri, Aug 17, 2012 at 09:40:39AM -0400, Theodore Ts'o wrote:
> > > On Fri, Aug 17, 2012 at 02:09:15PM +0800, Fengguang Wu wrote:
> > > > Ted,
> > > > 
> > > > I find that ext4 write performance dropped by 3.3% on average in the
> > > > 3.6-rc1 merge window. xfs and btrfs are fine.
> > > > 
> > > > Two machines were tested. The performance regression happens on the
> > > > lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02,
> > > > which is equipped with HDD drives, does not see the regression. I'll
> > > > continue to repeat the tests and report variations.
> > > 
> > > Hmm... I've checked out the commits in "git log v3.5..v3.6-rc1 --
> > > fs/ext4 fs/jbd2" and I don't see anything that I would expect to
> > > cause that.  There are the lock elimination changes for Direct I/O
> > > overwrites, but those shouldn't matter for your tests, which are
> > > measuring buffered writes, correct?
> > > 
> > > Is there any chance you could do me a favor and do a git bisect
> > > restricted to commits involving fs/ext4 and fs/jbd2?
> > 
> > I noticed that the regressions all happen in the RAID0/RAID5 cases.
> > So it may be some interaction between the RAID and ext4 code?
> 
> I'm aware of some performance regression in RAID5, which I will be drilling
> down into next week.  Some things are faster, but some are slower :-(
> 
> RAID0 should be unchanged though - I don't think I've changed anything there.
> 
> Looking at your numbers, JBOD  ranges from  +6.5% to  -1.5%
>                          RAID0 ranges from  +4.0% to -19.2%
>                          RAID5 ranges from +20.7% to -38.7%
> 
> I'm guessing + is good and - is bad?

Yes.

> The RAID5 numbers don't surprise me.  The RAID0 ones do.

You are right. I did more tests and it's now obvious that RAID0 is
mostly fine. The major regressions are in the RAID5 10/100dd cases.
JBOD is performing better in 3.6.0-rc1 :-)

> > 
> > I'll try to get some ext2/3 numbers, which should involve fewer changes on the fs side.
> 
> Thanks.  That will be useful.

Here are the more complete results (change in total write_bw for each
RAID-level/filesystem/writer-count combination, 3.5.0 vs 3.6.0-rc1+).

   RAID5     ext4    100dd    -7.3%
   RAID5     ext4     10dd    -2.2%
   RAID5     ext4      1dd   +12.1%
   RAID5     ext3    100dd    -3.1%
   RAID5     ext3     10dd   -11.5%
   RAID5     ext3      1dd    +8.9%
   RAID5     ext2    100dd   -10.5%
   RAID5     ext2     10dd    -5.2%
   RAID5     ext2      1dd   +10.0%
   RAID0     ext4    100dd    +1.7%
   RAID0     ext4     10dd    -0.9%
   RAID0     ext4      1dd    -1.1%
   RAID0     ext3    100dd    -4.2%
   RAID0     ext3     10dd    -0.2%
   RAID0     ext3      1dd    -1.0%
   RAID0     ext2    100dd   +11.3%
   RAID0     ext2     10dd    +4.7%
   RAID0     ext2      1dd    -1.6%
    JBOD     ext4    100dd    +5.9%
    JBOD     ext4     10dd    +6.0%
    JBOD     ext4      1dd    +0.6%
    JBOD     ext3    100dd    +6.1%
    JBOD     ext3     10dd    +1.9%
    JBOD     ext3      1dd    +1.7%
    JBOD     ext2    100dd    +9.9%
    JBOD     ext2     10dd    +9.4%
    JBOD     ext2      1dd    +0.5%

wfg@bee /export/writeback% ./compare-groups 'RAID5 RAID0 JBOD' 'ext4 ext3 ext2' '100dd 10dd 1dd' lkp-nex04/*/*-{3.5.0,3.6.0-rc1+}          
RAID5 ext4 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  167.97       -38.7%       103.03  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  130.42       -21.7%       102.06  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-2-3.5.0
                   83.45       +10.2%        91.96  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-3-3.5.0
                  105.97       +11.5%       118.12  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-4-3.5.0
                   71.18       -34.2%        46.82  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                   52.79        +1.1%        53.36  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-2-3.5.0
                   40.75        -5.1%        38.69  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-3-3.5.0
                   42.79       +14.5%        48.99  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-4-3.5.0
                  209.24       -23.6%       159.96  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  176.21       +11.3%       196.16  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-2-3.5.0
                  158.12        +3.7%       163.99  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-3-3.5.0
                  180.18        +6.4%       191.74  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-4-3.5.0
                 1419.08        -7.3%      1314.88  TOTAL write_bw

RAID5 ext4 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  243.67        -9.1%       221.41  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  212.84       +16.7%       248.39  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-10dd-2-3.5.0
                  145.84        -7.3%       135.25  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  124.61        +3.2%       128.65  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-10dd-2-3.5.0
                  243.73       -10.9%       217.28  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  229.35        -2.8%       222.82  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-10dd-2-3.5.0
                 1200.03        -2.2%      1173.81  TOTAL write_bw

RAID5 ext4 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  248.98       +12.2%       279.33  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  208.45       +14.1%       237.86  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
                  255.22        +6.7%       272.35  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  243.09       +20.7%       293.30  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-2-3.5.0
                  214.25        +5.6%       226.32  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  207.16       +13.4%       234.98  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-1dd-2-3.5.0
                 1377.15       +12.1%      1544.14  TOTAL write_bw

RAID5 ext3 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                   72.75        -5.8%        68.50  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-100dd-1-3.5.0
                   52.04        +0.8%        52.45  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-100dd-2-3.5.0
                   48.85       +19.2%        58.21  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-100dd-3-3.5.0
                   47.04        +9.4%        51.44  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-100dd-4-3.5.0
                   53.89        -7.4%        49.90  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-100dd-1-3.5.0
                   43.00       -10.7%        38.39  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-100dd-2-3.5.0
                   37.82        +0.8%        38.11  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-100dd-3-3.5.0
                   39.59        -4.0%        38.02  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-100dd-4-3.5.0
                   54.45       -15.0%        46.26  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-100dd-1-3.5.0
                   45.81        -4.5%        43.77  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-100dd-2-3.5.0
                   51.20       -12.6%        44.75  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-100dd-3-3.5.0
                   47.39        -3.9%        45.53  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-100dd-4-3.5.0
                  593.84        -3.1%       575.32  TOTAL write_bw

RAID5 ext3 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                   50.29       -10.2%        45.14  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-10dd-1-3.5.0
                   48.46        -7.1%        45.04  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-10dd-2-3.5.0
                   67.11       -17.8%        55.16  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-10dd-1-3.5.0
                   75.45       -28.2%        54.21  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-10dd-2-3.5.0
                   42.08        +6.4%        44.78  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-10dd-1-3.5.0
                   40.48        +4.8%        42.44  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-10dd-2-3.5.0
                  323.87       -11.5%       286.76  TOTAL write_bw

RAID5 ext3 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  190.20       +14.5%       217.69  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-1dd-1-3.5.0
                  192.30        +9.4%       210.43  lkp-nex04/RAID5-12HDD-thresh=1000M/ext3-1dd-2-3.5.0
                  193.63       +14.0%       220.64  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-1dd-1-3.5.0
                  224.33        -6.8%       209.07  lkp-nex04/RAID5-12HDD-thresh=100M/ext3-1dd-2-3.5.0
                  188.30       +14.6%       215.83  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-1dd-1-3.5.0
                  179.04       +10.7%       198.17  lkp-nex04/RAID5-12HDD-thresh=8G/ext3-1dd-2-3.5.0
                 1167.79        +8.9%      1271.83  TOTAL write_bw

RAID5 ext2 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                   76.75       -17.9%        62.98  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-100dd-1-3.5.0
                   72.32        -7.7%        66.73  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-100dd-2-3.5.0
                   56.48        +2.2%        57.75  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-100dd-3-3.5.0
                   56.81        -1.8%        55.81  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-100dd-4-3.5.0
                   58.58        -5.0%        55.67  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-100dd-1-3.5.0
                   60.02        -3.1%        58.15  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-100dd-2-3.5.0
                   54.01        -9.1%        49.12  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-100dd-3-3.5.0
                   61.00       -22.3%        47.38  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-100dd-4-3.5.0
                   50.98       -15.2%        43.22  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-100dd-1-3.5.0
                   49.52       -14.8%        42.18  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-100dd-2-3.5.0
                   48.35       -17.2%        40.04  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-100dd-3-3.5.0
                   49.14       -14.8%        41.88  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-100dd-4-3.5.0
                  693.96       -10.5%       620.90  TOTAL write_bw

RAID5 ext2 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                   46.88        -0.5%        46.67  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-10dd-1-3.5.0
                   49.98        -8.8%        45.59  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-10dd-2-3.5.0
                   45.01        +0.7%        45.32  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-10dd-1-3.5.0
                   84.88       -25.5%        63.27  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-10dd-2-3.5.0
                   44.49       +15.9%        51.56  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-10dd-1-3.5.0
                   43.73        +5.6%        46.19  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-10dd-2-3.5.0
                  314.97        -5.2%       298.60  TOTAL write_bw

RAID5 ext2 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  234.85        +7.6%       252.80  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-1dd-1-3.5.0
                  250.77       +17.9%       295.65  lkp-nex04/RAID5-12HDD-thresh=1000M/ext2-1dd-2-3.5.0
                  205.84        +4.9%       215.93  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-1dd-1-3.5.0
                  213.89        +7.2%       229.37  lkp-nex04/RAID5-12HDD-thresh=100M/ext2-1dd-2-3.5.0
                  217.70       +13.1%       246.25  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-1dd-1-3.5.0
                  241.22        +8.3%       261.19  lkp-nex04/RAID5-12HDD-thresh=8G/ext2-1dd-2-3.5.0
                 1364.27       +10.0%      1501.19  TOTAL write_bw

RAID0 ext4 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  675.08       -10.5%       604.29  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  640.40        -0.8%       635.21  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-2-3.5.0
                  370.03        +4.9%       388.06  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-3-3.5.0
                  376.90        +6.1%       399.96  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-4-3.5.0
                  709.76       -15.7%       598.44  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  399.91       +52.7%       610.76  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-2-3.5.0
                  342.58        +6.3%       364.24  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-3-3.5.0
                  300.55       +24.6%       374.34  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-4-3.5.0
                  699.77       -19.2%       565.54  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  582.28        -1.5%       573.28  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-2-3.5.0
                  491.00        +8.4%       532.13  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-3-3.5.0
                  485.84        +9.5%       532.12  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-4-3.5.0
                 6074.09        +1.7%      6178.38  TOTAL write_bw

RAID0 ext4 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  676.52        -2.7%       658.38  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  626.18        -3.2%       606.21  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-10dd-2-3.5.0
                  681.39        -2.1%       667.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  630.30        +3.4%       651.81  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-10dd-2-3.5.0
                  675.79        -1.9%       663.17  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  665.04        +1.3%       673.54  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-10dd-2-3.5.0
                 3955.21        -0.9%      3920.37  TOTAL write_bw

RAID0 ext4 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  512.70        +4.0%       533.22  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  524.61        -0.3%       522.90  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
                  524.16        +0.8%       528.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-1dd-2-3.5.0
                  484.84        -7.4%       448.83  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  470.40        -3.2%       455.31  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-2-3.5.0
                 2516.71        -1.1%      2488.51  TOTAL write_bw

RAID0 ext3 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  500.94        -4.0%       481.11  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-100dd-1-3.5.0
                  494.13        -4.3%       473.13  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-100dd-2-3.5.0
                  513.57        -9.2%       466.07  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-100dd-3-3.5.0
                  490.19        -5.9%       461.42  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-100dd-4-3.5.0
                  511.08        -2.9%       496.04  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-100dd-1-3.5.0
                  520.57        -7.6%       480.95  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-100dd-2-3.5.0
                  523.62        -5.2%       496.52  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-100dd-3-3.5.0
                  497.72        -0.1%       497.16  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-100dd-4-3.5.0
                  470.99        -5.0%       447.64  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-100dd-1-3.5.0
                  444.63        +2.0%       453.54  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-100dd-2-3.5.0
                  448.25        -4.7%       427.18  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-100dd-3-3.5.0
                  475.57        -3.1%       460.84  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-100dd-4-3.5.0
                 5891.26        -4.2%      5641.62  TOTAL write_bw

RAID0 ext3 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  560.26        +2.8%       576.15  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-10dd-1-3.5.0
                  583.44        +0.5%       586.08  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-10dd-2-3.5.0
                  566.37        -3.2%       548.19  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-10dd-1-3.5.0
                  579.37        -2.1%       567.13  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-10dd-2-3.5.0
                  623.24        +0.2%       624.71  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-10dd-1-3.5.0
                  624.26        +0.7%       628.74  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-10dd-2-3.5.0
                 3536.93        -0.2%      3531.00  TOTAL write_bw

RAID0 ext3 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  351.69        -2.3%       343.73  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-1dd-1-3.5.0
                  355.50        -4.1%       340.77  lkp-nex04/RAID0-12HDD-thresh=1000M/ext3-1dd-2-3.5.0
                  383.34        +0.9%       386.96  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-1dd-1-3.5.0
                  385.74        +1.3%       390.69  lkp-nex04/RAID0-12HDD-thresh=100M/ext3-1dd-2-3.5.0
                  315.53        -0.2%       314.81  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-1dd-1-3.5.0
                  319.52        -1.9%       313.36  lkp-nex04/RAID0-12HDD-thresh=8G/ext3-1dd-2-3.5.0
                 2111.31        -1.0%      2090.32  TOTAL write_bw

RAID0 ext2 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  694.06        +0.0%       694.22  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-100dd-1-3.5.0
                  693.19        -0.1%       692.38  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-100dd-2-3.5.0
                  686.16        +1.2%       694.35  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-100dd-3-3.5.0
                  691.17        -0.2%       690.13  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-100dd-4-3.5.0
                  668.16        +1.3%       677.07  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-100dd-1-3.5.0
                  404.60       +62.9%       658.90  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-100dd-2-3.5.0
                  346.48       +81.1%       627.62  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-100dd-3-3.5.0
                  373.48       +71.3%       639.85  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-100dd-4-3.5.0
                  691.96        +0.2%       693.29  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-100dd-1-3.5.0
                  690.73        +0.5%       694.40  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-100dd-2-3.5.0
                  692.65        +0.1%       693.27  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-100dd-3-3.5.0
                  690.10        +0.2%       691.71  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-100dd-4-3.5.0
                 7322.74       +11.3%      8147.20  TOTAL write_bw

RAID0 ext2 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  247.29       +23.2%       304.58  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-10dd-1-3.5.0
                  697.35        +0.4%       700.38  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-10dd-2-3.5.0
                  662.14        +1.8%       673.83  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-10dd-1-3.5.0
                  613.81       +10.0%       675.44  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-10dd-2-3.5.0
                  337.37        +5.5%       355.95  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-10dd-1-3.5.0
                  682.57        +0.0%       682.90  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-10dd-2-3.5.0
                 3240.53        +4.7%      3393.07  TOTAL write_bw

RAID0 ext2 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  526.72        -4.1%       505.29  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-1dd-1-3.5.0
                  516.77        -0.6%       513.81  lkp-nex04/RAID0-12HDD-thresh=1000M/ext2-1dd-2-3.5.0
                  617.83        +0.3%       619.45  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-1dd-1-3.5.0
                  617.49        +0.6%       621.09  lkp-nex04/RAID0-12HDD-thresh=100M/ext2-1dd-2-3.5.0
                  502.60        -1.0%       497.39  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-1dd-1-3.5.0
                  504.82        -5.7%       475.86  lkp-nex04/RAID0-12HDD-thresh=8G/ext2-1dd-2-3.5.0
                 3286.22        -1.6%      3232.89  TOTAL write_bw

JBOD ext4 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  720.62        -1.5%       710.16  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
                  469.82       +14.3%       536.78  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-2-3.5.0
                  666.90        -2.6%       649.61  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-3-3.5.0
                  343.93       +24.1%       426.86  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-4-3.5.0
                  779.52        +6.5%       830.11  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-1-3.5.0
                  457.65        -1.4%       451.18  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-2-3.5.0
                  739.08        +5.7%       781.16  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-3-3.5.0
                  332.98        -9.6%       301.13  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-4-3.5.0
                  705.26        -1.2%       696.61  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-1-3.5.0
                  565.73       +16.8%       660.76  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-2-3.5.0
                  647.47        +5.3%       681.63  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-3-3.5.0
                  416.22       +25.5%       522.50  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-4-3.5.0
                 6845.19        +5.9%      7248.49  TOTAL write_bw

JBOD ext4 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  706.04        -0.0%       705.86  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
                  525.34       +12.1%       589.06  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-10dd-2-3.5.0
                  646.70        +4.9%       678.59  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-10dd-1-3.5.0
                  335.12       +25.1%       419.10  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-10dd-2-3.5.0
                  703.37        +0.1%       703.76  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-10dd-1-3.5.0
                  665.60        +5.4%       701.85  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-10dd-2-3.5.0
                 3582.17        +6.0%      3798.23  TOTAL write_bw

JBOD ext4 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  702.86        -0.2%       701.74  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
                  702.41        -0.0%       702.06  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
                  704.49        +2.6%       723.00  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-1-3.5.0
                  704.21        +1.2%       712.47  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-2-3.5.0
                  701.66        -0.1%       700.83  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-1-3.5.0
                  701.17        +0.0%       701.36  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-2-3.5.0
                 4216.81        +0.6%      4241.46  TOTAL write_bw

JBOD ext3 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  683.15        -3.8%       657.31  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-100dd-1-3.5.0
                  711.48        -0.1%       710.83  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-100dd-2-3.5.0
                  677.50        +0.0%       677.62  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-100dd-3-3.5.0
                  713.31        -0.5%       709.88  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-100dd-4-3.5.0
                  648.70       +16.1%       753.15  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-100dd-1-3.5.0
                  633.24       +26.5%       801.02  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-100dd-2-3.5.0
                  568.48       +23.8%       703.49  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-100dd-3-3.5.0
                  680.59       +21.6%       827.77  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-100dd-4-3.5.0
                  656.73        -0.9%       651.09  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-100dd-1-3.5.0
                  697.29        -0.3%       695.49  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-100dd-2-3.5.0
                  669.99        -1.9%       657.24  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-100dd-3-3.5.0
                  697.73        -2.1%       683.17  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-100dd-4-3.5.0
                 8038.18        +6.1%      8528.06  TOTAL write_bw

JBOD ext3 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  669.69        -1.0%       663.26  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-10dd-1-3.5.0
                  704.60        +0.3%       707.03  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-10dd-2-3.5.0
                  629.95        +3.0%       648.55  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-10dd-1-3.5.0
                  616.65        +9.6%       676.08  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-10dd-2-3.5.0
                  691.77        +0.6%       695.88  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-10dd-1-3.5.0
                  706.05        -0.2%       704.95  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-10dd-2-3.5.0
                 4018.71        +1.9%      4095.75  TOTAL write_bw

JBOD ext3 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  700.88        +0.1%       701.30  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-1dd-1-3.5.0
                  699.78        +0.1%       700.27  lkp-nex04/JBOD-12HDD-thresh=1000M/ext3-1dd-2-3.5.0
                  700.07        -0.0%       699.98  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-1dd-1-3.5.0
                  598.09       +11.7%       668.09  lkp-nex04/JBOD-12HDD-thresh=100M/ext3-1dd-2-3.5.0
                  700.53        -0.0%       700.47  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-1dd-1-3.5.0
                  700.65        +0.1%       701.43  lkp-nex04/JBOD-12HDD-thresh=8G/ext3-1dd-2-3.5.0
                 4100.00        +1.7%      4171.54  TOTAL write_bw

JBOD ext2 100dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  644.32        +6.1%       683.56  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-100dd-1-3.5.0
                  558.44       +16.9%       652.63  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-100dd-2-3.5.0
                  443.68       +30.1%       577.18  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-100dd-3-3.5.0
                  449.28       +36.8%       614.49  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-100dd-4-3.5.0
                  526.02       +10.2%       579.52  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-100dd-1-3.5.0
                  442.03       +10.6%       488.71  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-100dd-2-3.5.0
                  375.04        -5.5%       354.36  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-100dd-3-3.5.0
                  365.83        +3.9%       379.96  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-100dd-4-3.5.0
                  693.56        +0.8%       699.06  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-100dd-1-3.5.0
                  661.00        +3.3%       682.82  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-100dd-2-3.5.0
                  584.28        +9.1%       637.22  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-100dd-3-3.5.0
                  657.01        +4.2%       684.28  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-100dd-4-3.5.0
                 6400.48        +9.9%      7033.79  TOTAL write_bw

JBOD ext2 10dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  662.51        +3.4%       685.05  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-10dd-1-3.5.0
                  665.07        +3.2%       686.05  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-10dd-2-3.5.0
                  431.26       +26.9%       547.16  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-10dd-1-3.5.0
                  397.42       +39.4%       553.83  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-10dd-2-3.5.0
                  685.99        +0.9%       691.90  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-10dd-1-3.5.0
                  685.68        +1.2%       693.94  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-10dd-2-3.5.0
                 3527.93        +9.4%      3857.93  TOTAL write_bw

JBOD ext2 1dd
                   3.5.0                3.6.0-rc1+
------------------------  ------------------------
                  718.45        -0.1%       717.89  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-1dd-1-3.5.0
                  717.25        +0.2%       718.89  lkp-nex04/JBOD-12HDD-thresh=1000M/ext2-1dd-2-3.5.0
                  686.97        +1.0%       693.82  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-1dd-1-3.5.0
                  683.79        +1.5%       694.03  lkp-nex04/JBOD-12HDD-thresh=100M/ext2-1dd-2-3.5.0
                  699.79        +0.1%       700.41  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-1dd-1-3.5.0
                  700.22        +0.1%       701.15  lkp-nex04/JBOD-12HDD-thresh=8G/ext2-1dd-2-3.5.0
                 4206.46        +0.5%      4226.19  TOTAL write_bw

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-21  9:42               ` Fengguang Wu
@ 2012-08-21 12:07                 ` Fengguang Wu
  0 siblings, 0 replies; 14+ messages in thread
From: Fengguang Wu @ 2012-08-21 12:07 UTC (permalink / raw)
  To: NeilBrown
  Cc: Theodore Ts'o, Marti Raudsepp, Kernel hackers, ext4 hackers,
	maze, Shi, Alex, linux-fsdevel, linux RAID, Li Shaohua

On Tue, Aug 21, 2012 at 05:42:21PM +0800, Fengguang Wu wrote:
> On Sat, Aug 18, 2012 at 06:44:57AM +1000, NeilBrown wrote:
> > On Fri, 17 Aug 2012 22:25:26 +0800 Fengguang Wu <fengguang.wu@intel.com>
> > wrote:
> > 
> > > [CC md list]
> > > 
> > > On Fri, Aug 17, 2012 at 09:40:39AM -0400, Theodore Ts'o wrote:
> > > > On Fri, Aug 17, 2012 at 02:09:15PM +0800, Fengguang Wu wrote:
> > > > > Ted,
> > > > > 
> > > > > I find ext4 write performance dropped by 3.3% on average in the
> > > > > 3.6-rc1 merge window. xfs and btrfs are fine.
> > > > > 
> > > > > Two machines are tested. The performance regression happens in the
> > > > > lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02 does
> > > > > not see regression, which is equipped with HDD drives. I'll continue
> > > > > to repeat the tests and report variations.
> > > > 
> > > > Hmm... I've checked out the commits in "git log v3.5..v3.6-rc1 --
> > > > fs/ext4 fs/jbd2" and I don't see anything that I would expect would
> > > > cause that.  There are the lock elimination changes for Direct I/O
> > > > overwrites, but that shouldn't matter for your tests which are
> > > > measuring buffered writes, correct?
> > > > 
> > > > Is there any chance you could do me a favor and do a git bisect
> > > > restricted to commits involving fs/ext4 and fs/jbd2?
> > > 
> > > I noticed that the regressions all happen in the RAID0/RAID5 cases.
> > > So it may be some interaction between the RAID and ext4 code?
> > 
> > I'm aware of some performance regression in RAID5 which I will be drilling
> > down into next week.  Some things are faster, but some are slower :-(
> > 
> > RAID0 should be unchanged though - I don't think I've changed anything there.
> > 
> > Looking at your numbers, JBOD ranges from  +6.5% to -1.5%
> >                         RAID0 ranges from  +4.0% to -19.2%
> >                         RAID5 ranges from +20.7% to -39.7%
> > 
> > I'm guessing + is good and - is bad?
> 
> Yes.
> 
> > The RAID5 numbers don't surprise me.  The RAID0 do.
> 
> You are right. I did more tests and it's now obvious that RAID0 is
> mostly fine. The major regressions are in the RAID5 10/100dd cases.
> JBOD is performing better in 3.6.0-rc1 :-)
> 
> > > 
> > > I'll try to get some ext2/3 numbers, which should have fewer changes on the fs side.
> > 
> > Thanks.  That will be useful.
> 
> Here are the more complete results.
> 
>    RAID5     ext4    100dd    -7.3%
>    RAID5     ext4     10dd    -2.2%
>    RAID5     ext4      1dd   +12.1%
>    RAID5     ext3    100dd    -3.1%
>    RAID5     ext3     10dd   -11.5%
>    RAID5     ext3      1dd    +8.9%
>    RAID5     ext2    100dd   -10.5%
>    RAID5     ext2     10dd    -5.2%
>    RAID5     ext2      1dd   +10.0%
>    RAID0     ext4    100dd    +1.7%
>    RAID0     ext4     10dd    -0.9%
>    RAID0     ext4      1dd    -1.1%
>    RAID0     ext3    100dd    -4.2%
>    RAID0     ext3     10dd    -0.2%
>    RAID0     ext3      1dd    -1.0%
>    RAID0     ext2    100dd   +11.3%
>    RAID0     ext2     10dd    +4.7%
>    RAID0     ext2      1dd    -1.6%
>     JBOD     ext4    100dd    +5.9%
>     JBOD     ext4     10dd    +6.0%
>     JBOD     ext4      1dd    +0.6%
>     JBOD     ext3    100dd    +6.1%
>     JBOD     ext3     10dd    +1.9%
>     JBOD     ext3      1dd    +1.7%
>     JBOD     ext2    100dd    +9.9%
>     JBOD     ext2     10dd    +9.4%
>     JBOD     ext2      1dd    +0.5%

And here are the xfs/btrfs results. Very impressive RAID5 improvements!

   RAID5    btrfs    100dd   +25.8%
   RAID5    btrfs     10dd   +21.3%
   RAID5    btrfs      1dd   +14.3%
   RAID5      xfs    100dd   +32.8%
   RAID5      xfs     10dd   +21.5%
   RAID5      xfs      1dd   +25.2%
   RAID0    btrfs    100dd    -7.4%
   RAID0    btrfs     10dd    -0.2%
   RAID0    btrfs      1dd    -2.8%
   RAID0      xfs    100dd   +18.8%
   RAID0      xfs     10dd    +0.0%
   RAID0      xfs      1dd    +3.8%
    JBOD    btrfs    100dd    -0.0%
    JBOD    btrfs     10dd    +2.3%
    JBOD    btrfs      1dd    -0.1%
    JBOD      xfs    100dd    +8.3%
    JBOD      xfs     10dd    +4.1%
    JBOD      xfs      1dd    +0.1%

Thanks,
Fengguang

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
       [not found]             ` <20120822035702.GF2570@yliu-dev.sh.intel.com>
@ 2012-08-22  4:07               ` Shaohua Li
  2012-08-22  6:00               ` NeilBrown
  1 sibling, 0 replies; 14+ messages in thread
From: Shaohua Li @ 2012-08-22  4:07 UTC (permalink / raw)
  To: Yuanhan Liu
  Cc: Fengguang Wu, NeilBrown, Theodore Ts'o, Marti Raudsepp,
	Kernel hackers, ext4 hackers, maze@google.com, Shi, Alex,
	linux-fsdevel@vger.kernel.org, linux RAID

On 8/22/12 11:57 AM, Yuanhan Liu wrote:
>  On Fri, Aug 17, 2012 at 10:25:26PM +0800, Fengguang Wu wrote:
> > [CC md list]
> >
> > On Fri, Aug 17, 2012 at 09:40:39AM -0400, Theodore Ts'o wrote:
> >> On Fri, Aug 17, 2012 at 02:09:15PM +0800, Fengguang Wu wrote:
> >>> Ted,
> >>>
> >>> I find ext4 write performance dropped by 3.3% on average in the
> >>> 3.6-rc1 merge window. xfs and btrfs are fine.
> >>>
> >>> Two machines are tested. The performance regression happens in the
> >>> lkp-nex04 machine, which is equipped with 12 SSD drives. lkp-st02 does
> >>> not see regression, which is equipped with HDD drives. I'll continue
> >>> to repeat the tests and report variations.
> >>
> >> Hmm... I've checked out the commits in "git log v3.5..v3.6-rc1 --
> >> fs/ext4 fs/jbd2" and I don't see anything that I would expect would
> >> cause that. There are the lock elimination changes for Direct I/O
> >> overwrites, but that shouldn't matter for your tests which are
> >> measuring buffered writes, correct?
> >>
> >> Is there any chance you could do me a favor and do a git bisect
> >> restricted to commits involving fs/ext4 and fs/jbd2?
> >
> > I noticed that the regressions all happen in the RAID0/RAID5 cases.
> > So it may be some interaction between the RAID and ext4 code?
> >
> > I'll try to get some ext2/3 numbers, which should have fewer changes on the fs side.
> >
> > wfg@bee /export/writeback% ./compare -g ext4 lkp-nex04/*/*-{3.5.0,3.6.0-rc1+}
> >                    3.5.0                3.6.0-rc1+
> > ------------------------  ------------------------
> >                   720.62        -1.5%       710.16  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
> >                   706.04        -0.0%       705.86  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
> >                   702.86        -0.2%       701.74  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
> >                   702.41        -0.0%       702.06  lkp-nex04/JBOD-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
> >                   779.52        +6.5%       830.11  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-100dd-1-3.5.0
> >                   646.70        +4.9%       678.59  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-10dd-1-3.5.0
> >                   704.49        +2.6%       723.00  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-1-3.5.0
> >                   704.21        +1.2%       712.47  lkp-nex04/JBOD-12HDD-thresh=100M/ext4-1dd-2-3.5.0
> >                   705.26        -1.2%       696.61  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-100dd-1-3.5.0
> >                   703.37        +0.1%       703.76  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-10dd-1-3.5.0
> >                   701.66        -0.1%       700.83  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-1-3.5.0
> >                   701.17        +0.0%       701.36  lkp-nex04/JBOD-12HDD-thresh=8G/ext4-1dd-2-3.5.0
> >                   675.08       -10.5%       604.29  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
> >                   676.52        -2.7%       658.38  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
> >                   512.70        +4.0%       533.22  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
> >                   524.61        -0.3%       522.90  lkp-nex04/RAID0-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
> >                   709.76       -15.7%       598.44  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-100dd-1-3.5.0
> >                   681.39        -2.1%       667.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-10dd-1-3.5.0
> >                   524.16        +0.8%       528.25  lkp-nex04/RAID0-12HDD-thresh=100M/ext4-1dd-2-3.5.0
> >                   699.77       -19.2%       565.54  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-100dd-1-3.5.0
> >                   675.79        -1.9%       663.17  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-10dd-1-3.5.0
> >                   484.84        -7.4%       448.83  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-1-3.5.0
> >                   470.40        -3.2%       455.31  lkp-nex04/RAID0-12HDD-thresh=8G/ext4-1dd-2-3.5.0
> >                   167.97       -38.7%       103.03  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-100dd-1-3.5.0
> >                   243.67        -9.1%       221.41  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-10dd-1-3.5.0
> >                   248.98       +12.2%       279.33  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-1-3.5.0
> >                   208.45       +14.1%       237.86  lkp-nex04/RAID5-12HDD-thresh=1000M/ext4-1dd-2-3.5.0
> >                    71.18       -34.2%        46.82  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-100dd-1-3.5.0
> >                   145.84        -7.3%       135.25  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-10dd-1-3.5.0
> >                   255.22        +6.7%       272.35  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-1-3.5.0
> >                   243.09       +20.7%       293.30  lkp-nex04/RAID5-12HDD-thresh=100M/ext4-1dd-2-3.5.0
> >                   209.24       -23.6%       159.96  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-100dd-1-3.5.0
> >                   243.73       -10.9%       217.28  lkp-nex04/RAID5-12HDD-thresh=8G/ext4-10dd-1-3.5.0
>
>  Hi,
>
>  About this issue, I did some investigation and found that we are blocked in
>  get_active_stripe() most of the time. That's reasonable, since max_nr_stripes
>  is currently set to 256, which is a rather small value, so I tried different
>  values. Please see the following patch for detailed numbers.
>
>  The test machine is the same as above.
>
>  From 85c27fca12b770da5bc8ec9f26a22cb414e84c68 Mon Sep 17 00:00:00 2001
>  From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>  Date: Wed, 22 Aug 2012 10:51:48 +0800
>  Subject: [RFC PATCH] md/raid5: increase NR_STRIPES to 1024
>
>  A stripe head is a resource that must be held before doing any IO, and
>  the number of stripe heads is limited to 256 by default. In the 10dd
>  case, we found that we are blocked in get_active_stripe() most of the
>  time (please see the attached ps output).
>
>  Thus I ran the test with several values of NR_STRIPES; here are some
>  numbers (ext4 only) I got with each setting:
>
>  write bandwidth:
>  ================
>  3.5.0-rc1-256+: (Here 256 means the maximum number of stripe heads is set to 256)
>  write bandwidth: 280
>  3.5.0-rc1-1024+:
>  write bandwidth: 421 (+50.4%)
>  3.5.0-rc1-4096+:
>  write bandwidth: 506 (+80.7%)
>  3.5.0-rc1-32768+:
>  write bandwidth: 615 (+119.6%)
>
>  (Here 'sh' means with Shaohua's "multiple threads to handle stripes" patch [0])
>  3.5.0-rc3-strip-sh+-256:
>  write bandwidth: 465
>
>  3.5.0-rc3-strip-sh+-1024:
>  write bandwidth: 599
>
>  3.5.0-rc3-strip-sh+-32768:
>  write bandwidth: 615
>
>  The kernel may be a bit old, but I found that the data are still largely
>  valid. I haven't tried Shaohua's latest patch, though.
>
>  As you can see from the data above, write bandwidth increases (a lot) as
>  we increase NR_STRIPES: the bigger NR_STRIPES is, the better the write
>  bandwidth we get. But we can't set NR_STRIPES too large, especially by
>  default, or it will need lots of memory. Based on the numbers I got with
>  Shaohua's patch applied, I guess 1024 would be a nice value; it's not too
>  big, yet we gain over 110% in performance.
>
>  Comments? BTW, a more flexible (if cruder) alternative would be to change
>  max_nr_stripes dynamically based on need.
>
>  I have also attached more data: the script I used to get those numbers,
>  the ps output, and the iostat -kx 3 output.
>
>  The script does its job in a straightforward way: start NR dd tasks in the
>  background, trace the writeback/global_dirty_state event in the background
>  to compute the write bandwidth, and sample the ps output regularly.
>
>  ---
>  [0]: patch: http://lwn.net/Articles/500200/
>
>  Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>  ---
>  drivers/md/raid5.c | 2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
>  diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>  index adda94d..82dca53 100644
>  --- a/drivers/md/raid5.c
>  +++ b/drivers/md/raid5.c
>  @@ -62,7 +62,7 @@
>    * Stripe cache
>    */
>  
>  -#define NR_STRIPES		256
>  +#define NR_STRIPES		1024
>   #define STRIPE_SIZE		PAGE_SIZE
>   #define STRIPE_SHIFT		(PAGE_SHIFT - 9)
>   #define STRIPE_SECTORS		(STRIPE_SIZE>>9)

Does reverting commit 8811b5968f6216e fix the problem?

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
       [not found]             ` <20120822035702.GF2570@yliu-dev.sh.intel.com>
  2012-08-22  4:07               ` Shaohua Li
@ 2012-08-22  6:00               ` NeilBrown
  2012-08-22  6:31                 ` Yuanhan Liu
                                   ` (2 more replies)
  1 sibling, 3 replies; 14+ messages in thread
From: NeilBrown @ 2012-08-22  6:00 UTC (permalink / raw)
  To: Yuanhan Liu
  Cc: Fengguang Wu, Li Shaohua, Theodore Ts'o, Marti Raudsepp,
	Kernel hackers, ext4 hackers, maze, Shi, Alex, linux-fsdevel,
	linux RAID

[-- Attachment #1: Type: text/plain, Size: 723 bytes --]

On Wed, 22 Aug 2012 11:57:02 +0800 Yuanhan Liu <yuanhan.liu@linux.intel.com>
wrote:

>  
> -#define NR_STRIPES		256
> +#define NR_STRIPES		1024

Changing one magic number into another magic number might help your case, but
it's not really a general solution.

Possibly making sure that max_nr_stripes is at least some multiple of the
chunk size might make sense, but I wouldn't want to see a very large multiple.
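
[Editorial sketch, not from the thread: one way "some multiple of the chunk
size" could look, using the field names of drivers/md/raid5.c in the 3.6 era;
the multiplier and the cap are arbitrary illustrations, not NeilBrown's
actual fix.]

/*
 * Hedged sketch only: derive a stripe-cache floor from the chunk size
 * instead of hard-coding NR_STRIPES.  Assumes 3.6-era raid5 code:
 * STRIPE_SECTORS, NR_STRIPES, and struct r5conf with chunk_sectors.
 * The "8 *" multiplier and the 4096 cap are made-up example values.
 */
static int raid5_min_cache_size(struct r5conf *conf)
{
	/* stripe heads needed to cover one chunk on every device */
	int per_full_stripe = conf->chunk_sectors / STRIPE_SECTORS;

	return max(NR_STRIPES, min(8 * per_full_stripe, 4096));
}

(With a 512K chunk and 4K pages this would give 1024, which happens to match
the value suggested in the RFC patch above.)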

I think the problems with RAID5 are deeper than that.  Hopefully I'll figure
out exactly what the best fix is soon - I'm trying to look into it.

I don't think the size of the cache is a big part of the solution.  I think
correct scheduling of IO is the real answer.

Thanks,
NeilBrown



[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 828 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-22  6:00               ` NeilBrown
@ 2012-08-22  6:31                 ` Yuanhan Liu
  2012-08-22  7:14                 ` Andreas Dilger
  2012-08-22 20:47                 ` Dan Williams
  2 siblings, 0 replies; 14+ messages in thread
From: Yuanhan Liu @ 2012-08-22  6:31 UTC (permalink / raw)
  To: NeilBrown
  Cc: Fengguang Wu, Li Shaohua, Theodore Ts'o, Marti Raudsepp,
	Kernel hackers, ext4 hackers, maze, Shi, Alex, linux-fsdevel,
	linux RAID

On Wed, Aug 22, 2012 at 04:00:25PM +1000, NeilBrown wrote:
> On Wed, 22 Aug 2012 11:57:02 +0800 Yuanhan Liu <yuanhan.liu@linux.intel.com>
> wrote:
> 
> >  
> > -#define NR_STRIPES		256
> > +#define NR_STRIPES		1024
> 
> Changing one magic number into another magic number might help your case, but
> it's not really a general solution.

Agreed.

> 
> Possibly making sure that max_nr_stripes is at least some multiple of the
> chunk size might make sense, but I wouldn't want to see a very large multiple.
> 
> I think the problems with RAID5 are deeper than that.  Hopefully I'll figure
> out exactly what the best fix is soon - I'm trying to look into it.
> 
> I don't think the size of the cache is a big part of the solution.  I think
> correct scheduling of IO is the real answer.

Yes, it should not be. But with a smaller max_nr_stripes, the chance of
getting a full stripe write is lower, and maybe that's why we block in
get_active_stripe() more often; it also leads to more reads.

The perfect case would be no reads at all; with max_nr_stripes set to
32768 (the maximum we can set now), you will find the reads drop to
almost zero (please see the iostat output I attached in the earlier email).

Anyway, I do agree this should not be a big part of the solution. If
we can handle those stripes faster, I guess 256 would be enough.
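
[Editorial aside: max_nr_stripes does not require a kernel rebuild to change;
it is exposed at run time through /sys/block/<md>/md/stripe_cache_size
(capped at 32768, as mentioned above), which is presumably how experiments
like "256 vs 32768" are run in practice. A minimal helper, with /dev/md0
assumed as the array under test:]

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	/* Illustrative helper: set a new stripe cache size for md0. */
	const char *path = "/sys/block/md0/md/stripe_cache_size";
	int n = argc > 1 ? atoi(argv[1]) : 1024;	/* stripe heads */
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return 1;
	}
	fprintf(f, "%d\n", n);
	return fclose(f) ? 1 : 0;
}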

Thanks,
Yuanhan Liu

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-22  6:00               ` NeilBrown
  2012-08-22  6:31                 ` Yuanhan Liu
@ 2012-08-22  7:14                 ` Andreas Dilger
  2012-08-22 20:47                 ` Dan Williams
  2 siblings, 0 replies; 14+ messages in thread
From: Andreas Dilger @ 2012-08-22  7:14 UTC (permalink / raw)
  To: NeilBrown
  Cc: Yuanhan Liu, Fengguang Wu, Li Shaohua, Theodore Ts'o,
	Marti Raudsepp, Kernel hackers, ext4 hackers, maze, Shi, Alex,
	linux-fsdevel, linux RAID

[-- Attachment #1: Type: text/plain, Size: 1795 bytes --]

On 2012-08-22, at 12:00 AM, NeilBrown wrote:
> On Wed, 22 Aug 2012 11:57:02 +0800 Yuanhan Liu <yuanhan.liu@linux.intel.com>
> wrote:
>> 
>> -#define NR_STRIPES		256
>> +#define NR_STRIPES		1024
> 
> Changing one magic number into another magic number might help your case, but
> it's not really a general solution.

We've actually been carrying a patch in Lustre for a few years that
increases NR_STRIPES to 2048 and makes it a configurable module
parameter.  This made a noticeable improvement to performance on
fast systems.
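
[Editorial sketch of what such a tunable might look like as a fragment of
drivers/md/raid5.c; the parameter name and default are illustrative
assumptions, not the actual Lustre or mainline interface:]

#include <linux/module.h>

/*
 * Hypothetical module parameter mirroring the Lustre approach described
 * above: let the initial stripe cache size be chosen at module load time
 * instead of being the compile-time NR_STRIPES constant.
 */
static int raid5_nr_stripes = 2048;
module_param(raid5_nr_stripes, int, 0444);
MODULE_PARM_DESC(raid5_nr_stripes,
		 "initial number of stripe cache entries per raid5/6 array");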

> Possibly making sure that max_nr_stripes is at least some multiple of the
> chunk size might make sense, but I wouldn't want to see a very large multiple.
> 
> I think the problems with RAID5 are deeper than that.  Hopefully I'll figure
> out exactly what the best fix is soon - I'm trying to look into it.

The other MD RAID-5/6 patches that we carry change the page submission
order, to reduce how much page merging the elevator has to do, and add
zero-copy IO submission when the caller marks the page for direct IO
(indicating it will not be modified until after the IO completes).
This avoids a lot of overhead on fast systems.

This isn't really my area of expertise, but the patches against RHEL6
can be seen at http://review.whamcloud.com/1142 if you want to
take a look.  I don't know if that code is at all relevant to what
is in 3.x today.

> I don't think the size of the cache is a big part of the solution.  I think
> correct scheduling of IO is the real answer.

My experience is that on fast systems the IO scheduler just gets in the
way.  Submitting larger contiguous IOs to each disk in the first place
is far better than trying to merge small IOs again at the back end.

Cheers, Andreas






[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 235 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-22  6:00               ` NeilBrown
  2012-08-22  6:31                 ` Yuanhan Liu
  2012-08-22  7:14                 ` Andreas Dilger
@ 2012-08-22 20:47                 ` Dan Williams
  2012-08-22 21:59                   ` NeilBrown
  2 siblings, 1 reply; 14+ messages in thread
From: Dan Williams @ 2012-08-22 20:47 UTC (permalink / raw)
  To: NeilBrown
  Cc: Yuanhan Liu, Fengguang Wu, Li Shaohua, Theodore Ts'o,
	Marti Raudsepp, Kernel hackers, ext4 hackers, maze, Shi, Alex,
	linux-fsdevel, linux RAID

On Tue, Aug 21, 2012 at 11:00 PM, NeilBrown <neilb@suse.de> wrote:
> On Wed, 22 Aug 2012 11:57:02 +0800 Yuanhan Liu <yuanhan.liu@linux.intel.com>
> wrote:
>
>>
>> -#define NR_STRIPES           256
>> +#define NR_STRIPES           1024
>
> Changing one magic number into another magic number might help your case, but
> it's not really a general solution.
>
> Possibly making sure that max_nr_stripes is at least some multiple of the
> chunk size might make sense, but I wouldn't want to see a very large multiple.
>
> I think the problems with RAID5 are deeper than that.  Hopefully I'll figure
> out exactly what the best fix is soon - I'm trying to look into it.
>
> I don't think the size of the cache is a big part of the solution.  I think
> correct scheduling of IO is the real answer.

Not sure if this is what we are seeing here, but we still have the
unresolved fast-parity effect, whereby slower parity calculation leaves
more time to coalesce writes.  I saw this effect when playing with
xor offload.

--
Dan

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: ext4 write performance regression in 3.6-rc1 on RAID0/5
  2012-08-22 20:47                 ` Dan Williams
@ 2012-08-22 21:59                   ` NeilBrown
  0 siblings, 0 replies; 14+ messages in thread
From: NeilBrown @ 2012-08-22 21:59 UTC (permalink / raw)
  To: Dan Williams
  Cc: Yuanhan Liu, Fengguang Wu, Li Shaohua, Theodore Ts'o,
	Marti Raudsepp, Kernel hackers, ext4 hackers, maze, Shi, Alex,
	linux-fsdevel, linux RAID

[-- Attachment #1: Type: text/plain, Size: 1367 bytes --]

On Wed, 22 Aug 2012 13:47:07 -0700 Dan Williams <djbw@fb.com> wrote:

> On Tue, Aug 21, 2012 at 11:00 PM, NeilBrown <neilb@suse.de> wrote:
> > On Wed, 22 Aug 2012 11:57:02 +0800 Yuanhan Liu <yuanhan.liu@linux.intel.com>
> > wrote:
> >
> >>
> >> -#define NR_STRIPES           256
> >> +#define NR_STRIPES           1024
> >
> > Changing one magic number into another magic number might help your case, but
> > it's not really a general solution.
> >
> > Possibly making sure that max_nr_stripes is at least some multiple of the
> > chunk size might make sense, but I wouldn't want to see a very large multiple.
> >
> > I think the problems with RAID5 are deeper than that.  Hopefully I'll figure
> > out exactly what the best fix is soon - I'm trying to look into it.
> >
> > I don't think the size of the cache is a big part of the solution.  I think
> > correct scheduling of IO is the real answer.
> 
> Not sure if this is what we are seeing here, but we still have the
> unresolved fast parity effect whereby slower parity calculation gives
> a larger time to coalesce writes.  I saw this effect when playing with
> xor offload.

I did find a case where inserting a printk made it go faster again.
Replacing that with msleep(2) worked as well. :-)

I'm looking for a more robust solution though.
Thanks for the reminder.

NeilBrown

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 828 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2012-08-22 21:59 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)
     [not found] <CABRT9RAOhaxcYdCxMn5neJ9WT85r=h=7WgZ2dmLaOs-MMqDW9A@mail.gmail.com>
     [not found] ` <20120816024654.GB3781@thunk.org>
     [not found]   ` <20120816111051.GA16036@localhost>
     [not found]     ` <20120816152513.GA31346@thunk.org>
2012-08-17  6:09       ` ext4 write performance regression in 3.6-rc1 Fengguang Wu
2012-08-17 13:40         ` Theodore Ts'o
2012-08-17 14:13           ` Fengguang Wu
2012-08-17 14:25           ` ext4 write performance regression in 3.6-rc1 on RAID0/5 Fengguang Wu
     [not found]             ` <20120817151318.GA2341@localhost>
2012-08-17 15:37               ` Theodore Ts'o
2012-08-17 20:44             ` NeilBrown
2012-08-21  9:42               ` Fengguang Wu
2012-08-21 12:07                 ` Fengguang Wu
     [not found]             ` <20120822035702.GF2570@yliu-dev.sh.intel.com>
2012-08-22  4:07               ` Shaohua Li
2012-08-22  6:00               ` NeilBrown
2012-08-22  6:31                 ` Yuanhan Liu
2012-08-22  7:14                 ` Andreas Dilger
2012-08-22 20:47                 ` Dan Williams
2012-08-22 21:59                   ` NeilBrown

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).