* The huge difference in sequential read performance between RAID0 and RAID5
From: Yuehai Xu @ 2010-01-28 3:16 UTC
To: linux-raid; +Cc: yhxu
Hi,
When I use IOZONE to test sequential read performance, I notice that the
results for RAID0 and RAID5 are totally different.
Below is the output of cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5]
[raid4] [raid10]
md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
[======>..............] recovery = 30.6% (32202272/105225600)
finish=14.7min speed=82429K/sec
unused devices: <none>
The first question is: why does a recovery run every time I set up
RAID5? I use this command to set up RAID5:
mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sdb1 /dev/sdc1
/dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
Anyway, after the recovery, I start to test. I create 10 partitions on
both the RAID5 and the RAID0 arrays; the mount info is:
/dev/md0p5 38G 817M 35G 3% /mnt/md0p5
/dev/md0p6 38G 817M 35G 3% /mnt/md0p6
/dev/md0p7 38G 817M 35G 3% /mnt/md0p7
/dev/md0p8 38G 817M 35G 3% /mnt/md0p8
/dev/md0p9 38G 817M 35G 3% /mnt/md0p9
/dev/md0p10 38G 817M 35G 3% /mnt/md0p10
/dev/md0p11 38G 817M 35G 3% /mnt/md0p11
/dev/md0p12 38G 817M 35G 3% /mnt/md0p12
/dev/md0p13 38G 817M 35G 3% /mnt/md0p13
/dev/md0p14 38G 817M 35G 3% /mnt/md0p14
Then I start IOZONE, which spawns 10 processes to do sequential
reads (iozone -i 1). Each process reads a 640M file on its own partition. The
throughput of RAID0 is about 180M/s, while the throughput of RAID5 is
just 43M/s. Why is the performance of RAID0 and RAID5 so different?
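A run like this can be reproduced with something along these lines (the
exact iozone flags here are only a sketch):

# 10 processes, one 640M file per partition; write the files first, then do
# the sequential read pass
iozone -i 0 -i 1 -t 10 -s 640m \
  -F /mnt/md0p5/f1 /mnt/md0p6/f2 /mnt/md0p7/f3 /mnt/md0p8/f4 /mnt/md0p9/f5 \
     /mnt/md0p10/f6 /mnt/md0p11/f7 /mnt/md0p12/f8 /mnt/md0p13/f9 /mnt/md0p14/f10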
Yuehai
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Gabor Gombas @ 2010-01-28 7:06 UTC
To: Yuehai Xu; +Cc: linux-raid, yhxu
On Wed, Jan 27, 2010 at 10:16:12PM -0500, Yuehai Xu wrote:
> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
[...]
> Then I start IOZONE which starts 10 processes to do the sequential
> read(iozone -i 1). Each process read 640M file on each partition. The
> throughput of RAID0 is about 180M/s, while the throughput of RAID5 is
> just 43M/s. Why the performance between RAID0 and RAID5 is so
> different?
You have a degraded RAID5 array with one drive missing, meaning the data
has to be recalculated from parity all the time. That obviously kills
performance.
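A quick way to confirm this from the array itself (a sketch, using the
/dev/md0 device name from the original post):

# [7/6] and the trailing "_" in [UUUUUU_] mean only 6 of the 7 devices are active
cat /proc/mdstat
# look for "State : ... degraded" and the Active/Working/Failed Devices counts
mdadm --detail /dev/md0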
Gabor
--
---------------------------------------------------------
MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
---------------------------------------------------------
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Yuehai Xu @ 2010-01-28 14:31 UTC
To: linux-raid; +Cc: gombasg, yhxu
On Thu, Jan 28, 2010 at 2:06 AM, Gabor Gombas <gombasg@sztaki.hu> wrote:
> On Wed, Jan 27, 2010 at 10:16:12PM -0500, Yuehai Xu wrote:
>
>> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
>> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
> [...]
Do you mean there is something wrong with how I set up my RAID5? The
command I use to set up RAID5 is:
mdadm --create /dev/md0 --level=5 --raid-devices=7 /dev/sdb1 /dev/sdc1
/dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
I don't think any of my drives failed, because there is no "F" in my
/proc/mdstat output.
>> Then I start IOZONE which starts 10 processes to do the sequential
>> read(iozone -i 1). Each process read 640M file on each partition. The
>> throughput of RAID0 is about 180M/s, while the throughput of RAID5 is
>> just 43M/s. Why the performance between RAID0 and RAID5 is so
>> different?
>
> You have a degraded RAID5 array with one drive missing, meaning the data
> has to be recalculated from parity all the time. That obviously kills
> performance.
>
> Gabor
How do you know my RAID5 array has one drive missing? I tried setting up
RAID5 with 5 disks and with 3 disks, and after each setup a recovery has
always run. However, if I format my md0 with a command such as:
mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
performance of RAID5 becomes normal, at about 200~300M/s.
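For reference, the stride and stripe-width values follow directly from the
array geometry; a sketch for the 7-disk, 64k-chunk RAID5 above (the numbers
are worked out here, not taken from the command that was actually run):

# stride       = chunk size / fs block size     = 64k / 4k     = 16
# stripe-width = stride * number of data disks  = 16 * (7 - 1) = 96
mkfs.ext3 -b 4096 -E stride=16,stripe-width=96 /dev/md0p5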
>
> --
> ---------------------------------------------------------
> MTA SZTAKI Computer and Automation Research Institute
> Hungarian Academy of Sciences
> ---------------------------------------------------------
>
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Gabor Gombas @ 2010-01-28 14:41 UTC
To: Yuehai Xu; +Cc: linux-raid, yhxu
On Thu, Jan 28, 2010 at 09:31:23AM -0500, Yuehai Xu wrote:
> >> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
> >> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
[...]
> I don't think any of my drive fail because there is no "F" in my
> /proc/mdstat output
It's not failed, it's simply missing. Either it was unavailable when the
array was assembled, or you've explicitly created/assembled the array
with a missing drive.
> How do you know my RAID5 array has one drive missing?
Look at the above output: there are just 6 of the 7 drives available,
and the underscore also means a missing drive.
> I tried to setup RAID5 with 5 disks, 3 disks, after each setup,
> recovery has always been done.
Of course.
> However, if I format my md0 with such command:
> mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
> performance for RAID5 becomes usual, at about 200~300M/s.
I suppose in that case you had all the disks present in the array.
Gabor
--
---------------------------------------------------------
MTA SZTAKI Computer and Automation Research Institute
Hungarian Academy of Sciences
---------------------------------------------------------
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Yuehai Xu @ 2010-01-28 14:55 UTC
To: Gabor Gombas; +Cc: linux-raid, yhxu
2010/1/28 Gabor Gombas <gombasg@sztaki.hu>:
> On Thu, Jan 28, 2010 at 09:31:23AM -0500, Yuehai Xu wrote:
>
>> >> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
>> >> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
> [...]
>
>> I don't think any of my drive fail because there is no "F" in my
>> /proc/mdstat output
>
> It's not failed, it's simply missing. Either it was unavailable when the
> array was assembled, or you've explicitely created/assembled the array
> with a missing drive.
I noticed that, thanks! Is it normal that there is one missing drive at
the beginning of each setup?
>
>> How do you know my RAID5 array has one drive missing?
>
> Look at the above output: there are just 6 of the 7 drives available,
> and the underscore also means a missing drive.
>
>> I tried to setup RAID5 with 5 disks, 3 disks, after each setup,
>> recovery has always been done.
>
> Of course.
>
>> However, if I format my md0 with such command:
>> mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
>> performance for RAID5 becomes usual, at about 200~300M/s.
>
> I suppose in that case you had all the disks present in the array.
Yes, I did my test after the recovery. In that case, does the "missing
drive" still hurt performance?
Thanks!
Yuehai
>
> Gabor
>
> --
> ---------------------------------------------------------
> MTA SZTAKI Computer and Automation Research Institute
> Hungarian Academy of Sciences
> ---------------------------------------------------------
>
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Robin Hill @ 2010-01-28 15:27 UTC
To: linux-raid
On Thu Jan 28, 2010 at 09:55:05AM -0500, Yuehai Xu wrote:
> 2010/1/28 Gabor Gombas <gombasg@sztaki.hu>:
> > On Thu, Jan 28, 2010 at 09:31:23AM -0500, Yuehai Xu wrote:
> >
> >> >> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
> >> >> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
> > [...]
> >
> >> I don't think any of my drive fail because there is no "F" in my
> >> /proc/mdstat output
> >
> > It's not failed, it's simply missing. Either it was unavailable when the
> > array was assembled, or you've explicitely created/assembled the array
> > with a missing drive.
>
> I noticed that, thanks! Is it usual that at the beginning of each
> setup, there is one missing drive?
>
Yes - in order to make the array available as quickly as possible, it is
initially created as a degraded array. The recovery is then run to
add in the extra disk. Otherwise all disks would need to be written
before the array became available.
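For benchmarking, the initial recovery can also be skipped or explicitly
waited out; a rough sketch (--assume-clean is only appropriate when the array
contents don't matter yet, e.g. freshly zeroed drives):

# create the array without the initial rebuild; parity is assumed to be correct
mdadm --create /dev/md0 --level=5 --raid-devices=7 --assume-clean \
  /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
# or simply block until the normal recovery has finished before testing
mdadm --wait /dev/md0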
> >
> >> How do you know my RAID5 array has one drive missing?
> >
> > Look at the above output: there are just 6 of the 7 drives available,
> > and the underscore also means a missing drive.
> >
> >> I tried to setup RAID5 with 5 disks, 3 disks, after each setup,
> >> recovery has always been done.
> >
> > Of course.
> >
> >> However, if I format my md0 with such command:
> >> mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
> >> performance for RAID5 becomes usual, at about 200~300M/s.
> >
> > I suppose in that case you had all the disks present in the array.
>
> Yes, I did my test after the recovery, in that case, does the "missing
> drive" hurt the performance?
>
If you had a missing drive in the array when running the test, then this
would definitely affect performance (the array would need to do
parity calculations for most stripes). However, since you haven't actually
given the /proc/mdstat output for the array post-recovery, I don't
know whether or not this was the case.
Generally, I wouldn't expect the RAID5 array to be that much slower than
a RAID0. You'd best check that the various parameters (chunk size,
stripe cache size, readahead, etc) are the same for both arrays, as
these can have a major impact on performance.
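For example, something along these lines can be used to compare (and, if
needed, tune) the two arrays; the tuning values are only illustrative:

mdadm --detail /dev/md0 | grep 'Chunk Size'   # chunk size
cat /sys/block/md0/md/stripe_cache_size       # RAID5/6 stripe cache size
blockdev --getra /dev/md0                     # readahead in 512-byte sectors
echo 4096 > /sys/block/md0/md/stripe_cache_size
blockdev --setra 8192 /dev/md0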
Cheers,
Robin
--
___
( ' } | Robin Hill <robin@robinhill.me.uk> |
/ / ) | Little Jim says .... |
// !! | "He fallen in de water !!" |
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Michael Evans @ 2010-01-29 6:05 UTC
To: linux-raid
On Thu, Jan 28, 2010 at 7:27 AM, Robin Hill <robin@robinhill.me.uk> wrote:
> On Thu Jan 28, 2010 at 09:55:05AM -0500, Yuehai Xu wrote:
>
>> 2010/1/28 Gabor Gombas <gombasg@sztaki.hu>:
>> > On Thu, Jan 28, 2010 at 09:31:23AM -0500, Yuehai Xu wrote:
>> >
>> >> >> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
>> >> >> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
>> > [...]
>> >
>> >> I don't think any of my drive fail because there is no "F" in my
>> >> /proc/mdstat output
>> >
>> > It's not failed, it's simply missing. Either it was unavailable when the
>> > array was assembled, or you've explicitely created/assembled the array
>> > with a missing drive.
>>
>> I noticed that, thanks! Is it usual that at the beginning of each
>> setup, there is one missing drive?
>>
> Yes - in order to make the array available as quickly as possible, it is
> initially created as a degraded array. The recovery is then run to
> add in the extra disk. Otherwise all disks would need to be written
> before the array became available.
>
>> >
>> >> How do you know my RAID5 array has one drive missing?
>> >
>> > Look at the above output: there are just 6 of the 7 drives available,
>> > and the underscore also means a missing drive.
>> >
>> >> I tried to setup RAID5 with 5 disks, 3 disks, after each setup,
>> >> recovery has always been done.
>> >
>> > Of course.
>> >
>> >> However, if I format my md0 with such command:
>> >> mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
>> >> performance for RAID5 becomes usual, at about 200~300M/s.
>> >
>> > I suppose in that case you had all the disks present in the array.
>>
>> Yes, I did my test after the recovery, in that case, does the "missing
>> drive" hurt the performance?
>>
> If you had a missing drive in the array when running the test, then this
> would definitely affect the performance (as the array would need to do
> parity calculations for most stripes). However, as you've not actually
> given the /proc/mdstat output for the array post-recovery then I don't
> know whether or not this was the case.
>
> Generally, I wouldn't expect the RAID5 array to be that much slower than
> a RAID0. You'd best check that the various parameters (chunk size,
> stripe cache size, readahead, etc) are the same for both arrays, as
> these can have a major impact on performance.
>
> Cheers,
> Robin
> --
> ___
> ( ' } | Robin Hill <robin@robinhill.me.uk> |
> / / ) | Little Jim says .... |
> // !! | "He fallen in de water !!" |
>
A more valid test would be the following:
Assemble all the test drives as a raid-5 array (you can zero the
drives any way you like and then use --assume-clean if they really are all
zeros) and let the resync complete.
Run any tests you like.
Stop and --zero-superblock on the array.
Create a striped array (raid 0) using all but one of the test drives.
Since you dropped the drive's worth of storage that would be dedicated
to parity in the raid-5 setup, you're now benchmarking the same number
of /data/ drives, but without the drive's worth of recovery (parity) data
(at the cost of losing your data if any single drive fails).
Still, run the same benchmarks.
Why is this valid instead of throwing all the drives at it in raid-0
mode as well? It provides the same resulting storage size.
What I suspect you'll find is very similar read performance and
measurably, though perhaps tolerable, worse write performance from
raid-5.
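A rough shell sketch of that procedure, using the device names from the
original post (adjust to taste):

mdadm --create /dev/md0 --level=5 --raid-devices=7 \
  /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
mdadm --wait /dev/md0                  # let the initial recovery/resync finish
# ... run the benchmarks against /dev/md0 ...
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sd[b-h]1
# raid0 over one drive fewer: same usable capacity as the 7-drive raid5
mdadm --create /dev/md0 --level=0 --raid-devices=6 \
  /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1
# ... run the same benchmarks ...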
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Goswin von Brederlow @ 2010-01-29 11:53 UTC
To: Michael Evans; +Cc: linux-raid
Michael Evans <mjevans1983@gmail.com> writes:
> On Thu, Jan 28, 2010 at 7:27 AM, Robin Hill <robin@robinhill.me.uk> wrote:
>> On Thu Jan 28, 2010 at 09:55:05AM -0500, Yuehai Xu wrote:
>>
>>> 2010/1/28 Gabor Gombas <gombasg@sztaki.hu>:
>>> > On Thu, Jan 28, 2010 at 09:31:23AM -0500, Yuehai Xu wrote:
>>> >
>>> >> >> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
>>> >> >> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
>>> > [...]
>>> >
>>> >> I don't think any of my drive fail because there is no "F" in my
>>> >> /proc/mdstat output
>>> >
>>> > It's not failed, it's simply missing. Either it was unavailable when the
>>> > array was assembled, or you've explicitely created/assembled the array
>>> > with a missing drive.
>>>
>>> I noticed that, thanks! Is it usual that at the beginning of each
>>> setup, there is one missing drive?
>>>
>> Yes - in order to make the array available as quickly as possible, it is
>> initially created as a degraded array. The recovery is then run to
>> add in the extra disk. Otherwise all disks would need to be written
>> before the array became available.
>>
>>> >
>>> >> How do you know my RAID5 array has one drive missing?
>>> >
>>> > Look at the above output: there are just 6 of the 7 drives available,
>>> > and the underscore also means a missing drive.
>>> >
>>> >> I tried to setup RAID5 with 5 disks, 3 disks, after each setup,
>>> >> recovery has always been done.
>>> >
>>> > Of course.
>>> >
>>> >> However, if I format my md0 with such command:
>>> >> mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
>>> >> performance for RAID5 becomes usual, at about 200~300M/s.
>>> >
>>> > I suppose in that case you had all the disks present in the array.
>>>
>>> Yes, I did my test after the recovery, in that case, does the "missing
>>> drive" hurt the performance?
>>>
>> If you had a missing drive in the array when running the test, then this
>> would definitely affect the performance (as the array would need to do
>> parity calculations for most stripes). However, as you've not actually
>> given the /proc/mdstat output for the array post-recovery then I don't
>> know whether or not this was the case.
>>
>> Generally, I wouldn't expect the RAID5 array to be that much slower than
>> a RAID0. You'd best check that the various parameters (chunk size,
>> stripe cache size, readahead, etc) are the same for both arrays, as
>> these can have a major impact on performance.
>>
>> Cheers,
>> Robin
>> --
>> ___
>> ( ' } | Robin Hill <robin@robinhill.me.uk> |
>> / / ) | Little Jim says .... |
>> // !! | "He fallen in de water !!" |
>>
>
> A more valid test that could be run would follow:
>
> Assemble all the test drives as a raid-5 array (you can zero the
> drives any way you like and then --assume-clean if they really are all
> zeros) and let the resync complete.
>
> Run any tests you like.
>
> Stop and --zero-superblock on the array.
>
> Create a striped array (raid 0) using all but one of the test drives.
>
> Since you dropped the drive's worth of storage that would be dedicated
> to parity in the raid-5 setup you're now benchmarking the same number
> of /data/ storage drives; but have saved one drive's worth of recovery
> data (at cost of risking your data if any single drive fails).
>
> Still, run the same benchmarks.
>
> Why is this valid instead of throwing all the drives at it in raid-0
> mode as well? It provides the same resulting storage size.
>
>
> What I suspect you'll find is very similar read performance and
> measurably, though perhaps tolerable, worse write performance from
> raid-5.
In raid5 mode each drive will read 5*64k of data, then skip 64k, and
repeat. Skipping such a small chunk of data means waiting until it
has rotated past the head, so each drive only gives 5/6th of its linear
speed. As a result a 6-disk raid5 should give about 5/6th of the speed of a
6-disk raid0 (roughly the speed of a 5-disk raid0), assuming the controller
and bus are fast enough.
A larger chunk size can mean that skipping the parity chunk skips a whole
cylinder. But a larger chunk size also makes it less likely that reads are
spread over all/multiple disks, so you might lose more than you gain.
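As a back-of-the-envelope check, assuming for example 120 MB/s of linear
speed per drive:

echo $(( 120 * 5 / 6 ))   # 100 MB/s effective per drive in the 6-disk raid5
echo $(( 120 * 5 ))       # 600 MB/s total, the same as a 5-disk raid0
echo $(( 120 * 6 ))       # 720 MB/s for a 6-disk raid0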
MfG
Goswin
* Re: The huge difference in sequential read performance between RAID0 and RAID5
From: Michael Evans @ 2010-01-30 7:03 UTC
To: Goswin von Brederlow; +Cc: linux-raid
On Fri, Jan 29, 2010 at 3:53 AM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
> Michael Evans <mjevans1983@gmail.com> writes:
>
>> On Thu, Jan 28, 2010 at 7:27 AM, Robin Hill <robin@robinhill.me.uk> wrote:
>>> On Thu Jan 28, 2010 at 09:55:05AM -0500, Yuehai Xu wrote:
>>>
>>>> 2010/1/28 Gabor Gombas <gombasg@sztaki.hu>:
>>>> > On Thu, Jan 28, 2010 at 09:31:23AM -0500, Yuehai Xu wrote:
>>>> >
>>>> >> >> md0 : active raid5 sdh1[7] sdg1[5] sdf1[4] sde1[3] sdd1[2] sdc1[1] sdb1[0]
>>>> >> >> 631353600 blocks level 5, 64k chunk, algorithm 2 [7/6] [UUUUUU_]
>>>> > [...]
>>>> >
>>>> >> I don't think any of my drive fail because there is no "F" in my
>>>> >> /proc/mdstat output
>>>> >
>>>> > It's not failed, it's simply missing. Either it was unavailable when the
>>>> > array was assembled, or you've explicitely created/assembled the array
>>>> > with a missing drive.
>>>>
>>>> I noticed that, thanks! Is it usual that at the beginning of each
>>>> setup, there is one missing drive?
>>>>
>>> Yes - in order to make the array available as quickly as possible, it is
>>> initially created as a degraded array. The recovery is then run to
>>> add in the extra disk. Otherwise all disks would need to be written
>>> before the array became available.
>>>
>>>> >
>>>> >> How do you know my RAID5 array has one drive missing?
>>>> >
>>>> > Look at the above output: there are just 6 of the 7 drives available,
>>>> > and the underscore also means a missing drive.
>>>> >
>>>> >> I tried to setup RAID5 with 5 disks, 3 disks, after each setup,
>>>> >> recovery has always been done.
>>>> >
>>>> > Of course.
>>>> >
>>>> >> However, if I format my md0 with such command:
>>>> >> mkfs.ext3 -b 4096 -E stride=16 -E stripe-width=*** /dev/XXXX, the
>>>> >> performance for RAID5 becomes usual, at about 200~300M/s.
>>>> >
>>>> > I suppose in that case you had all the disks present in the array.
>>>>
>>>> Yes, I did my test after the recovery, in that case, does the "missing
>>>> drive" hurt the performance?
>>>>
>>> If you had a missing drive in the array when running the test, then this
>>> would definitely affect the performance (as the array would need to do
>>> parity calculations for most stripes). However, as you've not actually
>>> given the /proc/mdstat output for the array post-recovery then I don't
>>> know whether or not this was the case.
>>>
>>> Generally, I wouldn't expect the RAID5 array to be that much slower than
>>> a RAID0. You'd best check that the various parameters (chunk size,
>>> stripe cache size, readahead, etc) are the same for both arrays, as
>>> these can have a major impact on performance.
>>>
>>> Cheers,
>>> Robin
>>> --
>>> ___
>>> ( ' } | Robin Hill <robin@robinhill.me.uk> |
>>> / / ) | Little Jim says .... |
>>> // !! | "He fallen in de water !!" |
>>>
>>
>> A more valid test that could be run would follow:
>>
>> Assemble all the test drives as a raid-5 array (you can zero the
>> drives any way you like and then --assume-clean if they really are all
>> zeros) and let the resync complete.
>>
>> Run any tests you like.
>>
>> Stop and --zero-superblock on the array.
>>
>> Create a striped array (raid 0) using all but one of the test drives.
>>
>> Since you dropped the drive's worth of storage that would be dedicated
>> to parity in the raid-5 setup you're now benchmarking the same number
>> of /data/ storage drives; but have saved one drive's worth of recovery
>> data (at cost of risking your data if any single drive fails).
>>
>> Still, run the same benchmarks.
>>
>> Why is this valid instead of throwing all the drives at it in raid-0
>> mode as well? It provides the same resulting storage size.
>>
>>
>> What I suspect you'll find is very similar read performance and
>> measurably, though perhaps tolerable, worse write performance from
>> raid-5.
>
> In raid5 mode each drive will read 5*64k data and then skip 64k and
> repeat. And skipping such a small chunk of data means waiting till it
> has rotated below the head. So each drive only gives 5/6th of its linear
> speed. As a result the 6 disks raid5 should be 5/6th of the speed of a 5
> disk raid0 assuming the controler and bus are fast enough.
>
> A larger chunk size can mean skipping the parity chunk skips a
> cylinder. But larger chunk size makes it less likely reads are spread
> over all/multiple disks. So you might loose more than you gain.
>
> MfG
> Goswin
>
>
That is true assuming a very large sequential read (buffered video
streams and other very large files). However, while each drive will
only have an apparent performance of 5/6 in the case of a 6-drive raid
5 array, that is still 5/6 * 6 = 5 drives' worth of raid 0 equivalent, which
is also the amount of usable data storage. All the more reason to say:
read performance of N+1 drives in raid 5 should be roughly equivalent
to N drives in raid 0 (obviously in the best case).
In the worst case (a failed drive), raid 5 still produces data, while raid 0
times out and fails to read any data.
The main area of performance difference between raid 5 and raid 0 is
seen on /writes/, which is where you pay for the insurance in the
complexity of keeping the stripe consistent: at /least/ reading any
changed chunks plus the parity chunk, recalculating parity, and writing all
of that back to the drives; OR writing all the data chunks and the
newly calculated parity chunk. That explains why larger writes see
less overall degradation in performance and smaller writes seem so much
worse in comparison. There's also the extra drive used as insurance.
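A simple way to see the write-size effect in practice is to compare
sub-stripe and (roughly) full-stripe direct writes; the paths and sizes below
are only an illustration for a 7-drive, 64k-chunk array:

# 6 data chunks x 64k = 384k of data per stripe
# small writes force read-modify-write of data + parity chunks
dd if=/dev/zero of=/mnt/md0p5/small.bin bs=4k count=10000 oflag=direct
# large, stripe-sized writes let parity be computed from the new data alone
dd if=/dev/zero of=/mnt/md0p5/full.bin bs=384k count=1000 oflag=direct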