* Disappointing RAID10 Performance
From: adfas asd @ 2009-10-16 17:32 UTC (permalink / raw)
To: linux-raid
I was hoping to get better performance with RAID10 than from the raw disks, but that's turned out not to be the case. Experimenting with the readahead buffer, I get these bandwidths with the following command:
# time dd if={somelarge}.iso of=/dev/null bs={readahead size}
/dev/sd?
1024 71.3 MB/s
2048 71.2 MB/s
4096 77.7 MB/s
8192 69.4 MB/s
16384 76.6 MB/s
/dev/md2
1024 67.1 MB/s
2048 69.1 MB/s
4096 75.7 MB/s
8192 64.9 MB/s
16384 69.0 MB/s
Using RAID10 (offset2 layout) on 2 WD 2TB drives, and always the same input file.
Why would RAID10 performance be -poorer-?
If it weren't for mirroring, this wouldn't be worth it.
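(Note: dd's bs= only sets the transfer size per read; the device readahead is a separate per-device setting, and the page cache should be dropped between runs so repeats aren't served from RAM. A minimal sketch of a repeatable run, assuming the array is /dev/md2:

  blockdev --getra /dev/md2                    # current readahead, in 512-byte sectors
  sync && echo 3 > /proc/sys/vm/drop_caches    # flush the page cache before each run
  time dd if=somelarge.iso of=/dev/null bs=1M
)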
* Re: Disappointing RAID10 Performance
From: Majed B. @ 2009-10-16 17:36 UTC (permalink / raw)
To: adfas asd; +Cc: linux-raid
Have you tried the tips on this page?
http://linux-raid.osdl.org/index.php/Performance#Some_problem_solving_for_benchmarking
On Fri, Oct 16, 2009 at 8:32 PM, adfas asd <chimera_god@yahoo.com> wrote:
> I was hoping to get better performance with RAID10 than from the raw disks, but that's turned out to not be the case. Experimenting with the readahead buffer I get these bandwidths with the following command:
> # time dd if={somelarge}.iso of=/dev/null bs={readahead size}
>
> /dev/sd?
> 1024 71.3 MB/s
> 2048 71.2 MB/s
> 4096 77.7 MB/s
> 8192 69.4 MB/s
> 16384 76.6 MB/s
>
> /dev/md2
> 1024 67.1
> 2048 69.1
> 4096 75.7
> 8192 64.9
> 16384 69.0
>
> Using RAID10offset2 on 2 WD 2TB drives, and always the same input file.
>
> Why would RAID10 performance be -poorer-?
> If it weren't for mirroring, this wouldn't be worth it.
--
Majed B.
* Re: Disappointing RAID10 Performance
From: Rob Becker @ 2009-10-16 18:10 UTC (permalink / raw)
To: adfas asd; +Cc: linux-raid@vger.kernel.org
Hi,
What command did you use to create your raid-10? You might try
running iostat in parallel to see if the read_balancer is properly
balancing the reads between the two disks.
I've seen situations where raid-10 on sequential reads will pretty
much always prefer the same disk and barely use the mirror.
One thing we looked at was changing to the far layout, which helped
a bit.
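A minimal sketch of that check (the device names are only an example; point
iostat at the actual array members):

  dd if=/path/to/large.iso of=/dev/null bs=1M &   # sequential read in the background
  iostat -d -m 2 /dev/sda /dev/sdb                # per-disk MB/s every 2 seconds

If the read balancer is doing its job, both members should show read traffic;
if only one does, all the reads are going to a single disk.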
--Rob
On Oct 16, 2009, at 10:32 AM, adfas asd wrote:
> I was hoping to get better performance with RAID10 than from the raw
> disks, but that's turned out to not be the case. Experimenting with
> the readahead buffer I get these bandwidths with the following
> command:
> # time dd if={somelarge}.iso of=/dev/null bs={readahead size}
>
> /dev/sd?
> 1024 71.3 MB/s
> 2048 71.2 MB/s
> 4096 77.7 MB/s
> 8192 69.4 MB/s
> 16384 76.6 MB/s
>
> /dev/md2
> 1024 67.1
> 2048 69.1
> 4096 75.7
> 8192 64.9
> 16384 69.0
>
> Using RAID10offset2 on 2 WD 2TB drives, and always the same input
> file.
>
> Why would RAID10 performance be -poorer-?
> If it weren't for mirroring, this wouldn't be worth it.
* Re: Disappointing RAID10 Performance
From: Tomasz Chmielewski @ 2009-10-16 18:30 UTC (permalink / raw)
To: adfas asd; +Cc: linux-raid
adfas asd wrote:
> I was hoping to get better performance with RAID10 than from the raw disks, but that's turned out to not be the case. Experimenting with the readahead buffer I get these bandwidths with the following command:
> # time dd if={somelarge}.iso of=/dev/null bs={readahead size}
>
> /dev/sd?
> 1024 71.3 MB/s
> 2048 71.2 MB/s
> 4096 77.7 MB/s
> 8192 69.4 MB/s
> 16384 76.6 MB/s
>
> /dev/md2
> 1024 67.1
> 2048 69.1
> 4096 75.7
> 8192 64.9
> 16384 69.0
>
> Using RAID10offset2 on 2 WD 2TB drives, and always the same input file.
>
> Why would RAID10 performance be -poorer-?
If you only use your RAID-10 array for a single "dd if=bigfile
of=/dev/null" then yes, it does not give you much over mirroring.
If you start using your drives for two "dd if=bigfile[12] of=/dev/null"
at the same time, you will notice the difference.
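That is, two separate reads running concurrently, roughly:

  dd if=bigfile1 of=/dev/null bs=1M &
  dd if=bigfile2 of=/dev/null bs=1M &
  wait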
--
Tomasz Chmielewski
http://wpkg.org
* Re: Disappointing RAID10 Performance
From: adfas asd @ 2009-10-16 21:05 UTC (permalink / raw)
To: linux-raid
--- On Fri, 10/16/09, Rob Becker <Rob.Becker@riverbed.com> wrote:
> What command did you use to create your
> raid-10?
/     : mdadm --create /dev/md0 --level=raid1 --chunk=256 --raid-disks=2 missing /dev/sdb1
swap  : mdadm --create /dev/md1 --level=raid10 --layout=o2 --chunk=256 --raid-disks=2 missing /dev/sdb2
/home : mdadm --create /dev/md2 --level=raid10 --layout=o2 --chunk=1024 --raid-disks=2 missing /dev/sdb3
... then copied files and later added the sda parts to the array. (RAID conversion on live system)
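A sketch of that last step, assuming the sda partition numbers mirror the sdb
layout:

  mdadm /dev/md0 --add /dev/sda1
  mdadm /dev/md1 --add /dev/sda2
  mdadm /dev/md2 --add /dev/sda3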
> You might try
> running iostat in parallel to see if the read_balancer is
> properly
> balancing the reads between the two disks.
Don't understand this as I'm a bit of a n00b...
--- On Fri, 10/16/09, Tomasz Chmielewski <mangoo@wpkg.org> wrote:
> If you only use your RAID-10 array for a single "dd
> if=bigfile of=/dev/null" then yes, it does not give you much
> over mirroring.
>
> If you start using your drives for two "dd if=bigfile[12]
> of=/dev/null" at the same time, you will notice the
> difference.
OK, so it was a fallacy to think this would help with large files, unless more than one is involved.
* Re: Disappointing RAID10 Performance
From: Christopher Chen @ 2009-10-16 21:20 UTC (permalink / raw)
To: adfas asd; +Cc: linux-raid
Sorry, I've looked at this again.
It's plausible that md's read balancing would increase the read speed, but it
appears that this is not the case. Are you using LVM? And is the readahead for
the block devices set appropriately? Distro? Version of mdadm?
cc
On Fri, Oct 16, 2009 at 10:32 AM, adfas asd <chimera_god@yahoo.com> wrote:
> I was hoping to get better performance with RAID10 than from the raw disks, but that's turned out to not be the case. Experimenting with the readahead buffer I get these bandwidths with the following command:
> # time dd if={somelarge}.iso of=/dev/null bs={readahead size}
>
> /dev/sd?
> 1024 71.3 MB/s
> 2048 71.2 MB/s
> 4096 77.7 MB/s
> 8192 69.4 MB/s
> 16384 76.6 MB/s
>
> /dev/md2
> 1024 67.1
> 2048 69.1
> 4096 75.7
> 8192 64.9
> 16384 69.0
>
> Using RAID10offset2 on 2 WD 2TB drives, and always the same input file.
>
> Why would RAID10 performance be -poorer-?
> If it weren't for mirroring, this wouldn't be worth it.
--
Chris Chen <muffaleta@gmail.com>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall
* Re: Disappointing RAID10 Performance
From: Keld Jørn Simonsen @ 2009-10-16 21:27 UTC (permalink / raw)
To: adfas asd; +Cc: linux-raid
On Fri, Oct 16, 2009 at 10:32:14AM -0700, adfas asd wrote:
> I was hoping to get better performance with RAID10 than from the raw disks, but that's turned out to not be the case. Experimenting with the readahead buffer I get these bandwidths with the following command:
> # time dd if={somelarge}.iso of=/dev/null bs={readahead size}
>
> /dev/sd?
> 1024 71.3 MB/s
> 2048 71.2 MB/s
> 4096 77.7 MB/s
> 8192 69.4 MB/s
> 16384 76.6 MB/s
>
> /dev/md2
> 1024 67.1
> 2048 69.1
> 4096 75.7
> 8192 64.9
> 16384 69.0
>
> Using RAID10offset2 on 2 WD 2TB drives, and always the same input file.
>
> Why would RAID10 performance be -poorer-?
> If it weren't for mirroring, this wouldn't be worth it.
Try layout=far - it is good for reading big files, maybe a factor of 2 faster
than layout=offset.
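A sketch of what the /home array would look like with the far layout - note
that md cannot convert an existing offset array in place, so this means
backing up and recreating /dev/md2 (partition names taken from the create
commands earlier in the thread):

  mdadm --create /dev/md2 --level=raid10 --layout=f2 --chunk=1024 --raid-devices=2 /dev/sda3 /dev/sdb3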
More on performance:
http://linux-raid.osdl.org/index.php/Performance
Best regards
keld
* RE: Disappointing RAID10 Performance
From: Leslie Rhorer @ 2009-10-17 6:03 UTC (permalink / raw)
To: linux-raid
> # time dd if={somelarge}.iso of=/dev/null bs={readahead size}
>
> /dev/sd?
> 1024 71.3 MB/s
> 2048 71.2 MB/s
> 4096 77.7 MB/s
> 8192 69.4 MB/s
> 16384 76.6 MB/s
>
> /dev/md2
> 1024 67.1
> 2048 69.1
> 4096 75.7
> 8192 64.9
> 16384 69.0
>
> Using RAID10offset2 on 2 WD 2TB drives, and always the same input file.
RAID10 on 2 drives? Are you striping partitions on a single drive?
You won't get much of a performance boost that way.
> Why would RAID10 performance be -poorer-?
> If it weren't for mirroring, this wouldn't be worth it.
Is this with all drives local, or are you mirroring across a Gig-E link, as
I believe you said you were going to do?
* Re: Disappointing RAID10 Performance
From: Asdo @ 2009-10-17 16:12 UTC (permalink / raw)
To: Tomasz Chmielewski; +Cc: linux-raid
Tomasz Chmielewski wrote:
> If you start using your drives for two "dd if=bigfile[12]
> of=/dev/null" at the same time, you will notice the difference.
OT: Does this dd line really work? I did see it elsewhere already, but it
doesn't appear to work for me: it seems to see only the last if=. I don't
think multiple input or output files are supported.
For this kind of test I was doing a for loop with background dd commands
(trailing &), IIRC.
This would be a good dd feature to simplify linux-raid performance
testing though.
* Re: Disappointing RAID10 Performance
From: Tomasz Chmielewski @ 2009-10-17 16:20 UTC (permalink / raw)
To: Asdo; +Cc: linux-raid
Asdo wrote:
> Tomasz Chmielewski wrote:
>> If you start using your drives for two "dd if=bigfile[12]
>> of=/dev/null" at the same time, you will notice the difference.
>
> OT: Does this dd line really work? I did see it elsewhere already but
> doesn't appear to work for me: seems to see only the last if=.
I don't think it does work.
You see it everywhere because it's faster to write, and everyone gets the
idea of what it's all about.
--
Tomasz Chmielewski
http://wpkg.org
* Re: Disappointing RAID10 Performance
From: Majed B. @ 2009-10-17 21:35 UTC (permalink / raw)
To: Asdo; +Cc: linux-raid
dd supports writing to multiple devices at once, but not reading from
multiple inputs.
On Sat, Oct 17, 2009 at 7:12 PM, Asdo <asdo@shiftmail.org> wrote:
> Tomasz Chmielewski wrote:
>>
>> If you start using your drives for two "dd if=bigfile[12] of=/dev/null" at
>> the same time, you will notice the difference.
>
> OT: Does this dd line really work? I did see it elsewhere already but
> doesn't appear to work for me: seems to see only the last if=. I don't think
> multiple input or output files are supported.
> For this kind of tests I was doing a for loop with background dd comands (
> trailing & ) IIRC
> This would be a good dd feature to simplify linux-raid performance testing
> though.
--
Majed B.
* Re: Disappointing RAID10 Performance
From: Michael Tokarev @ 2009-10-18 11:01 UTC (permalink / raw)
To: Majed B.; +Cc: Asdo, linux-raid
Majed B. wrote:
> dd supports writing to multiple devices at once, but not reading from
> multiple inputs.
Count me intrigued. I re-read the Linux dd manual page, the POSIX docs about
dd, and the Linux (coreutils) dd sources, and I don't see the `multiple
output' thing. Care to show how to do that?
Thanks.
/mjt
* Re: Disappointing RAID10 Performance
From: adfas asd @ 2009-10-18 12:06 UTC (permalink / raw)
To: linux-raid
--- On Fri, 10/16/09, Christopher Chen <muffaleta@gmail.com> wrote:
> It's plausible that md's read balancing would increase the read speed,
> but it appears that this is not the case. Are you using LVM? And is the
> readahead for the block devices set appropriately? Distro? Version of
> mdadm?
No LVM, and readahead on sd? drives set for best performance. Debian Testing with mdadm 3.0-2.
* Re: Disappointing RAID10 Performance
From: adfas asd @ 2009-10-18 12:14 UTC (permalink / raw)
To: linux-raid
> > Yeah two drives. I didn't know I could have a RAID10
> > with 3 drives? I thought I always had to add 2? I'm
> > afraid to ask how it would set up an offset2 layout with
> > 3...
> Well you have to put it up on a
> whiteboard. It means that every data
> block exists as a "mirror" on any two drives. Hrm. It's
> fun. Draw it
> out :)
But to add a drive I have to tear down the array and rebuild it, right? That
becomes impossible with the amount of data in larger arrays.
And I guess this isn't really relevant to my system, where one side of the
mirror is in the HTPC and the other is in the garage. A fire in either place
would destroy all the data in a system with an odd number of drives.
* RE: Disappointing RAID10 Performance
From: adfas asd @ 2009-10-18 12:17 UTC (permalink / raw)
To: linux-raid
--- On Fri, 10/16/09, Leslie Rhorer <lrhorer@satx.rr.com> wrote:
> > Why would RAID10 performance be -poorer-?
> > If it weren't for mirroring, this wouldn't be worth
> it.
>
> Is this with all drives local, or are you mirroring across
> a Gig-E link, as
> I believe you said you were going to do?
Do not have the remote array running yet, so all drives are local.
* Re: Disappointing RAID10 Performance
From: Bill Davidsen @ 2009-10-21 23:26 UTC (permalink / raw)
To: adfas asd; +Cc: linux-raid
adfas asd wrote:
> --- On Fri, 10/16/09, Rob Becker <Rob.Becker@riverbed.com> wrote:
>
>> What command did you use to create your
>> raid-10?
>>
>
> /
> mdadm --create /dev/md0 --level=raid1 --chunk=256 --raid-disks=2 missing /dev/sdb1
> swap
> mdadm --create /dev/md1 --level=raid10 --layout=o2 --chunk=256 --raid-disks=2 missing /dev/sdb2
> /home
> mdadm --create /dev/md2 --level=raid10 --layout=o2 --chunk=1024 --raid-disks=2 missing /dev/sdb3
> ... then copied files and later added the sda parts to the array. (RAID conversion on live system)
>
>
No wonder it's slow - you want two far copies; with only two drives this
offset layout is more or less plain mirroring. Using a large buffer size also
helps; you hurt your performance by limiting readahead. You can also use the
'blockdev' command (--setra) to increase the readahead on the array.
Just going to the far layout should about double your speed; the other things
may help more.
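For example (a sketch - the value is in 512-byte sectors, so 65536 means 32 MB
of readahead; tune to taste):

  blockdev --getra /dev/md2        # show current readahead
  blockdev --setra 65536 /dev/md2  # raise it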
>
>> You might try
>> running iostat in parallel to see if the read_balancer is
>> properly
>> balancing the reads between the two disks.
>>
>
> Don't understand this as I'm a bit of a n00b...
>
>
> --- On Fri, 10/16/09, Tomasz Chmielewski <mangoo@wpkg.org> wrote:
>
>> If you only use your RAID-10 array for a single "dd
>> if=bigfile of=/dev/null" then yes, it does not give you much
>> over mirroring.
>>
>> If you start using your drives for two "dd if=bigfile[12]
>> of=/dev/null" at the same time, you will notice the
>> difference.
>>
>
> OK so it was a fallacy to think this would help with large files, unless more than one is involved.
>
You are misconfigured.
--
Bill Davidsen <davidsen@tmr.com>
Unintended results are the well-earned reward for incompetence.
Thread overview: 16+ messages
2009-10-16 17:32 Disappointing RAID10 Performance adfas asd
2009-10-16 17:36 ` Majed B.
2009-10-16 18:10 ` Rob Becker
2009-10-16 21:05 ` adfas asd
2009-10-21 23:26 ` Bill Davidsen
2009-10-16 18:30 ` Tomasz Chmielewski
2009-10-17 16:12 ` Asdo
2009-10-17 16:20 ` Tomasz Chmielewski
2009-10-17 21:35 ` Majed B.
2009-10-18 11:01 ` Michael Tokarev
2009-10-16 21:20 ` Christopher Chen
2009-10-18 12:06 ` adfas asd
2009-10-16 21:27 ` Keld Jørn Simonsen
2009-10-17 6:03 ` Leslie Rhorer
2009-10-18 12:17 ` adfas asd
[not found] <7bc80d500910161433l49cbf599m80310082b6fdaa97@mail.gmail.com>
2009-10-18 12:14 ` adfas asd