* non-optimal RAID 5 performance with 8 drive array
@ 2005-03-01 13:24 Nicola Fankhauser
From: Nicola Fankhauser @ 2005-03-01 13:24 UTC
To: RAID Linux
Hi all
I have a RAID 5 array consisting of 8 300GB Maxtor SATA drives
(6B300S0), hooked up to an Asus A8N-SLI deluxe motherboard with 4 NForce4
SATA ports and 4 SiI 3114 ports.
see [3] for a description of what I did and more details.
each single disk in the array gives a read performance (tested with dd)
of about 62MB/s (when reading the first 4GiB of a disk).
the array (reading the first 8GiB from /dev/md0 with dd, bs=1024K)
performs at about 174MiB/s; accessing the array through LVM2 (still with
bs=1024K) gives only 86MiB/s.
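For reference, the tests were roughly of the following form (the disk and
LVM2 volume device names here are only examples):

  dd if=/dev/sda of=/dev/null bs=1024K count=4096      # one disk, first 4GiB
  dd if=/dev/md0 of=/dev/null bs=1024K count=8192      # whole array, first 8GiB
  dd if=/dev/vg0/lv0 of=/dev/null bs=1024K count=8192  # same 8GiB through LVM2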
my first conclusion was to leave LVM2 out of the loop and put a file
system directly on /dev/md0.
however, after reading Neil Brown's article [1], I got the impression
that my system should perform better, given that each disk has a
transfer rate of at least 37MiB/s and at most 65MiB/s [2].
or is there a flaw in my reasoning and the current performance is
normal? or are the controllers saturating after all?
any suggestions are welcome.
regards
nicola
[1]: http://cgi.cse.unsw.edu.au/~neilb/01102979338
[2]: http://www.storagereview.com/articles/200410/200410087B300S0-2_2.html
[3]: http://variant.ch/phpwiki/WikiBlog/2005-02-27
* Re: non-optimal RAID 5 performance with 8 drive array
@ 2005-03-01 17:53 ` Robin Bowes
From: Robin Bowes @ 2005-03-01 17:53 UTC
To: linux-raid
Nicola Fankhauser wrote:
> see [3] for a description of what I did and more details.
Hi Nicola,
I read your description with interest.
I thought I'd try some speed tests myself but dd doesn't seem to work
the same for me (on FC3). Here's what I get:
[root@dude test]# dd if=/dev/zero of=/home/test/test.tmp bs=4096
count=100000
100000+0 records in
100000+0 records out
Notice there is no timing information.
For the read test:
[root@dude test]# dd of=/dev/null if=/home/test/test.tmp bs=4096
100000+0 records in
100000+0 records out
Again, no timing information.
Anyone know if this is a quirk of the FC3 version of dd?
R.
--
http://robinbowes.com
* Re: non-optimal RAID 5 performance with 8 drive array
@ 2005-03-01 18:04 ` Roberto Fichera
From: Roberto Fichera @ 2005-03-01 18:04 UTC
To: Robin Bowes; +Cc: linux-raid
At 18.53 01/03/2005, Robin Bowes wrote:
>Nicola Fankhauser wrote:
>>see [3] for a description of what I did and more details.
>
>Hi Nicola,
>
>I read your description with interest.
>
>I thought I'd try some speed tests myself but dd doesn't seem to work the
>same for me (on FC3). Here's what I get:
>
>[root@dude test]# dd if=/dev/zero of=/home/test/test.tmp bs=4096 count=100000
>100000+0 records in
>100000+0 records out
>
>Notice there is no timing information.
>
>For the read test:
>
>[root@dude test]# dd of=/dev/null if=/home/test/test.tmp bs=4096
>100000+0 records in
>100000+0 records out
>
>Again, no timing information.
>
>Anyone know if this is a quirk of the FC3 version of dd?
you have to use:
time dd if=/dev/zero of=/home/test/test.tmp bs=4096 count=100000
>R.
>--
>http://robinbowes.com
Roberto Fichera.
* Re: non-optimal RAID 5 performance with 8 drive array
@ 2005-03-01 18:12 ` Robin Bowes
From: Robin Bowes @ 2005-03-01 18:12 UTC
To: linux-raid
Roberto Fichera wrote:
> At 18.53 01/03/2005, Robin Bowes wrote:
>> [root@dude test]# dd if=/dev/zero of=/home/test/test.tmp bs=4096
>> count=100000
>> 100000+0 records in
>> 100000+0 records out
>>
>> Notice there is no timing information.
>
> you have to use:
>
> time dd if=/dev/zero of=/home/test/test.tmp bs=4096 count=100000
Roberto,
That's not what I meant - I know I can use "time" to get CPU usage
information.
What I meant was that I don't get the disk speed summary that Nicola got, e.g.:
me@beast:$ dd if=/dev/zero of=/storagearray/test.tmp bs=4096
1238865+0 records in
1238864+0 records out
5074386944 bytes transferred in 63.475536 seconds (79942404 bytes/sec)
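(In the meantime the rate can be worked out by hand from time's wall-clock
figure; the numbers below are made up purely for illustration:

  $ time dd if=/dev/zero of=/home/test/test.tmp bs=4096 count=100000
  100000+0 records in
  100000+0 records out
  real    0m5.12s

  100000 * 4096 bytes / 5.12 s = 80000000 bytes/sec, i.e. roughly 76 MiB/s.)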
R.
--
http://robinbowes.com
* Re: non-optimal RAID 5 performance with 8 drive array
@ 2005-03-01 18:41 ` Roberto Fichera
From: Roberto Fichera @ 2005-03-01 18:41 UTC
To: Robin Bowes; +Cc: linux-raid
At 19.12 01/03/2005, Robin Bowes wrote:
>Roberto Fichera wrote:
>>At 18.53 01/03/2005, Robin Bowes wrote:
>>>[root@dude test]# dd if=/dev/zero of=/home/test/test.tmp bs=4096
>>>count=100000
>>>100000+0 records in
>>>100000+0 records out
>>>
>>>Notice there is no timing information.
>>you have to use:
>>time dd if=/dev/zero of=/home/test/test.tmp bs=4096 count=100000
>
>Roberto,
>
>That's not what I meant - I know I can use "time" to get CPU usage
>information.
>
>What I meant was that I don't get the disk speed summary that Nicola got, e.g.:
>
>me@beast:$ dd if=/dev/zero of=/storagearray/test.tmp bs=4096
>1238865+0 records in
>1238864+0 records out
>5074386944 bytes transferred in 63.475536 seconds (79942404 bytes/sec)
Oops!!! Sorry! Maybe some recent coreutils versions have this nice feature :-)!
>R.
>--
>http://robinbowes.com
Roberto Fichera.
* dd with bytes/sec patch [was: non-optimal RAID 5 performance with 8 drive array]
@ 2005-03-02 23:08 ` Tim Moore
From: Tim Moore @ 2005-03-02 23:08 UTC
To: linux-raid
> 1. Grab coreutils 5.2.1 from gnu.org and the debian patch from
> http://packages.debian.org/unstable/base/coreutils
>
> 2. Extract the 2 dd patches which start with the line
>
> +--- coreutils-5.0/src/dd.c.orig 2003-02-07 07:39:20.000000000 -0500
>
> through, but not including, the line
>
> --- coreutils-5.2.1.orig/debian/patches/19_ipv6
>
> You can see all the patch headers with
> zcat coreutils_5.2.1-2.diff.gz | grep '^---'
>
> 3. Get rid of the extra leading '+' (sed 's/^\+//'). You now have a
> working dd patch; a scripted version of steps 2-3 is sketched after the
> transcript below.
>
> 4. Apply the patch and compile:
>
> [tim@tim-linux ~/Kits]$ cd coreutils-5.2.1
> [tim@tim-linux coreutils-5.2.1]$ patch -p1 < ../coreutils-5.2.1.dd-performance-counter.patch
> patching file src/dd.c
> Hunk #1 succeeded at 149 (offset -1 lines).
> Hunk #2 succeeded at 377 (offset 11 lines).
> Hunk #3 succeeded at 380 (offset -1 lines).
> Hunk #4 succeeded at 494 (offset 11 lines).
> Hunk #5 succeeded at 1069 (offset -2 lines).
> Hunk #6 succeeded at 1144 (offset 11 lines).
> Hunk #7 succeeded at 1166 with fuzz 2 (offset -2 lines).
> Hunk #8 succeeded at 1268 (offset 12 lines).
> patching file tests/dd/skip-seek
> Hunk #1 succeeded at 20 (offset -1 lines).
> [tim@tim-linux coreutils-5.2.1]$ ./configure -q
> checking how to get filesystem space usage...
> config.status: creating po/POTFILES
> config.status: creating po/Makefile
> [tim@tim-linux coreutils-5.2.1]$ /usr/bin/time make -j2 > /dev/null
> 26.17user 2.23system 0:30.16elapsed 94%CPU (0avgtext+0avgdata 0maxresident)k
> 0inputs+0outputs (410381major+274056minor)pagefaults 0swaps
> [tim@tim-linux coreutils-5.2.1]$ ls -l src/dd
> -rwxrwxr-x 1 tim tim 68574 Mar 2 11:18 src/dd
> [tim@tim-linux coreutils-5.2.1]$ su
> Password:
> [tim@tim-linux coreutils-5.2.1]# src/dd if=/dev/hda1 of=/dev/zero bs=4k
> 126504+0 records in
> 126504+0 records out
> 518160384 bytes transferred in 21.705705 seconds (23872083 bytes/sec)
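Steps 2-3 above could be scripted roughly like this (an untested sketch; the
marker lines and the patch file name are taken from the instructions and
transcript above and may need adjusting to the actual diff):

  zcat coreutils_5.2.1-2.diff.gz \
    | awk '/^[+]--- coreutils-5\.0\/src\/dd\.c\.orig/ { p = 1 }
           /^--- coreutils-5\.2\.1\.orig\/debian\/patches\/19_ipv6/ { p = 0 }
           p' \
    | sed 's/^+//' > coreutils-5.2.1.dd-performance-counter.patch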
--
* Re: non-optimal RAID 5 performance with 8 drive array
@ 2005-03-04 22:23 ` Matthias Julius
From: Matthias Julius @ 2005-03-04 22:23 UTC
To: linux-raid
Nicola Fankhauser <nicola.fankhauser@variant.ch> writes:
> Hi all
>
> I have a RAID 5 array consisting of 8 300GB Maxtor SATA drives
> (6B300S0), hooked up to an Asus A8N-SLI deluxe motherboard with 4
> NForce4 SATA ports and 4 SiI 3114 ports.
>
> see [3] for a description of what I did and more details.
>
> each single disk in the array gives a read performance (tested with
> dd) of about 62MB/s (when reading the first 4GiB of a disk).
>
> the array (reading the first 8GiB from /dev/md0 with dd, bs=1024K)
> performs at about 174MiB/s; accessing the array through LVM2 (still
> with bs=1024K) gives only 86MiB/s.
>
> my first conclusion was to leave LVM2 out of the loop and put a file
> system directly on /dev/md0.
>
> however, after reading Neil Brown's article [1], I got the impression
> that my system should perform better, given that each disk has a
> transfer rate of at least 37MiB/s and at most 65MiB/s [2].
>
> or is there a flaw in my reasoning and the current performance is
> normal? or are the controllers saturating after all?
>
> any suggestions are welcome.
Maybe you should try to measure performance by reading from all drives
directly in parallel. That might reveal hardware bottlenecks. The SiI 3114
is almost certainly attached to the PCI bus and cannot possibly transfer
4 x 62 MB/s = 248 MB/s (the theoretical maximum of 32-bit/33 MHz PCI is
133 MB/s).
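Something along these lines would do it (the device names are only an
example; watch the aggregate throughput with vmstat or iostat while it runs):

  for d in /dev/sd[a-h]; do
      dd if=$d of=/dev/null bs=1024K count=1024 &
  done
  wait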
Matthias
* Re: non-optimal RAID 5 performance with 8 drive array
@ 2005-03-06 14:25 ` Stephan van Hienen
From: Stephan van Hienen @ 2005-03-06 14:25 UTC
To: Nicola Fankhauser; +Cc: RAID Linux
On Tue, 1 Mar 2005, Nicola Fankhauser wrote:
> the array (reading the first 8GiB from /dev/md0 with dd, bs=1024K) performs
> at about 174MiB/s; accessing the array through LVM2 (still with bs=1024K)
> gives only 86MiB/s.
Nicola,
which kernel are you using?
For comparison, 2.4 vs 2.6 performance on my machine (and the same pattern
on different machines), RAID 5 with 13 disks:
2.4:
  write 100MB/s
  read  140MB/s
2.6:
  write 100MB/s
  read  280MB/s
Thread overview: 10+ messages
2005-03-01 13:24 non-optimal RAID 5 performance with 8 drive array Nicola Fankhauser
2005-03-01 17:53 ` Robin Bowes
2005-03-01 18:04 ` Roberto Fichera
2005-03-01 18:12 ` Robin Bowes
2005-03-01 18:41 ` Roberto Fichera
2005-03-02 23:08 ` dd with bytes/sec patch [was: non-optimal RAID 5 performance with 8 drive array] Tim Moore
2005-03-04 22:23 ` non-optimal RAID 5 performance with 8 drive array Matthias Julius
2005-03-06 14:25 ` Stephan van Hienen
2005-03-06 14:43 ` Nicola Fankhauser
2005-03-06 17:09 ` Stephan van Hienen