* raid0 low performance
@ 2005-07-01 2:15 Ming Zhang
2005-07-01 2:28 ` Tyler
2005-07-01 12:55 ` Guy
0 siblings, 2 replies; 14+ messages in thread
From: Ming Zhang @ 2005-07-01 2:15 UTC (permalink / raw)
To: Linux RAID
I am seeing strangely low performance when running RAID0 with vanilla kernels
2.4.27/2.6.11.12.
My box is a 2.8GHz P4 with 1GB RAM, 8 400GB SATA disks, and a Marvell 8-port
controller running the Marvell 3.4.1 driver. I wrote a small program that
writes to the device sequentially and SYNCHRONOUSLY.
This is the performance of 1 disk; it looks fine.
1048576Bytes * 1024 : 55.466MB/s
524288Bytes * 2048 : 55.830MB/s
262144Bytes * 4096 : 55.782MB/s
131072Bytes * 8192 : 55.567MB/s
65536Bytes * 16384 : 55.926MB/s
32768Bytes * 32768 : 54.344MB/s
16384Bytes * 65536 : 41.415MB/s
8192Bytes * 65536 : 26.499MB/s
4096Bytes * 65536 : 15.110MB/s
2048Bytes * 65536 : 8.422MB/s
1024Bytes * 65536 : 4.318MB/s
But when running a 2-disk RAID0, there is only about a 10% improvement.
md3 : active raid0 sdb[1] sda[0]
781422592 blocks 64k chunks
1048576Bytes * 1024 : 67.300MB/s
524288Bytes * 2048 : 66.796MB/s
262144Bytes * 4096 : 65.728MB/s
131072Bytes * 8192 : 65.017MB/s
65536Bytes * 16384 : 59.223MB/s
32768Bytes * 32768 : 49.766MB/s
16384Bytes * 65536 : 39.162MB/s
8192Bytes * 65536 : 26.386MB/s
4096Bytes * 65536 : 15.084MB/s
2048Bytes * 65536 : 8.383MB/s
1024Bytes * 65536 : 4.303MB/s
And when using 4 disks, the speed is even slower!
md0 : active raid0 sdh[3] sdg[2] sdf[1] sde[0]
1562845184 blocks 64k chunks
1048576Bytes * 1024 : 58.032MB/s
524288Bytes * 2048 : 56.994MB/s
262144Bytes * 4096 : 58.289MB/s
131072Bytes * 8192 : 65.999MB/s
65536Bytes * 16384 : 59.723MB/s
32768Bytes * 32768 : 50.061MB/s
16384Bytes * 65536 : 38.689MB/s
8192Bytes * 65536 : 26.169MB/s
4096Bytes * 65536 : 15.169MB/s
2048Bytes * 65536 : 8.378MB/s
1024Bytes * 65536 : 4.287MB/s
Any hints on this?
* I do not know how to check the current PCI bus speed, so I am not sure
whether I am limited by that. It is a 64-bit card, but I am not sure whether it
is running at 66MHz. It should be, but I want to check to make sure.
* I tested each disk individually and every disk performs OK.
Thanks
Ming
* Re: raid0 low performance
2005-07-01 2:15 raid0 low performance Ming Zhang
@ 2005-07-01 2:28 ` Tyler
2005-07-01 2:57 ` John Madden
2005-07-01 12:32 ` Ming Zhang
2005-07-01 12:55 ` Guy
1 sibling, 2 replies; 14+ messages in thread
From: Tyler @ 2005-07-01 2:28 UTC (permalink / raw)
To: mingz; +Cc: Linux RAID
My guess at a glance would be that either the Marvell driver is at fault, or
it is simply software RAID performance... did you monitor the CPU
usage during the tests? Even PCI 32-bit/33MHz should be able to hit
133 MBytes per second, I believe. What are you writing *from*? /dev/zero,
or another drive that may not have a read speed high enough to keep up
with what is available?
Tyler.
Ming Zhang wrote:
>I am seeing strangely low performance when running RAID0 with vanilla kernels
>2.4.27/2.6.11.12.
>
>My box is a 2.8GHz P4 with 1GB RAM, 8 400GB SATA disks, and a Marvell 8-port
>controller running the Marvell 3.4.1 driver. I wrote a small program that
>writes to the device sequentially and SYNCHRONOUSLY.
>
>This is the performance of 1 disk; it looks fine.
>
> 1048576Bytes * 1024 : 55.466MB/s
> 524288Bytes * 2048 : 55.830MB/s
> 262144Bytes * 4096 : 55.782MB/s
> 131072Bytes * 8192 : 55.567MB/s
> 65536Bytes * 16384 : 55.926MB/s
> 32768Bytes * 32768 : 54.344MB/s
> 16384Bytes * 65536 : 41.415MB/s
> 8192Bytes * 65536 : 26.499MB/s
> 4096Bytes * 65536 : 15.110MB/s
> 2048Bytes * 65536 : 8.422MB/s
> 1024Bytes * 65536 : 4.318MB/s
>
>But when running a 2-disk RAID0, there is only about a 10% improvement.
>
>md3 : active raid0 sdb[1] sda[0]
> 781422592 blocks 64k chunks
> 1048576Bytes * 1024 : 67.300MB/s
> 524288Bytes * 2048 : 66.796MB/s
> 262144Bytes * 4096 : 65.728MB/s
> 131072Bytes * 8192 : 65.017MB/s
> 65536Bytes * 16384 : 59.223MB/s
> 32768Bytes * 32768 : 49.766MB/s
> 16384Bytes * 65536 : 39.162MB/s
> 8192Bytes * 65536 : 26.386MB/s
> 4096Bytes * 65536 : 15.084MB/s
> 2048Bytes * 65536 : 8.383MB/s
> 1024Bytes * 65536 : 4.303MB/s
>
>And when using 4 disks, the speed is even slower!
>md0 : active raid0 sdh[3] sdg[2] sdf[1] sde[0]
> 1562845184 blocks 64k chunks
> 1048576Bytes * 1024 : 58.032MB/s
> 524288Bytes * 2048 : 56.994MB/s
> 262144Bytes * 4096 : 58.289MB/s
> 131072Bytes * 8192 : 65.999MB/s
> 65536Bytes * 16384 : 59.723MB/s
> 32768Bytes * 32768 : 50.061MB/s
> 16384Bytes * 65536 : 38.689MB/s
> 8192Bytes * 65536 : 26.169MB/s
> 4096Bytes * 65536 : 15.169MB/s
> 2048Bytes * 65536 : 8.378MB/s
> 1024Bytes * 65536 : 4.287MB/s
>
>
>Any hints on this?
>
>* I do not know how to check the current PCI bus speed, so I am not sure
>whether I am limited by that. It is a 64-bit card, but I am not sure whether it
>is running at 66MHz. It should be, but I want to check to make sure.
>* I tested each disk individually and every disk performs OK.
>
>
>Thanks
>
>
>Ming
>
>
>
* Re: raid0 low performance
2005-07-01 2:28 ` Tyler
@ 2005-07-01 2:57 ` John Madden
2005-07-01 12:41 ` Ming Zhang
2005-07-01 12:32 ` Ming Zhang
1 sibling, 1 reply; 14+ messages in thread
From: John Madden @ 2005-07-01 2:57 UTC (permalink / raw)
Cc: mingz, Linux RAID, Tyler
> usage during the tests? Even PCI 32-bit/33MHz should be able to hit
> 133 MBytes per second, I believe. What are you writing *from*? /dev/zero,
~127MB/s is the theoretical bus speed of PCI/33. Actual throughput is much less
than that, of course.
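(The arithmetic: 33.33MHz x 4 bytes per transfer = 133MB/s decimal, which is
about 127MiB/s; a 64-bit/66MHz slot scales to four times that, roughly 533MB/s.)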
But you're not on a 33MHz bus; you should be at 66MHz or better (and info in /proc
should tell you). Given the lack of performance increase, though, I'm guessing
you're either pegging the bus or the CPU, not yet hitting the throughput limit of
the kernel itself. Does the program block in disk wait when running?
John
(Go Rams)
--
John Madden
UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
* Re: raid0 low performance
2005-07-01 2:28 ` Tyler
2005-07-01 2:57 ` John Madden
@ 2005-07-01 12:32 ` Ming Zhang
1 sibling, 0 replies; 14+ messages in thread
From: Ming Zhang @ 2005-07-01 12:32 UTC (permalink / raw)
To: Tyler; +Cc: Linux RAID
I also suspect this Marvell driver or chip and want to change to another
card. But before buying one, maybe I should gather some information on
which is the best 8-port or 12-port card. :P
CPU utilization is not high, < 20%.
I wrote a small program that allocates a buffer and writes that buffer to the
device again and again synchronously: no mmap, no glibc fwrite, just the basic
write system call. I can attach the code here if this list allows.
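For reference, a minimal sketch of that kind of benchmark, under the
assumption that the synchronous writes are done with O_SYNC (a hypothetical
reconstruction, not the actual program):

/* synctest.c -- hypothetical reconstruction of the benchmark described
 * above, NOT the original program: write one buffer to a device over and
 * over with O_SYNC so each write() completes at the device before returning.
 * usage: synctest <device> <block_size_bytes> <count> */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <device> <bs> <count>\n", argv[0]);
        return 1;
    }
    size_t bs = (size_t)atol(argv[2]);
    long count = atol(argv[3]), i;
    char *buf = malloc(bs);
    struct timeval t0, t1;

    if (!buf) { perror("malloc"); return 1; }
    memset(buf, 0, bs);
    int fd = open(argv[1], O_WRONLY | O_SYNC);  /* synchronous writes */
    if (fd < 0) { perror("open"); return 1; }

    gettimeofday(&t0, NULL);
    for (i = 0; i < count; i++)
        if (write(fd, buf, bs) != (ssize_t)bs) { perror("write"); return 1; }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%zuBytes * %ld : %.3fMB/s\n", bs, count,
           (double)bs * count / secs / (1024 * 1024));
    close(fd);
    free(buf);
    return 0;
}

A read variant would swap O_WRONLY for O_RDONLY and write() for read().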
I tested writing to /dev/null and to a RAM disk; both are pretty fast.
ming
On Thu, 2005-06-30 at 19:28 -0700, Tyler wrote:
> My guess at a glance would be that either the Marvell driver is at fault, or
> it is simply software RAID performance... did you monitor the CPU
> usage during the tests? Even PCI 32-bit/33MHz should be able to hit
> 133 MBytes per second, I believe. What are you writing *from*? /dev/zero,
> or another drive that may not have a read speed high enough to keep up
> with what is available?
>
> Tyler.
>
> Ming Zhang wrote:
>
> >I am seeing strangely low performance when running RAID0 with vanilla kernels
> >2.4.27/2.6.11.12.
> >
> >My box is a 2.8GHz P4 with 1GB RAM, 8 400GB SATA disks, and a Marvell 8-port
> >controller running the Marvell 3.4.1 driver. I wrote a small program that
> >writes to the device sequentially and SYNCHRONOUSLY.
> >
> >This is the performance of 1 disk; it looks fine.
> >
> > 1048576Bytes * 1024 : 55.466MB/s
> > 524288Bytes * 2048 : 55.830MB/s
> > 262144Bytes * 4096 : 55.782MB/s
> > 131072Bytes * 8192 : 55.567MB/s
> > 65536Bytes * 16384 : 55.926MB/s
> > 32768Bytes * 32768 : 54.344MB/s
> > 16384Bytes * 65536 : 41.415MB/s
> > 8192Bytes * 65536 : 26.499MB/s
> > 4096Bytes * 65536 : 15.110MB/s
> > 2048Bytes * 65536 : 8.422MB/s
> > 1024Bytes * 65536 : 4.318MB/s
> >
> >But when running a 2-disk RAID0, there is only about a 10% improvement.
> >
> >md3 : active raid0 sdb[1] sda[0]
> > 781422592 blocks 64k chunks
> > 1048576Bytes * 1024 : 67.300MB/s
> > 524288Bytes * 2048 : 66.796MB/s
> > 262144Bytes * 4096 : 65.728MB/s
> > 131072Bytes * 8192 : 65.017MB/s
> > 65536Bytes * 16384 : 59.223MB/s
> > 32768Bytes * 32768 : 49.766MB/s
> > 16384Bytes * 65536 : 39.162MB/s
> > 8192Bytes * 65536 : 26.386MB/s
> > 4096Bytes * 65536 : 15.084MB/s
> > 2048Bytes * 65536 : 8.383MB/s
> > 1024Bytes * 65536 : 4.303MB/s
> >
> >And when using 4 disks, the speed is even slower!
> >md0 : active raid0 sdh[3] sdg[2] sdf[1] sde[0]
> > 1562845184 blocks 64k chunks
> > 1048576Bytes * 1024 : 58.032MB/s
> > 524288Bytes * 2048 : 56.994MB/s
> > 262144Bytes * 4096 : 58.289MB/s
> > 131072Bytes * 8192 : 65.999MB/s
> > 65536Bytes * 16384 : 59.723MB/s
> > 32768Bytes * 32768 : 50.061MB/s
> > 16384Bytes * 65536 : 38.689MB/s
> > 8192Bytes * 65536 : 26.169MB/s
> > 4096Bytes * 65536 : 15.169MB/s
> > 2048Bytes * 65536 : 8.378MB/s
> > 1024Bytes * 65536 : 4.287MB/s
> >
> >
> >Any hints on this?
> >
> >* I do not know how to check the current PCI bus speed, so I am not sure
> >whether I am limited by that. It is a 64-bit card, but I am not sure whether it
> >is running at 66MHz. It should be, but I want to check to make sure.
> >* I tested each disk individually and every disk performs OK.
> >
> >
> >Thanks
> >
> >
> >Ming
> >
> >
> >
>
* Re: raid0 low performance
2005-07-01 2:57 ` John Madden
@ 2005-07-01 12:41 ` Ming Zhang
2005-07-01 12:54 ` John Madden
0 siblings, 1 reply; 14+ messages in thread
From: Ming Zhang @ 2005-07-01 12:41 UTC (permalink / raw)
To: John Madden; +Cc: Tyler, Linux RAID
On Thu, 2005-06-30 at 21:57 -0500, John Madden wrote:
> > usage during the tests? Even PCI 32-bit/33MHz should be able to hit
> > 133 MBytes per second, I believe. What are you writing *from*? /dev/zero,
>
> ~127MB/s is the theoretical bus speed of PCI/33. Actual throughput is much less
> than that, of course.
>
> But you're not on a 33MHz bus; you should be at 66MHz or better (and info in /proc
Which one? I cannot find it. :P Could you tell me? Thanks.
> should tell you). Given the lack of performance increase, though, I'm guessing
> you're either pegging the bus or the CPU, not yet hitting the throughput limit of
> the kernel itself. Does the program block in disk wait when running?
Yes, it writes synchronously, so it waits.
But my friend ran the same code on a 3ware 4-disk RAID0 and got 140MB/s
easily.
Similar box: P4 2.8GHz, 1GB RAM, Supermicro board.
>
> John
>
> (Go Rams)
>
>
>
* Re: raid0 low performance
2005-07-01 12:41 ` Ming Zhang
@ 2005-07-01 12:54 ` John Madden
2005-07-01 13:10 ` Ming Zhang
0 siblings, 1 reply; 14+ messages in thread
From: John Madden @ 2005-07-01 12:54 UTC (permalink / raw)
Cc: Tyler, Linux RAID, Ming Zhang
> Which one? I cannot find it. :P Could you tell me? Thanks.
lspci -vv
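(In the output, look for the 66MHz flag on the device's Status line and, on
PCI-X cards, at the PCI-X capability block.)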
> But my friend ran the same code on a 3ware 4-disk RAID0 and got 140MB/s
> easily.
>
> Similar box: P4 2.8GHz, 1GB RAM, Supermicro board.
Then I agree with the other reply -- suspect that Marvell controller or its driver
is at fault. Can you borrow your friend's controller? :)
John
--
John Madden
UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@ivytech.edu
* RE: raid0 low performance
2005-07-01 2:15 raid0 low performance Ming Zhang
2005-07-01 2:28 ` Tyler
@ 2005-07-01 12:55 ` Guy
2005-07-01 13:17 ` Ming Zhang
1 sibling, 1 reply; 14+ messages in thread
From: Guy @ 2005-07-01 12:55 UTC (permalink / raw)
To: mingz, 'Linux RAID'
I think you should test 2 or more disks at the same time. This would prove
whether your system can move more than about 60MB/s.
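(For example, start one copy of the write test against each disk at the same
time, then add up the per-disk rates.)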
My old P3-500 SMP can move at least 150MB/s. But I have 3 SCSI buses and 3
PCI buses.
Guy
> -----Original Message-----
> From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> owner@vger.kernel.org] On Behalf Of Ming Zhang
> Sent: Thursday, June 30, 2005 10:16 PM
> To: Linux RAID
> Subject: raid0 low performance
>
> I am seeing strangely low performance when running RAID0 with vanilla kernels
> 2.4.27/2.6.11.12.
>
> My box is a 2.8GHz P4 with 1GB RAM, 8 400GB SATA disks, and a Marvell 8-port
> controller running the Marvell 3.4.1 driver. I wrote a small program that
> writes to the device sequentially and SYNCHRONOUSLY.
>
> This is the performance of 1 disk; it looks fine.
>
> 1048576Bytes * 1024 : 55.466MB/s
> 524288Bytes * 2048 : 55.830MB/s
> 262144Bytes * 4096 : 55.782MB/s
> 131072Bytes * 8192 : 55.567MB/s
> 65536Bytes * 16384 : 55.926MB/s
> 32768Bytes * 32768 : 54.344MB/s
> 16384Bytes * 65536 : 41.415MB/s
> 8192Bytes * 65536 : 26.499MB/s
> 4096Bytes * 65536 : 15.110MB/s
> 2048Bytes * 65536 : 8.422MB/s
> 1024Bytes * 65536 : 4.318MB/s
>
> But when running a 2-disk RAID0, there is only about a 10% improvement.
>
> md3 : active raid0 sdb[1] sda[0]
> 781422592 blocks 64k chunks
> 1048576Bytes * 1024 : 67.300MB/s
> 524288Bytes * 2048 : 66.796MB/s
> 262144Bytes * 4096 : 65.728MB/s
> 131072Bytes * 8192 : 65.017MB/s
> 65536Bytes * 16384 : 59.223MB/s
> 32768Bytes * 32768 : 49.766MB/s
> 16384Bytes * 65536 : 39.162MB/s
> 8192Bytes * 65536 : 26.386MB/s
> 4096Bytes * 65536 : 15.084MB/s
> 2048Bytes * 65536 : 8.383MB/s
> 1024Bytes * 65536 : 4.303MB/s
>
> And when using 4 disks, the speed is even slower!
> md0 : active raid0 sdh[3] sdg[2] sdf[1] sde[0]
> 1562845184 blocks 64k chunks
> 1048576Bytes * 1024 : 58.032MB/s
> 524288Bytes * 2048 : 56.994MB/s
> 262144Bytes * 4096 : 58.289MB/s
> 131072Bytes * 8192 : 65.999MB/s
> 65536Bytes * 16384 : 59.723MB/s
> 32768Bytes * 32768 : 50.061MB/s
> 16384Bytes * 65536 : 38.689MB/s
> 8192Bytes * 65536 : 26.169MB/s
> 4096Bytes * 65536 : 15.169MB/s
> 2048Bytes * 65536 : 8.378MB/s
> 1024Bytes * 65536 : 4.287MB/s
>
>
> Any hints on this?
>
> * I do not know how to check the current PCI bus speed, so I am not sure
> whether I am limited by that. It is a 64-bit card, but I am not sure whether it
> is running at 66MHz. It should be, but I want to check to make sure.
> * I tested each disk individually and every disk performs OK.
>
>
> Thanks
>
>
> Ming
* Re: raid0 low performance
2005-07-01 12:54 ` John Madden
@ 2005-07-01 13:10 ` Ming Zhang
0 siblings, 0 replies; 14+ messages in thread
From: Ming Zhang @ 2005-07-01 13:10 UTC (permalink / raw)
To: John Madden; +Cc: Tyler, Linux RAID
On Fri, 2005-07-01 at 07:54 -0500, John Madden wrote:
> > Which one? I cannot find it. :P Could you tell me? Thanks.
>
> lspci -vv
Thanks, cool flag. I use lspci all the time but never noticed that -vv option.
So from now on I will add -vvvvvvv to every command. :P
> > But my friend ran the same code on a 3ware 4-disk RAID0 and got 140MB/s
> > easily.
> >
> > Similar box: P4 2.8GHz, 1GB RAM, Supermicro board.
>
> Then I agree with the other reply -- suspect that Marvell controller or its driver
> is at fault. Can you borrow your friend's controller? :)
>
He has Oracle on it; running a test like that would already take a long time
and some beers. :P
> John
>
>
>
>
>
* RE: raid0 low performance
2005-07-01 12:55 ` Guy
@ 2005-07-01 13:17 ` Ming Zhang
2005-07-01 13:54 ` Ming Zhang
2005-07-01 14:42 ` Ming Zhang
0 siblings, 2 replies; 14+ messages in thread
From: Ming Zhang @ 2005-07-01 13:17 UTC (permalink / raw)
To: Guy; +Cc: 'Linux RAID'
On Fri, 2005-07-01 at 08:55 -0400, Guy wrote:
> I think you should test 2 or more disks at the same time. This would prove
> whether your system can move more than about 60MB/s.
I tested, and it seems that 90MB/s is the max here.
If I run 2 copies of the program on 2 disks independently on the same card,
both of them are bound at 45MB/s.
If I run 3 copies on 3 disks independently on the same card, each of them is
bound at 27 or 28MB/s, so the total is still 90MB/s.
If I run 2 copies on 2 disks, but one disk is on the 4-port Marvell and the
other on the 8-port Marvell, then both are bound at 55MB/s, which is the disk
limitation.
So I guess the card or the bus has a problem. I will check the bus speed soon.
>
> My old P3-500 SMP can move at least 150MB/s. But I have 3 SCSI buses and 3
> PCI buses.
>
> Guy
>
ming
> > -----Original Message-----
> > From: linux-raid-owner@vger.kernel.org [mailto:linux-raid-
> > owner@vger.kernel.org] On Behalf Of Ming Zhang
> > Sent: Thursday, June 30, 2005 10:16 PM
> > To: Linux RAID
> > Subject: raid0 low performance
> >
> > I am seeing strangely low performance when running RAID0 with vanilla kernels
> > 2.4.27/2.6.11.12.
> >
> > My box is a 2.8GHz P4 with 1GB RAM, 8 400GB SATA disks, and a Marvell 8-port
> > controller running the Marvell 3.4.1 driver. I wrote a small program that
> > writes to the device sequentially and SYNCHRONOUSLY.
> >
> > This is the performance of 1 disk; it looks fine.
> >
> > 1048576Bytes * 1024 : 55.466MB/s
> > 524288Bytes * 2048 : 55.830MB/s
> > 262144Bytes * 4096 : 55.782MB/s
> > 131072Bytes * 8192 : 55.567MB/s
> > 65536Bytes * 16384 : 55.926MB/s
> > 32768Bytes * 32768 : 54.344MB/s
> > 16384Bytes * 65536 : 41.415MB/s
> > 8192Bytes * 65536 : 26.499MB/s
> > 4096Bytes * 65536 : 15.110MB/s
> > 2048Bytes * 65536 : 8.422MB/s
> > 1024Bytes * 65536 : 4.318MB/s
> >
> > But when running a 2-disk RAID0, there is only about a 10% improvement.
> >
> > md3 : active raid0 sdb[1] sda[0]
> > 781422592 blocks 64k chunks
> > 1048576Bytes * 1024 : 67.300MB/s
> > 524288Bytes * 2048 : 66.796MB/s
> > 262144Bytes * 4096 : 65.728MB/s
> > 131072Bytes * 8192 : 65.017MB/s
> > 65536Bytes * 16384 : 59.223MB/s
> > 32768Bytes * 32768 : 49.766MB/s
> > 16384Bytes * 65536 : 39.162MB/s
> > 8192Bytes * 65536 : 26.386MB/s
> > 4096Bytes * 65536 : 15.084MB/s
> > 2048Bytes * 65536 : 8.383MB/s
> > 1024Bytes * 65536 : 4.303MB/s
> >
> > And when using 4 disks, the speed is even slower!
> > md0 : active raid0 sdh[3] sdg[2] sdf[1] sde[0]
> > 1562845184 blocks 64k chunks
> > 1048576Bytes * 1024 : 58.032MB/s
> > 524288Bytes * 2048 : 56.994MB/s
> > 262144Bytes * 4096 : 58.289MB/s
> > 131072Bytes * 8192 : 65.999MB/s
> > 65536Bytes * 16384 : 59.723MB/s
> > 32768Bytes * 32768 : 50.061MB/s
> > 16384Bytes * 65536 : 38.689MB/s
> > 8192Bytes * 65536 : 26.169MB/s
> > 4096Bytes * 65536 : 15.169MB/s
> > 2048Bytes * 65536 : 8.378MB/s
> > 1024Bytes * 65536 : 4.287MB/s
> >
> >
> > Any hints on this?
> >
> > * I do not know how to check the current PCI bus speed, so I am not sure
> > whether I am limited by that. It is a 64-bit card, but I am not sure whether it
> > is running at 66MHz. It should be, but I want to check to make sure.
> > * I tested each disk individually and every disk performs OK.
> >
> >
> > Thanks
> >
> >
> > Ming
>
>
* RE: raid0 low performance
2005-07-01 13:17 ` Ming Zhang
@ 2005-07-01 13:54 ` Ming Zhang
2005-07-05 0:13 ` Mark Hahn
2005-07-01 14:42 ` Ming Zhang
1 sibling, 1 reply; 14+ messages in thread
From: Ming Zhang @ 2005-07-01 13:54 UTC (permalink / raw)
To: Guy; +Cc: 'Linux RAID'
On Fri, 2005-07-01 at 09:17 -0400, Ming Zhang wrote:
> On Fri, 2005-07-01 at 08:55 -0400, Guy wrote:
> So I guess the card or the bus has a problem. I will check the bus speed soon.
>
Here is the lspci output:
# ./lspci
00:00.0 Class 0600: 8086:2578 (rev 02)
00:03.0 Class 0604: 8086:257b (rev 02)
00:1c.0 Class 0604: 8086:25ae (rev 02)
00:1d.0 Class 0c03: 8086:25a9 (rev 02)
00:1d.1 Class 0c03: 8086:25aa (rev 02)
00:1d.4 Class 0880: 8086:25ab (rev 02)
00:1d.5 Class 0800: 8086:25ac (rev 02)
00:1d.7 Class 0c03: 8086:25ad (rev 02)
00:1e.0 Class 0604: 8086:244e (rev 0a)
00:1f.0 Class 0601: 8086:25a1 (rev 02)
00:1f.1 Class 0101: 8086:25a2 (rev 02)
00:1f.2 Class 0101: 8086:25a3 (rev 02)
00:1f.3 Class 0c05: 8086:25a4 (rev 02)
01:01.0 Class 0200: 8086:1075
02:01.0 Class 0100: 11ab:5081 (rev 03)
02:03.0 Class 0100: 9005:00c0 (rev 01)
02:03.1 Class 0100: 9005:00c0 (rev 01)
02:04.0 Class 0104: 11ab:5041
03:09.0 Class 0300: 1002:4752 (rev 27)
03:0a.0 Class 0200: 8086:1076
I checked: 11ab is Marvell, so 02:01.0 (5081) and 02:04.0 (5041) are the
8-port and 4-port Marvell controllers.
PS: what does this 02:01.0 mean?
# ./lspci -t
-[00]-+-00.0
      +-03.0-[01]----01.0
      +-1c.0-[02]--+-01.0
      |            +-03.0
      |            +-03.1
      |            \-04.0
      +-1d.0
      +-1d.1
      +-1d.4
      +-1d.5
      +-1d.7
      +-1e.0-[03]--+-09.0
      |            \-0a.0
      +-1f.0
      +-1f.1
      +-1f.2
      \-1f.3
And why does 01.0 appear twice here, with one instance sharing a bus with 04.0?
Here is the verbose output; it seems they are at 66MHz and can do 133. :P
# ./lspci -vv -d 11ab:
02:01.0 Class 0100: 11ab:5081 (rev 03)
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 32, cache line size 08
        Interrupt: pin A routed to IRQ 24
        Region 0: Memory at fa000000 (64-bit, non-prefetchable) [size=512K]
        Capabilities: [40] Power Management version 2
                Flags: PMEClk+ DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable-
                Address: 0000000000000000  Data: 0000
        Capabilities: [60] PCI-X non-bridge device.
                Command: DPERE- ERO- RBC=0 OST=3
                Status: Bus=0 Dev=0 Func=0 64bit- 133MHz- SCD- USC-, DC=simple, DMMRBC=0, DMOST=0, DMCRS=0, RSCEM-
02:04.0 Class 0104: 11ab:5041
        Subsystem: 15d9:5180
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B-
        Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
        Latency: 32, cache line size 08
        Interrupt: pin A routed to IRQ 27
        Region 0: Memory at fa080000 (64-bit, non-prefetchable) [size=512K]
        Capabilities: [40] Power Management version 2
                Flags: PMEClk+ DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [50] Message Signalled Interrupts: 64bit+ Queue=0/0 Enable-
                Address: 0000000000000000  Data: 0000
        Capabilities: [60] PCI-X non-bridge device.
                Command: DPERE- ERO- RBC=0 OST=3
                Status: Bus=0 Dev=0 Func=0 64bit- 133MHz- SCD- USC-, DC=simple, DMMRBC=0, DMOST=0, DMCRS=0, RSCEM-
ming
* RE: raid0 low performance
2005-07-01 13:17 ` Ming Zhang
2005-07-01 13:54 ` Ming Zhang
@ 2005-07-01 14:42 ` Ming Zhang
1 sibling, 0 replies; 14+ messages in thread
From: Ming Zhang @ 2005-07-01 14:42 UTC (permalink / raw)
To: Guy; +Cc: 'Linux RAID'
I wrote another program that reads from the device and then discards the data.
./synctest /dev/md1
Personalities : [linear] [raid0] [raid1] [raid5] [multipath]
read_ahead 1024 sectors
md1 : active raid0 sdl[3] sdk[2] sdb[1] sda[0]
1562845184 blocks 64k chunks
unused devices: <none>
1048576Bytes * 1024 : 158.459MB/s
524288Bytes * 2048 : 164.935MB/s
262144Bytes * 4096 : 173.649MB/s
131072Bytes * 8192 : 174.228MB/s
65536Bytes * 16384 : 178.462MB/s
32768Bytes * 32768 : 177.569MB/s
16384Bytes * 65536 : 177.888MB/s
8192Bytes * 65536 : 173.809MB/s
4096Bytes * 131072 : 172.732MB/s
2048Bytes * 131072 : 165.512MB/s
1024Bytes * 131072 : 150.354MB/s
So it looks like the bus is not the bottleneck here, and read-ahead really
works.
I guess the problem is a read/write disparity in the controller or the driver.
Thanks, guys
ming
On Fri, 2005-07-01 at 09:17 -0400, Ming Zhang wrote:
> On Fri, 2005-07-01 at 08:55 -0400, Guy wrote:
> > I think you should test 2 or more disks at the same time. This would prove
> > whether your system can move more than about 60MB/s.
>
>
> I tested, and it seems that 90MB/s is the max here.
>
>
> If I run 2 copies of the program on 2 disks independently on the same card,
> both of them are bound at 45MB/s.
> If I run 3 copies on 3 disks independently on the same card, each of them is
> bound at 27 or 28MB/s, so the total is still 90MB/s.
> If I run 2 copies on 2 disks, but one disk is on the 4-port Marvell and the
> other on the 8-port Marvell, then both are bound at 55MB/s, which is the disk
> limitation.
>
> So I guess the card or the bus has a problem. I will check the bus speed soon.
>
>
* RE: raid0 low performance
2005-07-01 13:54 ` Ming Zhang
@ 2005-07-05 0:13 ` Mark Hahn
2005-07-05 0:26 ` Ming Zhang
0 siblings, 1 reply; 14+ messages in thread
From: Mark Hahn @ 2005-07-05 0:13 UTC (permalink / raw)
To: Ming Zhang; +Cc: 'Linux RAID'
> # ./lspci -vv -d 11ab:
> 02:01.0 Class 0100: 11ab:5081 (rev 03)
>         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B-
>         Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
>         Latency: 32, cache line size 08
Latency 32 is quite low; this parameter affects how long the device
can hold onto the bus. Try
setpci -s 02:01.0 latency_timer=80
and see whether it improves things. I'm unclear on whether a bridged device
also needs the bridge's latency setting changed.
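(Note that setpci takes hexadecimal values, so 80 here means 0x80 = 128 PCI
clocks; re-running lspci -s 02:01.0 -vv afterwards should show the new
Latency value.)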
> Latency: 32, cache line size 08
also low. I'm guessing that your BIOS is tuned for desktop use - some audio
setups need low latency settings.
* RE: raid0 low performance
2005-07-05 0:13 ` Mark Hahn
@ 2005-07-05 0:26 ` Ming Zhang
2005-07-05 13:19 ` Ming Zhang
0 siblings, 1 reply; 14+ messages in thread
From: Ming Zhang @ 2005-07-05 0:26 UTC (permalink / raw)
To: Mark Hahn; +Cc: 'Linux RAID'
On Mon, 2005-07-04 at 20:13 -0400, Mark Hahn wrote:
> > # ./lspci -vv -d 11ab:
> > 02:01.0 Class 0100: 11ab:5081 (rev 03)
> >         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B-
> >         Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
> >         Latency: 32, cache line size 08
>
> Latency 32 is quite low; this parameter affects how long the device
> can hold onto the bus. Try
> setpci -s 02:01.0 latency_timer=80
Thanks a lot for this; I will test it tomorrow.
> and see whether it improves things. I'm unclear on whether a bridged device
> also needs the bridge's latency setting changed.
>
> > Latency: 32, cache line size 08
>
> also low. I'm guessing that your BIOS is tuned for desktop use - some audio
> setups need low latency settings.
>
Yes, I guess so. The only drawback of this box is that it is an entry-level
server, so its chipset is quite weak, and I left the settings at their
defaults.
Ming
* RE: raid0 low performance
2005-07-05 0:26 ` Ming Zhang
@ 2005-07-05 13:19 ` Ming Zhang
0 siblings, 0 replies; 14+ messages in thread
From: Ming Zhang @ 2005-07-05 13:19 UTC (permalink / raw)
To: Mark Hahn; +Cc: 'Linux RAID'
Hi,
With this:
-[00]-+-00.0  8086:2578
      +-03.0-[01]----01.0  8086:1075
      +-1c.0-[02]--+-01.0  11ab:5081
      |            +-03.0  9005:00c0
      |            +-03.1  9005:00c0
      |            \-04.0  11ab:5041
      +-1e.0-[03]--+-09.0  1002:4752
      |            \-0a.0  8086:1076
      +-1f.0  8086:25a1
      +-1f.1  8086:25a2
      +-1f.2  8086:25a3
      \-1f.3  8086:25a4
I tried tuning the latency_timer on 02:01.0, 02:04.0, and 00:1c.0,
individually and in combination, with values of 40, 60, and 80. No luck.
So I think it is the driver's or the card's fault. :) Thanks anyway.
ming
On Mon, 2005-07-04 at 20:26 -0400, Ming Zhang wrote:
> On Mon, 2005-07-04 at 20:13 -0400, Mark Hahn wrote:
> > > # ./lspci -vv -d 11ab:
> > > 02:01.0 Class 0100: 11ab:5081 (rev 03)
> > >         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR- FastB2B-
> > >         Status: Cap+ 66Mhz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
> > >         Latency: 32, cache line size 08
> >
> > Latency 32 is quite low; this parameter affects how long the device
> > can hold onto the bus. Try
> > setpci -s 02:01.0 latency_timer=80
>
> Thanks a lot for this; I will test it tomorrow.
>
> > and see whether it improves things. I'm unclear on whether a bridged device
> > also needs the bridge's latency setting changed.
> >
> > > Latency: 32, cache line size 08
> >
> > also low. I'm guessing that your BIOS is tuned for desktop use - some audio
> > setups need low latency settings.
> >
> Yes, I guess so. The only drawback of this box is that it is an entry-level
> server, so its chipset is quite weak, and I left the settings at their
> defaults.
>
>
> Ming
>
end of thread
Thread overview: 14+ messages
2005-07-01 2:15 raid0 low performance Ming Zhang
2005-07-01 2:28 ` Tyler
2005-07-01 2:57 ` John Madden
2005-07-01 12:41 ` Ming Zhang
2005-07-01 12:54 ` John Madden
2005-07-01 13:10 ` Ming Zhang
2005-07-01 12:32 ` Ming Zhang
2005-07-01 12:55 ` Guy
2005-07-01 13:17 ` Ming Zhang
2005-07-01 13:54 ` Ming Zhang
2005-07-05 0:13 ` Mark Hahn
2005-07-05 0:26 ` Ming Zhang
2005-07-05 13:19 ` Ming Zhang
2005-07-01 14:42 ` Ming Zhang