linux-raid.vger.kernel.org archive mirror
* 4 disks in raid 5: 33MB/s read performance?
@ 2006-05-22 15:46 Dexter Filmore
From: Dexter Filmore @ 2006-05-22 15:46 UTC
  To: Linux RAID Mailing List

I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
Isn't that a little slow?
System is a sil3114 4-port SATA-1 controller with 4 Samsung SpinPoint 250GB
drives (8MB cache) in raid 5 on an Athlon XP 2000+/512MB.


-- 
-----BEGIN GEEK CODE BLOCK-----
Version: 3.12
GCS d--(+)@ s-:+ a- C+++(++++) UL+>++++ P+>++ L+++>++++ E-- W++ N o? K-
w--(---) !O M+ V- PS++(+) PE(-) Y++ PGP t++(---)@ 5 X+(++) R+(++) tv--(+)@ 
b++(+++) DI+++ D G++ e* h>++ r%>* y?
------END GEEK CODE BLOCK------

http://www.stop1984.com
http://www.againsttcpa.com


* Re: 4 disks in raid 5: 33MB/s read performance?
From: Mark Hahn @ 2006-05-22 16:33 UTC
  To: Dexter Filmore; +Cc: Linux RAID Mailing List

> I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
> Isn't that a little slow?

what bs parameter did you give to dd?  it should be at least 3*chunk
(probably 3*64k if you used defaults.)
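
For example, a re-run with a bigger block size might look like this (just a
sketch - the iso path is made up, and 192k assumes the default 64k chunk):

   dd if=/path/to/image.iso of=/dev/null bs=192k
   # or read ~1GB straight off the array device (assuming it is md0):
   dd if=/dev/md0 of=/dev/null bs=192k count=5461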



* Re: 4 disks in raid 5: 33MB/s read performance?
From: Neil Brown @ 2006-05-22 23:02 UTC
  To: Dexter Filmore; +Cc: Linux RAID Mailing List

On Monday May 22, Dexter.Filmore@gmx.de wrote:
> I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
> Isn't that a little slow?
> System is a sil3114 4 port sata 1 controller with 4 samsung spinpoint 250GB, 
> 8MB cache in raid 5 on a Athlon XP 2000+/512MB.
> 

Yes, read on raid5 isn't as fast as we might like at the moment.

It looks like you are getting about 11MB/s off each disk, which is
probably quite a bit slower than they can manage (what single-drive
read speed do you get dd'ing from /dev/sda or whatever?).

You could try playing with the readahead number (blockdev --setra/--getra).
I'm beginning to think that the default setting is a little low.
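
For example (a sketch only - md0 and the 8192 value are illustrative, not a
recommendation):

   blockdev --getra /dev/md0        # current readahead, in 512-byte sectors
   blockdev --setra 8192 /dev/md0   # try e.g. 4MB of readahead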

You could also try increasing the stripe-cache size by writing numbers
to 
   /sys/block/mdX/md/stripe_cache_size
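
e.g. (a sketch, assuming the array is md0; each cache entry holds one page
per drive, so memory use is roughly stripe_cache_size * 4k * number-of-drives):

   cat /sys/block/md0/md/stripe_cache_size      # default is 256
   echo 1024 > /sys/block/md0/md/stripe_cache_size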

On my test system with a 4 drive raid5 over fast SCSI drives, I
get 230MB/sec on drives that give 90MB/sec.
If I increase the stripe_cache_size from 256 to 1024, I get 260MB/sec.

I wonder if your SATA  controller is causing you grief.
Could you try
   dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024
and then do the same again on all devices in parallel
e.g.
   dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024 &
   dd if=/dev/SOMEOTHERDISK of=/dev/null bs=1024k count=1024 &
   ...

and see how the speeds compare.
(I get about 55MB/sec on each of 5 drives, or 270MB/sec total, which
is probably hitting the SCSI bus limit, which has a theoretical
max of 320MB/sec I think.)
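
A quick way to script the parallel run (a sketch - adjust the device list to
whatever your four disks actually are):

   for d in sda sdb sdc sdd; do
       dd if=/dev/$d of=/dev/null bs=1024k count=1024 &
   done
   wait    # each dd reports its own throughput as it finishes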

NeilBrown



* Re: 4 disks in raid 5: 33MB/s read performance?
From: Dexter Filmore @ 2006-05-23  1:04 UTC
  To: Brendan Conoboy, linux-raid

On Monday, 22 May 2006 22:31, Brendan Conoboy wrote:
> Dexter Filmore wrote:
> > I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
> > Isn't that a little slow?
> > System is a sil3114 4 port sata 1 controller with 4 samsung spinpoint
> > 250GB, 8MB cache in raid 5 on a Athlon XP 2000+/512MB.
>
> Which SATA driver is being used?  The ata_piix driver, for instance, has
> some of the same multi-disk performance penalties as any ATA controller
> would have.
>
> -Brendan (blc@redhat.com)

lsmod lists sata_sil.




* Max. md array size under 32-bit i386 ...
From: Gordon Henderson @ 2006-05-24  8:21 UTC
  To: Linux RAID Mailing List


I know this has come up before, but a few quick googles haven't answered my
questions - I'm after the max. array size that can be created under
bog-standard 32-bit intel Linux, and any issues re. partitioning.

I'm aiming to create a raid-6 over 12 x 500GB drives - am I going to
have any problems?

(I'm not partitioning the resulting md device, just the underlying sd
devices and building a single md out of sd[a-l]4 ...)

Cheers,

Gordon


* Re: Max. md array size under 32-bit i386 ...
From: Ming Zhang @ 2006-05-24 16:55 UTC
  To: Gordon Henderson; +Cc: Linux RAID Mailing List

On Wed, 2006-05-24 at 09:21 +0100, Gordon Henderson wrote:
> I know this has come up before, but a few quick googles hasn't answered my
> questions - I'm after the max. array size that can be created under
> bog-standard 32-bit intel Linux, and any issues re. partitioning.
> 
> I'm aiming to create a raid-6 over 12 x 500GB drives - am I going to
> have any problems?

Afraid it might not just work - (12-2) * 500GB = 5TB, which is well over 2TB.


> 
> (I'm not parittioning the resulting md device, just the underlying sd
> devices and building a single md out of  sd[a-l]4 ...)
> 
> Cheers,
> 
> Gordon



* Re: 4 disks in raid 5: 33MB/s read performance?
From: Bill Davidsen @ 2006-05-24 20:11 UTC
  To: Mark Hahn; +Cc: Dexter Filmore, Linux RAID Mailing List, Neil Brown

Mark Hahn wrote:

>>I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
>>Isn't that a little slow?
>>    
>>
>
>what bs parameter did you give to dd?  it should be at least 3*chunk
>(probably 3*64k if you used defaults.)
>

I would expect readahead to make this unproductive. Mind you, I didn't 
say it is, but I can't see why not. There was a problem with data going 
through stripe cache when it didn't need to, but I thought that was fixed.

Neil? Am I an optimist?

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: Max. md array size under 32-bit i386 ...
From: Neil Brown @ 2006-05-25  4:54 UTC
  To: Gordon Henderson; +Cc: Linux RAID Mailing List

On Wednesday May 24, gordon@drogon.net wrote:
> 
> I know this has come up before, but a few quick googles hasn't answered my
> questions - I'm after the max. array size that can be created under
> bog-standard 32-bit intel Linux, and any issues re. partitioning.
> 
> I'm aiming to create a raid-6 over 12 x 500GB drives - am I going to
> have any problems?

No, this should work providing your kernel is compiled with CONFIG_LBD.
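
If in doubt, something like this should show whether it is set (a sketch -
where the config file lives depends on the distro):

   grep CONFIG_LBD /boot/config-$(uname -r)
   # or, if the running kernel exposes its config:
   zgrep CONFIG_LBD /proc/config.gz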

NeilBrown

> 
> (I'm not parittioning the resulting md device, just the underlying sd
> devices and building a single md out of  sd[a-l]4 ...)
> 
> Cheers,
> 
> Gordon


* Re: 4 disks in raid 5: 33MB/s read performance?
From: Neil Brown @ 2006-05-25  4:56 UTC
  To: Bill Davidsen; +Cc: Mark Hahn, Dexter Filmore, Linux RAID Mailing List

On Wednesday May 24, davidsen@tmr.com wrote:
> Mark Hahn wrote:
> 
> >>I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
> >>Isn't that a little slow?
> >>    
> >>
> >
> >what bs parameter did you give to dd?  it should be at least 3*chunk
> >(probably 3*64k if you used defaults.)
> >
> 
> I would expect readahead to make this unproductive. Mind you, I didn't 
> say it is, but I can't see why not. There was a problem with data going 
> through stripe cache when it didn't need to, but I thought that was fixed.
> 
> Neil? Am I an optimist?

Probably....

You are right about readahead - it should make the difference in block
size irrelevant.

You are wrong about the problem of reading through the cache being
fixed.  It hasn't been fixed yet.  We still read through the cache.
However, that shouldn't cause more than a 10% speed reduction, and
33MB/s sounds like more than 10% down.

NeilBrown


* Re: 4 disks in raid 5: 33MB/s read performance?
From: Dexter Filmore @ 2006-05-25 13:58 UTC
  To: Neil Brown; +Cc: Linux RAID Mailing List

> On Monday May 22, Dexter.Filmore@gmx.de wrote:
> > I just dd'ed a 700MB iso to /dev/null, dd returned 33MB/s.
> > Isn't that a little slow?
> > System is a sil3114 4 port sata 1 controller with 4 samsung spinpoint
> > 250GB, 8MB cache in raid 5 on a Athlon XP 2000+/512MB.
>
> Yes, read on raid5 isn't as fast as we might like at the moment.
>
> It looks like you are getting about 11MB/s of each disk which is
> probably quite a bit slower than they can manage (what is the
> single-drive read speed you get dding from /dev/sda or whatever).
>
> You could try playing with the readahead number (blockdev --setra/--getra).
> I'm beginning to think that the default setting is a little low.

Changed from 384 to 1024, no improvement.

>
> You could also try increasing the stripe-cache size by writing numbers
> to
>    /sys/block/mdX/md/stripe_cache_size

Actually, there's no directory /sys/block/md0/md/ here. Can I find that in 
proc somewhere? And what are sane numbers for this setting?

> I wonder if your SATA  controller is causing you grief.
> Could you try
>    dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024
> and then do the same again on all devices in parallel
> e.g.
>    dd if=/dev/SOMEDISK of=/dev/null bs=1024k count=1024 &
>    dd if=/dev/SOMEOTHERDISK of=/dev/null bs=1024k count=1024 &
>    ...

 4112 pts/0    R      0:00 dd if /dev/sda of /dev/null bs 1024k count 1024
 4113 pts/0    R      0:00 dd if /dev/sdb of /dev/null bs 1024k count 1024
 4114 pts/0    R      0:00 dd if /dev/sdc of /dev/null bs 1024k count 1024
 4115 pts/0    R      0:00 dd if /dev/sdd of /dev/null bs 1024k count 1024
 4116 pts/0    R+     0:00 ps ax

1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 34.5576 seconds, 31.1 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 36.073 seconds, 29.8 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 40.5109 seconds, 26.5 MB/s
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 40.5054 seconds, 26.5 MB/s

A single disk pumps out 65-70MB/s. Since they are on a 32-bit PCI controller,
the combined speed when reading from all four disks at once pretty much maxes
out the 133MB/s PCI limit. (I'm surprised it comes so close. That controller
works pretty well for 18 bucks.)
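
(Quick sanity check: 31.1 + 29.8 + 26.5 + 26.5 comes to roughly 114MB/s
combined, i.e. about 85% of the ~133MB/s that 32-bit/33MHz PCI can do in
theory.)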

Dex




* Quick Question ..
From: Gordon Henderson @ 2006-06-06  9:36 UTC
  To: Linux RAID Mailing List


I'm just after confirmation (or not!) of something I've done for a long
time which I think is right - it certainly seems right, but it's one of
those things I've always wondered about ...

When creating an array I allocate drives from alternate controllers, with
the thought that the OS/system/hardware might be able to better overlap
accesses to the devices - but it sort of assumes that the kernel drivers
(and/or mdadm) don't re-order accesses to the devices internally (I'm not
counting SCSI logical re-ordering here).

So, e.g., an example I'm working on now: I have a server with 2
external-facing SCSI interfaces, and a box with 14 drives on 2 chains - so 7
drives on each interface (I'd prefer fewer, but it's what I've got - it's a
Dell external box).  sd{a-g} are on scsi0 and sd{h-n} are on scsi1, so I
issue the create command with:

  mdadm --create /dev/md1 -n14 -l6	\
	 /dev/sda /dev/sdh		\
	 /dev/sdb /dev/sdi		\
	etc.
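
One way to build that interleaved device list without typing it all out
(just a sketch, assuming the sd{a-g}/sd{h-n} split described above):

   DEVS=""
   for pair in a:h b:i c:j d:k e:l f:m g:n; do
       DEVS="$DEVS /dev/sd${pair%:*} /dev/sd${pair#*:}"
   done
   mdadm --create /dev/md1 -n14 -l6 $DEVS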

I guess benchmarking it would be the way to really test it, but I'm not
after performance for this particular box - just after some thoughts as to
whether this is the "best" (or not!) way to do things, or if I'm wasting
my time doing it this way!

Thanks,

Gordon

