linux-raid.vger.kernel.org archive mirror
* RAID-6 check slow..
From: Brad Campbell @ 2006-08-22  7:26 UTC
  To: RAID Linux

G'day all,

I have a box with 15 SATA drives in it; they are all on the PCI bus, and it's a relatively slow machine.

I can extract about 100MB/s combined read speed from these drives with dd.

When reading /dev/md0 with dd I get about 80MB/s, but when I ask it to check the array on a 
completely idle system with echo check > /sys/block/md0/md/sync_action I get a combined read 
speed across all drives of 31.9MB/s.
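
For the record, a minimal sketch of kicking the check off and watching the array's own rate 
(the monitoring loop is my addition, and I'm assuming the sync_speed sysfs file is present on 
this kernel; it reports KB/sec):

echo check > /sys/block/md0/md/sync_action
# Poll the resync rate and progress every 5 seconds.
while sleep 5; do
        cat /sys/block/md0/md/sync_speed
        grep resync /proc/mdstat
done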

I'm not that fussed, I guess, given the system does have extended idle periods, but it would be 
nice to have a sync or check complete as quickly as the hardware allows. Experience has shown that 
a rebuild after a single disk failure takes 10-12 hours, but the check seems to take forever.

brad@storage1:~$ cat /proc/mdstat
Personalities : [raid6]
md0 : active raid6 sda[0] sdo[14] sdn[13] sdm[12] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7] sdg[6] sdf[5] 
sde[4] sdd[3] sdc[2] sdb[1]
       3186525056 blocks level 6, 128k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
       [>....................]  resync =  0.1% (458496/245117312) finish=1881.9min speed=2164K/sec

unused devices: <none>
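
As a sanity check on that finish estimate (assuming the mdstat speed is per-device K/sec against 
the per-device block count, so the units cancel):

# remaining blocks / rate / 60 -> minutes
echo $(( (245117312 - 458496) / 2164 / 60 ))min    # ~1884min, matching finish=1881.9min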

I have included some iostat output, run on a 5-second interval and allowed 30 seconds to stabilise.
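
The invocation was presumably along these lines (the exact flags are an assumption; note that 
iostat's first report averages over the whole uptime and should be discarded):

iostat 5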

Linux storage1 2.6.17.9 #2 Sun Aug 20 17:16:24 GST 2006 i686 GNU/Linux

<----- snip ----->

First, a dd from all drives.

storage1:/home/brad# cat t
#!/bin/sh
# Start a streaming read from each of the 15 drives in parallel.
for i in /dev/sd[abcdefghijklmno] ; do
        echo "$i"
        dd if="$i" of=/dev/null &
done


avg-cpu:  %user   %nice    %sys %iowait   %idle
            8.80    0.00   58.40   32.80    0.00

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              13.00     13312.00         0.00      66560          0
sdb              12.80     13107.20         0.00      65536          0
sdc              12.80     13107.20         0.00      65536          0
sdd              12.80     13107.20         0.00      65536          0
sde              12.80     13107.20         0.00      65536          0
sdf              12.80     13107.20         0.00      65536          0
sdg              12.80     13107.20         0.00      65536          0
sdh              13.00     13312.00         0.00      66560          0
sdi              12.80     13107.20         0.00      65536          0
sdj              13.00     13312.00         0.00      66560          0
sdk              13.00     13312.00         0.00      66560          0
sdl              12.80     13107.20         0.00      65536          0
sdm              17.20     17612.80         0.00      88064          0
sdn              17.20     17612.80         0.00      88064          0
sdo              17.20     17612.80         0.00      88064          0
md0               0.00         0.00         0.00          0          0


<----- snip ----->

echo check > /sys/block/md0/md/sync_action

avg-cpu:  %user   %nice    %sys %iowait   %idle
            0.80    0.00    6.59    0.00   92.61

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda               5.99      4343.31         0.00      21760          0
sdb               5.99      4343.31         0.00      21760          0
sdc               5.99      4343.31         0.00      21760          0
sdd               5.99      4343.31         0.00      21760          0
sde               5.99      4343.31         0.00      21760          0
sdf               5.99      4343.31         0.00      21760          0
sdg               5.99      4343.31         0.00      21760          0
sdh               5.99      4343.31         0.00      21760          0
sdi               5.99      4343.31         0.00      21760          0
sdj               5.99      4343.31         0.00      21760          0
sdk               5.99      4343.31         0.00      21760          0
sdl               5.99      4343.31         0.00      21760          0
sdm               5.99      4343.31         0.00      21760          0
sdn               5.99      4343.31         0.00      21760          0
sdo               5.99      4343.31         0.00      21760          0
md0               0.00         0.00         0.00          0          0

storage1:/home/brad# grep 0 /proc/sys/dev/raid/*
/proc/sys/dev/raid/speed_limit_max:400000
/proc/sys/dev/raid/speed_limit_min:1000

<----- snip ----->

dd if=/dev/md0 of=/dev/null

avg-cpu:  %user   %nice    %sys %iowait   %idle
            9.00    0.00   72.60   18.40    0.00

Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
sda              25.80     11008.00         0.00      55040          0
sdb              25.60     10924.80         0.00      54624          0
sdc              26.00     10956.80         0.00      54784          0
sdd              25.80     10956.80         0.00      54784          0
sde              25.20     11059.20         0.00      55296          0
sdf              26.00     11008.00         0.00      55040          0
sdg              26.20     11008.00         0.00      55040          0
sdh              26.40     11008.00         0.00      55040          0
sdi              26.00     11008.00         0.00      55040          0
sdj              26.40     11008.00         0.00      55040          0
sdk              26.80     10988.80         0.00      54944          0
sdl              25.80     10945.60         0.00      54728          0
sdm              26.20     10956.80         0.00      54784          0
sdn              25.40     10905.60         0.00      54528          0
sdo              24.80     10905.60         0.00      54528          0
md0           20467.20    163737.60         0.00     818688          0

Brad
-- 
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams


* Re: RAID-6 check slow..
From: Neil Brown @ 2006-08-22  7:52 UTC
  To: Brad Campbell; +Cc: RAID Linux

On Tuesday August 22, brad@wasp.net.au wrote:
> G'day all,
> 
> I have a box with 15 SATA drives in it; they are all on the PCI bus, and it's a relatively slow machine.
> 
> I can extract about 100MB/s combined read speed from these drives with dd.
> 
> When reading /dev/md0 with dd I get about 80MB/s, but when I ask it to check the array on a 
> completely idle system with echo check > /sys/block/md0/md/sync_action I get a combined read 
> speed across all drives of 31.9MB/s.
> 
> I'm not that fussed, I guess, given the system does have extended idle periods, but it would be 
> nice to have a sync or check complete as quickly as the hardware allows. Experience has shown that 
> a rebuild after a single disk failure takes 10-12 hours, but the check seems to take forever.
> 
> brad@storage1:~$ cat /proc/mdstat
> Personalities : [raid6]
> md0 : active raid6 sda[0] sdo[14] sdn[13] sdm[12] sdl[11] sdk[10] sdj[9] sdi[8] sdh[7] sdg[6] sdf[5] 
> sde[4] sdd[3] sdc[2] sdb[1]
>        3186525056 blocks level 6, 128k chunk, algorithm 2 [15/15] [UUUUUUUUUUUUUUU]
>        [>....................]  resync =  0.1% (458496/245117312) finish=1881.9min speed=2164K/sec

Hmm....  nothing obvious.
Have you tried increasing
  /proc/sys/dev/raid/speed_limit_min (currently 1000)
just in case that makes a difference? (It shouldn't, but you seem to be
down close to that speed.)
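
For example (the 50000 figure is illustrative only, not a recommendation):

  echo 50000 > /proc/sys/dev/raid/speed_limit_min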

What speed is the raid6 algorithm giving, as reported at boot time?
Again, I doubt that is the problem; it should be about 1000 times the
speed you are seeing.

What if you try increasing /sys/block/md0/md/stripe_cache_size ?
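
For example (8192 is an arbitrary illustrative value):

  echo 8192 > /sys/block/md0/md/stripe_cache_size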

That's all I can think of for now.

NeilBrown


* Re: RAID-6 check slow..
From: Brad Campbell @ 2006-08-22  8:07 UTC
  To: Neil Brown; +Cc: RAID Linux

Neil Brown wrote:

> Hmm....  nothing obvious.
> Have you tried increasing
>   /proc/sys/dev/raid/speed_limit_min (currently 1000)
> just in case that makes a difference? (It shouldn't, but you seem to be
> down close to that speed.)

No difference..

> What speed is the raid6 algorithm giving, as reported at boot time?
> Again, I doubt that is the problem; it should be about 1000 times the
> speed you are seeing.

raid6: int32x1    739 MB/s
raid6: int32x2    991 MB/s
raid6: int32x4    636 MB/s
raid6: int32x8    587 MB/s
raid6: mmxx1     1556 MB/s
raid6: mmxx2     2701 MB/s
raid6: sse1x1    1432 MB/s
raid6: sse1x2    2398 MB/s
raid6: using algorithm sse1x2 (2398 MB/s)
md: raid6 personality registered for level 6
raid5: automatically using best checksumming function: pIII_sse
    pIII_sse  :  2345.000 MB/sec
raid5: using function: pIII_sse (2345.000 MB/sec)
md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27
md: bitmap version 4.39


> What if you try increasing /sys/block/md0/md/stripe_cache_size ?

I already have this at 8192 (which appears to be HUGE for 15 drives, but I've got 1.5GB of RAM and 
nothing else using it).
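
Back-of-the-envelope, assuming the cache holds one 4KiB page per member device per stripe:

echo $(( 8192 * 15 * 4 / 1024 ))MiB    # = 480MiB out of the 1.5GB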

> That's all I can think of for now.
> 

Oh well, no stress.. Just thought I'd ask anyway :)

Brad
-- 
"Human beings, who are almost unique in having the ability
to learn from the experience of others, are also remarkable
for their apparent disinclination to do so." -- Douglas Adams

