linux-raid.vger.kernel.org archive mirror
* Raid5 Construction Question
@ 2004-08-19 17:24 PAulN
  2004-08-19 17:30 ` Guy
  2004-08-19 18:32 ` Tim Moore
  0 siblings, 2 replies; 23+ messages in thread
From: PAulN @ 2004-08-19 17:24 UTC (permalink / raw)
  To: linux-raid

Hi,
So I have a RAID 5 array of around 1TB, and the problem I've been having
is that the resync rate is really bad.  This is understandable given that
the RAID 5 has not been initialized.  Does anyone know a way for me to
initialize my RAID 5 before I use it, so that the resync process doesn't
run for 3 days?
thanks
paul


Config:
-------------------------------------------------
raiddev             /dev/md0
raid-level                  5
nr-raid-disks               7
nr-spare-disks              1
chunk-size                  64k
persistent-superblock       1
parity-algorithm        left-symmetric
    device          /dev/sda1
    raid-disk     0
    device          /dev/sdb1
    raid-disk     1
    device          /dev/sdc1
    raid-disk     2
    device          /dev/sdd1
    raid-disk     3
    device          /dev/sde1
    raid-disk     4
    device          /dev/sdg1
    raid-disk     5
    device          /dev/sdf1
    raid-disk    6
    device          /dev/sdh1
    spare-disk     0
----------------------------------------


^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: Raid5 Construction Question
  2004-08-19 17:24 Raid5 Construction Question PAulN
@ 2004-08-19 17:30 ` Guy
  2004-08-19 17:52   ` PAulN
  2004-08-19 18:32 ` Tim Moore
  1 sibling, 1 reply; 23+ messages in thread
From: Guy @ 2004-08-19 17:30 UTC (permalink / raw)
  To: 'PAulN', linux-raid

In short... Issue this command:
echo 100000 > /proc/sys/dev/raid/speed_limit_max

If it does not help issue this command and send the results:
cat /proc/mdstat

Details below.

These are related to throttling:
/proc/sys/dev/raid/speed_limit_max
/proc/sys/dev/raid/speed_limit_min

Do "man md" for more info.

The speed limits are per device, not per array.
Make sure the max is large enough to permit your disks to go as fast as
they can.  I use 100000 (100,000K bytes/second).  My disks are not that
fast, and having too large a number does not hurt.
At least as a test, set the min to the same value as max.

I use these commands when I want to change by hand:
cat /proc/sys/dev/raid/speed_limit_max
cat /proc/sys/dev/raid/speed_limit_min

echo 100000 > /proc/sys/dev/raid/speed_limit_max
echo 1000 > /proc/sys/dev/raid/speed_limit_min

Guy

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Raid5 Construction Question
  2004-08-19 17:30 ` Guy
@ 2004-08-19 17:52   ` PAulN
  2004-08-19 18:17     ` Guy
                       ` (2 more replies)
  0 siblings, 3 replies; 23+ messages in thread
From: PAulN @ 2004-08-19 17:52 UTC (permalink / raw)
  To: Guy; +Cc: linux-raid

Guy,
thanks for the snappy reply!  I wish my disks were as fast :)
I failed to mention that I had been tweaking those proc values.  Currently
they are:
(root@lcn0:raid)# cat speed_limit_max
200000
(root@lcn0:raid)# cat speed_limit_min
10000

If I'm correct, this means that the min speed is 10MB/sec per device.
I've verified that each device has a seq write speed of about 38MB/sec, so
each should be capable of handling 10,000K bytes/sec.  Right after I issue
a raidstart the speed is pretty good (~30MB/sec) but it just falls until
it hits around 300K.

md0 : active raid5 sdh1[7] sdf1[6] sdg1[5] sde1[4] sdd1[3] sdc1[2] 
sdb1[1] sda1[0]
      481949184 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
      [>....................]  resync =  2.4% (1936280/80324864) 
finish=4261.4min speed=305K/sec
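(The finish estimate is consistent with those figures; a minimal shell sketch of the same arithmetic, using the block counts and speed from the mdstat output above:)

```shell
# Re-derive mdstat's finish estimate from its own figures (values in KiB).
done_k=1936280          # blocks resynced so far
total_k=80324864        # blocks per device
speed_k=305             # reported resync speed, K/sec
remaining_min=$(( (total_k - done_k) / speed_k / 60 ))
echo "finish in ~${remaining_min} min"   # ~4283 min, close to mdstat's 4261.4
```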

In the docs I saw that "reconstruction" is possible with raidhotadd, but
I didn't see anything about initialization.  So am I "screwed" until the
resync is fixed?  I was depending on the disks to do some filesystem
testing, but maybe I'll have to wait a few days..
Thanks,
Paul





^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: Raid5 Construction Question
  2004-08-19 17:52   ` PAulN
@ 2004-08-19 18:17     ` Guy
  2004-08-19 18:24       ` PAulN
  2004-08-19 18:35       ` Gordon Henderson
  2004-08-19 20:24     ` Maarten van den Berg
  2004-08-20  1:21     ` Neil Brown
  2 siblings, 2 replies; 23+ messages in thread
From: Guy @ 2004-08-19 18:17 UTC (permalink / raw)
  To: 'PAulN'; +Cc: linux-raid

You don't need to wait.  You can use the array now.

But ouch 305K/sec!  Is this a 386-33?  :)

Have you tried dd tests on each disk to verify each works well?

Something like:
time dd if=/dev/sdc1 of=/dev/null bs=64k count=100000
This is just a read test.  My disks take about 340 seconds.  Yours should be
about twice as fast.

Each disk should give about the same performance.
You may find 1 that has issues.
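A sketch of scripting that read test across every member disk; the helper name is invented and the device list is taken from the raidtab above, so adjust both to your system:

```shell
# Sequential-read smoke test: run the dd read test on one device and keep
# only dd's summary line.  Read-only -- it never writes to the disk.
readtest() {
  dd if="$1" of=/dev/null bs=64k count="${2:-100000}" 2>&1 | tail -1
}

# Example: loop over the member partitions from the raidtab above.
# for d in sda1 sdb1 sdc1 sdd1 sde1 sdf1 sdg1 sdh1; do
#   echo "== /dev/$d =="; readtest "/dev/$d"
# done
```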

If you are willing to re-build the array, you could do a write test:
time dd if=/dev/zero of=/dev/sdc1 bs=64k count=10000
THIS WILL TRASH THE ARRAY!!!

Guy



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Raid5 Construction Question
  2004-08-19 18:17     ` Guy
@ 2004-08-19 18:24       ` PAulN
  2004-08-19 18:29         ` Guy
  2004-08-19 18:35       ` Gordon Henderson
  1 sibling, 1 reply; 23+ messages in thread
From: PAulN @ 2004-08-19 18:24 UTC (permalink / raw)
  To: Guy; +Cc: linux-raid

I have it mounted now, but the performance sucks because the resync is
going on.  I've already tested each disk individually and I get 38MB/sec.
The machine is a 2.4 GHz Xeon with 2GB RAM, so there should be no problem
with CPU or memory.  Before I tried RAID 5 I made an 8x RAID 0 stripe
which yielded ~120MB/sec, which is what I'd expect from the controller.
So there is no way to initialize the RAID 5 device, similar to what a
hardware RAID controller does?
p




^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: Raid5 Construction Question
  2004-08-19 18:24       ` PAulN
@ 2004-08-19 18:29         ` Guy
  0 siblings, 0 replies; 23+ messages in thread
From: Guy @ 2004-08-19 18:29 UTC (permalink / raw)
  To: 'PAulN'; +Cc: linux-raid

If the re-build is slow, I would expect the array to have bad performance
even after it has finished re-building.

I think your problems are beyond me!  Good luck!

Guy



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Raid5 Construction Question
  2004-08-19 17:24 Raid5 Construction Question PAulN
  2004-08-19 17:30 ` Guy
@ 2004-08-19 18:32 ` Tim Moore
  2004-08-19 20:30   ` Maarten van den Berg
  1 sibling, 1 reply; 23+ messages in thread
From: Tim Moore @ 2004-08-19 18:32 UTC (permalink / raw)
  To: linux-raid

My raidtab has 'chunk-size 64' and I get 64k chunks.  Does 'chunk-size
64k' mean 64M chunks?

Did you set stride correctly when running mke2fs (based on 
chunk_size/fs_block_size)?

Either one of these could trash performance.
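The stride arithmetic is just chunk size divided by filesystem block size; a minimal sketch (the `-R stride=` form is the old mke2fs raid option, so check your mke2fs man page):

```shell
# stride = RAID chunk size / ext2 block size, both in KiB.
chunk_kb=64
block_kb=4     # i.e. mke2fs -b 4096
stride=$(( chunk_kb / block_kb ))
echo "mke2fs -b 4096 -R stride=$stride /dev/md0"
```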

rgds,
tim


-- 
  | for direct mail add "private_" in front of user name

^ permalink raw reply	[flat|nested] 23+ messages in thread

* RE: Raid5 Construction Question
  2004-08-19 18:17     ` Guy
  2004-08-19 18:24       ` PAulN
@ 2004-08-19 18:35       ` Gordon Henderson
  1 sibling, 0 replies; 23+ messages in thread
From: Gordon Henderson @ 2004-08-19 18:35 UTC (permalink / raw)
  To: linux-raid

On Thu, 19 Aug 2004, Guy wrote:

> You don't need to wait.  You can use the array now.

Indeed - but maybe you are already using the array, which is why the
rebuild is taking so long?

> But ouch 305K/sec!  Is this a 386-33?  :)
>
> Have you tried dd tests on each disk to verify each works well?
>
> Something like:
> time dd if=/dev/sdc1 of=/dev/null bs=64k count=100000
> This is just a read test.  My disks take about 340 seconds.  Yours should be
> about twice as fast.

You can also use hdparm - although really designed for IDE drives, it'll
do the test on SCSI without any problems:

Eg. on a reasonable PC with an IDE drive:

/dev/hda:
 Timing buffer-cache reads:   1008 MB in  2.00 seconds = 504.00 MB/sec
 Timing buffered disk reads:  142 MB in  3.02 seconds =  47.02 MB/sec


On a SCSI server (Dell 4xxxx something)

/dev/sda:
 Timing buffer-cache reads:   128 MB in  0.18 seconds =711.11 MB/sec
 Timing buffered disk reads:  64 MB in  0.97 seconds = 65.98 MB/sec

This is a RAID1 on the same server

/dev/md0:
 Timing buffer-cache reads:   128 MB in  0.18 seconds =711.11 MB/sec
 Timing buffered disk reads:  64 MB in  1.03 seconds = 62.14 MB/sec

This is a RAID5 on the same server.

/dev/md4:
 Timing buffer-cache reads:   128 MB in  0.18 seconds =711.11 MB/sec
 Timing buffered disk reads:  64 MB in  0.34 seconds =188.24 MB/sec

> Each disk should give about the same performance.
> You may find 1 that has issues.

I've seen this with IDE drives - one drive was very much slower than the
other. No idea why.

How is it configured? Are all 8 drives on the same cable? You might want
to split them and put 4 on a cable with 2 controllers - there may still be
issues with PCI bus bandwidth then, but it might help things along. On
that server above, I have 4 SCSI drives, 2 on each bus: sda & b on one
bus, sdc and d on the 2nd. I haven't made tests to see if alternating the
drives in the /etc/raidtab makes a difference, but that's what I do anyway
as it "feels" like the right thing to do.

Eg:

raiddev /dev/md2
  raid-level            5
  nr-raid-disks         4
  nr-spare-disks        0
  persistent-superblock 1
  chunk-size            32
  device                /dev/sda3
  raid-disk             0
  device                /dev/sdc3
  raid-disk             1
  device                /dev/sdb3
  raid-disk             2
  device                /dev/sdd3
  raid-disk             3

Reading the follow-ups, performance will be slow with the raid-speed-min
parameter set high... If you want performance during a rebuild, set this
low.

Gordon

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Raid5 Construction Question
  2004-08-19 17:52   ` PAulN
  2004-08-19 18:17     ` Guy
@ 2004-08-19 20:24     ` Maarten van den Berg
  2004-08-19 20:26       ` Kourosh
  2004-08-19 20:53       ` Guy
  2004-08-20  1:21     ` Neil Brown
  2 siblings, 2 replies; 23+ messages in thread
From: Maarten van den Berg @ 2004-08-19 20:24 UTC (permalink / raw)
  To: linux-raid


Something like this happened to me a while ago.  The speed is good at start, 
then after a certain amount of time starts degrading until very very low, 
like 5k/sec.  It keeps ever decreasing. Also, the decrease of speed occurred 
at exactly the same point every time.  After a lot of searching, asking and 
bitching the true reason was revealed; one of the disks had problems and 
couldn't read/write a part of its surface.  Only when I ran dd on it (and saw 
the read errors reported) did I realize that.

So if what you are seeing is this ever-decreasing speed, starting at a
specific point, I'd strongly concur with Guy in saying: test each disk
separately by reading and/or writing its _entire_ surface using the dd
commands suggested.  Not using hdparm or benchmarks, but reading the
entire disk(s) as described.  The purpose of this is NOT to get an idea
of the speed, but to verify that the entire surface is still ok.
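A minimal sketch of that whole-surface read, assuming GNU dd; conv=noerror keeps dd going past bad sectors so every error gets logged, and the function name is invented for illustration:

```shell
# Read a device end to end and report any I/O errors dd logs on stderr.
# Read-only.  Usage: surface_check /dev/sdc
surface_check() {
  log=$(mktemp)
  dd if="$1" of=/dev/null bs=1M conv=noerror 2>"$log"
  if grep -qi 'error' "$log"; then
    echo "READ ERRORS on $1:"
    grep -i 'error' "$log"
  else
    echo "no read errors on $1"
  fi
  rm -f "$log"
}
```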

Beyond that, I have no suggestions to offer you.

Maarten


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: Raid5 Construction Question
  2004-08-19 20:24     ` Maarten van den Berg
@ 2004-08-19 20:26       ` Kourosh
  2004-08-19 20:39         ` Mike Hardy
  2004-08-19 21:50         ` Maarten van den Berg
  2004-08-19 20:53       ` Guy
  1 sibling, 2 replies; 23+ messages in thread
From: Kourosh @ 2004-08-19 20:26 UTC (permalink / raw)
  Cc: linux-raid

On Thu, Aug 19, 2004 at 10:24:06PM +0200, Maarten van den Berg wrote:
> On Thursday 19 August 2004 19:52, PAulN wrote:
> > Guy,
> > thanks for the snappy reply!  I wish my disks were as fast :)
> > I failed to mention that I had been tweaking those proc values.  Currently
> > they are:
> > (root@lcn0:raid)# cat speed_limit_max
> > 200000
> > (root@lcn0:raid)# cat speed_limit_min
> > 10000
> >
> > If I'm correct, this means that the min speed is 10MB/sec per device.
> > I've verified that each device has a seq write speed of about 38MB/sec so
> > each should be capable of handling 10,000 Kbytes/sec.  Right after I issue
> > a raidstart the speed is pretty good (~30MB/sec) but it just falls until
> > it hits
> > around 300K.
> >
> > md0 : active raid5 sdh1[7] sdf1[6] sdg1[5] sde1[4] sdd1[3] sdc1[2]
> > sdb1[1] sda1[0]
> >       481949184 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
> >       [>....................]  resync =  2.4% (1936280/80324864)
> > finish=4261.4min speed=305K/sec
> 
> Something like this happened to me a while ago.  The speed is good at start, 
> then after a certain amount of time starts degrading until very very low, 
> like 5k/sec.  It keeps ever decreasing. Also, the decrease of speed occurred 
> at exactly the same point every time.  After a lot of searching, asking and 
> bitching the true reason was revealed; one of the disks had problems and 
> couldn't read/write a part of its surface.  Only when I ran dd on it (and saw 
> the read errors reported) did I realize that.
> 
> So if what you are seeing is this ever-decreasing speed, starting at a 
> specific point, I'd strongly concur with Guy in saying: Test each disk 
> separately by reading and/or writing its _entire_ surface using the dd 
> commands suggested. Not using hdparm or benchmarks, but reading the entire 
> disk(s) as described.  The purpose of this is NOT that you get an idea of the 
> speed, but that you verify that the entire surface is still ok.
> 
> Beyond that, I have no suggestions to offer you.
> 
> Maarten

I've found that one of the better ways of verifying a disk is to run 
the disk manufacturer's disk utilities on it.  They all provide a 
bootable disk to run the utilities.  Several times I've had problems 
similar to this and each time it ended up being a disk that was 
failing.  Run the utility as all the vendors I've dealt with require 
the error code from the utility to process an RMA, so might as well do 
it sooner, rather than later.

You could also try low-level formatting each disk using the SCSI 
controller's utilities.  IIRC it should remap any bad blocks.

Hope this helps,

Kourosh


* Re: Raid5 Construction Question
  2004-08-19 18:32 ` Tim Moore
@ 2004-08-19 20:30   ` Maarten van den Berg
  2004-08-23 15:24     ` Tim Moore
  0 siblings, 1 reply; 23+ messages in thread
From: Maarten van den Berg @ 2004-08-19 20:30 UTC (permalink / raw)
  To: linux-raid

On Thursday 19 August 2004 20:32, Tim Moore wrote:
> My raid tab has 'chunk-size 64' and I get 64k chunks.  Does 'chunk-size
> 64k' mean 64M chunks?
>
> Did you set stride correctly when running mke2fs (based on
> chunk_size/fs_block_size)?

I've often wondered about this stride setting when NOT using ext2 (ext3)...
How do you specify it for other filesystems ?  And when you cannot, how does 
it affect the performance of the FS ?  Take for example reiserfs, which does 
not mention stride in its entire manpage.  What then ?
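For ext2/ext3 at least, the arithmetic behind Tim's question is just the md
chunk size divided by the filesystem block size.  A quick sketch (the
"-R stride=" spelling is from the e2fsprogs of this era; check your own
mke2fs manpage before relying on it):

```shell
# stride = md chunk size / filesystem block size
chunk_kb=64      # chunk-size from /etc/raidtab
block_kb=4       # ext2/ext3 block size (mke2fs -b 4096)
stride=$((chunk_kb / block_kb))
echo "stride=$stride"
# e.g.:  mke2fs -b 4096 -R stride=16 /dev/md0
```

So a 64k chunk with 4k filesystem blocks gives stride=16.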

Maarten



* Re: Raid5 Construction Question
  2004-08-19 20:26       ` Kourosh
@ 2004-08-19 20:39         ` Mike Hardy
  2004-08-19 21:50         ` Maarten van den Berg
  1 sibling, 0 replies; 23+ messages in thread
From: Mike Hardy @ 2004-08-19 20:39 UTC (permalink / raw)
  Cc: linux-raid


Using the SMART protocol to automatically test and monitor them (after a 
full test when I first build the box) is how I typically steer clear of 
these things.

I've only done more err...budget setups using IDE but apparently it 
works well on SCSI too:

http://smartmontools.sourceforge.net/smartmontools_scsi.html

You could very quickly issue a full test command to all the drives, then 
come back later and check that they completed correctly.
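A sketch of what that looks like with smartmontools (the drive names are
placeholders, and the commands are only echoed here so nothing is actually
queued on your disks):

```shell
# Queue a long (full-surface) SMART self-test on every drive.
# Check results later with:  smartctl -l selftest /dev/sda
cmds=""
for dev in /dev/sda /dev/sdb /dev/sdc; do   # substitute your drives
    cmds="$cmds smartctl -t long $dev;"
    echo "smartctl -t long $dev"
done
```

Drop the echo and run the smartctl lines directly once the device list
matches your box.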

-Mike

Kourosh wrote:

> I've found that one of the better ways of verifying a disk is to run 
> the disk manufacturer's disk utilities on it.  They all provide a 
> bootable disk to run the utilities.  Several times I've had problems 
> similar to this and each time it ended up being a disk that was 
> failing.  Run the utility as all the vendors I've dealt with require 
> the error code from the utility to process an RMA, so might as well do 
> it sooner, rather than later.
> 
> You could also try low-level formatting each disk using the SCSI 
> controller's utilities.  IIRC it should remap any bad blocks.
> 
> Hope this helps,
> 
> Kourosh


* RE: Raid5 Construction Question
  2004-08-19 20:24     ` Maarten van den Berg
  2004-08-19 20:26       ` Kourosh
@ 2004-08-19 20:53       ` Guy
  1 sibling, 0 replies; 23+ messages in thread
From: Guy @ 2004-08-19 20:53 UTC (permalink / raw)
  To: linux-raid

To do the read test as Maarten suggests, do this:
time dd if=/dev/sdc of=/dev/null bs=64k
Where <sdc> is the name of the disk to test.
Test them all.
Test them at the same time if you want, use different windows so the output
does not get mixed together.
Larger block sizes are fine.
"time" is only added to compare the performance of the disks.

The above is a read-only test.  So, it is safe.

"sdc" is the whole disk!  If you try a write test it will trash the
partition table and all data.

I have a cron job that tests all of my disks each night.
Bad sectors really f up raid 5 arrays.
Odd, since I started testing each night, I have never had any more bad
sectors.  Not that I recall.  Maybe somehow it helps.  Maybe sectors that
require re-tries or error correction to read are re-located before they go
completely bad.
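That nightly cron job can be a handful of shell around the same dd test.
The demo below reads a scratch file rather than a real disk; point it at
your /dev/sdX devices yourself, and keep it strictly read-only:

```shell
# read_test <device>: read it end-to-end; dd exits non-zero on a read
# error, which is exactly what we want to catch before raid5 does.
read_test() {
    dd if="$1" of=/dev/null bs=64k 2>/dev/null
}

# Demo against a scratch file.  In cron you would loop over real disks:
#   for d in /dev/sdc /dev/sdd /dev/sde; do read_test "$d" || echo "$d BAD"; done
scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=64k count=4 2>/dev/null
if read_test "$scratch"; then result=ok; else result=bad; fi
rm -f "$scratch"
echo "$result"
```

Never reverse if= and of= against a raw disk: a write would destroy the
partition table and everything on it, as noted above.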

Guy

-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Maarten van den Berg
Sent: Thursday, August 19, 2004 4:24 PM
To: linux-raid@vger.kernel.org
Subject: Re: Raid5 Construction Question

On Thursday 19 August 2004 19:52, PAulN wrote:
> Guy,
> thanks for the snappy reply!  I wish my disks were as fast :)
> I failed to mention that I had been tweaking those proc values.  Currently
> they are:
> (root@lcn0:raid)# cat speed_limit_max
> 200000
> (root@lcn0:raid)# cat speed_limit_min
> 10000
>
> If I'm correct, this means that the min speed is 10MB/sec per device.
> I've verified that each device has a seq write speed of about 38MB/sec so
> each should be capable of handling 10,000 Kbytes/sec.  Right after I issue
> a raidstart the speed is pretty good (~30MB/sec) but it just falls until
> it hits
> around 300K.
>
> md0 : active raid5 sdh1[7] sdf1[6] sdg1[5] sde1[4] sdd1[3] sdc1[2]
> sdb1[1] sda1[0]
>       481949184 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
>       [>....................]  resync =  2.4% (1936280/80324864)
> finish=4261.4min speed=305K/sec

Something like this happened to me a while ago.  The speed is good at start, 
then after a certain amount of time starts degrading until very very low, 
like 5k/sec.  It keeps ever decreasing. Also, the decrease of speed occurred 
at exactly the same point every time.  After a lot of searching, asking and 
bitching the true reason was revealed; one of the disks had problems and 
couldn't read/write a part of its surface.  Only when I ran dd on it (and saw 
the read errors reported) did I realize that.

So if what you are seeing is this ever-decreasing speed, starting at a 
specific point, I'd strongly concur with Guy in saying: Test each disk 
separately by reading and/or writing its _entire_ surface using the dd 
commands suggested. Not using hdparm or benchmarks, but reading the entire 
disk(s) as described.  The purpose of this is NOT that you get an idea of the 
speed, but that you verify that the entire surface is still ok.

Beyond that, I have no suggestions to offer you.

Maarten

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html



* Re: Raid5 Construction Question
  2004-08-19 20:26       ` Kourosh
  2004-08-19 20:39         ` Mike Hardy
@ 2004-08-19 21:50         ` Maarten van den Berg
  2004-08-19 21:55           ` Guy
  1 sibling, 1 reply; 23+ messages in thread
From: Maarten van den Berg @ 2004-08-19 21:50 UTC (permalink / raw)
  To: linux-raid

On Thursday 19 August 2004 22:26, Kourosh wrote:
> On Thu, Aug 19, 2004 at 10:24:06PM +0200, Maarten van den Berg wrote:
> > On Thursday 19 August 2004 19:52, PAulN wrote:

> I've found that one of the better ways of verifying a disk is to run
> the disk manufacturer's disk utilities on it.  They all provide a
> bootable disk to run the utilities.  Several times I've had problems
> similar to this and each time it ended up being a disk that was
> failing.  Run the utility as all the vendors I've dealt with require
> the error code from the utility to process an RMA, so might as well do
> it sooner, rather than later.

I did not do this for multiple reasons.
The first -and by far the most important one- was that I desperately needed 
the data that was on the broken array (it all stemmed from a two-disk raid5 
failure).  I do not trust these vendor-diskettes to leave my data intact.

Secondly it was far easier for me to run this dd test under linux than find a 
loose floppydrive, connect it to my server, reinstate IRQ 6 and run various 
floppies against it.

And third, as the system had multiple identical promise addon controllers and 
7 mostly identical ide disks, I figured I couldn't be totally sure the disk 
linux saw as /dev/hdm was disk ??? on controller ?? as reported by such 
floppies.

I do recommend those utilities for a final go / no-go verdict, but you better 
run them on standalone systems on single disks.  
Or maybe I'm just being too paranoid is all...?

Maarten



* RE: Raid5 Construction Question
  2004-08-19 21:50         ` Maarten van den Berg
@ 2004-08-19 21:55           ` Guy
  0 siblings, 0 replies; 23+ messages in thread
From: Guy @ 2004-08-19 21:55 UTC (permalink / raw)
  To: linux-raid

I have Seagate disks.  I use a built-for-Linux tool supplied by Seagate
(seatoolsenterprise).  My disks are not SMART-compatible, so I use this
tool.  It can also set some Seagate-only disk parameters (I think Seagate
only).  The tool can test disks while the system is up and accessing the
disks, just like smartd can.  But as I said, I can't use smartd to test my
disks.

Guy

-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Maarten van den Berg
Sent: Thursday, August 19, 2004 5:51 PM
To: linux-raid@vger.kernel.org
Subject: Re: Raid5 Construction Question

On Thursday 19 August 2004 22:26, Kourosh wrote:
> On Thu, Aug 19, 2004 at 10:24:06PM +0200, Maarten van den Berg wrote:
> > On Thursday 19 August 2004 19:52, PAulN wrote:

> I've found that one of the better ways of verifying a disk is to run
> the disk manufacturer's disk utilities on it.  They all provide a
> bootable disk to run the utilities.  Several times I've had problems
> similar to this and each time it ended up being a disk that was
> failing.  Run the utility as all the vendors I've dealt with require
> the error code from the utility to process an RMA, so might as well do
> it sooner, rather than later.

I did not do this for multiple reasons.
The first -and by far the most important one- was that I desperately needed 
the data that was on the broken array (it all stemmed from a two-disk raid5 
failure).  I do not trust these vendor-diskettes to leave my data intact.

Secondly it was far easier for me to run this dd test under linux than find a 
loose floppydrive, connect it to my server, reinstate IRQ 6 and run various 
floppies against it.

And third, as the system had multiple identical promise addon controllers and 
7 mostly identical ide disks, I figured I couldn't be totally sure the disk 
linux saw as /dev/hdm was disk ??? on controller ?? as reported by such 
floppies.

I do recommend those utilities for a final go / no-go verdict, but you better 
run them on standalone systems on single disks.  
Or maybe I'm just being too paranoid is all...?

Maarten




* Re: Raid5 Construction Question
  2004-08-19 17:52   ` PAulN
  2004-08-19 18:17     ` Guy
  2004-08-19 20:24     ` Maarten van den Berg
@ 2004-08-20  1:21     ` Neil Brown
  2004-08-20  1:56       ` Guy
  2004-08-20  4:53       ` Paul Nowoczynski
  2 siblings, 2 replies; 23+ messages in thread
From: Neil Brown @ 2004-08-20  1:21 UTC (permalink / raw)
  To: PAulN; +Cc: Guy, linux-raid

On Thursday August 19, pauln@psc.edu wrote:
> 
> md0 : active raid5 sdh1[7] sdf1[6] sdg1[5] sde1[4] sdd1[3] sdc1[2] 
> sdb1[1] sda1[0]
>       481949184 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
>       [>....................]  resync =  2.4% (1936280/80324864) 
> finish=4261.4min speed=305K/sec
> 
> In the docs I saw that "reconstruction" is possible with raidhotadd but 
> I didn't
> see anything about initialization.   So am I "screwed" until the resync 
> is fixed?  I was depending
> on the disks to do some filesystem testing but maybe I'll have to wait a 
> few days.. 

When resyncing an array, raid5 will read all the blocks in each
stripe, check the parity, and if it is wrong, write out the new
parity.
For a mostly-correct array this just involves lots of sequential
reads.  For a mostly incorrect array, this involves lots of
read-seek-writes which is substantially slower.

When reconstructing onto a spare, raid5 will read all the good drives
and write to the spare.  All of this IO is sequential, there are no
seeks, and it is nice and fast.

It is for this reason that 'mdadm' will normally create a raid5 array
with one missing drive and one spare, which is then immediately
reconstructed. 

mkraid, on the other hand, doesn't know about this, and just creates
the array and the sync happens which, as you note, can be quite slow.

So, you can either re-make the array with mdadm or, you can fail one
drive, remove it and re-add it.

   raidsetfaulty /dev/md0 /dev/sdh1
   raidhotremove /dev/md0 /dev/sdh1
   raidhotadd /dev/md0 /dev/sdh1

In either case it should stop the resync and start reconstruction
which will be much faster.
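With mdadm, using the device names from your raidtab, that looks roughly
like the following.  The command is only echoed here, and the exact option
spellings should be checked against your mdadm's man page:

```shell
# Create the 7-disk array with one slot 'missing' and the disk that
# would have filled it supplied as a spare: md comes up degraded and
# immediately reconstructs onto the spare -- pure sequential reads and
# writes instead of the read-check-seek-write resync.
create="mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=7 \
/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdg1 missing \
--spare-devices=1 /dev/sdf1"

echo "$create"
# After the rebuild, add the intended hot spare back:
echo "mdadm /dev/md0 --add /dev/sdh1"
```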

NeilBrown


* RE: Raid5 Construction Question
  2004-08-20  1:21     ` Neil Brown
@ 2004-08-20  1:56       ` Guy
  2004-08-20  2:01         ` Neil Brown
  2004-08-20  4:53       ` Paul Nowoczynski
  1 sibling, 1 reply; 23+ messages in thread
From: Guy @ 2004-08-20  1:56 UTC (permalink / raw)
  To: 'Neil Brown'; +Cc: linux-raid

Why does it check the parity and not just write the correct parity?
Does it log the count that were wrong, or anything?

And does your new --update=resync option do the same thing?

Thanks,
Guy

-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Neil Brown
Sent: Thursday, August 19, 2004 9:21 PM
To: PAulN
Cc: Guy; linux-raid@vger.kernel.org
Subject: Re: Raid5 Construction Question

On Thursday August 19, pauln@psc.edu wrote:
> 
> md0 : active raid5 sdh1[7] sdf1[6] sdg1[5] sde1[4] sdd1[3] sdc1[2] 
> sdb1[1] sda1[0]
>       481949184 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
>       [>....................]  resync =  2.4% (1936280/80324864) 
> finish=4261.4min speed=305K/sec
> 
> In the docs I saw that "reconstruction" is possible with raidhotadd but 
> I didn't
> see anything about initialization.   So am I "screwed" until the resync 
> is fixed?  I was depending
> on the disks to do some filesystem testing but maybe I'll have to wait a 
> few days.. 

When resyncing an array, raid5 will read all the blocks in each
stripe, check the parity, and if it is wrong, write out the new
parity.
For a mostly-correct array this just involves lots of sequential
reads.  For a mostly incorrect array, this involves lots of
read-seek-writes which is substantially slower.

When reconstructing onto a spare, raid5 will read all the good drives
and write to the spare.  All of this IO is sequential, there are no
seeks, and it is nice and fast.

It is for this reason that 'mdadm' will normally create a raid5 array
with one missing drive and one spare, which is then immediately
reconstructed. 

mkraid, on the other hand, doesn't know about this, and just creates
the array and the sync happens which, as you note, can be quite slow.

So, you can either re-make the array with mdadm or, you can fail one
drive, remove it and re-add it.

   raidsetfaulty /dev/md0 /dev/sdh1
   raidhotremove /dev/md0 /dev/sdh1
   raidhotadd /dev/md0 /dev/sdh1

In either case it should stop the resync and start reconstruction
which will be much faster.

NeilBrown



* RE: Raid5 Construction Question
  2004-08-20  1:56       ` Guy
@ 2004-08-20  2:01         ` Neil Brown
  0 siblings, 0 replies; 23+ messages in thread
From: Neil Brown @ 2004-08-20  2:01 UTC (permalink / raw)
  To: Guy; +Cc: linux-raid

On Thursday August 19, bugzilla@watkins-home.com wrote:
> Why does it check the parity and not just write the correct parity?

Because on a mostly-in-sync array, checking the parity is faster than
correcting the parity.

> Does it log the count that were wrong, or anything?

no, but one day it will.

> 
> And does your new --update=resync option do the same thing?

It just marks an in-sync array as not-in-sync so that a resync is
forced.  It will do the same check-and-correct process for raid5.

NeilBrown


* Re: Raid5 Construction Question
  2004-08-20  1:21     ` Neil Brown
  2004-08-20  1:56       ` Guy
@ 2004-08-20  4:53       ` Paul Nowoczynski
  1 sibling, 0 replies; 23+ messages in thread
From: Paul Nowoczynski @ 2004-08-20  4:53 UTC (permalink / raw)
  To: Neil Brown; +Cc: Guy, linux-raid

Neil,
Thanks for the info - I tried something along these lines
but didn't know how to set the faulty disk.  I didn't see it in
a man page though I should have caught it with raid\tab\tab :)

Ironically, the performance did end up hovering around 30MB/sec
for most of the rebuild.

Thanks.
Paul

On Fri, 20 Aug 2004, Neil Brown wrote:

> On Thursday August 19, pauln@psc.edu wrote:
> >
> > md0 : active raid5 sdh1[7] sdf1[6] sdg1[5] sde1[4] sdd1[3] sdc1[2]
> > sdb1[1] sda1[0]
> >       481949184 blocks level 5, 64k chunk, algorithm 2 [7/7] [UUUUUUU]
> >       [>....................]  resync =  2.4% (1936280/80324864)
> > finish=4261.4min speed=305K/sec
> >
> > In the docs I saw that "reconstruction" is possible with raidhotadd but
> > I didn't
> > see anything about initialization.   So am I "screwed" until the resync
> > is fixed?  I was depending
> > on the disks to do some filesystem testing but maybe I'll have to wait a
> > few days..
>
> When resyncing an array, raid5 will read all the blocks in each
> stripe, check the parity, and if it is wrong, write out the new
> parity.
> For a mostly-correct array this just involves lots of sequential
> reads.  For a mostly incorrect array, this involves lots of
> read-seek-writes which is substantially slower.
>
> When reconstructing onto a spare, raid5 will read all the good drives
> and write to the spare.  All of this IO is sequential, there are no
> seeks, and it is nice and fast.
>
> It is for this reason that 'mdadm' will normally create a raid5 array
> with one missing drive and one spare, which is then immediately
> reconstructed.
>
> mkraid, on the other hand, doesn't know about this, and just creates
> the array and the sync happens which, as you note, can be quite slow.
>
> So, you can either re-make the array with mdadm or, you can fail one
> drive, remove it and re-add it.
>
>    raidsetfaulty /dev/md0 /dev/sdh1
>    raidhotremove /dev/md0 /dev/sdh1
>    raidhotadd /dev/md0 /dev/sdh1
>
> In either case it should stop the resync and start reconstruction
> which will be much faster.
>
> NeilBrown
>


* Re: Raid5 Construction Question
@ 2004-08-20  5:17 Mike Baynton
  2004-08-20  6:52 ` Guy
  0 siblings, 1 reply; 23+ messages in thread
From: Mike Baynton @ 2004-08-20  5:17 UTC (permalink / raw)
  To: linux-raid

Paul Nowoczynski wrote:

 > Neil,
 > Thanks for the info - I tried something along these lines
 > but didn't know how to set the faulty disk.  I didn't see it in
 > a man page though I should have caught it with raid\tab\tab :)


How 'bout that. Does somebody know if there is ANY man page detailing all 
of those little raidtools utilities? How are man pages maintained, 
anyway? Even though you can #raid\tab\tab it seems like they should at 
least be mentioned by name in the man pages somewhere, mkraid maybe. Or 
am I just missing it?

 >
 > Ironically, the performance did end up hovering around 30MB/sec
 > for most of the rebuild.
 >
 > Thanks.
 > Paul
 >



* RE: Raid5 Construction Question
  2004-08-20  5:17 Mike Baynton
@ 2004-08-20  6:52 ` Guy
  0 siblings, 0 replies; 23+ messages in thread
From: Guy @ 2004-08-20  6:52 UTC (permalink / raw)
  To: 'Mike Baynton', linux-raid

From what I have read on this list, raidtools is no longer supported.
This includes raidstart, raidstop, raidhotadd and raidhotremove, maybe more.

mdadm is the preferred tool to maintain your arrays.
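For anyone translating, the raidtools invocations seen in this thread map
onto mdadm roughly like this (a sketch from memory; verify against
"man mdadm"):

```shell
# raidtools command -> mdadm equivalent (echoed for reference only)
map="
raidstart /dev/md0               -> mdadm --assemble /dev/md0
raidstop /dev/md0                -> mdadm --stop /dev/md0
raidsetfaulty /dev/md0 /dev/sdh1 -> mdadm /dev/md0 --fail /dev/sdh1
raidhotremove /dev/md0 /dev/sdh1 -> mdadm /dev/md0 --remove /dev/sdh1
raidhotadd /dev/md0 /dev/sdh1    -> mdadm /dev/md0 --add /dev/sdh1
"
echo "$map"
```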

Guy

-----Original Message-----
From: linux-raid-owner@vger.kernel.org
[mailto:linux-raid-owner@vger.kernel.org] On Behalf Of Mike Baynton
Sent: Friday, August 20, 2004 1:18 AM
To: linux-raid@vger.kernel.org
Subject: Re: Raid5 Construction Question

Paul Nowoczynski wrote:

 > Neil,
 > Thanks for the info - I tried something along these lines
 > but didn't know how to set the faulty disk.  I didn't see it in
 > a man page though I should have caught it with raid\tab\tab :)


How 'bout that. Does somebody know if there is ANY man page detailing all 
of those little raidtools utilities? How are man pages maintained, 
anyway? Even though you can #raid\tab\tab it seems like they should at 
least be mentioned by name in the man pages somewhere, mkraid maybe. Or 
am I just missing it?

 >
 > Ironically, the performance did end up hovering around 30MB/sec
 > for most of the rebuild.
 >
 > Thanks.
 > Paul
 >




* Re: Raid5 Construction Question
  2004-08-19 20:30   ` Maarten van den Berg
@ 2004-08-23 15:24     ` Tim Moore
  2004-08-23 15:27       ` Gordon Henderson
  0 siblings, 1 reply; 23+ messages in thread
From: Tim Moore @ 2004-08-23 15:24 UTC (permalink / raw)
  To: linux-raid

Unknown.  Stride is part of mke2fs and I only use ext3.

rgds,
tim.

Maarten van den Berg wrote:
> On Thursday 19 August 2004 20:32, Tim Moore wrote:
> 
>>My raid tab has 'chunk-size 64' and I get 64k chunks.  Does 'chunk-size
>>64k' mean 64M chunks?
>>
>>Did you set stride correctly when running mke2fs (based on
>>chunk_size/fs_block_size)?
> 
> 
> I've often wondered about this stride setting when NOT using ext2 (ext3)...
> How do you specify it for other filesystems ?  And when you cannot, how does 
> it affect the performance of the FS ?  Take for example reiserfs, which does 
> not mention stride in its entire manpage.  What then ?
> 
> Maarten
> 
> 


* Re: Raid5 Construction Question
  2004-08-23 15:24     ` Tim Moore
@ 2004-08-23 15:27       ` Gordon Henderson
  0 siblings, 0 replies; 23+ messages in thread
From: Gordon Henderson @ 2004-08-23 15:27 UTC (permalink / raw)
  To: Tim Moore; +Cc: linux-raid

On Mon, 23 Aug 2004, Tim Moore wrote:

> Unknown.  Stride is part of mke2fs and I only use ext3.

But underneath every ext3 is an ext2 waiting to get out :)

Gordon


end of thread, other threads:[~2004-08-23 15:27 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-08-19 17:24 Raid5 Construction Question PAulN
2004-08-19 17:30 ` Guy
2004-08-19 17:52   ` PAulN
2004-08-19 18:17     ` Guy
2004-08-19 18:24       ` PAulN
2004-08-19 18:29         ` Guy
2004-08-19 18:35       ` Gordon Henderson
2004-08-19 20:24     ` Maarten van den Berg
2004-08-19 20:26       ` Kourosh
2004-08-19 20:39         ` Mike Hardy
2004-08-19 21:50         ` Maarten van den Berg
2004-08-19 21:55           ` Guy
2004-08-19 20:53       ` Guy
2004-08-20  1:21     ` Neil Brown
2004-08-20  1:56       ` Guy
2004-08-20  2:01         ` Neil Brown
2004-08-20  4:53       ` Paul Nowoczynski
2004-08-19 18:32 ` Tim Moore
2004-08-19 20:30   ` Maarten van den Berg
2004-08-23 15:24     ` Tim Moore
2004-08-23 15:27       ` Gordon Henderson
  -- strict thread matches above, loose matches on Subject: below --
2004-08-20  5:17 Mike Baynton
2004-08-20  6:52 ` Guy

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).