linux-raid.vger.kernel.org archive mirror
* mdadm creating raid5 with all spare disks
@ 2009-12-10 17:15 Matt Tehonica
  2009-12-10 20:10 ` Neil Brown
  0 siblings, 1 reply; 8+ messages in thread
From: Matt Tehonica @ 2009-12-10 17:15 UTC (permalink / raw)
  To: linux-raid@vger.kernel.org

Hi all,

I was just wondering if anyone else has seen a problem when creating a
RAID5 where mdadm will create the array but then show all disks as
spares, and above that show all drives as removed. I can't seem to
figure this out. Any suggestions would be great.

Thanks!

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: mdadm creating raid5 with all spare disks
  2009-12-10 17:15 mdadm creating raid5 with all spare disks Matt Tehonica
@ 2009-12-10 20:10 ` Neil Brown
  2009-12-11  0:04   ` Matthew Tehonica
  0 siblings, 1 reply; 8+ messages in thread
From: Neil Brown @ 2009-12-10 20:10 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: linux-raid@vger.kernel.org

On Thu, 10 Dec 2009 12:15:09 -0500
Matt Tehonica <matt.tehonica@mac.com> wrote:

> Hi all,
> 
> I was just wondering if anyone else has seen a problem when creating a
> RAID5 where mdadm will create the array but then show all disks as
> spares, and above that show all drives as removed. I can't seem to
> figure this out. Any suggestions would be great.

Maybe the raid456 module isn't getting loaded?

If you show us the exact command, the exact contents of /proc/mdstat,
and any kernel log messages generated at the time it will be a lot easier
to explain what is happening.
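
A quick way to rule the module question out is something like:

  cat /proc/mdstat        # the Personalities line should list [raid5]
  lsmod | grep raid456    # see whether the module is loaded
  sudo modprobe raid456   # and load it by hand if it isn't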

NeilBrown

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: mdadm creating raid5 with all spare disks
  2009-12-10 20:10 ` Neil Brown
@ 2009-12-11  0:04   ` Matthew Tehonica
  2009-12-11  0:47     ` Richard Scobie
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Tehonica @ 2009-12-11  0:04 UTC (permalink / raw)
  To: Neil Brown; +Cc: linux-raid@vger.kernel.org

 
On Thursday, December 10, 2009, at 03:10PM, "Neil Brown" <neilb@suse.de> wrote:
>On Thu, 10 Dec 2009 12:15:09 -0500
>Matt Tehonica <matt.tehonica@mac.com> wrote:
>
>> Hi all,
>> 
>> I was just wondering if anyone else has seen a problem when creating a
>> RAID5 where mdadm will create the array but then show all disks as
>> spares, and above that show all drives as removed. I can't seem to
>> figure this out. Any suggestions would be great.
>
>Maybe the raid456 module isn't getting loaded?
>
>If you show us the exact command, the exact contents of /proc/mdstat,
>and any kernel log messages generated at the time it will be a lot easier
>to explain what is happening.
>
>NeilBrown

Thanks for the response.  Here is some more info....

When I create the RAID5 using:

  sudo mdadm --create --verbose /dev/md2 --chunk=512 --level=raid5 \
      --raid-devices=3 /dev/sde /dev/sdf /dev/sdg

it shows:

mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: size set to 1465138176K
mdadm: Defaulting to version 1.1 metadata
mdadm: array /dev/md2 started.

So great, the array is created and started.  Then I cat /proc/mdstat and see: 

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md2 : active raid5 sdg[3] sdf[1] sde[0]
      2930276352 blocks super 1.1 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.0% (0/1465138176) finish=201456499.2min speed=0K/sec
      
After about 5 minutes, I try to cat /proc/mdstat again and it just
freezes and sits there with the cursor flashing.  At this point, I
can't do anything with mdadm, not even a --detail on another array on
the system.  It just sits there and flashes.  I've tried with a bunch
of different disks and it's the same thing every time.  Here is a
section from my kern.log.

Dec 10 19:01:07 ghostrider kernel: [ 1338.011320] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:01:07 ghostrider kernel: [ 1338.011330] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:01:07 ghostrider kernel: [ 1338.011343] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:01:07 ghostrider kernel: [ 1338.011349] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:01:07 ghostrider kernel: [ 1338.011443] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:01:07 ghostrider kernel: [ 1338.011449] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:01:07 ghostrider kernel: [ 1338.011461] end_request: I/O error, dev sdf, sector 0
Dec 10 19:01:07 ghostrider kernel: [ 1338.011472] md: super_written gets error=-5, uptodate=0
Dec 10 19:01:07 ghostrider kernel: [ 1338.011516] sd 6:0:2:0: [sdg] Unhandled error code
Dec 10 19:01:07 ghostrider kernel: [ 1338.011521] sd 6:0:2:0: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_TIMEOUT
Dec 10 19:01:07 ghostrider kernel: [ 1338.011529] end_request: I/O error, dev sdg, sector 2930277167
Dec 10 19:01:07 ghostrider kernel: [ 1338.031349] md: md2: recovery done.
Dec 10 19:01:07 ghostrider kernel: [ 1338.031556] RAID5 conf printout:
Dec 10 19:01:07 ghostrider kernel: [ 1338.031561]  --- rd:3 wd:0
Dec 10 19:01:07 ghostrider kernel: [ 1338.031566]  disk 0, o:0, dev:sde
Dec 10 19:01:07 ghostrider kernel: [ 1338.031571]  disk 1, o:0, dev:sdf
Dec 10 19:01:07 ghostrider kernel: [ 1338.031575]  disk 2, o:0, dev:sdg
Dec 10 19:01:07 ghostrider kernel: [ 1338.080421] RAID5 conf printout:
Dec 10 19:01:07 ghostrider kernel: [ 1338.080424]  --- rd:3 wd:0
Dec 10 19:01:07 ghostrider kernel: [ 1338.080427]  disk 0, o:0, dev:sde
Dec 10 19:01:07 ghostrider kernel: [ 1338.080429]  disk 1, o:0, dev:sdf
Dec 10 19:01:07 ghostrider kernel: [ 1338.080437] RAID5 conf printout:
Dec 10 19:01:07 ghostrider kernel: [ 1338.080439]  --- rd:3 wd:0
Dec 10 19:01:07 ghostrider kernel: [ 1338.080440]  disk 0, o:0, dev:sde
Dec 10 19:01:07 ghostrider kernel: [ 1338.080441]  disk 1, o:0, dev:sdf
Dec 10 19:01:07 ghostrider kernel: [ 1338.120169] RAID5 conf printout:
Dec 10 19:01:07 ghostrider kernel: [ 1338.120177]  --- rd:3 wd:0
Dec 10 19:01:07 ghostrider kernel: [ 1338.120184]  disk 0, o:0, dev:sde
Dec 10 19:01:07 ghostrider kernel: [ 1338.120200] RAID5 conf printout:
Dec 10 19:01:07 ghostrider kernel: [ 1338.120203]  --- rd:3 wd:0
Dec 10 19:01:07 ghostrider kernel: [ 1338.120206]  disk 0, o:0, dev:sde
Dec 10 19:01:07 ghostrider kernel: [ 1338.160092] RAID5 conf printout:
Dec 10 19:01:07 ghostrider kernel: [ 1338.160101]  --- rd:3 wd:0
Dec 10 19:01:38 ghostrider kernel: [ 1369.011322] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:01:38 ghostrider kernel: [ 1369.011332] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:01:38 ghostrider kernel: [ 1369.011346] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:01:38 ghostrider kernel: [ 1369.011352] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:01:38 ghostrider kernel: [ 1369.011360] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:01:38 ghostrider kernel: [ 1369.011365] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:02:09 ghostrider kernel: [ 1400.010077] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:02:09 ghostrider kernel: [ 1400.010087] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:02:09 ghostrider kernel: [ 1400.010101] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:02:09 ghostrider kernel: [ 1400.010107] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
Dec 10 19:02:09 ghostrider kernel: [ 1400.010115] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
Dec 10 19:02:09 ghostrider kernel: [ 1400.010120] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
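
For what it's worth, next time it hangs I can try dumping the blocked
tasks to the kernel log before rebooting; a rough sketch, assuming
SysRq is enabled on this box:

  # log a stack trace for every task stuck in uninterruptible sleep
  echo w | sudo tee /proc/sysrq-trigger
  dmesg | tail -n 60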

Any ideas??  Thanks!

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: mdadm creating raid5 with all spare disks
  2009-12-11  0:04   ` Matthew Tehonica
@ 2009-12-11  0:47     ` Richard Scobie
  2009-12-11  1:01       ` Matt Tehonica
  0 siblings, 1 reply; 8+ messages in thread
From: Richard Scobie @ 2009-12-11  0:47 UTC (permalink / raw)
  To: Matthew Tehonica; +Cc: Neil Brown, linux-raid@vger.kernel.org

Matthew Tehonica wrote:

> After about 5 minutes, I try to cat /proc/mdstat again and it just freezes and sits there with the cursor flashing.  At this point, I can't do anything with mdadm, not even a --detail on another array on the system.  It just sits there and flashes.  I've tried with a bunch of different disks and it's the same thing every time.  Here is a section from my kern.log.
> 
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011320] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011330] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011343] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011349] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011443] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011449] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011461] end_request: I/O error, dev sdf, sector 0
> Dec 10 19:01:07 ghostrider kernel: [ 1338.011472] md: super_written gets error=-5, uptodate=0

Looks like the drives are being dropped by the controller.

I believe there was some recent discussion here about mv_sas, md and 
failures under heavy I/O load.
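
If you want to rule out the drives themselves, a quick sanity check is
something along these lines (assuming smartmontools is installed):

  # ask each member disk for its health status and error log directly
  for d in /dev/sde /dev/sdf /dev/sdg; do
      sudo smartctl -H -l error "$d"
  done

If the drives all report healthy, the controller/driver is the likely
culprit.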

Regards,

Richard

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: mdadm creating raid5 with all spare disks
  2009-12-11  0:47     ` Richard Scobie
@ 2009-12-11  1:01       ` Matt Tehonica
  2009-12-11  9:08         ` Andre Tomt
  2009-12-11 13:33         ` Thomas Fjellstrom
  0 siblings, 2 replies; 8+ messages in thread
From: Matt Tehonica @ 2009-12-11  1:01 UTC (permalink / raw)
  To: Richard Scobie; +Cc: linux-raid@vger.kernel.org


On Dec 10, 2009, at 7:47 PM, Richard Scobie <richard@sauce.co.nz> wrote:

> Matthew Tehonica wrote:
>
>> After about 5 minutes, I try to cat /proc/mdstat again and it just  
>> freezes and sits there with the cursor flashing.  At this point, I  
>> can't do anything with mdadm, not even a --detail on another
>> array on the system.  It just sits there and flashes.  I've tried with
>> a bunch of different disks and it's the same thing every time.  Here is a
>> section from my kern.log.  Dec 10 19:01:07 ghostrider kernel: [ 1338.011320 
>> ] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c  
>> 1669:mvs_abort_task:rc= 5
>> Dec 10 19:01:07 ghostrider kernel: [ 1338.011330] /build/buildd/ 
>> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
>> Dec 10 19:01:07 ghostrider kernel: [ 1338.011343] /build/buildd/ 
>> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
>> Dec 10 19:01:07 ghostrider kernel: [ 1338.011349] /build/buildd/ 
>> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
>> Dec 10 19:01:07 ghostrider kernel: [ 1338.011443] /build/buildd/ 
>> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
>> Dec 10 19:01:07 ghostrider kernel: [ 1338.011449] /build/buildd/ 
>> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
>> Dec 10 19:01:07 ghostrider kernel: [ 1338.011461] end_request: I/O  
>> error, dev sdf, sector 0
>> Dec 10 19:01:07 ghostrider kernel: [ 1338.011472] md: super_written  
>> gets error=-5, uptodate=0
>
> Looks like the drives are being dropped by the controller.
>
> I believe there was some recent discussion here about mv_sas, md and  
> failures under heavy I/O load.
>
> Regards,
>
> Richard

Thanks Richard!  After some more review I found many people with this
problem, and it appears the linux-scsi devs submitted patches mid last
month.  Building the latest kernel as we speak!

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: mdadm creating raid5 with all spare disks
  2009-12-11  1:01       ` Matt Tehonica
@ 2009-12-11  9:08         ` Andre Tomt
  2009-12-11 13:33         ` Thomas Fjellstrom
  1 sibling, 0 replies; 8+ messages in thread
From: Andre Tomt @ 2009-12-11  9:08 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: Richard Scobie, linux-raid@vger.kernel.org

On 11.12.2009 02:01, Matt Tehonica wrote:
> On Dec 10, 2009, at 7:47 PM, Richard Scobie <richard@sauce.co.nz> wrote:
<snip>
>> Looks like the drives are being dropped by the controller.
>>
>> I believe there was some recent discussion here about mv_sas, md and
>> failures under heavy I/O load.
>
> Thanks Richard! After some more review I found many people with this
> problem, and it appears the linux-scsi devs submitted patches mid last
> month. Building the latest kernel as we speak!

FYI: the fixes have not made it to mainline yet, so 2.6.32 and 2.6.33-git
are still broken with this driver. I'm not sure if they've landed in a
scsi tree yet, but manually applying patches 1-6 from the linux-scsi
mailing list worked for me, at least to the point that it is now usable
(patch 7 is for a libsas change not in 2.6.32).
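
If anyone wants to repeat this, the rough recipe I used looks like the
sketch below; the patch file names are just whatever I saved the series
as, so adjust to taste:

  cd linux-2.6.32
  # apply the series in order; -p1 strips the a/ and b/ path prefixes
  for p in ~/mvsas/000[1-6]-*.patch; do patch -p1 < "$p"; done
  make oldconfig && make -j4
  sudo make modules_install install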

On a related note, the mptsas driver also seems to have issues with
newer HBAs. Non-RAID SAS HBAs don't seem to be in very good shape in
Linux these days. I might give an mpt2sas-driven controller a spin soon;
curious how that pans out.

-- 
André

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: mdadm creating raid5 with all spare disks
  2009-12-11  1:01       ` Matt Tehonica
  2009-12-11  9:08         ` Andre Tomt
@ 2009-12-11 13:33         ` Thomas Fjellstrom
  2009-12-11 17:54           ` Matt Tehonica
  1 sibling, 1 reply; 8+ messages in thread
From: Thomas Fjellstrom @ 2009-12-11 13:33 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: Richard Scobie, linux-raid@vger.kernel.org

On Thu December 10 2009, Matt Tehonica wrote:
> On Dec 10, 2009, at 7:47 PM, Richard Scobie <richard@sauce.co.nz> wrote:
> > Matthew Tehonica wrote:
> >> After about 5 minutes, I try to cat /proc/mdstat again and it just
> >> freezes and sits there with the cursor flashing.  At this point, I
> >> can't do anything with mdadm, not even a --detail on another
> >> array on the system.  It just sits there and flashes.  I've tried with
> >> a bunch of different disks and it's the same thing every time.  Here is a
> >> section from my kern.log.  Dec 10 19:01:07 ghostrider kernel: [
> >> 1338.011320 ] /build/buildd/linux-2.6.31/drivers/scsi/mvsas/mv_sas.c
> >> 1669:mvs_abort_task:rc= 5
> >> Dec 10 19:01:07 ghostrider kernel: [ 1338.011330] /build/buildd/
> >> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
> >> Dec 10 19:01:07 ghostrider kernel: [ 1338.011343] /build/buildd/
> >> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
> >> Dec 10 19:01:07 ghostrider kernel: [ 1338.011349] /build/buildd/
> >> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
> >> Dec 10 19:01:07 ghostrider kernel: [ 1338.011443] /build/buildd/
> >> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1669:mvs_abort_task:rc= 5
> >> Dec 10 19:01:07 ghostrider kernel: [ 1338.011449] /build/buildd/
> >> linux-2.6.31/drivers/scsi/mvsas/mv_sas.c 1608:mvs_query_task:rc= 5
> >> Dec 10 19:01:07 ghostrider kernel: [ 1338.011461] end_request: I/O
> >> error, dev sdf, sector 0
> >> Dec 10 19:01:07 ghostrider kernel: [ 1338.011472] md: super_written
> >> gets error=-5, uptodate=0
> >
> > Looks like the drives are being dropped by the controller.
> >
> > I believe there was some recent discussion here about mv_sas, md and
> > failures under heavy I/O load.
> >
> > Regards,
> >
> > Richard
> 
> Thanks Richard!  After some more review I found many people with this
> problem, and it appears the linux-scsi devs submitted patches mid last
> month. Building the latest kernel as we speak!

I'd be one of them ;) I have a somewhat old copy of the new mvsas drivers, 
and have been running a 5x1TB raid5 array for about a month now and haven't 
had any problems (other than 2.6.31 being horribly I/O bound).


-- 
Thomas Fjellstrom
tfjellstrom@shaw.ca

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: mdadm creating raid5 with all spare disks
  2009-12-11 13:33         ` Thomas Fjellstrom
@ 2009-12-11 17:54           ` Matt Tehonica
  0 siblings, 0 replies; 8+ messages in thread
From: Matt Tehonica @ 2009-12-11 17:54 UTC (permalink / raw)
  To: linux-raid


On Dec 11, 2009, at 8:33 AM, Thomas Fjellstrom wrote:

>> <snip>
> I'd be one of them ;) I have a somewhat old copy of the new mvsas  
> drivers,
> and have been running a 5x1TB raid5 array for about a month now and  
> haven't
> had any problems (other than 2.6.31 being horribly I/O bound).
>

Could someone expand on how I would go about getting those patches 1-6
and applying them?  I'm not sure where to grab them from or how to
apply them.

Thanks!

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2009-12-11 17:54 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-12-10 17:15 mdadm creating raid5 with all spare disks Matt Tehonica
2009-12-10 20:10 ` Neil Brown
2009-12-11  0:04   ` Matthew Tehonica
2009-12-11  0:47     ` Richard Scobie
2009-12-11  1:01       ` Matt Tehonica
2009-12-11  9:08         ` Andre Tomt
2009-12-11 13:33         ` Thomas Fjellstrom
2009-12-11 17:54           ` Matt Tehonica

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).