linux-raid.vger.kernel.org archive mirror
* Re: RAID5 (mdadm) array hosed after grow operation
       [not found] <1231144738.2997.1293010001@webmail.messagingengine.com>
@ 2009-01-05 14:13 ` Justin Piszcz
  2009-01-05 22:17   ` Neil Brown
  0 siblings, 1 reply; 7+ messages in thread
From: Justin Piszcz @ 2009-01-05 14:13 UTC (permalink / raw)
  To: whollygoat; +Cc: debian-user, linux-raid

cc linux-raid

On Mon, 5 Jan 2009, whollygoat@letterboxes.org wrote:

> I think growing my RAID array after replacing all the
> drives with bigger ones has somehow hosed the array.
>
> The system is Etch with a stock 2.6.18 kernel and
> mdadm v. 2.5.6, running on an Athlon 1700 box.
> The array is 6 disk (5 active, one spare) RAID 5
> that has been humming along quite nicely for
> a few months now.  However, I decided to replace
> all the drives with larger ones.
>
> The RAID reassembled fine at each boot as the drives
> were replaced one by one.  After the last drive was
> partitioned and added to the array, I issued the
> command
>
>   "mdadm -G /dev/md/0 -z max"
>
> to grow the array to the maximum space available
> on the smallest drive.  That appeared to work just
> fine at the time, but booting today the array
> refused to assemble with the following error:
>
>    md: hdg1 has invalid sb, not importing!
>    md: md_import_device returned -22
>
> I tried to force assembly but only two of the remaining
> 4 active drives appeared to be fault free.  dmesg gives
>
>    md: kicking non-fresh hde1 from array!
>    md: unbind<hde1>
>    md: export_rdev(hde1)
>    md: kicking non-fresh hdi1 from array!
>    md: unbind<hdi1>
>    md: export_rdev(hdi1)
>
> I also noticed that "mdadm -X <drive>" shows
> the pre-grow device size for 2 of the devices
> and some discrepancies between event and event cleared
> counts.
>
> One last thing I found curious---from dmesg:
>
>    EXT3-fs error (device hdg1): ext3_check_descriptors: Block
>    bitmap for group 0 not in group (block 2040936682)!
>    EXT3-fs: group descriptors corrupted!
>
> There is no ext3 directly on hdg1.  LVM sits between the array
> and the filesystem, so the above message seems suspect.
>
> I hope someone will be able to help me with this.  I feel
> like the info above is pertinent, but I don't know where to
> go from here.
>
> Thanks
> -- 
>
>  whollygoat@letterboxes.org
>
> -- 
> http://www.fastmail.fm - Does exactly what it says on the tin
>
>
> -- 
> To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
>


* Re: RAID5 (mdadm) array hosed after grow operation
  2009-01-05 14:13 ` RAID5 (mdadm) array hosed after grow operation Justin Piszcz
@ 2009-01-05 22:17   ` Neil Brown
  2009-01-06  8:45     ` whollygoat
  2009-01-08  4:19     ` whollygoat
  0 siblings, 2 replies; 7+ messages in thread
From: Neil Brown @ 2009-01-05 22:17 UTC (permalink / raw)
  To: whollygoat; +Cc: Justin Piszcz, debian-user, linux-raid

On Monday January 5, jpiszcz@lucidpixels.com wrote:
> cc linux-raid
> 
> On Mon, 5 Jan 2009, whollygoat@letterboxes.org wrote:
> 
> > I think growing my RAID array after replacing all the
> > drives with bigger ones has somehow hosed the array.
> >
> > The system is Etch with a stock 2.6.18 kernel and
> > mdadm v. 2.5.6, running on an Athlon 1700 box.
> > The array is 6 disk (5 active, one spare) RAID 5
> > that has been humming along quite nicely for
> > a few months now.  However, I decided to replace
> > all the drives with larger ones.
> >
> > The RAID reassembled fine at each boot as the drives
> > were replaced one by one.  After the last drive was
> > partitioned and added to the array, I issued the
> > command
> >
> >   "mdadm -G /dev/md/0 -z max"
> >
> > to grow the array to the maximum space available
> > on the smallest drive.  That appeared to work just
> > fine at the time, but booting today the array
> > refused to assemble with the following error:
> >
> >    md: hdg1 has invalid sb, not importing!
> >    md: md_import_device returned -22
> >
> > I tried to force assembly but only two of the remaining
> > 4 active drives appeared to be fault free.  dmesg gives
> >
> >    md: kicking non-fresh hde1 from array!
> >    md: unbind<hde1>
> >    md: export_rdev(hde1)
> >    md: kicking non-fresh hdi1 from array!
> >    md: unbind<hdi1>
> >    md: export_rdev(hdi1)

Please report
   mdadm --examine /dev/whatever
for every device that you think should be a part of the array.
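
If it is easier, something like this will gather them all in one pass
(just a sketch - substitute whatever your member devices actually are):

   for d in /dev/hd[egikmo]1; do echo "== $d =="; mdadm --examine "$d"; done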

> >
> > I also noticed that "mdadm -X <drive>" shows
> > the pre-grow device size for 2 of the devices
> > and some discrepancies between event and event cleared
> > counts.

You cannot grow an array with an active bitmap... or at least you
shouldn't be able to.  Maybe 2.6.18 didn't enforce that.  Maybe that
is what caused the problem - not sure.
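
A quick way to see whether a bitmap is configured, for what it is worth
(using hde1 as an example, as in your log):

   cat /proc/mdstat      # an assembled array with a bitmap shows a "bitmap: ..." line
   mdadm -X /dev/hde1    # dumps the bitmap superblock from one component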

> >
> > One last thing I found curious---from dmesg:
> >
> >    EXT3-fs error (device hdg1): ext3_check_descriptors: Block
> >    bitmap for group 0 not in group (block 2040936682)!
> >    EXT3-fs: group descriptors corrupted!
> >
> > There is no ext3 directly on hdg1.  LVM sits between the array
> > and the filesystem, so the above message seems suspect.

Seems like something got confused during boot and the wrong device got
mounted.  That is bad.

NeilBrown


* Re: RAID5 (mdadm) array hosed after grow operation
  2009-01-05 22:17   ` Neil Brown
@ 2009-01-06  8:45     ` whollygoat
  2009-01-08  4:19     ` whollygoat
  1 sibling, 0 replies; 7+ messages in thread
From: whollygoat @ 2009-01-06  8:45 UTC (permalink / raw)
  To: Neil Brown; +Cc: Justin Piszcz, debian-user, linux-raid


On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" <neilb@suse.de> said:
> On Monday January 5, jpiszcz@lucidpixels.com wrote:
> > cc linux-raid
> > 
> > On Mon, 5 Jan 2009, whollygoat@letterboxes.org wrote:
> > 
> > > I think growing my RAID array after replacing all the
> > > drives with bigger ones has somehow hosed the array.
> > >
> > > The system is Etch with a stock 2.6.18 kernel and
> > > mdadm v. 2.5.6, running on an Athlon 1700 box.
> > > The array is 6 disk (5 active, one spare) RAID 5
> > > that has been humming along quite nicely for
> > > a few months now.  However, I decided to replace
> > > all the drives with larger ones.
> > >
> > > The RAID reassembled fine at each boot as the drives
> > > were replaced one by one.  After the last drive was
> > > partitioned and added to the array, I issued the
> > > command
> > >
> > >   "mdadm -G /dev/md/0 -z max"
> > >
> > > to grow the array to the maximum space available
> > > on the smallest drive.  That appeared to work just
> > > fine at the time, but booting today the array
> > > refused to assemble with the following error:
> > >
> > >    md: hdg1 has invalid sb, not importing!
> > >    md: md_import_device returned -22
> > >
> > > I tried to force assembly but only two of the remaining
> > > 4 active drives appeared to be fault free.  dmesg gives
> > >
> > >    md: kicking non-fresh hde1 from array!
> > >    md: unbind<hde1>
> > >    md: export_rdev(hde1)
> > >    md: kicking non-fresh hdi1 from array!
> > >    md: unbind<hdi1>
> > >    md: export_rdev(hdi1)
> 
> Please report
>    mdadm --examine /dev/whatever
> for every device that you think should be a part of the array.

I noticed, as I copied and pasted the requested info below, that
"Device Size" and "Used Size" all make sense, whereas with the -X
option "Sync Size" still reflects the size of the swapped-out drives,
"39078016 (37.27 GiB 40.02 GB)", for hdg1 and hdo1.

Also, when booting today I was able to get my eyeballs moving fast
enough to capture boot messages I had noticed but couldn't decipher
yesterday: "incorrect meta data area header checksum" for hdo and hdg,
and for at least one, I think two, other drives that I still wasn't
fast enough to catch.

Also, with regard to your comment below, what do you mean by
"active bitmap"?  It seems to me I couldn't do anything with
the array until it was activated.

Hmm, just noticed something else that seems weird.  There seem
to be 10 and 11 placeholders (3 drives each) in the "Array Slot"
fields below, which is respectively 4 and 5 more places than there
are drives.

Thanks for your help.

------------- begin output --------------
fly:~# mdadm -E /dev/hde1
/dev/hde1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 6d57c75c:01b1b110:524cdc82:f2fc9c68
           Name : fly:FlyFileServ  (local to host fly)
  Creation Time : Mon Aug  4 00:59:16 2008
     Raid Level : raid5
   Raid Devices : 5

    Device Size : 160086320 (76.34 GiB 81.96 GB)
     Array Size : 625184768 (298.11 GiB 320.09 GB)
      Used Size : 156296192 (74.53 GiB 80.02 GB)
   Super Offset : 160086448 sectors
          State : clean
    Device UUID : d0992c0a:d645873f:d1e325cc:0a00327f

Internal Bitmap : 2 sectors from superblock
    Update Time : Sat Jan  3 21:31:41 2009
       Checksum : 1a5674a1 - correct
         Events : 218

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 9 (failed, failed, failed, failed, failed, empty, 3, 2,
    0, 1, 4)
   Array State : uUuuu 5 failed

 
fly:~# mdadm -E /dev/hdg1
/dev/hdg1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 6d57c75c:01b1b110:524cdc82:f2fc9c68
           Name : fly:FlyFileServ  (local to host fly)
  Creation Time : Mon Aug  4 00:59:16 2008
     Raid Level : raid5
   Raid Devices : 5

    Device Size : 156296176 (74.53 GiB 80.02 GB)
     Array Size : 625184768 (298.11 GiB 320.09 GB)
      Used Size : 156296192 (74.53 GiB 80.02 GB)
   Super Offset : 156296304 sectors
          State : clean
    Device UUID : 72b7258a:22e70cea:cc667617:8873796f

Internal Bitmap : 2 sectors from superblock
    Update Time : Sat Jan  3 21:31:41 2009
       Checksum : 7ff97f89 - correct
         Events : 218

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 10 (failed, failed, failed, failed, failed, empty, 3,
    2, 0, 1, 4)
   Array State : uuuuU 5 failed

 
fly:~# mdadm -E /dev/hdi1
/dev/hdi1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 6d57c75c:01b1b110:524cdc82:f2fc9c68
           Name : fly:FlyFileServ  (local to host fly)
  Creation Time : Mon Aug  4 00:59:16 2008
     Raid Level : raid5
   Raid Devices : 5

    Device Size : 160086320 (76.34 GiB 81.96 GB)
     Array Size : 625184768 (298.11 GiB 320.09 GB)
      Used Size : 156296192 (74.53 GiB 80.02 GB)
   Super Offset : 160086448 sectors
          State : clean
    Device UUID : ade7e4e9:e58dc8df:c36df5b7:a938711d

Internal Bitmap : 2 sectors from superblock
    Update Time : Sat Jan  3 21:31:41 2009
       Checksum : 245ecd1e - correct
         Events : 218

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 8 (failed, failed, failed, failed, failed, empty, 3, 2,
    0, 1, 4)
   Array State : Uuuuu 5 failed
 

fly:~# mdadm -E /dev/hdk1
/dev/hdk1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 6d57c75c:01b1b110:524cdc82:f2fc9c68
           Name : fly:FlyFileServ  (local to host fly)
  Creation Time : Mon Aug  4 00:59:16 2008
     Raid Level : raid5
   Raid Devices : 5

    Device Size : 234436336 (111.79 GiB 120.03 GB)
     Array Size : 625184768 (298.11 GiB 320.09 GB)
      Used Size : 156296192 (74.53 GiB 80.02 GB)
   Super Offset : 234436464 sectors
          State : clean
    Device UUID : a7c337b5:c3c02071:e0f1099c:6f14a48e

Internal Bitmap : 2 sectors from superblock
    Update Time : Sun Jan  4 16:15:10 2009
       Checksum : df2d3ea6 - correct
         Events : 222

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 7 (failed, failed, failed, failed, failed, empty, 3, 2,
    failed, failed)
   Array State : __Uu_ 7 failed
 
 
fly:~# mdadm -E /dev/hdm1
/dev/hdm1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 6d57c75c:01b1b110:524cdc82:f2fc9c68
           Name : fly:FlyFileServ  (local to host fly)
  Creation Time : Mon Aug  4 00:59:16 2008
     Raid Level : raid5
   Raid Devices : 5

    Device Size : 156360432 (74.56 GiB 80.06 GB)
     Array Size : 625184768 (298.11 GiB 320.09 GB)
      Used Size : 156296192 (74.53 GiB 80.02 GB)
   Super Offset : 156360560 sectors
          State : clean
    Device UUID : 01c88710:44a63ce1:ae1c03ba:0d8aaca0

Internal Bitmap : 2 sectors from superblock
    Update Time : Sun Jan  4 16:15:10 2009
       Checksum : d14c18ec - correct
         Events : 222

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 6 (failed, failed, failed, failed, failed, empty, 3, 2,
    failed, failed)
   Array State : __uU_ 7 failed

 
fly:~# mdadm -E /dev/hdo1
/dev/hdo1:
          Magic : a92b4efc
        Version : 01
    Feature Map : 0x1
     Array UUID : 6d57c75c:01b1b110:524cdc82:f2fc9c68
           Name : fly:FlyFileServ  (local to host fly)
  Creation Time : Mon Aug  4 00:59:16 2008
     Raid Level : raid5
   Raid Devices : 5

    Device Size : 234436336 (111.79 GiB 120.03 GB)
     Array Size : 625184768 (298.11 GiB 320.09 GB)
      Used Size : 156296192 (74.53 GiB 80.02 GB)
   Super Offset : 234436464 sectors
          State : clean
    Device UUID : bbb30d5a:39f90588:65d5b01c:3e1a4d9a

Internal Bitmap : 2 sectors from superblock
    Update Time : Sun Jan  4 16:15:10 2009
       Checksum : 27385082 - correct
         Events : 222

         Layout : left-symmetric
     Chunk Size : 64K

    Array Slot : 5 (failed, failed, failed, failed, failed, empty, 3, 2,
    failed, failed)
   Array State : __uu_ 7 failed

-------------- end output ---------------
> 
> > >
> > > I also noticed that "mdadm -X <drive>" shows
> > > the pre-grow device size for 2 of the devices
> > > and some discrepancies between event and event cleared
> > > counts.
> 
> You cannot grow an array with an active bitmap... or at least you
> shouldn't be able to.  Maybe 2.6.18 didn't enforce that.  Maybe that
> is what caused the problem - not sure.
> 
> > >
> > > One last thing I found curious---from dmesg:
> > >
> > >    EXT3-fs error (device hdg1): ext3_check_descriptors: Block
> > >    bitmap for group 0 not in group (block 2040936682)!
> > >    EXT3-fs: group descriptors corrupted!
> > >
> > > There is no ext3 directly on hdg1.  LVM sits between the array
> > > and the filesystem, so the above message seems suspect.
> 
> Seems like something got confused during boot and the wrong device got
> mounted.  That is bad.
> 
> NeilBrown
-- 
  
  whollygoat@letterboxes.org

-- 
http://www.fastmail.fm - The professional email service



* Re: RAID5 (mdadm) array hosed after grow operation
  2009-01-05 22:17   ` Neil Brown
  2009-01-06  8:45     ` whollygoat
@ 2009-01-08  4:19     ` whollygoat
       [not found]       ` <20090108101218.GI25654@samad.com.au>
  1 sibling, 1 reply; 7+ messages in thread
From: whollygoat @ 2009-01-08  4:19 UTC (permalink / raw)
  Cc: Justin Piszcz, debian-user, Neil Brown, linux-raid


On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" <neilb@suse.de> said:
> On Monday January 5, jpiszcz@lucidpixels.com wrote:
> > cc linux-raid
> > 
> > On Mon, 5 Jan 2009, whollygoat@letterboxes.org wrote:
> > 
> > > 

[snip]

> > > The RAID reassembled fine at each boot as the drives
> > > were replaced one by one.  After the last drive was
> > > partitioned and added to the array, I issued the
> > > command
> > >
> > >   "mdadm -G /dev/md/0 -z max"
> > >

[snip]

> 
> You cannot grow an array with an active bitmap... or at least you
> shouldn't be able to.  Maybe 2.6.18 didn't enforce that.  Maybe that
> is what caused the problem - not sure.
> 

I've decided to swap the smaller drives back in and start the upgrade 
process over again.  Seems that might be the fastest way to fix the 
problem.

How should I have done the grow operation if not as above?  The only
thing I see in man mdadm is the "-S" switch, which seems to disassemble
the array.  Maybe this is because I've only tried it on the degraded
array this problem has left me with.  At any rate, after

    mdadm -S /dev/md/0

running

    mdadm -D /dev/md/0

gave me an error to the effect that the array didn't exist or
couldn't be found.

Or do I maybe need to add "--bitmap=none" to remove the bitmap
when running the above grow command?

Hope you can help,

Thanks

goat
-- 
  
  whollygoat@letterboxes.org

-- 
http://www.fastmail.fm - Same, same, but different...



* Re: Alex Samad Re: RAID5 (mdadm) array hosed after grow operation
       [not found]       ` <20090108101218.GI25654@samad.com.au>
@ 2009-01-09  2:41         ` whollygoat
  2009-01-09 10:45           ` John Robinson
  0 siblings, 1 reply; 7+ messages in thread
From: whollygoat @ 2009-01-09  2:41 UTC (permalink / raw)
  To: debian-user; +Cc: linux-raid


On Thu, 8 Jan 2009 21:12:18 +1100, "Alex Samad" <alex@samad.com.au>
said:
> On Wed, Jan 07, 2009 at 08:19:05PM -0800, whollygoat@letterboxes.org
> wrote:
> > 
> > On Tue, 6 Jan 2009 09:17:46 +1100, "Neil Brown" <neilb@suse.de> said:
> 
> [snip]
> 
> > How should I have done the grow operation if not as above?  The only
> > thing I see in man mdadm is the "-S" switch, which seems to disassemble
> > the array.  Maybe this is because I've only tried it on the degraded
> > array this problem has left me with.  At any rate, after
> > 
> >     mdadm -S /dev/md/0
> > 
> 
> [snip]
> 
> > 
> > Hope you can help,
> 
> Hi
> 
> I have grown raid5 arrays either by disk number or by disk size; I have
> only ever used --grow and never used the -z option.
> 
> I would re-copy the info over from the small drives to the large drives
> (if you can have all the drives in at one time, that might be better).
> 
> Increase the partition size and then run --grow on the array.  I have
> done this going from 250G -> 500G -> 750G -> 1T, although when I have
> done it, I fail one drive, then add the new drive, expand the
> partition size and re-add it back into the array; once I have done all
> the drives I then run the grow.
> 

I'm not sure I understand what you mean.  When you copy the info over
and then increase the partition size, are you doing something like
dd if=smalldrive of=bigdrive, then using a tool like parted to resize
the partition?

I put the large drives in (as hot spares) with a single raid partition
(type fd) that uses the entire disk, so I can't increase their size any
further.  Then, when I failed each old drive, the data it contained was
rebuilt onto the larger hot spare.
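
For what it's worth, each swap went roughly like this (from memory,
with hdX1 standing in for the old member and hdY1 for its bigger
replacement):

    mdadm /dev/md/0 --add /dev/hdY1      # new disk goes in as a spare
    mdadm /dev/md/0 --fail /dev/hdX1     # fail the old member; md rebuilds onto the spare
    mdadm /dev/md/0 --remove /dev/hdX1   # pull the old disk once the rebuild finishes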

But anyway, I don't think that is going to matter.  The issue I am
trying to solve is how to de-activate the bitmap.  It was suggested on
the linux-raid list that my problem may have been caused by running the
grow op on an active bitmap, and I can't see from "man mdadm" how to
de-activate the bitmap.

The only thing I see about deactivation is --stop, and that disassembles
the array, in which case I can't run the grow command.  I read how to
remove the bitmap, but then I guess I would have to re-add it after the
grow op.  In any case, I would like to get this figured out without too
much experimentation, because swapping drives in and out and rebuilding
is pretty time-consuming, so I would really like to avoid fudging this
up again.

Thanks for your help

goat
-- 
  
  whollygoat@letterboxes.org

-- 
http://www.fastmail.fm - Access all of your messages and folders
                          wherever you are



* Re: Alex Samad Re: RAID5 (mdadm) array hosed after grow operation
  2009-01-09  2:41         ` Alex Samad " whollygoat
@ 2009-01-09 10:45           ` John Robinson
  2009-01-13  3:46             ` whollygoat
  0 siblings, 1 reply; 7+ messages in thread
From: John Robinson @ 2009-01-09 10:45 UTC (permalink / raw)
  To: whollygoat; +Cc: debian-user, linux-raid

On 09/01/2009 02:41, whollygoat@letterboxes.org wrote:
> But anyway, I don't think that is going to matter.  The issue I am
> trying to solve is how to de-activate the bitmap.  It was suggested on
> the linux-raid list that my problem may have been caused by running the
> grow op on an active bitmap, and I can't see from "man mdadm" how to
> de-activate the bitmap.

man mdadm tells me:
[...]
-b, --bitmap=
     Specify a file to store a write-intent bitmap in. The file should 
not exist unless --force is also given. The same file should be provided 
when assembling the array. If the word internal is given, then the 
bitmap is stored with the metadata on the array, and so is replicated on 
all devices. If the word none is given with --grow mode, then any bitmap 
that is present is removed.

So I imagine you'd want to
   # mdadm --grow /dev/mdX --bitmap=none
to de-activate the bitmap.
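
So, put together with the resize you ran, the whole thing would
presumably look something like this (untested on my part, /dev/mdX as
above):

   # mdadm --grow /dev/mdX --bitmap=none
   # mdadm --grow /dev/mdX --size=max

i.e. drop the bitmap first, then grow the component size; whether you
then want to put an internal bitmap back is up to you.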

Cheers,

John.



* Re: Alex Samad Re: RAID5 (mdadm) array hosed after grow operation
  2009-01-09 10:45           ` John Robinson
@ 2009-01-13  3:46             ` whollygoat
  0 siblings, 0 replies; 7+ messages in thread
From: whollygoat @ 2009-01-13  3:46 UTC (permalink / raw)
  To: debian-user, linux-raid; +Cc: John Robinson


On Fri, 09 Jan 2009 10:45:56 +0000, "John Robinson"
<john.robinson@anonymous.org.uk> said:
> On 09/01/2009 02:41, whollygoat@letterboxes.org wrote:
> > But anyway, I don't think that is going to matter.  The issue I am
> > trying to solve is how to de-activate the bitmap.  It was suggested on
> > the linux-raid list that my problem may have been caused by running the
> > grow op on an active bitmap, and I can't see from "man mdadm" how to
> > de-activate the bitmap.
> 
> man mdadm tells me:
> [...]
> -b, --bitmap=
>      Specify a file to store a write-intent bitmap in. The file should 
> not exist unless --force is also given. The same file should be provided 
> when assembling the array. If the word internal is given, then the 
> bitmap is stored with the metadata on the array, and so is replicated on 
> all devices. If the word none is given with --grow mode, then any bitmap 
> that is present is removed.
> 
> So I imagine you'd want to
>    # mdadm --grow /dev/mdX --bitmap=none
> to de-activate the bitmap.

The question that came to mind when I read that in the docs was how
to recreate the bitmap on an already-created array after nuking it
with "none".
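
Reading that man page section again, I would guess something like the
following would put an internal bitmap back afterwards, though I never
got as far as trying it:

    mdadm --grow /dev/md/0 --bitmap=internal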

I guess I also had doubts because the reply I had a few iterations
back didn't say that I shouldn't have performed the grow operation
on an existent bitmap, but on an active one, and I wasn't prepared to
make the leap from active/inactive to existent/non-existent.

But this has all become moot anyway.  When I put the original, smaller
drives back in, hoping to do the grow op over again, I was faced with a
similar problem assembling the array, so I'm guessing the problem was
caused by something other than the grow.  I put the larger drives in,
zeroed them, and am in the process of recreating the array and
file systems to be populated from backups.
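
In case it is useful to anyone hitting the same thing, the recreate is
going roughly along these lines (sketch only; 5 active devices plus a
spare, 64K chunks and an internal bitmap, as in the original array):

    mdadm --zero-superblock /dev/hd[egikmo]1
    mdadm --create /dev/md/0 --level=5 --raid-devices=5 --spare-devices=1 \
          --chunk=64 --bitmap=internal /dev/hd[egikmo]1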

Thanks for the input.

goat
-- 
  
  whollygoat@letterboxes.org

-- 
http://www.fastmail.fm - Faster than the air-speed velocity of an
                          unladen european swallow

