linux-raid.vger.kernel.org archive mirror
* RAID5 won't mount after reducing disks from 8 to 6
@ 2011-02-17  0:24 Matt Tehonica
  2011-02-17  1:30 ` Phil Turmel
  2011-02-17  1:42 ` NeilBrown
  0 siblings, 2 replies; 12+ messages in thread
From: Matt Tehonica @ 2011-02-17  0:24 UTC (permalink / raw)
  To: linux-raid

Hi all,

I converted my RAID5 from 8 disks to 6 using "mdadm --grow /dev/md0 --array-size" followed by "mdadm --grow /dev/md0 --raid-devices=6", and after rebooting it won't mount.  It had been running fine for about a year.  The file system is XFS.  Here is some info on it:

mtehonica@ghostrider:~$ sudo mdadm -D /dev/md0 
/dev/md0:
        Version : 1.01
  Creation Time : Mon Dec 21 19:40:58 2009
     Raid Level : raid5
     Array Size : 7325690880 (6986.32 GiB 7501.51 GB)
  Used Dev Size : 1465138176 (1397.26 GiB 1500.30 GB)
   Raid Devices : 6
  Total Devices : 8
    Persistence : Superblock is persistent

    Update Time : Wed Feb 16 19:13:50 2011
          State : clean
 Active Devices : 6
Working Devices : 8
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

           Name : ghostrider:0  (local to host ghostrider)
           UUID : b4ba7316:08976ceb:3ab7af7b:430ff147
         Events : 327264

    Number   Major   Minor   RaidDevice State
       0       8       80        0      active sync   /dev/sdf
       1       8       16        1      active sync   /dev/sdb
       3       8       64        2      active sync   /dev/sde
       4       8       48        3      active sync   /dev/sdd
       5       8        0        4      active sync   /dev/sda
       8       8      112        5      active sync   /dev/sdh

       6       8      144        -      spare   /dev/sdj
       7       8      128        -      spare   /dev/sdi
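
As a quick sanity check on the numbers above: for RAID5, Array Size should equal Used Dev Size times the number of data disks (raid-devices minus one). Using the KiB figures mdadm reported:

```shell
# mdadm -D reports sizes in KiB; RAID5 spends one disk's worth on parity.
used_dev_size=1465138176   # KiB, from the -D output above
raid_devices=6
echo $(( used_dev_size * (raid_devices - 1) ))   # -> 7325690880, matching Array Size
```

So the shrunken array geometry itself is internally consistent.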


mtehonica@ghostrider:~$ sudo mdadm -E /dev/sda 
/dev/sda:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x0
     Array UUID : b4ba7316:08976ceb:3ab7af7b:430ff147
           Name : ghostrider:0  (local to host ghostrider)
  Creation Time : Mon Dec 21 19:40:58 2009
     Raid Level : raid5
   Raid Devices : 6

 Avail Dev Size : 2930276904 (1397.26 GiB 1500.30 GB)
     Array Size : 14651381760 (6986.32 GiB 7501.51 GB)
  Used Dev Size : 2930276352 (1397.26 GiB 1500.30 GB)
    Data Offset : 264 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 83ff92c5:b7acdfcd:d3ace2b5:0ac3549a

    Update Time : Wed Feb 16 19:13:52 2011
       Checksum : 5053499e - correct
         Events : 327264

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAAA ('A' == active, '.' == missing)
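
Note that -E reports sizes in 512-byte sectors while -D uses KiB; halving the -E figures should reproduce the -D ones. A quick check with the values printed above:

```shell
# v1.x superblocks (-E) count 512-byte sectors; -D prints KiB.
echo $(( 14651381760 / 2 ))   # Array Size    -> 7325690880 KiB
echo $(( 2930276352 / 2 ))    # Used Dev Size -> 1465138176 KiB
```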


mtehonica@ghostrider:~$ sudo mount /dev/md0 /test
mount: /dev/md0: can't read superblock


I tried running xfs_repair and it just runs forever, printing:

	found candidate secondary superblock...
	error reading superblock 80 -- seek to offset 7501512704000 failed
	unable to verify superblock, continuing...

over and over.
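
Arithmetically, that failing offset lands just past the end of the shrunken device:

```shell
# New array size in bytes vs. the offset xfs_repair tried to seek to.
echo $(( 7325690880 * 1024 ))   # device now ends at 7501507461120 bytes
# The failed seek target, 7501512704000, is beyond that -- consistent with
# a filesystem whose geometry still spans the old, larger array.
```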

Any insight/help on this would be appreciated!

Thanks in advance!
Matt


* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  0:24 RAID5 won't mount after reducing disks from 8 to 6 Matt Tehonica
@ 2011-02-17  1:30 ` Phil Turmel
  2011-02-17  1:39   ` Matt Tehonica
                     ` (2 more replies)
  2011-02-17  1:42 ` NeilBrown
  1 sibling, 3 replies; 12+ messages in thread
From: Phil Turmel @ 2011-02-17  1:30 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: linux-raid

On 02/16/2011 07:24 PM, Matt Tehonica wrote:
> Hi all,
> 
> I converted my RAID5 from 8 disks to 6 disks using "mdadm --grow /dev/md0 --array-size" and then "mdadm --grow /dev/md0 --raid-devices=6" and after rebooting, it won't mount.  It has been running fine for about a year.  File system is XFS.  Here is some info on it....

Did you resize (shrink) the XFS filesystem first?



* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  1:30 ` Phil Turmel
@ 2011-02-17  1:39   ` Matt Tehonica
  2011-02-17  2:13     ` Phil Turmel
  2011-02-17  2:18   ` Keld Jørn Simonsen
  2011-02-17  2:43   ` Stan Hoeppner
  2 siblings, 1 reply; 12+ messages in thread
From: Matt Tehonica @ 2011-02-17  1:39 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

Thanks for the response!  All I did was truncate the array using "mdadm --grow /dev/md0 --array-size" and then resize the array using "mdadm --grow /dev/md0 --raid-devices=6" so I guess I didn't resize the XFS FS first.  Is that something I can do now or am I screwed?

On Feb 16, 2011, at 8:30 PM, Phil Turmel wrote:

> On 02/16/2011 07:24 PM, Matt Tehonica wrote:
>> Hi all,
>> 
>> I converted my RAID5 from 8 disks to 6 disks using "mdadm --grow /dev/md0 --array-size" and then "mdadm --grow /dev/md0 --raid-devices=6" and after rebooting, it won't mount.  It has been running fine for about a year.  File system is XFS.  Here is some info on it....
> 
> Did you resize (shrink) the XFS filesystem first?
> 



* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  0:24 RAID5 won't mount after reducing disks from 8 to 6 Matt Tehonica
  2011-02-17  1:30 ` Phil Turmel
@ 2011-02-17  1:42 ` NeilBrown
  2011-02-17  1:47   ` Matt Tehonica
  1 sibling, 1 reply; 12+ messages in thread
From: NeilBrown @ 2011-02-17  1:42 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: linux-raid

On Wed, 16 Feb 2011 19:24:41 -0500 Matt Tehonica <matt.tehonica@mac.com>
wrote:

> Hi all,
> 
> I converted my RAID5 from 8 disks to 6 disks using "mdadm --grow /dev/md0 --array-size" and then "mdadm --grow /dev/md0 --raid-devices=6" and after rebooting, it won't mount.  It has been running fine for about a year.  File system is XFS.  Here is some info on it....

.... uhmm... you did resize the filesystem to be smaller before you resized
the array to be smaller ... didn't you?

Because if you didn't, you have probably just lost over a quarter of your data.

That is the whole point of having to set the array-size first.  You then make
sure your data is still safe before performing the irreversible reshape.
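
Sketched as a dry run, the safe order is: shrink the data first, clamp the array size, verify, then reshape. (The array-size value below is hypothetical, and since XFS cannot shrink in place, the first step would really have to be a dump/restore rather than a resize.)

```shell
#!/bin/sh
# Dry-run sketch: each step is printed, not executed.
run() { echo "+ $*"; }   # replace the echo with "$@" to actually run the steps

# 1. make the data smaller than the target first (XFS: dump it; ext4: resize2fs)
run xfsdump -l 0 -f /backup/md0.dump /mnt/md0
# 2. clamp the visible array size and confirm the data is still readable
run mdadm --grow /dev/md0 --array-size=7325690880
# 3. only then perform the irreversible reshape
run mdadm --grow /dev/md0 --raid-devices=6
```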

If you did shrink the XFS filesystem first, then something else must be
wrong.

NeilBrown




* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  1:42 ` NeilBrown
@ 2011-02-17  1:47   ` Matt Tehonica
  2011-02-17  2:14     ` NeilBrown
  2011-02-17  3:11     ` Joe Landman
  0 siblings, 2 replies; 12+ messages in thread
From: Matt Tehonica @ 2011-02-17  1:47 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

To be honest, I couldn't find good directions on how exactly to resize an array to be smaller so I put some bits and pieces together to do it.  Guess I didn't understand everything beforehand because I didn't read anything about resizing the file system.  I thought setting the "array-size" would take care of moving data before rebuilding onto the 6 disks.

Since I haven't resized the XFS filesystem, any recommendations on what to do next?  Think it's possible to recover any of the data?  For what it's worth, I haven't done anything to the 2 disks that I was going to remove.

Thanks for the help.


On Feb 16, 2011, at 8:42 PM, NeilBrown wrote:

> On Wed, 16 Feb 2011 19:24:41 -0500 Matt Tehonica <matt.tehonica@mac.com>
> wrote:
> 
>> Hi all,
>> 
>> I converted my RAID5 from 8 disks to 6 disks using "mdadm --grow /dev/md0 --array-size" and then "mdadm --grow /dev/md0 --raid-devices=6" and after rebooting, it won't mount.  It has been running fine for about a year.  File system is XFS.  Here is some info on it....
> 
> .... uhmm... you did resize the filesystem to be smaller before you resized
> the array to be smaller ... didn't you?
> 
> Because if you didn't, you have probably just lost over a quarter of your data.
> 
> That is the whole point of having to set the array-size first.  You then make
> sure your data is still safe before performing the irreversible reshape.
> 
> If you did shrink the XFS filesystem first, then something else must be
> wrong.
> 
> NeilBrown
> 
> 



* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  1:39   ` Matt Tehonica
@ 2011-02-17  2:13     ` Phil Turmel
  0 siblings, 0 replies; 12+ messages in thread
From: Phil Turmel @ 2011-02-17  2:13 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: linux-raid

On 02/16/2011 08:39 PM, Matt Tehonica wrote:
> Thanks for the response!  All I did was truncate the array using "mdadm --grow /dev/md0 --array-size" and then resize the array using "mdadm --grow /dev/md0 --raid-devices=6" so I guess I didn't resize the XFS FS first.  Is that something I can do now or am I screwed?

Ouch.  I don't know if you are screwed.  If you wrote to it, you probably are.  But you haven't been able to mount it, I presume, so you might be OK.  It depends on whether the reshape is reversible.  If it's not, you'll have to ask the XFS experts at xfs@oss.sgi.com or linux-xfs@vger.kernel.org

Neil?


* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  1:47   ` Matt Tehonica
@ 2011-02-17  2:14     ` NeilBrown
  2011-02-17  2:22       ` Matt Tehonica
  2011-02-17  3:06       ` Stan Hoeppner
  2011-02-17  3:11     ` Joe Landman
  1 sibling, 2 replies; 12+ messages in thread
From: NeilBrown @ 2011-02-17  2:14 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: linux-raid

On Wed, 16 Feb 2011 20:47:08 -0500 Matt Tehonica <matt.tehonica@mac.com>
wrote:

> To be honest, I couldn't find good directions on how exactly to resize an array to be smaller so I put some bits and pieces together to do it.  Guess I didn't understand everything beforehand because I didn't read anything about resizing the file system.  I thought setting the "array-size" would take care of moving data before rebuilding onto the 6 disks.

The next release of mdadm will be a bit more explicit in the man page about
the need to resize the filesystem first - you aren't the first person to do
this.

> 
> Since I haven't resized the XFS filesystem, any recommendations on what to do next?  Think it's possible to recover any of the data?  For what it's worth, I haven't done anything to the 2 disks that I was going to remove.

8 to 6 disks in a RAID5 means 7 to 5 data disks, so over a quarter of the
space is gone.  And it really is gone.  Little bits of it might be on the two
devices that you were going to remove, but I doubt they would be usable, and
it would be very hard to recover.
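
For the record, since the array is RAID5, the usable area went from 7 data disks to 5; a one-liner puts a number on it:

```shell
# 8-disk RAID5 -> 7 data disks; 6-disk RAID5 -> 5 data disks.
awk 'BEGIN { printf "%.1f%% of the data area is gone\n", (7 - 5) / 7 * 100 }'
```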

So your best bet is to convince xfs_repair to work with what you've got and
try to knit together as much as it can - which may be nothing, I really
don't know.

Maybe you could ask on an XFS list somewhere.

But if you have any backups - I suggest they are by far your best bet.

I'm sorry, but it isn't really possible for mdadm to detect that you haven't
resized your filesystem and warn you about it :-(

NeilBrown



* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  1:30 ` Phil Turmel
  2011-02-17  1:39   ` Matt Tehonica
@ 2011-02-17  2:18   ` Keld Jørn Simonsen
  2011-02-17  2:43   ` Stan Hoeppner
  2 siblings, 0 replies; 12+ messages in thread
From: Keld Jørn Simonsen @ 2011-02-17  2:18 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Matt Tehonica, linux-raid

On Wed, Feb 16, 2011 at 08:30:27PM -0500, Phil Turmel wrote:
> On 02/16/2011 07:24 PM, Matt Tehonica wrote:
> > Hi all,
> > 
> > I converted my RAID5 from 8 disks to 6 disks using "mdadm --grow /dev/md0 --array-size" and then "mdadm --grow /dev/md0 --raid-devices=6" and after rebooting, it won't mount.  It has been running fine for about a year.  File system is XFS.  Here is some info on it....
> 
> Did you resize (shrink) the XFS filesystem first?

I think it is not possible to shrink an XFS filesystem.

best regards
keld


* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  2:14     ` NeilBrown
@ 2011-02-17  2:22       ` Matt Tehonica
  2011-02-17  3:06       ` Stan Hoeppner
  1 sibling, 0 replies; 12+ messages in thread
From: Matt Tehonica @ 2011-02-17  2:22 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid@vger.kernel.org

Yeah, I'm definitely not putting mdadm at fault for my screw up. I realize mdadm can't check to make sure you resized the filesystem. I'll check with the xfs crew and see if there is anything xfs_repair can do. 

Sent from my iPhone

On Feb 16, 2011, at 9:14 PM, NeilBrown <neilb@suse.de> wrote:

> On Wed, 16 Feb 2011 20:47:08 -0500 Matt Tehonica <matt.tehonica@mac.com>
> wrote:
> 
>> To be honest, I couldn't find good directions on how exactly to resize an array to be smaller so I put some bits and pieces together to do it.  Guess I didn't understand everything beforehand because I didn't read anything about resizing the file system.  I thought setting the "array-size" would take care of moving data before rebuilding onto the 6 disks.
> 
> The next release of mdadm will be a bit more explicit in the man page about
> the need to resize the filesystem first - you aren't the first person to do
> this.
> 
>> 
>> Since I haven't resized the XFS filesystem, any recommendations on what to do next?  Think it's possible to recover any of the data?  For what it's worth, I haven't done anything to the 2 disks that I was going to remove.
> 
> 8 to 6 disks in a RAID5 means 7 to 5 data disks, so over a quarter of the
> space is gone.  And it really is gone.  Little bits of it might be on the two
> devices that you were going to remove, but I doubt they would be usable, and
> it would be very hard to recover.
> 
> So your best bet is to convince xfs_repair to work with what you've got and
> try to knit together as much as it can - which may be nothing, I really
> don't know.
> 
> Maybe you could ask on an XFS list somewhere.
> 
> But if you have any backups - I suggest they are by far your best bet.
> 
> I'm sorry, but it isn't really possible for mdadm to detect that you haven't
> resized your filesystem and warn you about it :-(
> 
> NeilBrown
> 


* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  1:30 ` Phil Turmel
  2011-02-17  1:39   ` Matt Tehonica
  2011-02-17  2:18   ` Keld Jørn Simonsen
@ 2011-02-17  2:43   ` Stan Hoeppner
  2 siblings, 0 replies; 12+ messages in thread
From: Stan Hoeppner @ 2011-02-17  2:43 UTC (permalink / raw)
  To: Phil Turmel; +Cc: Matt Tehonica, linux-raid

Phil Turmel put forth on 2/16/2011 7:30 PM:
> On 02/16/2011 07:24 PM, Matt Tehonica wrote:
>> Hi all,
>>
>> I converted my RAID5 from 8 disks to 6 disks using "mdadm --grow /dev/md0 --array-size" and then "mdadm --grow /dev/md0 --raid-devices=6" and after rebooting, it won't mount.  It has been running fine for about a year.  File system is XFS.  Here is some info on it....
> 
> Did you resize (shrink) the XFS filesystem first?

You can't shrink XFS filesystems.

This has been talked about as a possibility in the future, but AFAIK
there's no code yet.

-- 
Stan


* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  2:14     ` NeilBrown
  2011-02-17  2:22       ` Matt Tehonica
@ 2011-02-17  3:06       ` Stan Hoeppner
  1 sibling, 0 replies; 12+ messages in thread
From: Stan Hoeppner @ 2011-02-17  3:06 UTC (permalink / raw)
  To: Linux RAID

NeilBrown put forth on 2/16/2011 8:14 PM:

> So your best bet is to convince xfs_repair to work with what you've got and
> try to knit together as much as it can - which may be nothing, I really
> don't know.

xfs_repair won't help.  He's hosed.  If this had been a grow from 8
disks to 10 he'd be OK, since in that direction you grow the md array
first and then XFS.  But as I said, XFS has no shrink capability.
xfs_repair will just puke all over itself if you run it.

> Maybe you could ask on an XFS list somewhere.

He already did, in a way, as I'm on that list.  You're more than welcome
to ask on the XFS mailing list, but you'll get the same answer.

This really sucks and I feel for Matt.  I wish he'd asked on either
or both lists first...

What he needs to do now is start over from scratch.  Delete the current
md device and create a new one, then create a new XFS filesystem, and
then restore his files from his backup device.
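
That start-over sequence might look like the following dry run.  The member
list matches the -D output earlier; the mount point and backup path are
hypothetical:

```shell
#!/bin/sh
# Dry-run sketch: prints each step instead of executing it.
run() { echo "+ $*"; }

run mdadm --stop /dev/md0
run mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=512 /dev/sd[abdefh]
run mkfs.xfs /dev/md0
run mount /dev/md0 /mnt/md0
run xfsrestore -f /backup/md0.dump /mnt/md0   # or whatever the backup tool is
```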

Is there a Linux mdraid best practices document somewhere, that could
help prevent folks from hosing themselves like this if they'd read it first?

-- 
Stan


* Re: RAID5 won't mount after reducing disks from 8 to 6
  2011-02-17  1:47   ` Matt Tehonica
  2011-02-17  2:14     ` NeilBrown
@ 2011-02-17  3:11     ` Joe Landman
  1 sibling, 0 replies; 12+ messages in thread
From: Joe Landman @ 2011-02-17  3:11 UTC (permalink / raw)
  To: Matt Tehonica; +Cc: linux-raid

On 02/16/2011 08:47 PM, Matt Tehonica wrote:
> To be honest, I couldn't find good directions on how exactly to
> resize an array to be smaller so I put some bits and pieces together
> to do it.  Guess I didn't understand everything beforehand because I
> didn't read anything about resizing the file system.  I thought
> setting the "array-size" would take care of moving data before
> rebuilding onto the 6 disks.
>
> Since I haven't resized the XFS filesystem, any recommendations on
> what to do next?  Think it's possible to recover any of the data?
> For what it's worth, I haven't done anything to the 2 disks that I
> was going to remove.

XFS isn't shrinkable.  See if

	xfs_check /dev/mdX

tells you anything.  If you have a backup, a restore would be the 
fastest option.  If you don't, and you have no other possible way of 
recovering the data from another source, try

	xfs_repair /dev/mdX

I usually run it with the verbose flag on so I can see what it's doing.

I am not sure it will work, or result in recoverable data.

If you need to shrink xfs file systems, in general, it is best to use 
xfsdump/xfsrestore or tar.


-- 
Joe Landman
landman@scalableinformatics.com

