* RAID5 superblock and filesystem recovery after re-creation
@ 2012-07-08 21:47 Alexander Schleifer
  2012-07-08 22:13 ` NeilBrown
From: Alexander Schleifer @ 2012-07-08 21:47 UTC (permalink / raw)
  To: linux-raid

Hi,

after a new installation of Ubuntu, my RAID5 array was set to
"inactive". All member devices were marked as spares and the RAID
level was shown as unknown. So I tried to re-create the array with
the following command:

mdadm --create /dev/md0 --assume-clean --level=5 --raid-disk=6
--chunk=512 --metadata=1.2 /dev/sde /dev/sdd /dev/sda /dev/sdc
/dev/sdg /dev/sdh

I have a backup of the mdadm -Evvvvs output, so I could recover the
chunk size, metadata and offset (2048) from this information.
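
For reference, such a backup can be captured with something along
these lines (the output file name is just an example):

  mdadm -Evvvvs > /root/mdadm-examine-backup.txt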

The partial output of mdadm --create looks like this:

...
mdadm: /dev/sde appears to be part of a raid array:
    level=raid5 devices=6 ctime=Sun Jul  8 23:02:51 2012
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
...

The array is re-created, but no valid filesystem is found on /dev/md0
(dumpe2fs: Filesystem revision too high while trying to open /dev/md0.
Couldn't find valid filesystem superblock.). fdisk /dev/sde also shows
no partitions.
My next step would be to create Linux-RAID-type partitions on the 6
devices with fdisk and then to call mdadm --create with /dev/sde1,
/dev/sdd1, and so on.
Would that be a viable way to recover the filesystem?

Thanks for any help,
-Alex


* Re: RAID5 superblock and filesystem recovery after re-creation
  2012-07-08 21:47 RAID5 superblock and filesystem recovery after re-creation Alexander Schleifer
@ 2012-07-08 22:13 ` NeilBrown
  2012-07-08 22:45   ` Alexander Schleifer
From: NeilBrown @ 2012-07-08 22:13 UTC (permalink / raw)
  To: Alexander Schleifer; +Cc: linux-raid

On Sun, 8 Jul 2012 23:47:16 +0200 Alexander Schleifer
<alexander.schleifer@googlemail.com> wrote:

> Hi,
> 
> after a new installation of Ubuntu, my RAID5 array was set to
> "inactive". All member devices were marked as spares and the RAID
> level was shown as unknown. So I tried to re-create the array with
> the following command:

Sorry about that.  In case you haven't seen it,
   http://neil.brown.name/blog/20120615073245
explains the background.

> 
> mdadm --create /dev/md0 --assume-clean --level=5 --raid-disk=6
> --chunk=512 --metadata=1.2 /dev/sde /dev/sdd /dev/sda /dev/sdc
> /dev/sdg /dev/sdh
> 
> I have a backup of the mdadm -Evvvvs output, so I could recover the
> chunk size, metadata and offset (2048) from this information.
> 
> The partial output of mdadm --create looks like this:
> 
> ...
> mdadm: /dev/sde appears to be part of a raid array:
>     level=raid5 devices=6 ctime=Sun Jul  8 23:02:51 2012
> mdadm: partition table exists on /dev/sde but will be lost or
>        meaningless after creating array
> ...
> 
> The array is re-created, but no valid filesystem is found on /dev/md0
> (dumpe2fs: Filesystem revision too high while trying to open /dev/md0.
> Couldn't find valid filesystem superblock.). fdisk /dev/sde also shows
> no partitions.
> My next step would be to create Linux-RAID-type partitions on the 6
> devices with fdisk and then to call mdadm --create with /dev/sde1,
> /dev/sdd1, and so on.
> Would that be a viable way to recover the filesystem?

Depends. Was the original array created on partitions, or on whole devices?
The saved '-E' output should show that.

Maybe you have the devices in the wrong order.  The order you have looks odd
for a recently created array.
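
A quick way to cross-check against the saved output would be something
like this (a sketch - it assumes the old "mdadm -Evvvvs" output was
saved to a file, here called mdadm-examine-backup.txt):

  grep -E '^/dev/|Device UUID|Device Role' mdadm-examine-backup.txt

That shows whether the members were whole devices or partitions, and
which role each device had.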

NeilBrown


* Re: RAID5 superblock and filesystem recovery after re-creation
  2012-07-08 22:13 ` NeilBrown
@ 2012-07-08 22:45   ` Alexander Schleifer
  2012-07-09  0:02     ` NeilBrown
From: Alexander Schleifer @ 2012-07-08 22:45 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

2012/7/9 NeilBrown <neilb@suse.de>:
> On Sun, 8 Jul 2012 23:47:16 +0200 Alexander Schleifer
> <alexander.schleifer@googlemail.com> wrote:
>
>> Hi,
>>
>> after a new installation of Ubuntu, my RAID5 array was set to
>> "inactive". All member devices were marked as spares and the RAID
>> level was shown as unknown. So I tried to re-create the array with
>> the following command:
>
> Sorry about that.  In case you haven't seen it,
>    http://neil.brown.name/blog/20120615073245
> explains the background.
>
>>
>> mdadm --create /dev/md0 --assume-clean --level=5 --raid-disk=6
>> --chunk=512 --metadata=1.2 /dev/sde /dev/sdd /dev/sda /dev/sdc
>> /dev/sdg /dev/sdh
>>
>> I have a backup of the mdadm -Evvvvs output, so I could recover the
>> chunk size, metadata and offset (2048) from this information.
>>
>> The partial output of mdadm --create looks like this:
>>
>> ...
>> mdadm: /dev/sde appears to be part of a raid array:
>>     level=raid5 devices=6 ctime=Sun Jul  8 23:02:51 2012
>> mdadm: partition table exists on /dev/sde but will be lost or
>>        meaningless after creating array
>> ...
>>
>> The array is re-created, but no valid filesystem is found on /dev/md0
>> (dumpe2fs: Filesystem revision too high while trying to open /dev/md0.
>> Couldn't find valid filesystem superblock.). fdisk /dev/sde also shows
>> no partitions.
>> My next step would be to create Linux-RAID-type partitions on the 6
>> devices with fdisk and then to call mdadm --create with /dev/sde1,
>> /dev/sdd1, and so on.
>> Would that be a viable way to recover the filesystem?
>
> Depends. Was the original array created on partitions, or on whole devices?
> The saved '-E' output should show that.
>
> Maybe you have the devices in the wrong order.  The order you have looks odd
> for a recently created array.
>
> NeilBrown

The original array was created on whole devices, as the saved output
starts with e.g. "/dev/sde:".
I used the order of the 'Device UUID' from the saved output to
recreate the order in the new system (the ports changed due to a new
mainboard). After the installation I had a degraded array in
initramfs, but I was able to simply "exit" the debug shell and the
array was accessible. I will now skip the step of creating raid type
partitions and try every possible order of devices.

Thanks,
-Alex


* Re: RAID5 superblock and filesystem recovery after re-creation
  2012-07-08 22:45   ` Alexander Schleifer
@ 2012-07-09  0:02     ` NeilBrown
  2012-07-09  6:50       ` Alexander Schleifer
From: NeilBrown @ 2012-07-09  0:02 UTC (permalink / raw)
  To: Alexander Schleifer; +Cc: linux-raid

On Mon, 9 Jul 2012 00:45:08 +0200 Alexander Schleifer
<alexander.schleifer@googlemail.com> wrote:

> 2012/7/9 NeilBrown <neilb@suse.de>:
> > On Sun, 8 Jul 2012 23:47:16 +0200 Alexander Schleifer
> > <alexander.schleifer@googlemail.com> wrote:
> >
> >> Hi,
> >>
> >> after a new installation of Ubuntu, my RAID5 array was set to
> >> "inactive". All member devices were marked as spares and the RAID
> >> level was shown as unknown. So I tried to re-create the array with
> >> the following command:
> >
> > Sorry about that.  In case you haven't seen it,
> >    http://neil.brown.name/blog/20120615073245
> > explains the background.
> >
> >>
> >> mdadm --create /dev/md0 --assume-clean --level=5 --raid-disk=6
> >> --chunk=512 --metadata=1.2 /dev/sde /dev/sdd /dev/sda /dev/sdc
> >> /dev/sdg /dev/sdh
> >>
> >> I have a backup of the mdadm -Evvvvs output, so I could recover the
> >> chunk size, metadata and offset (2048) from this information.
> >>
> >> The partial output of mdadm --create looks like this:
> >>
> >> ...
> >> mdadm: /dev/sde appears to be part of a raid array:
> >>     level=raid5 devices=6 ctime=Sun Jul  8 23:02:51 2012
> >> mdadm: partition table exists on /dev/sde but will be lost or
> >>        meaningless after creating array
> >> ...
> >>
> >> The array is re-created, but no valid filesystem is found on /dev/md0
> >> (dumpe2fs: Filesystem revision too high while trying to open /dev/md0.
> >> Couldn't find valid filesystem superblock.). fdisk /dev/sde also shows
> >> no partitions.
> >> My next step would be to create Linux-RAID-type partitions on the 6
> >> devices with fdisk and then to call mdadm --create with /dev/sde1,
> >> /dev/sdd1, and so on.
> >> Would that be a viable way to recover the filesystem?
> >
> > Depends. Was the original array created on partitions, or on whole devices?
> > The saved '-E' output should show that.
> >
> > Maybe you have the devices in the wrong order.  The order you have looks odd
> > for a recently created array.
> >
> > NeilBrown
> 
> The original array was created on whole devices, as the saved output
> starts with e.g. "/dev/sde:".

Right, so you definitely don't want to create partitions.  Maybe when mdadm
reported "partition table exists' it was a false positive, or maybe old
information - creating a 1.2 array doesn't destroy the partition table.
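
The 1.2 superblock lives 4K from the start of the device, so sector 0,
where an MBR would sit, is never touched.  To see which signatures are
actually present without modifying anything, something like this should
do (with no options, wipefs only lists what it finds):

  wipefs /dev/sde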

> I used the order of the 'Device UUID' from the saved output to
> recreate the order in the new system (the ports changed due to a new
> mainboard).

When you say "the order", do you mean the numerical order?

If you looked at the old "mdadm -E" output matching the "Device Role" with
"Device UUID" to determine the order of the UUIDs, then looked at the
"mdadm -E" output after the metadata got corrupted and used the "Device UUID"
to determine the correct "Device Role", then ordered the devices by that Role,
then that should have worked.
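
As a rough sketch of that matching (assuming the old output is saved
in mdadm-examine-backup.txt and that all members were active devices),
the role-ordered list could be pulled out like this:

  awk '/^\/dev\//    { dev = $1 }
       /Device UUID/ { uuid = $4 }
       /Device Role/ { print $NF, uuid, dev }' mdadm-examine-backup.txt | sort -n

Running the same over the current "mdadm -E" output and matching on the
"Device UUID" column then maps each disk to its old role.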

I assume you did have a filesystem directly on /dev/md0, and hadn't
partitioned it or used LVM on it?

NeilBrown


>              After the installation I had a degraded array in
> initramfs, but I was able to simply "exit" the debug shell and the
> array was accessible. I will now skip the step of creating raid type
> partitions and try every possible order of devices.
> 
> Thanks,
> -Alex



* Re: RAID5 superblock and filesystem recovery after re-creation
  2012-07-09  0:02     ` NeilBrown
@ 2012-07-09  6:50       ` Alexander Schleifer
  2012-07-09  7:08         ` NeilBrown
From: Alexander Schleifer @ 2012-07-09  6:50 UTC (permalink / raw)
  To: NeilBrown; +Cc: linux-raid

2012/7/9 NeilBrown <neilb@suse.de>:
> On Mon, 9 Jul 2012 00:45:08 +0200 Alexander Schleifer
> <alexander.schleifer@googlemail.com> wrote:
>
>> 2012/7/9 NeilBrown <neilb@suse.de>:
>> > On Sun, 8 Jul 2012 23:47:16 +0200 Alexander Schleifer
>> > <alexander.schleifer@googlemail.com> wrote:
>> >
>> >> Hi,
>> >>
>> >> after a new installation of Ubuntu, my RAID5 array was set to
>> >> "inactive". All member devices were marked as spares and the RAID
>> >> level was shown as unknown. So I tried to re-create the array with
>> >> the following command:
>> >
>> > Sorry about that.  In case you haven't seen it,
>> >    http://neil.brown.name/blog/20120615073245
>> > explains the background.
>> >
>> >>
>> >> mdadm --create /dev/md0 --assume-clean --level=5 --raid-disk=6
>> >> --chunk=512 --metadata=1.2 /dev/sde /dev/sdd /dev/sda /dev/sdc
>> >> /dev/sdg /dev/sdh
>> >>
>> >> I have a backup of the mdadm -Evvvvs output, so I could recover the
>> >> chunk size, metadata and offset (2048) from this information.
>> >>
>> >> The partial output of mdadm --create looks like this:
>> >>
>> >> ...
>> >> mdadm: /dev/sde appears to be part of a raid array:
>> >>     level=raid5 devices=6 ctime=Sun Jul  8 23:02:51 2012
>> >> mdadm: partition table exists on /dev/sde but will be lost or
>> >>        meaningless after creating array
>> >> ...
>> >>
>> >> The array is re-created, but no valid filesystem is found on /dev/md0
>> >> (dumpe2fs: Filesystem revision too high while trying to open /dev/md0.
>> >> Couldn't find valid filesystem superblock.). fdisk /dev/sde also shows
>> >> no partitions.
>> >> My next step would be to create Linux-RAID-type partitions on the 6
>> >> devices with fdisk and then to call mdadm --create with /dev/sde1,
>> >> /dev/sdd1, and so on.
>> >> Would that be a viable way to recover the filesystem?
>> >
>> > Depends. Was the original array created on partitions, or on whole devices?
>> > The saved '-E' output should show that.
>> >
>> > Maybe you have the devices in the wrong order.  The order you have looks odd
>> > for a recently created array.
>> >
>> > NeilBrown
>>
>> The original array was created on whole devices, as the saved output
>> starts with e.g. "/dev/sde:".
>
> Right, so you definitely don't want to create partitions.  Maybe when mdadm
> reported "partition table exists' it was a false positive, or maybe old
> information - creating a 1.2 array doesn't destroy the partition table.
>
>> I used the order of the 'Device UUID' from the saved output to
>> recreate the order in the new system (the ports changed due to a new
>> mainboard).
>
> When you say "the order", do you mean the numerical order?
>
> If you looked at the old "mdadm -E" output matching the "Device Role" with
> "Device UUID" to determine the order of the UUIDs, then looked at the
> "mdadm -E" output after the metadata got corrupted and used the "Device UUID"
> to determine the correct "Device Role", then ordered the devices by that Role,
> then that should have worked.

OK, I had used the "Device UUID" values only to get the order. I have
now reordered my "mdadm --create ..." call according to the old
"Device Role" values, and it works ;)

>
> I assume you did have a filesystem directly on /dev/md0, and hadn't
> partitioned it or used LVM on it?

Yes, the devices are all the same type, so I used the whole devices
and created a filesystem directly on /dev/md0.

fsck has now been running pass 1 for a few minutes without any
errors, so I think everything is fine. Thank you for helping to get my
RAID back to life ;-)
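
For anyone in a similar situation: before letting fsck modify anything,
a read-only pass is the safer first step.  A sketch, assuming an
ext-family filesystem as used here:

  fsck.ext4 -n /dev/md0    # -n: open read-only, answer "no" to all questions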

-Alex

>
> NeilBrown
>
>
>>              After the installation I had a degraded array in
>> initramfs, but I was able to simply "exit" the debug shell and the
>> array was accessible. I will now skip the step of creating raid type
>> partitions and try every possible order of devices.
>>
>> Thanks,
>> -Alex
>


* Re: RAID5 superblock and filesystem recovery after re-creation
  2012-07-09  6:50       ` Alexander Schleifer
@ 2012-07-09  7:08         ` NeilBrown
From: NeilBrown @ 2012-07-09  7:08 UTC (permalink / raw)
  To: Alexander Schleifer; +Cc: linux-raid

On Mon, 9 Jul 2012 08:50:16 +0200 Alexander Schleifer
<alexander.schleifer@googlemail.com> wrote:

> 2012/7/9 NeilBrown <neilb@suse.de>:
> > On Mon, 9 Jul 2012 00:45:08 +0200 Alexander Schleifer
> > <alexander.schleifer@googlemail.com> wrote:
> >
> >> 2012/7/9 NeilBrown <neilb@suse.de>:
> >> > On Sun, 8 Jul 2012 23:47:16 +0200 Alexander Schleifer
> >> > <alexander.schleifer@googlemail.com> wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> after a new installation of Ubuntu, my RAID5 array was set to
> >> >> "inactive". All member devices were marked as spares and the RAID
> >> >> level was shown as unknown. So I tried to re-create the array with
> >> >> the following command:
> >> >
> >> > Sorry about that.  In case you haven't seen it,
> >> >    http://neil.brown.name/blog/20120615073245
> >> > explains the background.
> >> >
> >> >>
> >> >> mdadm --create /dev/md0 --assume-clean --level=5 --raid-disk=6
> >> >> --chunk=512 --metadata=1.2 /dev/sde /dev/sdd /dev/sda /dev/sdc
> >> >> /dev/sdg /dev/sdh
> >> >>
> >> >> I have a backup of the mdadm -Evvvvs output, so I could recover the
> >> >> chunk size, metadata and offset (2048) from this information.
> >> >>
> >> >> The partial output of mdadm --create looks like this:
> >> >>
> >> >> ...
> >> >> mdadm: /dev/sde appears to be part of a raid array:
> >> >>     level=raid5 devices=6 ctime=Sun Jul  8 23:02:51 2012
> >> >> mdadm: partition table exists on /dev/sde but will be lost or
> >> >>        meaningless after creating array
> >> >> ...
> >> >>
> >> >> The array is re-created, but no valid filesystem is found on /dev/md0
> >> >> (dumpe2fs: Filesystem revision too high while trying to open /dev/md0.
> >> >> Couldn't find valid filesystem superblock.). fdisk /dev/sde also shows
> >> >> no partitions.
> >> >> My next step would be to create Linux-RAID-type partitions on the 6
> >> >> devices with fdisk and then to call mdadm --create with /dev/sde1,
> >> >> /dev/sdd1, and so on.
> >> >> Would that be a viable way to recover the filesystem?
> >> >
> >> > Depends. Was the original array created on partitions, or on whole devices?
> >> > The saved '-E' output should show that.
> >> >
> >> > Maybe you have the devices in the wrong order.  The order you have looks odd
> >> > for a recently created array.
> >> >
> >> > NeilBrown
> >>
> >> The original array was created on whole devices, as the saved output
> >> starts with e.g. "/dev/sde:".
> >
> > Right, so you definitely don't want to create partitions.  Maybe when mdadm
> > reported "partition table exists' it was a false positive, or maybe old
> > information - creating a 1.2 array doesn't destroy the partition table.
> >
> >> I used the order of the 'Device UUID' from the saved output to
> >> recreate the order in the new system (the ports changed due to a new
> >> mainboard).
> >
> > When you say "the order", do you mean the numerical order?
> >
> > If you looked at the old "mdadm -E" output matching the "Device Role" with
> > "Device UUID" to determine the order of the UUIDs, then looked at the
> > "mdadm -E" output after the metadata got corrupted and used the "Device UUID"
> > to determine the correct "Device Role", then ordered the devices by that Role,
> > then that should have worked.
> 
> OK, I had used the "Device UUID" values only to get the order. I have
> now reordered my "mdadm --create ..." call according to the old
> "Device Role" values, and it works ;)
> 
> >
> > I assume you did have a filesystem directly on /dev/md0, and hadn't
> > partitioned it or used LVM on it?
> 
> Yes, the devices are all the same type, so I used the whole devices
> and created a filesystem directly on /dev/md0.
> 
> fsck has now been running pass 1 for a few minutes without any
> errors, so I think everything is fine. Thank you for helping to get my
> RAID back to life ;-)
> 

Good news!  Always happy to hear success reports.

Thanks,
NeilBrown

