From: NeilBrown <neilb@suse.de>
To: Alexander Schleifer <alexander.schleifer@googlemail.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: RAID5 superblock and filesystem recovery after re-creation
Date: Mon, 9 Jul 2012 17:08:11 +1000
Message-ID: <20120709170811.7aa546ec@notabene.brown>
In-Reply-To: <CA+=meHUtC5PqczdhgQM3c2JYBNVGFuWJektgq_+Pvs80JgVHMw@mail.gmail.com>


On Mon, 9 Jul 2012 08:50:16 +0200 Alexander Schleifer
<alexander.schleifer@googlemail.com> wrote:

> 2012/7/9 NeilBrown <neilb@suse.de>:
> > On Mon, 9 Jul 2012 00:45:08 +0200 Alexander Schleifer
> > <alexander.schleifer@googlemail.com> wrote:
> >
> >> 2012/7/9 NeilBrown <neilb@suse.de>:
> >> > On Sun, 8 Jul 2012 23:47:16 +0200 Alexander Schleifer
> >> > <alexander.schleifer@googlemail.com> wrote:
> >> >
> >> >> Hi,
> >> >>
> >> >> after a new installation of Ubuntu, my RAID5 device was set to
> >> >> "inactive".  All devices were marked as spares and the level was
> >> >> reported as unknown, so I tried to re-create the array with the
> >> >> following command.
> >> >
> >> > Sorry about that.  In case you haven't seen it,
> >> >    http://neil.brown.name/blog/20120615073245
> >> > explains the background
> >> >
> >> >>
> >> >> mdadm --create /dev/md0 --assume-clean --level=5 --raid-disk=6
> >> >> --chunk=512 --metadata=1.2 /dev/sde /dev/sdd /dev/sda /dev/sdc
> >> >> /dev/sdg /dev/sdh
> >> >>
> >> >> I have a backup of the mdadm -Evvvvs output, so I could recover the
> >> >> chunk size, metadata version and data offset (2048) from this information.
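
As a general note for anyone reading this in the archive: a saved examine
dump is exactly what makes this kind of recovery possible.  Something along
these lines (output path purely illustrative) captures the fields a later
--create has to reproduce - metadata version, chunk size, data offset and
the role/UUID of every member:

   mdadm -Evvvvs > /root/mdadm-examine-backup.txt
   grep -E 'Version|Chunk Size|Data Offset|Device Role|Device UUID' \
       /root/mdadm-examine-backup.txt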
> >> >>
> >> >> The partial output of the mdadm --create ... call above was:
> >> >>
> >> >> ...
> >> >> mdadm: /dev/sde appears to be part of a raid array:
> >> >>     level=raid5 devices=6 ctime=Sun Jul  8 23:02:51 2012
> >> >> mdadm: partition table exists on /dev/sde but will be lost or
> >> >>        meaningless after creating array
> >> >> ...
> >> >>
> >> >> The array is re-created, but no valid filesystem is found on /dev/md0
> >> >> (dumpe2fs: "Filesystem revision too high while trying to open /dev/md0.
> >> >> Couldn't find valid filesystem superblock.").  fdisk /dev/sde also shows
> >> >> no partitions.
> >> >> My next step would be to create Linux RAID type partitions on the 6
> >> >> devices with fdisk and to call mdadm --create with /dev/sde1, /dev/sdd1
> >> >> and so on.
> >> >> Would that be a possible way to recover the filesystem?
> >> >
> >> > Depends.. Was the original array created on partitions, or on whole devices?
> >> > The saved '-E' output should show that.
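
For example, the header line of each section in a saved examine dump names
the member device, so a quick check is roughly (file name illustrative):

   grep '^/dev/' /root/mdadm-examine-backup.txt

Whole-disk members show up as "/dev/sde:", partition members as "/dev/sde1:".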
> >> >
> >> > Maybe you have the devices in the wrong order.  The order you have looks odd
> >> > for a recently created array.
> >> >
> >> > NeilBrown
> >>
> >> The original array was created on whole devices, as the saved output
> >> starts with e.g. "/dev/sde:".
> >
> > Right, so you definitely don't want to create partitions.  Maybe when mdadm
> > reported "partition table exists" it was a false positive, or maybe old
> > information - creating a 1.2 array doesn't destroy the partition table.
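
If in doubt, a read-only probe shows whether an old partition-table
signature is still sitting on the disk without touching anything, e.g.:

   blkid -p /dev/sde

which prints a PTTYPE= field when a stale table is present and which, for a
whole-device array like this one, can simply be ignored.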
> >
> >> I used the 'Device UUID' values from the saved output to reconstruct
> >> the device order on the new system (the ports changed due to a new
> >> mainboard).
> >
> > When you say "the order", do you mean the numerical order?
> >
> > If you looked at the old "mdadm -E" output, matched each "Device Role" with
> > its "Device UUID" to determine the order of the UUIDs, then looked at the
> > "mdadm -E" output taken after the metadata got corrupted, used the
> > "Device UUID" to determine each device's correct "Device Role", and finally
> > ordered the devices by that Role, then that should have worked.
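
Put differently, something like this (file name illustrative) lists the
pairs needed to rebuild the original order from the saved dump:

   grep -E 'Device UUID|Device Role' /root/mdadm-examine-backup.txt

Sorting the old output by "Device Role : Active device N" and mapping each
UUID to the name that device has on the new system gives the order in which
the devices must be passed to --create.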
> 
> Ok, I had used the "Device UUID" only to get the order.  Now I have
> reordered my "mdadm --create ..." call according to the old "Device Role"
> values and it works ;)
> 
> >
> > I assume you did have a filesystem directly on /dev/md0, and hadn't
> > partitioned it or used LVM on it?
> 
> Yes, the devices are all the same type, so I used the whole devices
> and created a filesystem directly on /dev/md0.
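
A general word of caution for the archive: before letting fsck modify
anything on a freshly re-created array, a read-only pass is the safer
first check, roughly:

   mdadm --detail /dev/md0
   fsck -n /dev/md0

If the -n run looks sane, a real fsck and a mount can follow; if it does
not, the device order or data offset is probably still wrong, and another
--assume-clean re-create with corrected parameters is normally harmless.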
> 
> Now fsck has been running pass 1 for a few minutes with no errors, so I
> think everything is fine.  Thank you for helping to get my RAID back to
> life ;-)
> 

Good news!  Always happy to hear success reports.

Thanks,
NeilBrown


