From: Jeff Wiegley <jeffw@csun.edu>
To: "Meyer, Adrian" <adrian_meyer@med.unc.edu>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>
Subject: Re: Re-creating/Fixing Raid5 from superblock Info
Date: Sun, 27 Apr 2014 14:11:09 -0700
Message-ID: <535D726D.2040804@csun.edu>
In-Reply-To: <ACABA0F8A3B9A3468A3881B311B28B08586E8013@ITS-MSXMBS2F.ad.unc.edu>

Did you install the original OS or an updated one?
Metadata version 0.90.00 is really old, I think; the
current default metadata version is 1.2.

The byte offsets for things have changed from
version to version. You are going to need to know
these offsets because they have to be the same
as the original.

If you reinstalled the original OS then you should be
in luck, because you will have reinstalled the old
mdadm that created the array. It will use the same
offsets, provided you stuck with its defaults when
you first created the array.

If you installed an up-to-date OS then you will need
to re-create the array with the original offsets and
sizes. The newest mdadm allows you to specify these
and override the defaults at create time.
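For example, here is a sketch of that route with a
current mdadm (the --size value is the Used Dev Size
from your --examine output, in KiB; treat every value
here as something to verify, not gospel):

   # hypothetical sketch: pin the old metadata format, chunk
   # size and per-device size instead of the current defaults
   mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=5 \
         --metadata=0.90 --chunk=64 --size=1953514496 /dev/sd[ghijk]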

I can see from your listing that the chunk size is
64K, but the dump doesn't show data offsets, so I
don't know what those are. 64K is not the default for
current mdadm creations (on RAID6 at least), so you
may not have used default values, which would make it
a lot harder to figure out what values to use on your
re-create.

I just went through successfully recovering an array
from a problem similar to yours, and I didn't know
those sizes either. Here's what I did: I went to the
mdadm download site and downloaded old versions of
the tool. They are quite easy to build (I only had to
remove -Werror from the Makefiles and type make). I
also like this approach because I don't know enough
about the various offsets/sizes/layouts to know what
to override and what to leave alone. I know I just
used the defaults back then, so as long as I use the
same old version of mdadm that I used all those years
ago, it will use the right values.

Pick the version closest to the one that you used. It
should use the original offsets and sizes, assuming
you didn't override or change them when you created
the array.
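Roughly, the steps look like this (a sketch; I'm
assuming mdadm-3.2.6 purely as an example, so grab
whichever release is closest to what you had):

   # fetch an old release from the kernel.org mdadm archive
   wget https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.2.6.tar.gz
   tar xzf mdadm-3.2.6.tar.gz
   cd mdadm-3.2.6
   # strip -Werror so the old code builds with a newer compiler
   sed -i 's/-Werror//g' Makefile
   make
   # run the freshly built binary from here as ./mdadm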

Using the correct mdadm version, do the following:

   ./mdadm --create --assume-clean --metadata=0.90 --chunk=64 --level=5 \
           --raid-devices=5 /dev/md0 /dev/sd[ghijk]

THE --assume-clean FLAG IS ***HUGELY*** IMPORTANT.
You do NOT want the array to resync on you. If it
does its initial resync I believe it will wreck your
data.

The order of the drives is important too. Your
listing shows the md numeric order of the drives, and
/dev/sd[ghijk] will shell-expand to that same order.
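If you would rather not rely on the glob, spell the
devices out explicitly in the RaidDevice order from
your --examine dump (this is just the expanded form
of the same command):

   # device list in RaidDevice order 0..4, as shown by --examine
   ./mdadm --create --assume-clean --metadata=0.90 --chunk=64 --level=5 \
           --raid-devices=5 /dev/md0 \
           /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk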


I would immediately do
   mdadm --readonly /dev/md0
after the create, to make sure nothing changes while
you test. If you've got the offsets wrong and it
doesn't work, you can stop the array, zero the
superblocks, and try creating it again with different
offset/size values. But once you alter the data on
the drives... you're toast.
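In other words, the retry loop (a sketch; only safe
as long as the array has stayed read-only) looks
roughly like this:

   # nothing looked right: tear it down and try other parameters
   mdadm --stop /dev/md0
   mdadm --zero-superblock /dev/sd[ghijk]
   # then re-run ./mdadm --create ... with different values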

Before doing anything else to the array I would
also do
    cat /proc/mdstat
    mdadm --examine /dev/sdg
just to verify that it was created with the proper
drive order and sizes. If they are clearly off, I
would not proceed; I would try again with different
settings.
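A quick way to eyeball all five members at once (just
a sketch):

   # the "this" line shows each drive's RaidDevice slot; Chunk Size
   # and Used Dev Size should match your saved dump
   for d in /dev/sd[ghijk]; do
       echo "== $d =="
       ./mdadm --examine "$d" | grep -E 'this|Chunk|Used Dev Size'
   done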

I was lucky in that I had two similar arrays on those
drives, /dev/md3 and /dev/md4, and I didn't care
about /dev/md3. So I could mess around with sizes and
superblocks on it until I got it working, and then
use those parameters on the array I cared about.

You don't have a spare to experiment with. I would
suggest finding a way to make a byte-for-byte backup
of your drives using dd; then you can experiment
without fear of being unable to restore your drives
if you accidentally alter their data. (But this will
require five more 2TB drives on hand, or some other
10TB of storage, to hold your dd images while you
attempt tests.)
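Per drive, something along these lines (a sketch; the
destination path is just an example and needs about
2TB free per image):

   # raw image of one member drive; repeat for sdh..sdk
   dd if=/dev/sdg of=/mnt/backup/sdg.img bs=1M conv=noerror,sync
   # restore later, if needed, by swapping if= and of=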

Please read and research carefully. I'm not an mdadm
wizard, and I figure I hit the jackpot of luck when I
successfully re-created my array after reinstalling
an OS and blowing away the superblocks. So if you can
get verification of my suggestions before proceeding,
that would be best.

- Jeff Wiegley

On 4/27/2014 7:03 AM, Meyer, Adrian wrote:
> I am trying to re-create a raid5 after re-installing the OS. Unfortunately my initial try was not successful and I probably messed things up. I saved the original disk information (see below). What would be the mdadm --create command with the correct additional parameters?
>
> /dev/sde:
>            Magic : a92b4efc
>          Version : 0.90.00
>             UUID : 208886f0:0b0c5d65:d1ecd824:5a220e5e (local to host niederhorn)
>    Creation Time : Mon Jul  2 00:08:03 2012
>       Raid Level : raid5
>    Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
>       Array Size : 7814057984 (7452.07 GiB 8001.60 GB)
>     Raid Devices : 5
>    Total Devices : 5
> Preferred Minor : 0
>
>      Update Time : Sat Apr 26 11:03:04 2014
>            State : clean
>   Active Devices : 5
> Working Devices : 5
>   Failed Devices : 0
>    Spare Devices : 0
>         Checksum : 188e511f - correct
>           Events : 33568
>
>           Layout : left-symmetric
>       Chunk Size : 64K
>
>        Number   Major   Minor   RaidDevice State
> this     0       8       96        0      active sync   /dev/sdg
>
>     0     0       8       96        0      active sync   /dev/sdg
>     1     1       8      112        1      active sync   /dev/sdh
>     2     2       8      128        2      active sync   /dev/sdi
>     3     3       8      144        3      active sync   /dev/sdj
>     4     4       8      160        4      active sync   /dev/sdk
> root@niederhorn:/home/xbmc# mdadm --examine /dev/sdf
> /dev/sdf:
>            Magic : a92b4efc
>          Version : 0.90.00
>             UUID : 208886f0:0b0c5d65:d1ecd824:5a220e5e (local to host niederhorn)
>    Creation Time : Mon Jul  2 00:08:03 2012
>       Raid Level : raid5
>    Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
>       Array Size : 7814057984 (7452.07 GiB 8001.60 GB)
>     Raid Devices : 5
>    Total Devices : 5
> Preferred Minor : 0
>
>      Update Time : Sat Apr 26 11:03:04 2014
>            State : clean
>   Active Devices : 5
> Working Devices : 5
>   Failed Devices : 0
>    Spare Devices : 0
>         Checksum : 188e5131 - correct
>           Events : 33568
>
>           Layout : left-symmetric
>       Chunk Size : 64K
>
>        Number   Major   Minor   RaidDevice State
> this     1       8      112        1      active sync   /dev/sdh
>
>     0     0       8       96        0      active sync   /dev/sdg
>     1     1       8      112        1      active sync   /dev/sdh
>     2     2       8      128        2      active sync   /dev/sdi
>     3     3       8      144        3      active sync   /dev/sdj
>     4     4       8      160        4      active sync   /dev/sdk
> root@niederhorn:/home/xbmc# mdadm --examine /dev/sdg
> /dev/sdg:
>            Magic : a92b4efc
>          Version : 0.90.00
>             UUID : 208886f0:0b0c5d65:d1ecd824:5a220e5e (local to host niederhorn)
>    Creation Time : Mon Jul  2 00:08:03 2012
>       Raid Level : raid5
>    Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
>       Array Size : 7814057984 (7452.07 GiB 8001.60 GB)
>     Raid Devices : 5
>    Total Devices : 5
> Preferred Minor : 0
>
>      Update Time : Sat Apr 26 11:03:04 2014
>            State : clean
>   Active Devices : 5
> Working Devices : 5
>   Failed Devices : 0
>    Spare Devices : 0
>         Checksum : 188e5143 - correct
>           Events : 33568
>
>           Layout : left-symmetric
>       Chunk Size : 64K
>
>        Number   Major   Minor   RaidDevice State
> this     2       8      128        2      active sync   /dev/sdi
>
>     0     0       8       96        0      active sync   /dev/sdg
>     1     1       8      112        1      active sync   /dev/sdh
>     2     2       8      128        2      active sync   /dev/sdi
>     3     3       8      144        3      active sync   /dev/sdj
>     4     4       8      160        4      active sync   /dev/sdk
> root@niederhorn:/home/xbmc# mdadm --examine /dev/sdh
> /dev/sdh:
>            Magic : a92b4efc
>          Version : 0.90.00
>             UUID : 208886f0:0b0c5d65:d1ecd824:5a220e5e (local to host niederhorn)
>    Creation Time : Mon Jul  2 00:08:03 2012
>       Raid Level : raid5
>    Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
>       Array Size : 7814057984 (7452.07 GiB 8001.60 GB)
>     Raid Devices : 5
>    Total Devices : 5
> Preferred Minor : 0
>
>      Update Time : Sat Apr 26 11:03:04 2014
>            State : clean
>   Active Devices : 5
> Working Devices : 5
>   Failed Devices : 0
>    Spare Devices : 0
>         Checksum : 188e5155 - correct
>           Events : 33568
>
>           Layout : left-symmetric
>       Chunk Size : 64K
>
>        Number   Major   Minor   RaidDevice State
> this     3       8      144        3      active sync   /dev/sdj
>
>     0     0       8       96        0      active sync   /dev/sdg
>     1     1       8      112        1      active sync   /dev/sdh
>     2     2       8      128        2      active sync   /dev/sdi
>     3     3       8      144        3      active sync   /dev/sdj
>     4     4       8      160        4      active sync   /dev/sdk
> root@niederhorn:/home/xbmc# mdadm --examine /dev/sdi
> /dev/sdi:
>            Magic : a92b4efc
>          Version : 0.90.00
>             UUID : 208886f0:0b0c5d65:d1ecd824:5a220e5e (local to host niederhorn)
>    Creation Time : Mon Jul  2 00:08:03 2012
>       Raid Level : raid5
>    Used Dev Size : 1953514496 (1863.02 GiB 2000.40 GB)
>       Array Size : 7814057984 (7452.07 GiB 8001.60 GB)
>     Raid Devices : 5
>    Total Devices : 5
> Preferred Minor : 0
>
>      Update Time : Sat Apr 26 11:03:04 2014
>            State : clean
>   Active Devices : 5
> Working Devices : 5
>   Failed Devices : 0
>    Spare Devices : 0
>         Checksum : 188e5167 - correct
>           Events : 33568
>
>           Layout : left-symmetric
>       Chunk Size : 64K
>
>        Number   Major   Minor   RaidDevice State
> this     4       8      160        4      active sync   /dev/sdk
>
>     0     0       8       96        0      active sync   /dev/sdg
>     1     1       8      112        1      active sync   /dev/sdh
>     2     2       8      128        2      active sync   /dev/sdi
>     3     3       8      144        3      active sync   /dev/sdj
>     4     4       8      160        4      active sync   /dev/sdk

