From: Dave Cundiff <syshackmin@gmail.com>
To: Phil Turmel <philip@turmel.org>
Cc: Daniel Sanabria <sanabria.d@gmail.com>,
	Mikael Abrahamsson <swmike@swm.pp.se>,
	linux-raid@vger.kernel.org
Subject: Re: help please, can't mount/recover raid 5 array
Date: Sun, 10 Feb 2013 17:01:30 -0500
Message-ID: <CAKHEz2ZKL54iuiDiJiVG8Ec-cJGRnjO_qYM7nF13KQPdKUYikQ@mail.gmail.com>
In-Reply-To: <51180B96.9020500@turmel.org>

On Sun, Feb 10, 2013 at 4:05 PM, Phil Turmel <philip@turmel.org> wrote:
> Hi Daniel,
>
> On 02/10/2013 04:36 AM, Daniel Sanabria wrote:
>> On 10 February 2013 09:17, Daniel Sanabria <sanabria.d@gmail.com> wrote:
>>> Hi Mikael,
>>>
>>> Yes I did. Here it is:
>
> [trim /]
>
>>> /dev/sda3:
>>>           Magic : a92b4efc
>>>         Version : 0.90.00
>
> =====================^^^^^^^
>
>>>            UUID : 0deb6f79:aec7ed69:bfe78010:bc810f04
>>>   Creation Time : Thu Dec  3 22:12:24 2009
>>>      Raid Level : raid5
>>>   Used Dev Size : 255999936 (244.14 GiB 262.14 GB)
>>>      Array Size : 511999872 (488.28 GiB 524.29 GB)
>>>    Raid Devices : 3
>>>   Total Devices : 3
>>> Preferred Minor : 2
>>>
>>>     Update Time : Sat Feb  9 16:09:20 2013
>>>           State : clean
>>>  Active Devices : 3
>>> Working Devices : 3
>>>  Failed Devices : 0
>>>   Spare Devices : 0
>>>        Checksum : 8dd157e5 - correct
>>>          Events : 792552
>>>
>>>          Layout : left-symmetric
>>>      Chunk Size : 64K
>
> =====================^^^
>
>>>
>>>       Number   Major   Minor   RaidDevice State
>>> this     0       8        3        0      active sync   /dev/sda3
>>>
>>>    0     0       8        3        0      active sync   /dev/sda3
>>>    1     1       8       18        1      active sync   /dev/sdb2
>>>    2     2       8       34        2      active sync   /dev/sdc2
>
> From your original post:
>
>> /dev/md2:
>>         Version : 1.2
>
> ====================^^^
>
>>   Creation Time : Sat Feb  9 17:30:32 2013
>>      Raid Level : raid5
>>      Array Size : 511996928 (488.28 GiB 524.28 GB)
>>   Used Dev Size : 255998464 (244.14 GiB 262.14 GB)
>>    Raid Devices : 3
>>   Total Devices : 3
>>     Persistence : Superblock is persistent
>>
>>     Update Time : Sat Feb  9 20:47:46 2013
>>           State : clean
>>  Active Devices : 3
>> Working Devices : 3
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>
> ====================^^^^
>
>>
>>            Name : lamachine:2  (local to host lamachine)
>>            UUID : 48be851b:f0210b64:e9fbefdf:24c84c5f
>>          Events : 2
>>
>>     Number   Major   Minor   RaidDevice State
>>        0       8        3        0      active sync   /dev/sda3
>>        1       8       18        1      active sync   /dev/sdb2
>>        2       8       34        2      active sync   /dev/sdc2
>
> I don't know what possessed you to use "mdadm --create" to try to fix
> your system, but it is almost always the wrong first step.  But since
> you scrambled it with "mdadm --create", you'll have to fix it with
> "mdadm --create".
>
> mdadm --stop /dev/md2
>
> mdadm --create --assume-clean /dev/md2 --metadata=0.90 \
>         --level=5 --raid-devices=3 --chunk=64 \
>         /dev/sda3 /dev/sdb2 /dev/sdc2
>
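
Once that's recreated, a quick read-only sanity check is worth doing
before touching anything else (--examine should now show 0.90 metadata
and a 64K chunk again):

    mdadm --detail /dev/md2
    mdadm --examine /dev/sda3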

It looks like you're using a dracut-based boot system. Once you get the
array created and mounting again, you'll need to update /etc/mdadm.conf
with the new array information and run dracut to rebuild your initrd
with the new configuration. Otherwise, problems could crop up down the road.
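
Something like this (a sketch assuming a Fedora-style dracut setup;
adjust the initramfs path for your distro, and prune any stale md2
entry from mdadm.conf by hand):

    mdadm --detail --scan >> /etc/mdadm.conf
    # then edit /etc/mdadm.conf and remove any old ARRAY line for md2
    dracut --force /boot/initramfs-$(uname -r).img $(uname -r)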

> Then, you will have to reconstruct the beginning of the array, as much
> as 3MB worth, that was replaced with v1.2 metadata.  (The used dev size
> differs by 1472kB, suggesting that the new mdadm gave you a new data
> offset of 2048, and the rest is the difference in the chunk size.)
>
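(For reference, the arithmetic behind that: 255999936 kB - 255998464 kB
= 1472 kB; a data offset of 2048 sectors accounts for 1024 kB, and the
remaining 448 kB comes from rounding the usable size down to the new
512K chunk.)
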
> Your original report and follow-ups have not clearly indicated what is
> on this 524GB array, so I can't be more specific.  If it is a
> filesystem, an fsck may fix it with modest losses.
>
> If it is another LVM PV, you may be able to do a vgcfgrestore to reset the
> 1st megabyte.  You didn't activate a bitmap on the array, so the
> remainder of the new metadata space was probably untouched.
>
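If it does turn out to be a PV, LVM keeps metadata backups under
/etc/lvm/archive. A rough sketch (the volume group name and archive
file here are placeholders; check vgcfgrestore --list first):

    vgcfgrestore --list VGNAME
    pvcreate --uuid <old-pv-uuid> \
        --restorefile /etc/lvm/archive/VGNAME_XXXXX.vg /dev/md2
    vgcfgrestore -f /etc/lvm/archive/VGNAME_XXXXX.vg VGNAME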

If the data on this array is important and you have no backups, it would
be a good idea to image the drives before you do anything else. Most of
your data can likely be recovered, but you can easily destroy it beyond
conventional repair if you're not very careful at this point.
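
Something along these lines, with one image per member partition (the
target directory is just an example; any disk with enough free space
will do):

    dd if=/dev/sda3 of=/path/to/backup/sda3.img bs=1M conv=noerror,sync
    dd if=/dev/sdb2 of=/path/to/backup/sdb2.img bs=1M conv=noerror,sync
    dd if=/dev/sdc2 of=/path/to/backup/sdc2.img bs=1M conv=noerror,sync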

According to the fstab in the original post, it looks like it's just an
ext4 filesystem on top of the md. If that's the case, an fsck should
get you going again after creating the array. You can try a regular
fsck first, but your primary superblock is most likely gone; if so, a
backup superblock is generally reachable by adding -b 32768 to the
fsck. Hopefully you didn't have many files in the root of that
filesystem, since they will most likely end up as randomly numbered
files and directories in lost+found.
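
For example (the 32768 location assumes a 4K-block ext4, which is the
default for a filesystem this size; mke2fs -n is a dry run that only
prints where the backup superblocks live):

    fsck.ext4 /dev/md2             # try the primary superblock first
    fsck.ext4 -b 32768 /dev/md2    # fall back to the first backup
    mke2fs -n /dev/md2             # -n: writes nothing, lists backups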


--
Dave Cundiff
System Administrator
A2Hosting, Inc
http://www.a2hosting.com
