linux-raid.vger.kernel.org archive mirror
From: Jean-Paul Sergent <jpsergent@gmail.com>
To: NeilBrown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: 2 disk raid 5 failure
Date: Sun, 5 Oct 2014 02:21:31 -0700	[thread overview]
Message-ID: <CANuKptm=Y3L-xUfGfB5GO3yV63buHeWsfxMYKjUVPP8paGhs0A@mail.gmail.com> (raw)
In-Reply-To: <20141005194105.4377c295@notabene.brown>

I'm recovering with a Debian live USB; the system this RAID normally runs
on is Fedora 20. I'm not sure what version of mdadm was running on that
system, but I can find out if needed.


root@debian:~# mdadm --version
mdadm - v3.3 - 3rd September 2013
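
If it helps anyone scripting version checks across the two systems, the
bare version number can be pulled out of that banner with plain shell
parameter expansion (the banner string below is copied from the output
above; this is just a sketch, not part of the recovery itself):

```shell
# Extract the version number from mdadm's banner line.
# The banner text is copied from the `mdadm --version` output above.
banner='mdadm - v3.3 - 3rd September 2013'
ver=${banner#mdadm - v}   # strip the leading "mdadm - v"
ver=${ver%% *}            # keep everything up to the first space
echo "$ver"               # prints 3.3
```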


root@debian:~# mdadm -A /dev/md0 --force -vv /dev/sdb /dev/sdc
/dev/sde /dev/sdf /dev/sdg
mdadm: looking for devices for /dev/md0
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdg is identified as a member of /dev/md0, slot 4.
mdadm: added /dev/sdc to /dev/md0 as 1
mdadm: added /dev/sdb to /dev/md0 as 2
mdadm: added /dev/sdf to /dev/md0 as 3 (possibly out of date)
mdadm: added /dev/sdg to /dev/md0 as 4 (possibly out of date)
mdadm: added /dev/sde to /dev/md0 as 0
mdadm: /dev/md0 assembled from 3 drives - not enough to start the array.
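
For context on why two members were flagged "possibly out of date": mdadm
compares per-device event counts when assembling, and --force will only
rewind members whose counts are close to the newest. The spread here can
be checked with a small sketch (the Events values below are copied from
the --examine output quoted further down):

```shell
# Compute the event-count spread across members, using the Events values
# from the --examine output in the original message.
events="201636 201636 201636 201630 201633"
max=0
min=999999999
for e in $events; do
  if [ "$e" -gt "$max" ]; then max=$e; fi
  if [ "$e" -lt "$min" ]; then min=$e; fi
done
echo "spread=$((max - min))"   # prints spread=6
```

A spread of only 6 events is small, which is why a forced assembly is
expected to work here.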

Thanks,
-JP

On Sun, Oct 5, 2014 at 1:41 AM, NeilBrown <neilb@suse.de> wrote:
> On Sat, 4 Oct 2014 22:43:01 -0700 Jean-Paul Sergent <jpsergent@gmail.com>
> wrote:
>
>> Greetings,
>>
>> Recently I lost 2 of the 5 disks in my raid 5 array to a bad SATA power
>> cable. It was a cheap Y splitter and it shorted. I was wondering if there
>> is any chance of getting my data back.
>>
>> Of the 2 disks that blew out, one actually had bad/unreadable sectors and
>> the other seems fine. I have cloned both disks with dd onto 2 new disks,
>> forcing the clone of the bad one to continue past read errors. The
>> remaining 3 disks are intact. The event counts for all 5 disks are very
>> close to each other:
>>
>>          Events : 201636
>>          Events : 201636
>>          Events : 201636
>>          Events : 201630
>>          Events : 201633
>>
>> From my reading this gives me some hope, but I'm not sure. I have not yet
>> followed the "recovering a failed software raid" page on the wiki, in
>> particular the part about using loop devices to protect the array. I
>> thought I would send a message to this list first before going down
>> that route.
>>
>> I did try mdadm --assemble --force on the array, but it says that it
>> only has 3 disks, which isn't enough to start the array. I don't want to
>> do anything else before consulting the mailing list first.
>
> --force --assemble really is what you want.  It should work.
> What does
>    mdadm -A /dev/md1 --force -vv /dev/sdb /dev/sdc /dev/sde /dev/sdf /dev/sdg
>
> report?
> What version of mdadm (mdadm -V) do you have?
>
> NeilBrown
>
>
>>
>> Below I have pasted the mdadm --examine output for each member drive. Any
>> help would be greatly appreciated.
>>
>> Thanks,
>> -JP
>>
>> /dev/sdb:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7de100f5:4f30f751:62456293:fe98f735
>>            Name : b1ackb0x:1
>>   Creation Time : Sun Jan 13 00:01:44 2013
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
>>      Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
>>   Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=1968 sectors, after=816 sectors
>>           State : clean
>>     Device UUID : 71d7c3d7:7b232399:51571715:711da6f6
>>
>>     Update Time : Tue Apr 29 02:49:21 2014
>>        Checksum : cd29f83c - correct
>>          Events : 201636
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 2
>>    Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sdc:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7de100f5:4f30f751:62456293:fe98f735
>>            Name : b1ackb0x:1
>>   Creation Time : Sun Jan 13 00:01:44 2013
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
>>      Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
>>   Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=1968 sectors, after=816 sectors
>>           State : clean
>>     Device UUID : b47c32b5:b2f9e81a:37150c33:8e3fa6ca
>>
>>     Update Time : Tue Apr 29 02:49:21 2014
>>        Checksum : 1e5353af - correct
>>          Events : 201636
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 1
>>    Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sde:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7de100f5:4f30f751:62456293:fe98f735
>>            Name : b1ackb0x:1
>>   Creation Time : Sun Jan 13 00:01:44 2013
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
>>      Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
>>   Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=1968 sectors, after=816 sectors
>>           State : clean
>>     Device UUID : 0398da5b:0bcddd81:8f7e77e9:6689ee0c
>>
>>     Update Time : Tue Apr 29 02:49:21 2014
>>        Checksum : 24a3f586 - correct
>>          Events : 201636
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 0
>>    Array State : AAA.. ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sdf:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7de100f5:4f30f751:62456293:fe98f735
>>            Name : b1ackb0x:1
>>   Creation Time : Sun Jan 13 00:01:44 2013
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
>>      Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
>>   Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=1968 sectors, after=976752816 sectors
>>           State : clean
>>     Device UUID : 356c6d85:627a994f:753dec0d:db4fa4f2
>>
>>     Update Time : Tue Apr 29 02:37:38 2014
>>        Checksum : 2621f9d5 - correct
>>          Events : 201630
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 3
>>    Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
>> /dev/sdg:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7de100f5:4f30f751:62456293:fe98f735
>>            Name : b1ackb0x:1
>>   Creation Time : Sun Jan 13 00:01:44 2013
>>      Raid Level : raid5
>>    Raid Devices : 5
>>
>>  Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
>>      Array Size : 5860548608 (5589.05 GiB 6001.20 GB)
>>   Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>    Unused Space : before=1968 sectors, after=976752816 sectors
>>           State : clean
>>     Device UUID : 3dc152d8:832dd43a:a6d638e3:6e12b394
>>
>>     Update Time : Tue Apr 29 02:48:01 2014
>>        Checksum : db9e6008 - correct
>>          Events : 201633
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 4
>>    Array State : AAA.A ('A' == active, '.' == missing, 'R' == replacing)
>
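
As an aside, the loop-device protection mentioned in the original message
(the wiki's overlay technique) boils down to pointing a device-mapper
snapshot at each member so that any writes from experimental assemblies
land in an overlay file instead of on the disk. A rough sketch, with
device names and the sector count assumed rather than taken from this
system — in a real run the size would come from `blockdev --getsz "$dev"`,
the overlay would be a sparse file attached with losetup, and the table
would be fed to `dmsetup create`:

```shell
# Hypothetical sketch of the wiki's overlay technique (names/sizes assumed).
dev=/dev/sdb             # member to protect (assumption)
overlay=/dev/loop1       # loop device over a sparse overlay file (assumption)
size=2930277168          # sector count of the member (example value)
# dm-snapshot table: start length snapshot origin cow persistent? chunksize
table="0 $size snapshot $dev $overlay N 8"
echo "$table"
```

The "N" makes the snapshot non-persistent, so discarding the overlay
afterwards leaves the original disks untouched.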


Thread overview: 7+ messages
2014-10-05  5:43 2 disk raid 5 failure Jean-Paul Sergent
2014-10-05  8:41 ` NeilBrown
2014-10-05  9:21   ` Jean-Paul Sergent [this message]
2014-10-05  9:38     ` Jean-Paul Sergent
2014-10-05  9:52       ` NeilBrown
2014-10-05  9:55         ` Jean-Paul Sergent
2014-10-05 23:50           ` NeilBrown
