From: Michael Evans <mjevans1983@gmail.com>
To: Anshuman Aggarwal <anshuman@brillgene.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: 4 partition raid 5 with 2 disks active and 2 spare, how to force?
Date: Thu, 25 Mar 2010 04:37:00 -0700
Message-ID: <4877c76c1003250437r346e18en8da0f6f804bef634@mail.gmail.com>
In-Reply-To: <2E4545D6-8F4E-4779-9103-960C52983A72@brillgene.com>
On Thu, Mar 25, 2010 at 2:30 AM, Anshuman Aggarwal
<anshuman@brillgene.com> wrote:
> All, thanks in advance...particularly Neil.
>
> My raid5 setup has 4 partitions, 2 of which are showing up as spare and 2 as active. Running mdadm --assemble --force gives me the following error:
> 2 active devices and 2 spare cannot start device
>
> It is a raid 5 with superblock 1.2 and 4 devices, in the order sda1, sdb5, sdc5, sdd5. I have lvm2 on top of this (together with other devices)... so as you all know, the data is irreplaceable, blah blah.
>
> I know that this device has not been written to for a while, so the data can be considered intact (hopefully all of it) if I can get the device to start up... but I'm not sure of the best way to coax the kernel into assembling it. Relevant information follows:
>
> === This device is working fine ===
> mdadm --examine -e1.2 /dev/sdb5
> /dev/sdb5:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 42c56ea0:2484f566:387adc6c:b3f6a014
> Name : GATEWAY:127 (local to host GATEWAY)
> Creation Time : Sat Aug 22 09:44:21 2009
> Raid Level : raid5
> Raid Devices : 4
>
> Avail Dev Size : 586099060 (279.47 GiB 300.08 GB)
> Array Size : 1758296832 (838.42 GiB 900.25 GB)
> Used Dev Size : 586098944 (279.47 GiB 300.08 GB)
> Data Offset : 272 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : f8ebb9f8:b447f894:d8b0b59f:ca8e98eb
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Fri Mar 19 00:56:15 2010
> Checksum : 1005cfbc - correct
> Events : 3796145
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : Active device 2
> Array State : .AA. ('A' == active, '.' == missing)
>
> === This device is marked spare but could (IMHO) be marked active ===
> mdadm --examine -e1.2 /dev/sdd5
> /dev/sdd5:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : 42c56ea0:2484f566:387adc6c:b3f6a014
> Name : GATEWAY:127 (local to host GATEWAY)
> Creation Time : Sat Aug 22 09:44:21 2009
> Raid Level : raid5
> Raid Devices : 4
>
> Avail Dev Size : 586099060 (279.47 GiB 300.08 GB)
> Array Size : 1758296832 (838.42 GiB 900.25 GB)
> Used Dev Size : 586098944 (279.47 GiB 300.08 GB)
> Data Offset : 272 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 763a832f:1a9a7ea8:ce90d4a3:32e8ae54
>
> Internal Bitmap : 2 sectors from superblock
> Update Time : Fri Mar 19 00:56:15 2010
> Checksum : c78aab46 - correct
> Events : 3796145
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Device Role : spare
> Array State : .AA. ('A' == active, '.' == missing)
>
>
> === This is the completely failed device (needs replacement) ===
> mdadm --examine -e1.2 /dev/sda1
> [HANGS!!]
>
>
>
> I already have the replacement drive available as sde5, but I want to be able to reconstruct as much as possible.
>
> Thanks again,
> Anshuman Aggarwal
>
You have a raid 5 array. As an example, with four drives the chunks are
laid out something like this (the first row numbers the drives; each row
below it is one stripe, with 'P' marking that stripe's parity chunk):
1234
123P
45P6
7P89
...
You are missing two drives, which means every stripe has lost two chunks,
and a single parity chunk can only ever reconstruct one missing chunk per
stripe.
It's like seeing:
.23.
.5P.
.P8.
and expecting to somehow recover the missing chunks when the surviving
data and parity simply no longer contain the information needed to
reconstruct them.
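To make that concrete, here is a tiny toy illustration (plain shell
arithmetic, not a step in any recovery procedure; the chunk values are
made up) of why one missing chunk per stripe is recoverable from parity
but two never are:

  d1=5; d2=9; d3=12                        # three data chunks of one stripe
  p=$(( d1 ^ d2 ^ d3 ))                    # parity = XOR of the data chunks
  echo "rebuilt d2 = $(( d1 ^ d3 ^ p ))"   # one chunk missing: recoverable
  # With d2 AND d3 both gone, all you know is that d1 ^ p == d2 ^ d3 --
  # one equation with two unknowns, so neither chunk can be reconstructed.

From md's point of view, every stripe of your array is currently in that
second case.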
Your only hope is to assemble the array read-only from the remaining
devices, if they can still even be read; a rough sketch of that approach
follows below. In that case you might at least be able to recover nearly
all of your data; with luck any missing areas fall in unimportant files or
unallocated space.
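Very roughly, and only as a sketch (the image paths, loop device names and
the md device number below are examples, not your real ones; work from
copies, never from the original disks), that approach could look like this:

  # 1. Image each surviving member onto spare storage; ddrescue retries
  #    and maps unreadable areas instead of giving up.
  ddrescue /dev/sdb5 /mnt/spare/sdb5.img /mnt/spare/sdb5.map
  ddrescue /dev/sdc5 /mnt/spare/sdc5.img /mnt/spare/sdc5.map
  ddrescue /dev/sdd5 /mnt/spare/sdd5.img /mnt/spare/sdd5.map

  # 2. Expose the copies as block devices.
  losetup -f --show /mnt/spare/sdb5.img    # prints e.g. /dev/loop0
  losetup -f --show /mnt/spare/sdc5.img
  losetup -f --show /mnt/spare/sdd5.img

  # 3. Attempt a forced assembly from the copies; -o/--readonly (if your
  #    mdadm supports it) asks md not to write anything back.
  mdadm --assemble --force --readonly /dev/md127 /dev/loop0 /dev/loop1 /dev/loop2

  # 4. You have LVM on top, so scan for and activate the volume group,
  #    then mount its logical volumes read-only.
  vgscan && vgchange -ay

Whether a forced assembly starts at all while one member still claims to
be a spare is a separate question, but run against loop-mounted copies it
at least cannot make things on the real disks any worse.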
At this point you should be EXTREMELY CAREFUL and DO NOTHING without
having a good, solid plan in place. Rushing /WILL/ cause you to lose data
that might still be recoverable.