From: Mike Tran <mhtran@us.ibm.com>
To: Sam Hopkins <sah@coraid.com>
Cc: linux-raid@vger.kernel.org, jrs@abhost.net, support@coraid.com
Subject: Re: data recovery on raid5
Date: Fri, 21 Apr 2006 18:31:03 -0500
Message-ID: <44496B37.6040501@us.ibm.com>
In-Reply-To: <8867c2a014b34d9c11ce162c4f5860af@coraid.com>
Sam Hopkins wrote:
>Hello,
>
>I have a client with a failed raid5 that is in desperate need of the
>data that's on the raid. The attached file holds the mdadm -E
>superblocks that are hopefully the keys to the puzzle. Linux-raid
>folks, if you can give any help here it would be much appreciated.
>
># mdadm -V
>mdadm - v1.7.0 - 11 August 2004
># uname -a
>Linux hazel 2.6.13-gentoo-r5 #1 SMP Sat Jan 21 13:24:15 PST 2006 i686 Intel(R) Pentium(R) 4 CPU 2.40GHz GenuineIntel GNU/Linux
>
>Here's my take:
>
>Logfiles show that last night drive /dev/etherd/e0.4 failed and around
>noon today /dev/etherd/e0.0 failed. This jibes with the superblock
>dates and info.
>
>My assessment is that since the last known good configuration was
>0 <missing>
>1 /dev/etherd/e0.0
>2 /dev/etherd/e0.2
>3 /dev/etherd/e0.3
>
>then we should shoot for this. I couldn't figure out how to get there
>using mdadm -A since /dev/etherd/e0.0 isn't in sync with e0.2 or e0.3.
>If anyone can suggest a way to get this back using -A, please chime in.
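>
>The closest I could see was a forced assembly along these lines, but
>I'm not sure it's safe when e0.0's event count doesn't match e0.2 and
>e0.3, so treat it as a guess rather than a plan:
>
># mdadm -A /dev/md0 --force --run /dev/etherd/e0.[023]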
>
>The alternative is to re-create the array with this configuration,
>hoping the data blocks will all line up properly so the filesystem can
>be mounted and the data retrieved. The following command looks like the
>right way to do this, but not being an expert, I (and the client) would
>like someone else to verify the sanity of this approach.
>
>Will
>
>mdadm -C /dev/md0 -n 4 -l 5 missing /dev/etherd/e0.[023]
>
>do what we want?
>
>Linux-raid folks, please reply-to-all, as we're probably not all on
>the list.
>
>
>
Yes, I would re-create the array with one disk missing. Mount it
read-only and verify your data. If things look OK, remount it read-write
and remember to add a new disk to fix the degraded array.
With the "missing" keyword there is no resync/recovery, so the data on
the disks will stay intact.
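
Roughly, assuming the array comes up as /dev/md0 and you mount it on
/mnt (and that the original array used the default chunk size and
layout -- worth double-checking before re-creating):

# mdadm -C /dev/md0 -n 4 -l 5 missing /dev/etherd/e0.[023]
# mount -o ro /dev/md0 /mnt
  ... verify the data ...
# mount -o remount,rw /mnt
# mdadm /dev/md0 -a /dev/etherd/e0.4

The last step assumes the replacement drive shows up at the old e0.4
address; use whatever device the new disk actually appears as.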
--
Regards,
Mike T.