From: Stefan Borggraefe <stefan@spybot.info>
To: Ole Tange <tange@binf.ku.dk>
Cc: linux-raid@vger.kernel.org
Subject: Re: Help with recovering a RAID5 array
Date: Sat, 04 May 2013 13:13:27 +0200
Message-ID: <1838659.cc600uVROo@rattle>
In-Reply-To: <CANU9nTnwx_3m3Ggebxgfe1RqKU6OMk59qnhecSmYvmC2cUeRnw@mail.gmail.com>
On Friday, 3 May 2013, 10:38:52 you wrote:
> On Thu, May 2, 2013 at 2:24 PM, Stefan Borggraefe <stefan@spybot.info> wrote:
> > I am using a RAID5 software RAID on Ubuntu 12.04
> >
> > It consists of six 4 TB Hitachi drives and contains an ext4 file
> > system.
> >
> > When I returned to this server this morning, the array was in the
> > following state:
> >
> > md126 : active raid5 sdc1[7](S) sdh1[4] sdd1[3](F) sde1[0] sdg1[6] sdf1[2]
> >       19535086080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/4] [U_U_UU]
> >
> > sdc is the newly added hard disk, but now sdd has failed as well. :( It
> > would be great if there were a way to get this RAID5 working again.
> > Perhaps sdc1 can then be fully added to the array and drive sdd
> > exchanged afterwards.
> I have had a few raid6 arrays fail in a similar fashion: the 3rd drive
> failing during rebuild (also 4 TB Hitachi, by the way).
>
> I tested if the drives were fine:
>
> parallel dd if={} of=/dev/null bs=1000k ::: /dev/sd?
>
> And they were all fine.
Same for me.
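
(For reference, roughly the same read test should also work without GNU
parallel; the device list below is an assumption for my six disks:

  for d in /dev/sd[c-h]; do
      dd if="$d" of=/dev/null bs=1000k &
  done
  wait

Any read errors should then also show up in dmesg.)
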
> With only a few failing sectors (if any), I figured that very little
> would be lost by forcing the failing drive online. Remove the spare
> drive, and force the remaining drives online:
>
> mdadm -A --scan --force
I removed the spare /dev/sdc1 from /dev/md126 with

  mdadm /dev/md126 --remove /dev/sdc1

After mdadm -A --scan --force, the array is now in this state:
md126 : active raid5 sdh1[4] sdd1[3](F) sde1[0] sdg1[6] sdf1[2]
      19535086080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/4] [U_U_UU]
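
(Before going further I also want to compare the event counters in the
superblocks, e.g. something like

  mdadm --examine /dev/sd[c-h]1 | grep -E '^/dev/|Events'

with the device list again being an assumption; the point is just to see
how far the failed sdd1 is behind the others.)
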
> Next step is to do fsck.
I think this is not possible at this point yet. Don't I need to reassemble
the array using the --assume-clean option and with one missing drive first?
Some step seems to be missing here.
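
(If it comes down to re-creating the array with --assume-clean, my
understanding is that the command would look roughly like the sketch below,
with placeholder device names. The six slots must be filled in the original
order reported by mdadm --examine, "missing" goes into the slot of the drive
that is left out, and a wrong order or chunk size would destroy the data:

  mdadm --create /dev/md126 --assume-clean --level=5 --raid-devices=6 \
        --chunk=512 --metadata=1.2 \
        SLOT0 SLOT1 SLOT2 missing SLOT4 SLOT5

Is that the missing step, or is there a safer way?)
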
Stefan