linux-raid.vger.kernel.org archive mirror
From: Nicolas Jungers <nicolas@jungers.net>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: Re: trouble repairing raid10
Date: Sun, 06 Jun 2010 18:28:53 +0200
Message-ID: <4C0BCCC5.4000709@jungers.net>
In-Reply-To: <20100603101913.3839c934@notabene.brown>

On 06/03/2010 02:19 AM, Neil Brown wrote:
> On Wed, 02 Jun 2010 18:25:58 +0200
> Nicolas Jungers <nicolas@jungers.net> wrote:
>
>> I've a 4-HD RAID10 with two failed drives.  Every attempt I made to add
>> two replacement disks failed consistently.

[snip]
>
> If one of these is actually usable and just had a transient failure then you
> could try re-creating the array with the drives, or 'missing', in the right
> order and with the right layout/chunksize set.
> You would need to be sure the 'Data Offset' was the same, which unfortunately
> can require using exactly the same version of mdadm as created the array in
> the first place.
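
For anyone finding this in the archives: the offsets are easy to check
on each member before committing to a re-create; the device name here
is only an example:

   mdadm --examine /dev/sdX2 | grep -E 'Data Offset|Super Offset'

The 'Data Offset : 272 sectors' in the --examine output quoted further
down is the value the re-created superblock has to reproduce.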

I managed to copy the two failed disks onto a single new one (same
brand/model) with GNU ddrescue, for a grand total of 512 B lost (the
two failed drives were the two halves of the same mirror pair, so
ddrescue could merge their readable sectors onto one disk).  With that
copy and a copy of one of the non-failed disks, I recreated the array
(mdadm -C) over those disks, using the same creation parameters and two
'missing' slots.  I'm not sure the procedure was quicker than pulling
the data back from the backup, but the exercise was interesting
nevertheless.
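
For the record, the commands looked roughly like the sketch below.
Device names and slot order are illustrative; the level, layout, chunk
size, and metadata version are the ones from the --examine output
quoted further down:

   # Merge both halves of the failed mirror pair onto one fresh disk:
   # re-running ddrescue with the same output and log file but the
   # other source only reads the sectors still missing after pass one
   ddrescue -f /dev/sd_failed1 /dev/sd_new /root/rescue.log
   ddrescue -f /dev/sd_failed2 /dev/sd_new /root/rescue.log

   # Re-create the array with the original parameters (RAID10, 4
   # devices, near=2 layout, 1024K chunks, 1.2 superblocks), with
   # 'missing' for the two absent slots; --assume-clean avoids any
   # resync attempt on the degraded array
   mdadm -C /dev/md1 --assume-clean --metadata=1.2 --level=10 \
         --raid-devices=4 --layout=n2 --chunk=1024 \
         /dev/sd_new2 missing /dev/sd_copy2 missing

   # Sanity-check before trusting it: superblock, then a read-only mount
   mdadm --examine /dev/sd_new2
   mount -o ro /dev/md1 /mnt

As Neil notes above, the re-created 'Data Offset' has to match the
original 272 sectors, which may require using the same mdadm version
that created the array in the first place.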

Thinking about it, couldn't this be detected and automated in some way
by mdadm or a related utility, or at least documented in a FAQ?  I have
the feeling that this nearly-straightforward recovery could be made
easier by mdadm itself, or am I dreaming?

N.


>
> NeilBrown
>
>>
>>
>>
>> mdadm --examine /dev/sdm2
>> /dev/sdm2:
>>             Magic : a92b4efc
>>           Version : 1.2
>>       Feature Map : 0x0
>>        Array UUID : d90ad6fe:1355134f:f83ffadc:a4fe7859
>>              Name : m1:1
>>     Creation Time : Thu Apr  1 21:28:58 2010
>>        Raid Level : raid10
>>      Raid Devices : 4
>>
>>    Avail Dev Size : 3907026909 (1863.02 GiB 2000.40 GB)
>>        Array Size : 7814049792 (3726.03 GiB 4000.79 GB)
>>     Used Dev Size : 3907024896 (1863.01 GiB 2000.40 GB)
>>       Data Offset : 272 sectors
>>      Super Offset : 8 sectors
>>             State : clean
>>       Device UUID : e217355e:632ac2f0:8120e55e:3878bd88
>>
>>       Update Time : Wed Jun  2 12:31:39 2010
>>          Checksum : feef2809 - correct
>>            Events : 1377156
>>
>>            Layout : near=2, far=1
>>        Chunk Size : 1024K
>>
>>       Array Slot : 3 (failed, failed, 2, 3)
>>      Array State : __uU 2 failed
>


Thread overview: 4 messages
2010-06-02 16:25 trouble repairing raid10 Nicolas Jungers
2010-06-03  0:19 ` Neil Brown
2010-06-03  4:38   ` Nicolas Jungers
2010-06-06 16:28   ` Nicolas Jungers [this message]
