From: Dawning Sky <the.dawning.sky@gmail.com>
To: linux-raid@vger.kernel.org
Subject: Re: Disk I/O error while rebuilding an md raid-5 array
Date: Mon, 8 Feb 2010 23:39:02 -0800
Message-ID: <adf93d751002082339m4493166fo3f9d0ffe6a2c1eb4@mail.gmail.com>
In-Reply-To: <4B710745.7020200@stud.tu-ilmenau.de>

Thanks for the good advice.  ddrescue on sdb reported about 4kB of
unreadable data.  I do still have my old sde.  But one thing I did,
which was stupid, was trying to rebuild the raid-5 while it was
mounted.  So I don't know whether the old sde is still consistent with
the other 3 disks, since some files would have been modified between
the time I took the old sde offline and the time the rebuild failed.
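(In case it helps anyone else: if your ddrescue ships the ddrescuelog
helper, "ddrescuelog -t /root/sdblog" - using the map file from your
commands below - summarizes how much was rescued and how much stayed
unreadable.)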

So at this point, I guess I'll get 4 new drives, set up a brand new
raid-6, try to restore my data from the backup on an external drive,
and hope for the best.  I'll keep the 4 drives from my old raid-5 just
in case I need to recover something from them.
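
Roughly this, I suppose - the device names are just placeholders for
whatever the new drives show up as:

mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sd[bcde]1
mkfs.xfs /dev/md0   # or whatever filesystem I settle on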

I guess I learned my lesson.  I should have ddrescued all the disks I
wanted to replace, instead of relying on md's rebuild mechanism.

DS


On Mon, Feb 8, 2010 at 10:57 PM, Stefan Hübner
<stefan.huebner@stud.tu-ilmenau.de> wrote:
> Hi!
>
> I do RAID recoveries at least once a month and get paid for it.  Rule of
> thumb: if you have one drive dropped and another one with pending
> sectors, your rebuild will fail - no need for calculations there.
>
> ddrescue on a clean disk is about half as fast as dd with a blocksize
> beyond 1M.  ddrescue on a disk with pending sectors is not the pain that
> dd or sg_dd would be, because it adds the necessary intelligence.
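> (By "dd" I mean a plain copy along the lines of
> "dd if=/dev/olddisk of=/dev/newdisk bs=4M" - olddisk and newdisk are
> placeholders, of course.)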
>
> Do you have the original sde still around?  If yes, ddrescue both sdb
> and sde.  My experience says there will only be a few KB lost.  Then
> re-create your raid (it will only rewrite the superblock) with
> "--assume-clean".  Once that has worked, you might make another (big)
> backup first, then run fsck and see what happens.  If the lost bytes
> have screwed up the filesystem, you might want to re-create the raid
> with another fs (personally I prefer xfs) and replay your backup into
> it.
>
> A few commands to make the intentions clearer:
> ddrescue -dv -r5 /dev/oldsdb1 /dev/newsdb1 /root/sdblog
> ddrescue -dv -r5 /dev/oldsde1 /dev/newsde1 /root/sdelog
> ... find out which drive is which raid-device -> mdadm -E /dev/sdX1
> mdadm --create /dev/md0 --raid-devices=4 --level=5 \
>   --chunk=${your_chunk_size_in_kb} --assume-clean \
>   ${ordered_list_of_raid_devices}
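>
> For the fsck step mentioned above, do a read-only pass first, e.g.
> "fsck -n /dev/md0" (or the fs-specific checker, such as "e2fsck -n",
> depending on your filesystem).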
>
> Hope this helps,
> Stefan Hübner
>
>
> On 09.02.2010 05:20, Dawning Sky wrote:
>> On Mon, Feb 8, 2010 at 3:23 PM, Dawning Sky <the.dawning.sky@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Now I have two faulty drives and things don't look good.  However, I
>>> was able to add sdb back to the array, and md seemed not to mind and
>>> still reported "active sync".  At this point I shut the computer down
>>> and decided to clone sdb with clonezilla so that I would have a good
>>> sdb to finish rebuilding sde.  Not sure if it will complete without
>>> I/O errors.  It appears clonezilla is using dd, and the speed is
>>> extremely slow (~5MB/sec); it says it's going to take a day to clone
>>> the 500GB.
>>>
>>>
>> As expected, dd encountered the same UNC error.  Now I'm trying to
>> ddrescue the drive to see what happens.  My question is whether this
>> is worth doing.  Assuming ddrescue cannot read the bad sector either
>> and writes 0's to the new drive, will I be able to rebuild the raid-5
>> from the 2 good disks and this disk with a bad sector?  I assume there
>> will be a bad file, but will the array still function?
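>>
>> (If the array does come up, I imagine I could locate the damaged file
>> afterwards - assuming an ext filesystem, something like
>> 'debugfs -R "icheck <bad_block>" /dev/md0' to find the inode holding
>> the zeroed block, then 'debugfs -R "ncheck <inode>" /dev/md0' for the
>> pathname; <bad_block> and <inode> are placeholders.)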
>>
>> Or am I better off just building a new array from scratch?
>>
>> Any suggestions are appreciated.
>>
>> Regards,
>>
>> DS
