From: David Greaves <david@dgreaves.com>
To: eharney@CLEMSON.EDU
Cc: linux-raid@vger.kernel.org
Subject: Re: Need help recovering a raid5 array
Date: Tue, 24 Oct 2006 09:49:23 +0100
Message-ID: <453DD393.4030406@dgreaves.com>
In-Reply-To: <44029.130.127.44.164.1161616903.squirrel@wm.clemson.edu>

eharney@CLEMSON.EDU wrote:
> Hello all,
Hi

First off, don't do anything else to the array without reading up or asking
on here first :)

The list archive has a lot of good material - 'help' is usually a good
search term!


> 
> I had a disk fail in a raid 5 array (4 disk array, no spares), and am
> having trouble recovering it.  I believe my data is still safe, but I
> cannot tell what is going wrong here.

There's some useful information here, but always include the following
(a quick way to gather it is sketched below the list):
* kernel version
* mdadm version
* relevant dmesg or similar output
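For example, something like this would capture the basics (adjust device
names to match your system):

  uname -r                # kernel version
  mdadm --version         # mdadm version
  dmesg | tail -n 50      # recent kernel messages
  cat /proc/mdstat        # current array state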


What went wrong?
Did /dev/sdd fail? If so, why are you adding it back to the array? Or is
this now a replacement disk?
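If there's any doubt about whether the drive itself is healthy, it's worth
checking before trusting it again. A quick sketch, assuming you have
smartmontools installed:

  smartctl -H /dev/sdd    # overall SMART health verdict
  smartctl -a /dev/sdd    # full attributes and error log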

You should be OK - I'll send this quick reply now and see if I can make
some more detailed suggestions later (or sooner).
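In the meantime, the usual way past "cannot start dirty degraded array" is
to force-assemble from the three good members only, and re-add the fourth
disk once the array is up. A sketch of what that looks like - but don't run
it until you're confident the three disks really are in sync (your matching
event counts suggest they are); --force tells md to trust them despite the
dirty flag:

  mdadm --stop /dev/md0
  mdadm --assemble --force /dev/md0 /dev/sda2 /dev/sdb2 /dev/sdc2
  cat /proc/mdstat        # should now show md0 active but degraded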

David


> 
> When I try to rebuild the array "mdadm --assemble /dev/md0 /dev/sda2
> /dev/sdb2 /dev/sdc2 /dev/sdd2" I see "failed to RUN_ARRAY /dev/md0:
> Input/output error".
> 
> dmesg shows the following:
> md: bind<sdb2>
> md: bind<sdc2>
> md: bind<sdd2>
> md: bind<sda2>
> md: md0: raid array is not clean -- starting background reconstruction
> raid5: device sda2 operational as raid disk 0
> raid5: device sdc2 operational as raid disk 2
> raid5: device sdb2 operational as raid disk 1
> raid5: cannot start dirty degraded array for md0
> RAID5 conf printout:
>  --- rd:4 wd:3 fd:1
>  disk 0, o:1, dev:sda2
>  disk 1, o:1, dev:sdb2
>  disk 2, o:1, dev:sdc2
> raid5: failed to run raid set md0
> md: pers->run() failed ...
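That "cannot start dirty degraded array" line is the key: the array was
mid-write when the disk failed, so md can't be sure the parity is
consistent, and it refuses to auto-start with a member missing. Forcing
the assembly (see above) is the usual answer; depending on your kernel
version there is also a boot-time override - a sketch, assuming md is
built into the kernel rather than loaded as a module:

  # appended to the kernel command line in the bootloader config
  md-mod.start_dirty_degraded=1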
> 
> 
> 
> /proc/mdstat shows:
> md0 : inactive sda2[0] sdd2[3](S) sdc2[2] sdb2[1]
> 
> This seems wrong, as sdd2 should not be a spare - I want it to be the
> fourth disk.
> 
> 
> The output of mdadm -E for each disk is as follows:
> sda2:
> /dev/sda2:
>           Magic : a92b4efc
>         Version : 00.90.00
>            UUID : c50a81fc:ef4323e6:438a7cb1:25ae35e5
>   Creation Time : Thu Jun  1 21:13:58 2006
>      Raid Level : raid5
>     Device Size : 390555904 (372.46 GiB 399.93 GB)
>      Array Size : 1171667712 (1117.39 GiB 1199.79 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 0
> 
>     Update Time : Sun Oct 22 23:39:06 2006
>           State : active
>  Active Devices : 3
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 1
>        Checksum : 683f2f5c - correct
>          Events : 0.8831997
> 
>          Layout : left-symmetric
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     0       8        2        0      active sync   /dev/sda2
> 
>    0     0       8        2        0      active sync   /dev/sda2
>    1     1       8       18        1      active sync   /dev/sdb2
>    2     2       8       34        2      active sync   /dev/sdc2
>    3     3       0        0        3      faulty removed
>    4     4       8       50        4      spare   /dev/sdd2
> 
> 
> sdb2:
> /dev/sdb2:
>           Magic : a92b4efc
>         Version : 00.90.00
>            UUID : c50a81fc:ef4323e6:438a7cb1:25ae35e5
>   Creation Time : Thu Jun  1 21:13:58 2006
>      Raid Level : raid5
>     Device Size : 390555904 (372.46 GiB 399.93 GB)
>      Array Size : 1171667712 (1117.39 GiB 1199.79 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 0
> 
>     Update Time : Sun Oct 22 23:39:06 2006
>           State : active
>  Active Devices : 3
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 1
>        Checksum : 683f2f6e - correct
>          Events : 0.8831997
> 
>          Layout : left-symmetric
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     1       8       18        1      active sync   /dev/sdb2
> 
>    0     0       8        2        0      active sync   /dev/sda2
>    1     1       8       18        1      active sync   /dev/sdb2
>    2     2       8       34        2      active sync   /dev/sdc2
>    3     3       0        0        3      faulty removed
>    4     4       8       50        4      spare   /dev/sdd2
> 
> 
> sdc2:
> /dev/sdc2:
>           Magic : a92b4efc
>         Version : 00.90.00
>            UUID : c50a81fc:ef4323e6:438a7cb1:25ae35e5
>   Creation Time : Thu Jun  1 21:13:58 2006
>      Raid Level : raid5
>     Device Size : 390555904 (372.46 GiB 399.93 GB)
>      Array Size : 1171667712 (1117.39 GiB 1199.79 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 0
> 
>     Update Time : Sun Oct 22 23:39:06 2006
>           State : active
>  Active Devices : 3
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 1
>        Checksum : 683f2f80 - correct
>          Events : 0.8831997
> 
>          Layout : left-symmetric
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     2       8       34        2      active sync   /dev/sdc2
> 
>    0     0       8        2        0      active sync   /dev/sda2
>    1     1       8       18        1      active sync   /dev/sdb2
>    2     2       8       34        2      active sync   /dev/sdc2
>    3     3       0        0        3      faulty removed
>    4     4       8       50        4      spare   /dev/sdd2
> 
> 
> sdd2:
> /dev/sdd2:
>           Magic : a92b4efc
>         Version : 00.90.00
>            UUID : c50a81fc:ef4323e6:438a7cb1:25ae35e5
>   Creation Time : Thu Jun  1 21:13:58 2006
>      Raid Level : raid5
>     Device Size : 390555904 (372.46 GiB 399.93 GB)
>      Array Size : 1171667712 (1117.39 GiB 1199.79 GB)
>    Raid Devices : 4
>   Total Devices : 4
> Preferred Minor : 0
> 
>     Update Time : Sun Oct 22 23:39:06 2006
>           State : active
>  Active Devices : 3
> Working Devices : 4
>  Failed Devices : 0
>   Spare Devices : 1
>        Checksum : 683f2fbf - correct
>          Events : 0.8831997
> 
>          Layout : left-symmetric
>      Chunk Size : 256K
> 
>       Number   Major   Minor   RaidDevice State
> this     3       8       50       -1      sync   /dev/sdd2
> 
>    0     0       8        2        0      active sync   /dev/sda2
>    1     1       8       18        1      active sync   /dev/sdb2
>    2     2       8       34        2      active sync   /dev/sdc2
>    3     3       8       50       -1      sync   /dev/sdd2
>    4     4       8       50        4      spare   /dev/sdd2
> 
> 
> 
> Does anyone have any idea how to get this array back into good shape?
> I'm not sure why it thinks sdd2 should be a spare, or how to get it back
> to being a regular disk.
> 
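That's normal, if surprising: when a disk drops out and is added back, md
treats it as a brand-new spare rather than the old member - note how sdd2
appears both as slot 3 ("faulty removed") and as device 4 ("spare") in the
-E output above. Once the array is running degraded on the other three
disks, re-adding it is the right move and md will rebuild onto it. A
sketch:

  mdadm /dev/md0 --add /dev/sdd2
  watch cat /proc/mdstat  # follow the rebuild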
> I would appreciate any help you can offer.  (Also, am I right in thinking
> my data is still good?  I should still have 3 of the 4 disks working fine,
> at any rate.)
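Yes - with three of the four members intact and showing matching event
counts, raid5 can reconstruct everything. If you want to reassure yourself
before anything writes to it, check the filesystem read-only first. A
sketch, assuming ext3 on md0 - substitute your actual filesystem and mount
point:

  fsck.ext3 -n /dev/md0             # read-only check, changes nothing
  mount -o ro /dev/md0 /mnt/check   # mount read-only and look around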
> 
> Thanks,
> Eric

