From: Johannes Truschnigg <johannes@truschnigg.info>
To: Phil Turmel <philip@turmel.org>
Cc: linux-raid@vger.kernel.org
Subject: Re: What just happened to my disks/RAID5 array?
Date: Fri, 6 Jan 2012 11:51:43 +0100	[thread overview]
Message-ID: <20120106105143.GA2932@vault.local> (raw)
In-Reply-To: <4E70FE5B.5080601@turmel.org>



Hello again Phil and everyone else who's having a peek,

you see, I finally had the chance to migrate all the disks to a new machine,
and figured I'd try my luck at getting back the data on my precious array.
It's been a while since I had access to it, but having that data available all
the time is not as important as having it at all, as I use the box mostly to
store old(er) backups. I definitely would like to have them back at some point
in time, however ;)

So yesterday, I upgraded all the software on the boot drive (running Gentoo),
and now I have Kernel 3.2.0 and mdadm 3.1.5, and all the drives attached to an
AMD SB850 in AHCI mode. Drive-wise, everything looks as expected - all device
nodes are there, fdisk reports the correct size, and SMART data can be read
w/o problems. Assembling the array, however, fails, and I promised in a
previous mail in this thread that I would come back to the list and post the
info I gathered before venturing forth. Well, here I am now:

I have the array in stopped state, so /proc/mdstat contains no arrays at this
time. Now I run the following command, which yields this output:

--- snip ---
# mdadm -v --assemble -u "19e260e6:db3cad86:0541487d:a1bae605" /dev/md0 
mdadm: looking for devices for /dev/md0
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: /dev/sda1 has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.
mdadm: added /dev/sdb to /dev/md0 as 0
mdadm: added /dev/sdd to /dev/md0 as 2
mdadm: added /dev/sdf to /dev/md0 as 3
mdadm: added /dev/sdc to /dev/md0 as 4
mdadm: added /dev/sde to /dev/md0 as 1
mdadm: /dev/md0 assembled from 2 drives - not enough to start the array.
--- snip ---


It seems that mdadm identifies all five original components of my array, but
then decides that only two of them are usable, and therefore can't start the
array. Comparing the -E output below, three of the drives stopped at event
count 3926, while sde and sdf are at 3929 and record an array state of .A.A.
- I suppose that's why only those two count as current. /proc/mdstat, at this
point in time, shows the following:

--- snip ---
md0 : inactive sde[1](S) sdc[5](S) sdf[3](S) sdd[2](S) sdb[0](S)
      7325687800 blocks super 1.2
--- snip ---
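
As a quick sanity check: if I'm reading mdstat right, for an inactive array
that block count is just the sum of the member sizes in 1K blocks, and the
numbers do add up - each member's Avail Dev Size is 2930275120 sectors:

--- snip ---
# 2930275120 sectors per member / 2 sectors per 1K block * 5 members
$ echo $(( 2930275120 / 2 * 5 ))
7325687800
--- snip ---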

The (S) should indicate the component being marked as "spare", right? (mdstat
really could use a manpage with a short overview of the most commonly
observed abbreviations, symbols and terms - I guess I'll volunteer if you
don't tell me that's already documented somewhere.)
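
To make the event-count comparison from above easier to follow, this is how I
pulled the interesting superblock fields side by side - just a sketch,
assuming the device names from this boot:

--- snip ---
# dump event count, last update and array state for every member
for d in /dev/sd[b-f]; do
    echo "== $d"
    mdadm -E "$d" | grep -E 'Events|Update Time|Array State'
done
--- snip ---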

Shall I just try "-A --force", which is supposed to kick the array enough to
start again? Or is there anything else you would recommend before resorting
to that?
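
If forcing it is indeed the way to go, this is the sequence I have in mind -
just a sketch, with the device names from this boot, and keeping everything
read-only until the data checks out:

--- snip ---
# stop the half-assembled, inactive array first
mdadm --stop /dev/md0

# --force lets mdadm bring in the three members with the older
# event count (3926) instead of treating them as stale
mdadm -v --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

cat /proc/mdstat

# mount read-only before trusting anything on it
mount -o ro /dev/md0 /mnt
--- snip ---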

One thing I forgot to mention is that I cannot guarantee that the order of the
drives is still the same as it was in the old box (device node names for the
component disks could have changed), but I'm convinced that's not a problem
and I mention it only for the sake of completeness.
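
That's easy to double-check, too - each superblock records which slot the
member occupies, so the kernel's device naming doesn't matter:

--- snip ---
# the slot comes from the superblock, not from the device name
$ mdadm -E /dev/sdb | grep 'Device Role'
   Device Role : Active device 0
--- snip ---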

I have attached a file with the output of `mdadm -E` for each of the
components for your viewing pleasure - thanks in advance to anyone taking the
time and effort to look into this!

-- 
with best regards:
- Johannes Truschnigg ( johannes@truschnigg.info )

www:   http://johannes.truschnigg.info/
phone: +43 650 2 133337
xmpp:  johannes@truschnigg.info

Please do not bother me with HTML-eMail or attachments. Thank you.

[-- Attachment #1.2: mdadm-examine-disks.txt --]
[-- Type: text/plain, Size: 4447 bytes --]

# mdadm -E /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 19e260e6:db3cad86:0541487d:a1bae605
           Name : virtue:0  (local to host virtue)
  Creation Time : Tue Dec 21 10:25:32 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
     Array Size : 11721097216 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : a1a06197:c5a7727d:5f527b15:01941ba2

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Sep 11 20:11:23 2011
       Checksum : 29ad30c - correct
         Events : 3926

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)


# mdadm -E /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 19e260e6:db3cad86:0541487d:a1bae605
           Name : virtue:0  (local to host virtue)
  Creation Time : Tue Dec 21 10:25:32 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
     Array Size : 11721097216 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 67a52c17:6f69b41c:696ce995:5b845991

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Sep 11 20:11:23 2011
       Checksum : 9153a3e4 - correct
         Events : 3926

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 4
   Array State : AAAAA ('A' == active, '.' == missing)


# mdadm -E /dev/sdd
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 19e260e6:db3cad86:0541487d:a1bae605
           Name : virtue:0  (local to host virtue)
  Creation Time : Tue Dec 21 10:25:32 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
     Array Size : 11721097216 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 697750ec:0391a119:1bac7a98:1cb374d6

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Sep 11 20:11:23 2011
       Checksum : ab110beb - correct
         Events : 3926

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAAA ('A' == active, '.' == missing)


# mdadm -E /dev/sde
/dev/sde:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 19e260e6:db3cad86:0541487d:a1bae605
           Name : virtue:0  (local to host virtue)
  Creation Time : Tue Dec 21 10:25:32 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
     Array Size : 11721097216 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : bd1fc7fb:00ed1072:1fd7d01a:415255a0

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Sep 13 06:07:24 2011
       Checksum : 5f2bb793 - correct
         Events : 3929

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : .A.A. ('A' == active, '.' == missing)


# mdadm -E /dev/sdf
/dev/sdf:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 19e260e6:db3cad86:0541487d:a1bae605
           Name : virtue:0  (local to host virtue)
  Creation Time : Tue Dec 21 10:25:32 2010
     Raid Level : raid5
   Raid Devices : 5

 Avail Dev Size : 2930275120 (1397.26 GiB 1500.30 GB)
     Array Size : 11721097216 (5589.05 GiB 6001.20 GB)
  Used Dev Size : 2930274304 (1397.26 GiB 1500.30 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 2a781a8a:ed3a6a97:29df74d2:bfcbe831

Internal Bitmap : 8 sectors from superblock
    Update Time : Tue Sep 13 06:07:24 2011
       Checksum : 5c0fdf77 - correct
         Events : 3929

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 3
   Array State : .A.A. ('A' == active, '.' == missing)


Thread overview: 12+ messages
2011-09-13  8:27 What just happened to my disks/RAID5 array? Johannes Truschnigg
2011-09-13 11:37 ` Phil Turmel
2011-09-13 18:56   ` Johannes Truschnigg
2011-09-14 11:41     ` Phil Turmel
2011-09-14 18:17       ` Johannes Truschnigg
2011-09-14 19:19         ` Phil Turmel
2012-01-06 10:51           ` Johannes Truschnigg [this message]
2012-01-06 13:16             ` Phil Turmel
2012-01-06 13:46               ` Johannes Truschnigg
2012-01-06 14:51                 ` Phil Turmel
2012-01-06 15:28                   ` Johannes Truschnigg
2012-01-07 14:23                     ` John Robinson
