From: Brian Candler <B.Candler@pobox.com>
To: Christian Balzer <chibi@gol.com>
Cc: linux-raid@vger.kernel.org
Subject: Re: Assembly failure
Date: Fri, 13 Jul 2012 19:52:35 +0100	[thread overview]
Message-ID: <20120713185235.GA40886@nsrc.org> (raw)
In-Reply-To: <20120711172742.2b8e13e9@batzmaru.gol.ad.jp>

OK, after reseating drives and removing the three definitely bad ones, I
think the hardware is stable again now.

So now I have a problem with the five-drive array I had set up in the
meantime.  All five drives are there, but one is a bit behind the others in
its event count and last update time.
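
As an aside, a quick way to compare just the event counts and update times
across the members is a little loop like this (device names as above):

    for d in /dev/sd{b,j,k,l,m}; do
        echo "== $d"
        mdadm --examine "$d" | grep -E 'Update Time|Events'
    done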

Here's the mdadm --examine output:

/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 149c0025:e7c5da3a:62b7a318:4ca57af7
           Name : storage1.2
  Creation Time : Wed Jul 11 14:50:06 2012
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
     Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
  Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : 56e9ce91:c5df8850:2105c86d:c9c710a1

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 11 15:19:31 2012
       Checksum : 80c0762 - correct
         Events : 276

         Layout : left-symmetric
     Chunk Size : 1024K

   Device Role : Active device 0
   Array State : AAAAA ('A' == active, '.' == missing)
/dev/sdj:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 149c0025:e7c5da3a:62b7a318:4ca57af7
           Name : storage1.2
  Creation Time : Wed Jul 11 14:50:06 2012
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
     Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
  Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : db72c8d7:672760b4:572dc944:fc7c151b

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 11 15:29:52 2012
       Checksum : 11ec5fef - correct
         Events : 357

         Layout : left-symmetric
     Chunk Size : 1024K

   Device Role : Active device 1
   Array State : .AAAA ('A' == active, '.' == missing)
/dev/sdk:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 149c0025:e7c5da3a:62b7a318:4ca57af7
           Name : storage1.2
  Creation Time : Wed Jul 11 14:50:06 2012
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
     Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
  Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : b12fefdd:74914e6e:9f3ca2bd:8b433e34

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 11 15:29:52 2012
       Checksum : 64035caa - correct
         Events : 357

         Layout : left-symmetric
     Chunk Size : 1024K

   Device Role : Active device 2
   Array State : .AAAA ('A' == active, '.' == missing)
/dev/sdl:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 149c0025:e7c5da3a:62b7a318:4ca57af7
           Name : storage1.2
  Creation Time : Wed Jul 11 14:50:06 2012
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
     Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
  Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : db387f8a:383c26f4:4012a3ec:12c7679e

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 11 15:29:52 2012
       Checksum : 2f9569c2 - correct
         Events : 357

         Layout : left-symmetric
     Chunk Size : 1024K

   Device Role : Active device 3
   Array State : .AAAA ('A' == active, '.' == missing)
/dev/sdm:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 149c0025:e7c5da3a:62b7a318:4ca57af7
           Name : storage1.2
  Creation Time : Wed Jul 11 14:50:06 2012
     Raid Level : raid6
   Raid Devices : 5

 Avail Dev Size : 5860531120 (2794.52 GiB 3000.59 GB)
     Array Size : 17581590528 (8383.56 GiB 9001.77 GB)
  Used Dev Size : 5860530176 (2794.52 GiB 3000.59 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : ac50fe77:91ce387a:e819a38d:4d56a734

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Jul 11 15:29:52 2012
       Checksum : da66aace - correct
         Events : 357

         Layout : left-symmetric
     Chunk Size : 1024K

   Device Role : Active device 4
   Array State : .AAAA ('A' == active, '.' == missing)

Now, a simple assemble fails:

    root@dev-storage1:~# mdadm --assemble /dev/md/storage1.2 /dev/sd{b,j,k,l,m}
    mdadm: /dev/md/storage1.2 assembled from 4 drives - not enough to start the array while not clean - consider --force.
    root@dev-storage1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : inactive sdj[1](S) sdm[4](S) sdl[3](S) sdk[2](S) sdb[0](S)
          14651327800 blocks super 1.2
           
    unused devices: <none>

(Well, md127 exists, but I don't know how to "start" it).
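I suppose something like

    mdadm --run /dev/md127

would attempt to start it, though presumably it would refuse for the same
"not clean" reason.
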
So let's try using --force as it suggests:

    root@dev-storage1:~# mdadm -S /dev/md127
    mdadm: stopped /dev/md127
    root@dev-storage1:~# mdadm --assemble --force /dev/md/storage1.2 /dev/sd{b,j,k,l,m}
    mdadm: /dev/md/storage1.2 has been started with 4 drives (out of 5).
    root@dev-storage1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active raid6 sdj[1] sdm[4] sdl[3] sdk[2]
          8790795264 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [5/4] [_UUUU]
          bitmap: 22/22 pages [88KB], 65536KB chunk

    unused devices: <none>
    root@dev-storage1:~# 

Now I have a four-drive degraded RAID6, and /dev/sdb isn't listed at all
(even though I gave it on the command line).  Is this correct?  Is the next
step to re-add the fifth drive manually?

    root@dev-storage1:~# mdadm --manage --re-add /dev/md127 /dev/sdb
    mdadm: re-added /dev/sdb
    root@dev-storage1:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md127 : active raid6 sdb[0] sdj[1] sdm[4] sdl[3] sdk[2]
          8790795264 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [5/4] [_UUUU]
          [>....................]  recovery =  1.1% (32854540/2930265088) finish=952.5min speed=50692K/sec
          bitmap: 22/22 pages [88KB], 65536KB chunk

    unused devices: <none>

That seems to have worked, but can someone confirm that this is the right
sequence of steps?  This is a test system; the next time I do this it might
be for real :-)
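
Once the recovery completes I plan to sanity-check the result with
something like:

    mdadm --detail /dev/md127    # expect "State : clean" and 5 active devices
    cat /proc/mdstat             # expect [5/5] [UUUUU]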

Cheers,

Brian.
