* Wrong count of devices in /proc/mdstat after "want_replacement"
From: Roman Mamedov @ 2012-09-18  5:04 UTC
  To: linux-raid

Hello,

Summary:

After replacing a disk via "want_replacement" and then hot-removing it, the
device count in /proc/mdstat is wrong ("[5/4]" instead of "[5/5]" as it
should be).

More details below:

----
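// Adding /dev/sdb1 to the array; it is not degraded, so the new disk
// becomes a spare: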

# mdadm --add /dev/md0 /dev/sdb1 
mdadm: added /dev/sdb1

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed May 25 00:07:38 2011
     Raid Level : raid5
     Array Size : 3907003136 (3726.01 GiB 4000.77 GB)
  Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
   Raid Devices : 5
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Sep 18 01:01:38 2012
          State : active 
 Active Devices : 5
Working Devices : 6
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

           Name : avdeb:0  (local to host avdeb)
           UUID : b99961fb:ed1f76c8:ec2dad31:6db45332
         Events : 14135

    Number   Major   Minor   RaidDevice State
       0       8       65        0      active sync   /dev/sde1
       6       8       33        1      active sync   /dev/sdc1
       3       8       81        2      active sync   /dev/sdf1
       4       8       49        3      active sync   /dev/sdd1
       5       8       97        4      active sync   /dev/sdg1

       7       8       17        -      spare   /dev/sdb1

# echo want_replacement > /sys/block/md0/md/dev-sde1/state 
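
// Side note: writing "want_replacement" marks the member for hot
// replacement; md then rebuilds a spare onto it while sde1 remains in
// service. Reading the attribute back should confirm the flag (output here
// is what I would expect, not captured from this run):

# cat /sys/block/md0/md/dev-sde1/state
in_sync,want_replacement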

// It's rebuilding:

# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md0 : active raid5 sdb1[7](R) sde1[0] sdg1[5] sdd1[4] sdf1[3] sdc1[6]
      3907003136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
      [>....................]  recovery =  0.1% (1724936/976750784) finish=150.7min speed=107808K/sec
      bitmap: 0/4 pages [0KB], 131072KB chunk
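
// Note the "(R)" after sdb1[7] above: it marks sdb1 as the replacement
// being rebuilt for the slot still held by sde1.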

// Rebuild finished:

# cat /proc/mdstat 
md0 : active raid5 sdb1[7] sde1[0](F) sdg1[5] sdd1[4] sdf1[3] sdc1[6]
      3907003136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUUU]
      bitmap: 0/4 pages [0KB], 131072KB chunk

# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Wed May 25 00:07:38 2011
     Raid Level : raid5
     Array Size : 3907003136 (3726.01 GiB 4000.77 GB)
  Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
   Raid Devices : 5
  Total Devices : 6
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Sep 18 04:46:14 2012
          State : active 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : avdeb:0  (local to host avdeb)
           UUID : b99961fb:ed1f76c8:ec2dad31:6db45332
         Events : 14231

    Number   Major   Minor   RaidDevice State
       7       8       17        0      active sync   /dev/sdb1
       6       8       33        1      active sync   /dev/sdc1
       3       8       81        2      active sync   /dev/sdf1
       4       8       49        3      active sync   /dev/sdd1
       5       8       97        4      active sync   /dev/sdg1

       0       8       65        -      faulty spare   /dev/sde1

// Removing sde1:

# mdadm --remove /dev/md0 /dev/sde1 
mdadm: hot removed /dev/sde1 from /dev/md0
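
// Side note: before this disk is reused in another array, its stale md
// superblock would normally be cleared; not done here, but it would be:

# mdadm --zero-superblock /dev/sde1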

// Removed OK:

# mdadm --detail /dev/md0 
/dev/md0:
        Version : 1.2
  Creation Time : Wed May 25 00:07:38 2011
     Raid Level : raid5
     Array Size : 3907003136 (3726.01 GiB 4000.77 GB)
  Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
   Raid Devices : 5
  Total Devices : 5
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Tue Sep 18 08:49:24 2012
          State : active 
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           Name : avdeb:0  (local to host avdeb)
           UUID : b99961fb:ed1f76c8:ec2dad31:6db45332
         Events : 14234

    Number   Major   Minor   RaidDevice State
       7       8       17        0      active sync   /dev/sdb1
       6       8       33        1      active sync   /dev/sdc1
       3       8       81        2      active sync   /dev/sdf1
       4       8       49        3      active sync   /dev/sdd1
       5       8       97        4      active sync   /dev/sdg1

// In the end /proc/mdstat shows "[5/4]", not "[5/5]":

# cat /proc/mdstat 
md0 : active raid5 sdb1[7] sdg1[5] sdd1[4] sdf1[3] sdc1[6]
      3907003136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUUU]
      bitmap: 0/4 pages [0KB], 131072KB chunk
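
// Side note: the kernel's internal count can also be read from sysfs; given
// the "[5/4]" above I would expect it to still report one degraded device
// (illustrative, not captured):

# cat /sys/block/md0/md/degraded
1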

-- 
With respect,
Roman

~~~~~~~~~~~~~~~~~~~~~~~~~~~
"Stallman had a printer,
with code he could not see.
So he began to tinker,
and set the software free."

* Re: Wrong count of devices in /proc/mdstat after "want_replacement"
From: NeilBrown @ 2012-09-18  5:18 UTC
  To: Roman Mamedov; +Cc: linux-raid

On Tue, 18 Sep 2012 11:04:36 +0600 Roman Mamedov <rm@romanrm.ru> wrote:

> Hello,
> 
> Summary:
> 
> After replacing a disk via "want_replacement" and then hot-removing it, the
> device count in /proc/mdstat is wrong ("[5/4]" instead of "[5/5]" as it
> should be).
> 
> [...]
> 
> // In the end /proc/mdstat shows "[5/4]", not "[5/5]":
> 
> # cat /proc/mdstat 
> md0 : active raid5 sdb1[7] sdg1[5] sdd1[4] sdf1[3] sdc1[6]
>       3907003136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUUU]
>       bitmap: 0/4 pages [0KB], 131072KB chunk
> 


Thanks for the report.
I think that's fixed by:

http://git.neil.brown.name/?p=linux.git;a=commitdiff;h=413c4a33cb3cd1a14431c61fd20904cdb1867d17

which I will hopefully be sending to Linus some time soon.
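
(Once it lands in mainline, "git describe --contains 413c4a33cb3c" in a
kernel checkout should name the first release tag containing the fix.)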

NeilBrown
