From: NeilBrown <neilb@suse.de>
To: Roman Mamedov <rm@romanrm.ru>
Cc: linux-raid@vger.kernel.org
Subject: Re: Wrong count of devices in /proc/mdstat after "want_replacement"
Date: Tue, 18 Sep 2012 15:18:26 +1000
Message-ID: <20120918151826.08af9e27@notabene.brown>
In-Reply-To: <20120918110436.0215e5ec@natsu>
On Tue, 18 Sep 2012 11:04:36 +0600 Roman Mamedov <rm@romanrm.ru> wrote:
> Hello,
>
> Summary:
>
> After replacing a disk via "want_replacement" and then removing it, the
> device count in /proc/mdstat is wrong: it shows "[5/4]" where it should
> show "[5/5]".
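(For reference: the "[5/4]" pair /proc/mdstat prints is <raid disks>/<working
disks>, and the second number is derived from the array's internal "degraded"
count (working = raid_disks - degraded). Both inputs are exposed in sysfs, so
you can check which side of the pair is stale directly; assuming the usual md
sysfs layout:

  # cat /sys/block/md0/md/raid_disks
  # cat /sys/block/md0/md/degraded

"degraded" should read 0 on a fully-populated, in-sync array; after the
sequence below it is left at 1 even though all five members are in sync.)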
>
> More details below:
>
> ----
>
> # mdadm --add /dev/md0 /dev/sdb1
> mdadm: added /dev/sdb1
>
> # mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Wed May 25 00:07:38 2011
>      Raid Level : raid5
>      Array Size : 3907003136 (3726.01 GiB 4000.77 GB)
>   Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
>    Raid Devices : 5
>   Total Devices : 6
>     Persistence : Superblock is persistent
>
>   Intent Bitmap : Internal
>
>     Update Time : Tue Sep 18 01:01:38 2012
>           State : active
>  Active Devices : 5
> Working Devices : 6
>  Failed Devices : 0
>   Spare Devices : 1
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>            Name : avdeb:0 (local to host avdeb)
>            UUID : b99961fb:ed1f76c8:ec2dad31:6db45332
>          Events : 14135
>
>     Number   Major   Minor   RaidDevice State
>        0       8       65        0      active sync   /dev/sde1
>        6       8       33        1      active sync   /dev/sdc1
>        3       8       81        2      active sync   /dev/sdf1
>        4       8       49        3      active sync   /dev/sdd1
>        5       8       97        4      active sync   /dev/sdg1
>
>        7       8       17        -      spare   /dev/sdb1
>
> # echo want_replacement > /sys/block/md0/md/dev-sde1/state
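(Background: writing "want_replacement" to a member's state asks md to
rebuild onto an available spare while the old device stays in the array, and
to fail the old device automatically once the copy completes. While the copy
runs, the pairing is visible in the per-device sysfs state; roughly, on a
3.3+ kernel, something like:

  # cat /sys/block/md0/md/dev-sde1/state
  in_sync,want_replacement
  # cat /sys/block/md0/md/dev-sdb1/state
  spare,replacement

The exact flag list varies with kernel version, so treat that output as
illustrative only.)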
>
> // It's rebuilding:
>
> # cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : active raid5 sdb1[7](R) sde1[0] sdg1[5] sdd1[4] sdf1[3] sdc1[6]
>       3907003136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
>       [>....................]  recovery = 0.1% (1724936/976750784) finish=150.7min speed=107808K/sec
>       bitmap: 0/4 pages [0KB], 131072KB chunk
>
> // Rebuild finished:
>
> # cat /proc/mdstat
> md0 : active raid5 sdb1[7] sde1[0](F) sdg1[5] sdd1[4] sdf1[3] sdc1[6]
>       3907003136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUUU]
>       bitmap: 0/4 pages [0KB], 131072KB chunk
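(The count is already wrong at this point: "[5/4]" alongside "[UUUUU]" is
self-contradictory, since the five U's say every slot is in sync while the
"/4" says one working device is missing. "[5/4]" implies the internal
degraded count is 1, which can be confirmed on the affected kernel with:

  # cat /sys/block/md0/md/degraded

One might excuse it here because a faulty device is still attached, but as
shown below it persists even after /dev/sde1 is removed.)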
>
> # mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Wed May 25 00:07:38 2011
>      Raid Level : raid5
>      Array Size : 3907003136 (3726.01 GiB 4000.77 GB)
>   Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
>    Raid Devices : 5
>   Total Devices : 6
>     Persistence : Superblock is persistent
>
>   Intent Bitmap : Internal
>
>     Update Time : Tue Sep 18 04:46:14 2012
>           State : active
>  Active Devices : 5
> Working Devices : 5
>  Failed Devices : 1
>   Spare Devices : 0
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>            Name : avdeb:0 (local to host avdeb)
>            UUID : b99961fb:ed1f76c8:ec2dad31:6db45332
>          Events : 14231
>
>     Number   Major   Minor   RaidDevice State
>        7       8       17        0      active sync   /dev/sdb1
>        6       8       33        1      active sync   /dev/sdc1
>        3       8       81        2      active sync   /dev/sdf1
>        4       8       49        3      active sync   /dev/sdd1
>        5       8       97        4      active sync   /dev/sdg1
>
>        0       8       65        -      faulty spare   /dev/sde1
>
> // Removing sde1:
>
> # mdadm --remove /dev/md0 /dev/sde1
> mdadm: hot removed /dev/sde1 from /dev/md0
>
> // Removed ok:
>
> # mdadm --detail /dev/md0
> /dev/md0:
>         Version : 1.2
>   Creation Time : Wed May 25 00:07:38 2011
>      Raid Level : raid5
>      Array Size : 3907003136 (3726.01 GiB 4000.77 GB)
>   Used Dev Size : 976750784 (931.50 GiB 1000.19 GB)
>    Raid Devices : 5
>   Total Devices : 5
>     Persistence : Superblock is persistent
>
>   Intent Bitmap : Internal
>
>     Update Time : Tue Sep 18 08:49:24 2012
>           State : active
>  Active Devices : 5
> Working Devices : 5
>  Failed Devices : 0
>   Spare Devices : 0
>
>          Layout : left-symmetric
>      Chunk Size : 64K
>
>            Name : avdeb:0 (local to host avdeb)
>            UUID : b99961fb:ed1f76c8:ec2dad31:6db45332
>          Events : 14234
>
>     Number   Major   Minor   RaidDevice State
>        7       8       17        0      active sync   /dev/sdb1
>        6       8       33        1      active sync   /dev/sdc1
>        3       8       81        2      active sync   /dev/sdf1
>        4       8       49        3      active sync   /dev/sdd1
>        5       8       97        4      active sync   /dev/sdg1
>
> // In the end /proc/mdstat shows "[5/4]", not "[5/5]":
>
> # cat /proc/mdstat
> md0 : active raid5 sdb1[7] sdg1[5] sdd1[4] sdf1[3] sdc1[6]
>       3907003136 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/4] [UUUUU]
>       bitmap: 0/4 pages [0KB], 131072KB chunk
>
Thanks for the report.
I think that's fixed by:
http://git.neil.brown.name/?p=linux.git;a=commitdiff;h=413c4a33cb3cd1a14431c61fd20904cdb1867d17
which I hope to send to Linus soon.
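In the meantime the stale count should be cosmetic only (your --detail
output shows the right counts), and since "degraded" is recalculated from
the member states when an array is started, stopping and re-assembling it
ought to put /proc/mdstat back to "[5/5]". Untested, but roughly:

  # mdadm --stop /dev/md0
  # mdadm --assemble /dev/md0 /dev/sd[bcdfg]1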
NeilBrown