From: "Luke Odom" <luke@lukeodom.com>
To: NeilBrown <neilb@suse.de>
Cc: Luke Odom <luke@lukeodom.com>, linux-raid@vger.kernel.org
Subject: Re: dmadm question
Date: Mon, 15 Sep 2014 07:07:53 -0700
Message-ID: <56516488c20b6b78729f14731fe8aecf.squirrel@webmail.lukeodom.com>
In-Reply-To: <20140915103154.59bc7293@notabene.brown>
The drive is the exact same model as the old one. Output of the requested commands:
# mdadm --manage /dev/md127 --remove /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md127
# mdadm --zero /dev/sdb
# mdadm --manage /dev/md127 --add /dev/sdb
mdadm: added /dev/sdb
# ps aux | grep mdmon
root 1937 0.0 0.1 10492 10484 ? SLsl 14:04 0:00 mdmon md127
root 2055 0.0 0.0 2420 928 pts/0 S+ 14:06 0:00 grep mdmon
md: unbind<sdb>
md: export_rdev(sdb)
md: bind<sdb>
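
So mdmon is running, but the kernel log only shows the unbind/bind and no
recovery starts. In case it is useful, a sketch of standard follow-up checks
that should show whether the re-add took (device names as above; these are
plain mdadm invocations, nothing specific to this box):

# cat /proc/mdstat            # does md126 show a recovery, and does sdb appear?
# mdadm --detail /dev/md126   # member array state: is sdb listed, and in what state?
# mdadm --examine /dev/sdb    # IMSM metadata on the new disk: was it updated on --add?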
On Sun, September 14, 2014 5:31 pm, NeilBrown wrote:
> On 12 Sep 2014 18:49:54 -0700 Luke Odom <luke@lukeodom.com> wrote:
>
>> I had a raid1 subarray running within an imsm container. One of the
>> drives died so I replaced it. I can get the new drive into the imsm
>> container but I can't add it to the raid1 array within that
>> container. I've read the man page and can't seem to figure it out.
>> Any help would be greatly appreciated. Using mdadm 3.2.5 on debian
>> squeeze.
>
> This should just happen automatically. As soon as you add the device
> to the container, mdmon notices and adds it to the raid1.
>
> However it appears not to have happened...
>
> I assume the new drive is exactly the same size as the old drive?
> Try removing the new device from md127, run "mdadm --zero" on it,
> then add it back again.
> Do any messages appear in the kernel logs when you do that?
>
> Is "mdmon md127" running?
>
> NeilBrown
>
>
>>
>> root@ds6790:~# cat /proc/mdstat
>> Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
>> md126 : active raid1 sda[0]
>>       976759808 blocks super external:/md127/0 [2/1] [U_]
>>
>> md127 : inactive sdb[0](S) sda[1](S)
>>       4901 blocks super external:imsm
>>
>> unused devices: <none>
>>
>> root@ds6790:~# mdadm --detail /dev/md126
>> /dev/md126:
>>       Container : /dev/md127, member 0
>>      Raid Level : raid1
>>      Array Size : 976759808 (931.51 GiB 1000.20 GB)
>>   Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
>>    Raid Devices : 2
>>   Total Devices : 1
>>
>>           State : active, degraded
>>  Active Devices : 1
>> Working Devices : 1
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>            UUID : 1be60edf:5c16b945:86434b6b:2714fddb
>>     Number   Major   Minor   RaidDevice State
>>        0       8       0        0      active sync   /dev/sda
>>        1       0       0        1      removed
>>
>> root@ds6790:~# mdadm --examine /dev/md127
>> /dev/md127:
>>           Magic : Intel Raid ISM Cfg Sig.
>>         Version : 1.1.00
>>     Orig Family : 6e37aa48
>>          Family : 6e37aa48
>>      Generation : 00640a43
>>      Attributes : All supported
>>            UUID : ac27ba68:f8a3618d:3810d44f:25031c07
>>        Checksum : 513ef1f6 correct
>>     MPB Sectors : 1
>>           Disks : 2
>>    RAID Devices : 1
>>
>>   Disk00 Serial : 9XG3RTL0
>>           State : active
>>              Id : 00000002
>>     Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
>>
>> [Volume0]:
>>            UUID : 1be60edf:5c16b945:86434b6b:2714fddb
>>      RAID Level : 1
>>         Members : 2
>>           Slots : [U_]
>>     Failed disk : 1
>>       This Slot : 0
>>      Array Size : 1953519616 (931.51 GiB 1000.20 GB)
>>    Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
>>   Sector Offset : 0
>>     Num Stripes : 7630936
>>      Chunk Size : 64 KiB
>>        Reserved : 0
>>   Migrate State : idle
>>       Map State : degraded
>>     Dirty State : dirty
>>
>>   Disk01 Serial : XG3RWMF
>>           State : failed
>>              Id : ffffffff
>>     Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
>
>