From: "Luke Odom" <luke@lukeodom.com>
To: NeilBrown <neilb@suse.de>
Cc: Luke Odom <luke@lukeodom.com>, linux-raid@vger.kernel.org
Subject: Re: dmadm question
Date: Mon, 15 Sep 2014 07:07:53 -0700	[thread overview]
Message-ID: <56516488c20b6b78729f14731fe8aecf.squirrel@webmail.lukeodom.com> (raw)
In-Reply-To: <20140915103154.59bc7293@notabene.brown>

The drive is the exact same model as the old one. Output of the requested commands:

# mdadm --manage /dev/md127 --remove /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md127
# mdadm --zero /dev/sdb
# mdadm --manage /dev/md127 --add /dev/sdb
mdadm: added /dev/sdb
# ps aux | grep mdmon
root      1937  0.0  0.1  10492 10484 ?        SLsl 14:04   0:00 mdmon md127
root      2055  0.0  0.0   2420   928 pts/0    S+   14:06   0:00 grep mdmon

Kernel log messages from the remove/add:

md: unbind<sdb>
md: export_rdev(sdb)
md: bind<sdb>
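
For reference, a quick way to check whether mdmon acted on the re-added
disk (a sketch using the device names above; "spare rebuilding" is the
state mdadm --detail reports for a rebuilding member):

# cat /proc/mdstat
# mdadm --detail /dev/md126
# mdadm --examine /dev/sdb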



On Sun, September 14, 2014 5:31 pm, NeilBrown wrote:
> On 12 Sep 2014 18:49:54 -0700 Luke Odom <luke@lukeodom.com> wrote:
>
>>   I had a raid1 subarray running within an imsm container. One of the
>> drives died so I replaced it. I can get the new drive into the imsm
>> container but I can’t add it to the raid1 array within that
>> container. I’ve read the man page and can’t seem to figure it out.
>> Any help would be greatly appreciated. Using mdadm 3.2.5 on Debian
>> squeeze.
>
> This should just happen automatically.  As soon as you add the device
> to the container, mdmon notices and adds it to the raid1.
>
> However it appears not to have happened...
>
> I assume the new drive is exactly the same size as the old drive?
> Try removing the new device from md127, run "mdadm --zero" on it, then
> add it back again.
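
In command form, that suggestion is roughly the following (a sketch
assuming the new disk is /dev/sdb, as in the reply above; "--zero" is
an abbreviation of mdadm's --zero-superblock option):

# mdadm --manage /dev/md127 --remove /dev/sdb
# mdadm --zero-superblock /dev/sdb
# mdadm --manage /dev/md127 --add /dev/sdb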
> Do any messages appear in the kernel logs when you do that?
>
> Is "mdmon md127" running?
>
> NeilBrown
>
>>
>> root@ds6790:~# cat /proc/mdstat
>> Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
>> md126 : active raid1 sda[0]
>>       976759808 blocks super external:/md127/0 [2/1] [U_]
>>
>> md127 : inactive sdb[0](S) sda[1](S)
>>       4901 blocks super external:imsm
>>
>> unused devices: <none>
>>
>> root@ds6790:~# mdadm --detail /dev/md126
>> /dev/md126:
>>       Container : /dev/md127, member 0
>>      Raid Level : raid1
>>      Array Size : 976759808 (931.51 GiB 1000.20 GB)
>>   Used Dev Size : 976759940 (931.51 GiB 1000.20 GB)
>>    Raid Devices : 2
>>   Total Devices : 1
>>
>>           State : active, degraded 
>>  Active Devices : 1
>> Working Devices : 1
>>  Failed Devices : 0
>>   Spare Devices : 0
>>
>>            UUID : 1be60edf:5c16b945:86434b6b:2714fddb
>>     Number   Major   Minor   RaidDevice State
>>        0       8        0        0      active sync   /dev/sda
>>        1       0        0        1      removed
>>
>> root@ds6790:~# mdadm --examine /dev/md127
>> /dev/md127:
>>           Magic : Intel Raid ISM Cfg Sig.
>>         Version : 1.1.00
>>     Orig Family : 6e37aa48
>>          Family : 6e37aa48
>>      Generation : 00640a43
>>      Attributes : All supported
>>            UUID : ac27ba68:f8a3618d:3810d44f:25031c07
>>        Checksum : 513ef1f6 correct
>>     MPB Sectors : 1
>>           Disks : 2
>>    RAID Devices : 1
>>
>>   Disk00 Serial : 9XG3RTL0
>>           State : active
>>              Id : 00000002
>>     Usable Size : 1953519880 (931.51 GiB 1000.20 GB)
>>
>> [Volume0]:
>>            UUID : 1be60edf:5c16b945:86434b6b:2714fddb
>>      RAID Level : 1
>>         Members : 2
>>           Slots : [U_]
>>     Failed disk : 1
>>       This Slot : 0
>>      Array Size : 1953519616 (931.51 GiB 1000.20 GB)
>>    Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
>>   Sector Offset : 0
>>     Num Stripes : 7630936
>>      Chunk Size : 64 KiB
>>        Reserved : 0
>>   Migrate State : idle
>>       Map State : degraded
>>     Dirty State : dirty
>>
>>   Disk01 Serial : XG3RWMF
>>           State : failed
>>              Id : ffffffff
>>     Usable Size : 1953519880 (931.51 GiB 1000.20 GB)



Thread overview: 4+ messages
     [not found] <661E720A-B65C-4C24-B6A4-A4439596DEB9@lukeodom.com>
2014-09-15  0:31 ` dmadm question NeilBrown
2014-09-15 14:07   ` Luke Odom [this message]
2014-09-29 17:38     ` Dan Williams
2014-09-29 21:30       ` NeilBrown
