From: NeilBrown <neilb@suse.com>
To: Andy Smith <andy@strugglers.net>, linux-raid@vger.kernel.org
Subject: Re: Newly-created arrays don't auto-assemble - related to hostname change?
Date: Thu, 17 Nov 2016 17:09:28 +1100
Message-ID: <87lgwihc2v.fsf@notabene.neil.brown.name>
In-Reply-To: <20161117035230.GG21587@bitfolk.com>

On Thu, Nov 17 2016, Andy Smith wrote:

> Hi,
>
> I feel I am missing something very simple here, as I usually don't
> have this issue, but here goes…
>
> I've got a Debian jessie host on which I created four arrays during
> install (md{0,1,2,3}), using the Debian installer and partman. These
> auto-assemble fine.
>
> After install the name of the server was changed from "tbd" to
> "jfd". Another array was then created (md5), added to
> /etc/mdadm/mdadm.conf and the initramfs was rebuilt
> (update-initramfs -u).
>
> md5 does not auto-assemble. It can be started fine after boot with:
>
>     # mdadm --assemble /dev/md5
>
> or:
>
>     # mdadm --incremental /dev/sdc
>     # mdadm --incremental /dev/sdd

This is almost exactly what udev does when the devices are discovered,
so if it works here, it should work when udev does it.
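
For reference, the rule that drives this lives in mdadm's udev rules
(64-md-raid-assembly.rules on most distros).  From memory the key line
looks roughly like the following - the exact text varies between mdadm
versions and distributions, so treat this as a sketch:

    # approximate: on an "add" event for a block device whose contents
    # look like a raid member, hand it to mdadm for incremental assembly
    SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="linux_raid_member", \
        RUN+="/sbin/mdadm --incremental $devnode --offroot"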

My only guess is that the "DEVICE /dev/sd*" line in mdadm.conf is
causing confusion.  "mdadm --incremental" rejects a device whose name
doesn't match a DEVICE pattern, and udev might be passing the device
under a different name (e.g. a /dev/disk/by-id link), though that
would be odd.
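
For what it's worth, mdadm.conf(5) documents that when no DEVICE line
is present at all, the effect is the same as writing:

    DEVICE partitions containers

i.e. every device listed in /proc/partitions (plus container devices)
is a candidate, which is strictly broader than /dev/sd*.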

Can you try removing that line and see if it makes a difference?
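
Something like this should exercise the udev path without a reboot - a
sketch only; stop the array first so incremental assembly has something
to do:

    # stop the manually-assembled array
    mdadm --stop /dev/md5
    # replay "add" events for block devices so the udev rules re-run
    # mdadm --incremental on each raid member
    udevadm trigger --subsystem-match=block --action=add
    cat /proc/mdstat

And remember to re-run "update-initramfs -u" after any change to
mdadm.conf so the copy inside the initramfs stays in sync.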

NeilBrown


>
> /etc/mdadm/mdadm.conf:
>
>     DEVICE /dev/sd*
>     CREATE owner=root group=disk mode=0660 auto=yes
>     HOMEHOST <ignore>
>     MAILADDR root
>     ARRAY /dev/md/0  metadata=1.2 UUID=400bac1d:e2c5d6ef:fea3b8c8:bcb70f8f
>     ARRAY /dev/md/1  metadata=1.2 UUID=e29c8b89:705f0116:d888f77e:2b6e32f5
>     ARRAY /dev/md/2  metadata=1.2 UUID=039b3427:4be5157a:6e2d53bd:fe898803
>     ARRAY /dev/md/3  metadata=1.2 UUID=30f745ce:7ed41b53:4df72181:7406ea1d
>     ARRAY /dev/md/5  metadata=1.2 UUID=957030cf:c09f023d:ceaebb27:e546f095
>
> I've unpacked the initramfs and looked at the mdadm.conf in there
> and it is identical.
>
> Initially HOMEHOST was set to <system> (the default), but I noticed
> when looking at --detail that md5 has:
>
>            Name : jfd:5  (local to host jfd)
>
> whereas the others have:
>
>            Name : tbd:0
>
> …so I changed it to <ignore> to see if that would help. It didn't.
>
> So, I'd really appreciate any hints as to what I've missed here!
>
> Here follows --detail and --examine of md5 and its members, then the
> contents of /proc/mdstat after I have manually assembled md5.
>
> $ sudo mdadm --detail /dev/md5
> /dev/md5:
>         Version : 1.2
>   Creation Time : Thu Nov 17 02:35:15 2016
>      Raid Level : raid10
>      Array Size : 1875243008 (1788.37 GiB 1920.25 GB)
>   Used Dev Size : 1875243008 (1788.37 GiB 1920.25 GB)
>    Raid Devices : 2
>   Total Devices : 2
>     Persistence : Superblock is persistent
>
>   Intent Bitmap : Internal
>
>     Update Time : Thu Nov 17 02:35:15 2016
>           State : clean 
>  Active Devices : 2
> Working Devices : 2
>  Failed Devices : 0
>   Spare Devices : 0
>
>          Layout : far=2
>      Chunk Size : 512K
>
>            Name : jfd:5  (local to host jfd)
>            UUID : 957030cf:c09f023d:ceaebb27:e546f095
>          Events : 0
>
>     Number   Major   Minor   RaidDevice State
>        0       8       48        0      active sync   /dev/sdd
>        1       8       32        1      active sync   /dev/sdc
>
> $ sudo mdadm --examine /dev/sd{c,d}
> /dev/sdc:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 957030cf:c09f023d:ceaebb27:e546f095
>            Name : jfd:5  (local to host jfd)
>   Creation Time : Thu Nov 17 02:35:15 2016
>      Raid Level : raid10
>    Raid Devices : 2
>
>  Avail Dev Size : 3750486704 (1788.37 GiB 1920.25 GB)
>      Array Size : 1875243008 (1788.37 GiB 1920.25 GB)
>   Used Dev Size : 3750486016 (1788.37 GiB 1920.25 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262056 sectors, after=688 sectors
>           State : clean
>     Device UUID : 4ac82c29:2d109465:7fff9b22:8c411c1e
>
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Thu Nov 17 02:35:15 2016
>   Bad Block Log : 512 entries available at offset 72 sectors
>        Checksum : 96d669f1 - correct
>          Events : 0
>
>          Layout : far=2
>      Chunk Size : 512K
>
>    Device Role : Active device 1
>    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
> /dev/sdd:
>           Magic : a92b4efc
>         Version : 1.2
>     Feature Map : 0x1
>      Array UUID : 957030cf:c09f023d:ceaebb27:e546f095
>            Name : jfd:5  (local to host jfd)
>   Creation Time : Thu Nov 17 02:35:15 2016
>      Raid Level : raid10
>    Raid Devices : 2
>  Avail Dev Size : 3750486704 (1788.37 GiB 1920.25 GB)
>      Array Size : 1875243008 (1788.37 GiB 1920.25 GB)
>   Used Dev Size : 3750486016 (1788.37 GiB 1920.25 GB)
>     Data Offset : 262144 sectors
>    Super Offset : 8 sectors
>    Unused Space : before=262056 sectors, after=688 sectors
>           State : clean
>     Device UUID : 3a067652:6e88afae:82722342:0036bae0
>
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Thu Nov 17 02:35:15 2016
>   Bad Block Log : 512 entries available at offset 72 sectors
>        Checksum : eb608799 - correct
>          Events : 0
>
>          Layout : far=2
>      Chunk Size : 512K
>
>    Device Role : Active device 0
>    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
>
> $ cat /proc/mdstat 
> Personalities : [raid1] [raid10] 
> md5 : active (auto-read-only) raid10 sdd[0] sdc[1]
>       1875243008 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
>       bitmap: 0/14 pages [0KB], 65536KB chunk
>
> md3 : active raid10 sda5[0] sdb5[1]
>       12199936 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
>       
> md2 : active (auto-read-only) raid10 sda3[0] sdb3[1]
>       975872 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
>       
> md1 : active raid10 sda2[0] sdb2[1]
>       1951744 blocks super 1.2 512K chunks 2 far-copies [2/2] [UU]
>       
> md0 : active raid1 sda1[0] sdb1[1]
>       498368 blocks super 1.2 [2/2] [UU]
>       
> unused devices: <none>
>
> Cheers,
> Andy


