linux-raid.vger.kernel.org archive mirror
* RE: Impact of missing parameter during mdadm create
@ 2011-03-01 18:38 Mike Viau
  2011-03-03 11:06 ` Ken Drummond
  0 siblings, 1 reply; 10+ messages in thread
From: Mike Viau @ 2011-03-01 18:38 UTC (permalink / raw)
  To: linuxraid; +Cc: linux-raid


> On Tue, 1 Mar 2011 17:13:09 +1000  wrote:
>
>>
>> Manual re-assembly outputs as follows:
>>
>>
>> mdadm -Ss
>>
>> mdadm: stopped /dev/md0
>>
>> ---
>>
>> mdadm -Asvvv
>>
>> mdadm: looking for devices for /dev/md/0
>> mdadm: no RAID superblock on /dev/dm-6
>> mdadm: /dev/dm-6 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-5
>> mdadm: /dev/dm-5 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-4
>> mdadm: /dev/dm-4 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-3
>> mdadm: /dev/dm-3 has wrong uuid.
>> mdadm: cannot open device /dev/dm-2: Device or resource busy
>> mdadm: /dev/dm-2 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-1
>> mdadm: /dev/dm-1 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-0
>> mdadm: /dev/dm-0 has wrong uuid.
>> mdadm: no RAID superblock on /dev/sde
>> mdadm: /dev/sde has wrong uuid.
>> mdadm: no RAID superblock on /dev/sdd
>> mdadm: /dev/sdd has wrong uuid.
>> mdadm: cannot open device /dev/sdc7: Device or resource busy
>> mdadm: /dev/sdc7 has wrong uuid.
>> mdadm: cannot open device /dev/sdc6: Device or resource busy
>> mdadm: /dev/sdc6 has wrong uuid.
>> mdadm: cannot open device /dev/sdc5: Device or resource busy
>> mdadm: /dev/sdc5 has wrong uuid.
>> mdadm: no RAID superblock on /dev/sdc2
>> mdadm: /dev/sdc2 has wrong uuid.
>> mdadm: cannot open device /dev/sdc1: Device or resource busy
>> mdadm: /dev/sdc1 has wrong uuid.
>> mdadm: cannot open device /dev/sdc: Device or resource busy
>> mdadm: /dev/sdc has wrong uuid.
>> mdadm: no RAID superblock on /dev/sda
>> mdadm: /dev/sda has wrong uuid.
>> mdadm: no RAID superblock on /dev/sdb
>> mdadm: /dev/sdb has wrong uuid.
>> mdadm: /dev/sdd1 is identified as a member of /dev/md/0, slot 2.
>> mdadm: /dev/sda1 is identified as a member of /dev/md/0, slot 0.
>> mdadm: /dev/sdb1 is identified as a member of /dev/md/0, slot 1.
>> mdadm: added /dev/sdb1 to /dev/md/0 as 1
>> mdadm: added /dev/sdd1 to /dev/md/0 as 2
>> mdadm: looking for devices for /dev/md/0
>> mdadm: no RAID superblock on /dev/dm-6
>> mdadm: /dev/dm-6 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-5
>> mdadm: /dev/dm-5 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-4
>> mdadm: /dev/dm-4 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-3
>> mdadm: /dev/dm-3 has wrong uuid.
>> mdadm: cannot open device /dev/dm-2: Device or resource busy
>> mdadm: /dev/dm-2 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-1
>> mdadm: /dev/dm-1 has wrong uuid.
>> mdadm: no RAID superblock on /dev/dm-0
>> mdadm: /dev/dm-0 has wrong uuid.
>> mdadm: no RAID superblock on /dev/sde
>> mdadm: /dev/sde has wrong uuid.
>> mdadm: no RAID superblock on /dev/sdd
>> mdadm: /dev/sdd has wrong uuid.
>> mdadm: cannot open device /dev/sdc7: Device or resource busy
>> mdadm: /dev/sdc7 has wrong uuid.
>> mdadm: cannot open device /dev/sdc6: Device or resource busy
>> mdadm: /dev/sdc6 has wrong uuid.
>> mdadm: cannot open device /dev/sdc5: Device or resource busy
>> mdadm: /dev/sdc5 has wrong uuid.
>> mdadm: no RAID superblock on /dev/sdc2
>> mdadm: /dev/sdc2 has wrong uuid.
>> mdadm: cannot open device /dev/sdc1: Device or resource busy
>> mdadm: /dev/sdc1 has wrong uuid.
>> mdadm: cannot open device /dev/sdc: Device or resource busy
>> mdadm: /dev/sdc has wrong uuid.
>> mdadm: no RAID superblock on /dev/sda
>> mdadm: /dev/sda has wrong uuid.
>> mdadm: no RAID superblock on /dev/sdb
>> mdadm: /dev/sdb has wrong uuid.
>> mdadm: /dev/sdd1 is identified as a member of /dev/md/0, slot 2.
>> mdadm: /dev/sda1 is identified as a member of /dev/md/0, slot 0.
>> mdadm: /dev/sdb1 is identified as a member of /dev/md/0, slot 1.
>> mdadm: added /dev/sdb1 to /dev/md/0 as 1
>> mdadm: added /dev/sdd1 to /dev/md/0 as 2
>> mdadm: added /dev/sda1 to /dev/md/0 as 0
>> mdadm: /dev/md/0 has been started with 2 drives (out of 3).
>>
>> ---
>>
>> mdadm --examine /dev/sd{a,b,d}1
>>
>> /dev/sda1:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
>>            Name : XEN-HOST:0  (local to host XEN-HOST)
>>   Creation Time : Mon Dec 20 09:48:07 2010
>>      Raid Level : raid5
>>    Raid Devices : 3
>>
>>  Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
>>      Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
>>   Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : 25f4baf0:9a378d2c:16a87f0c:ff89b2c8
>>
>>     Update Time : Mon Feb 28 23:35:20 2011
>>        Checksum : 3745d2b9 - correct
>>          Events : 33374
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 0
>>    Array State : AAA ('A' == active, '.' == missing)
>>
>> /dev/sdb1:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
>>            Name : XEN-HOST:0  (local to host XEN-HOST)
>>   Creation Time : Mon Dec 20 09:48:07 2010
>>      Raid Level : raid5
>>    Raid Devices : 3
>>
>>  Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
>>      Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
>>   Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : f20ab5fd:1f141cae:e0547278:d6cf063e
>>
>>     Update Time : Mon Feb 28 23:35:20 2011
>>        Checksum : a715b8ad - correct
>>          Events : 33374
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 1
>>    Array State : AAA ('A' == active, '.' == missing)
>>
>> /dev/sdd1:
>>           Magic : a92b4efc
>>         Version : 1.2
>>     Feature Map : 0x0
>>      Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
>>            Name : XEN-HOST:0  (local to host XEN-HOST)
>>   Creation Time : Mon Dec 20 09:48:07 2010
>>      Raid Level : raid5
>>    Raid Devices : 3
>>
>>  Avail Dev Size : 1953521072 (931.51 GiB 1000.20 GB)
>>      Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
>>   Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
>>     Data Offset : 2048 sectors
>>    Super Offset : 8 sectors
>>           State : clean
>>     Device UUID : 33d70114:ffdc4fcc:2c8d65ba:ab50bab2
>>
>>     Update Time : Mon Feb 28 23:29:05 2011
>>        Checksum : 923d11a2 - correct
>>          Events : 33368
>>
>>          Layout : left-symmetric
>>      Chunk Size : 512K
>>
>>    Device Role : Active device 2
>>    Array State : AAA ('A' == active, '.' == missing)
>>
>> ---
>>
>>
>> Any ideas or tips? I suspect this might be a bug, but I have only seen
>> this problem on my Debian Squeeze system.
>>
>
> What do cat /proc/mdstat and mdadm -D /dev/md0 show you? Also have you
> updated your mdadm.conf (and the mdadm.conf in the initramfs if you use
> one)?
>

After a reboot I see:

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]

unused devices: <none>


But sometimes I see:

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active (auto-read-only) raid5 sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]

unused devices: <none>


QUESTION: What does '(auto-read-only)' mean?

In either case, the --detail output is the same.

mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Mon Dec 20 09:48:07 2010
     Raid Level : raid5
     Array Size : 1953517568 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 976758784 (931.51 GiB 1000.20 GB)
   Raid Devices : 3
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Tue Mar  1 13:50:53 2011
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 512K

           Name : XEN-HOST:0  (local to host XEN-HOST)
           UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
         Events : 33422

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       0        0        2      removed


Hmm, so the array is aware that it is missing drive number/RaidDevice 2; I am not sure what the implication of having a major/minor of 0 is.
QUESTION: Must the Major/Minor information exactly match what the system detects vs. the metadata on the array (I presume)?

If that is the case, it looks like I need to make drive number/RaidDevice 2 have a major/minor of 8/49.

ls -l /dev/sda1
brw-rw---- 1 root disk 8, 1 Mar  1 14:17 /dev/sda1

ls -l /dev/sdb1
brw-rw---- 1 root disk 8, 17 Mar  1 14:17 /dev/sdb1

ls -l /dev/sdd1
brw-rw---- 1 root floppy 8, 49 Mar  1 14:17 /dev/sdd1
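
For completeness, the same major/minor numbers can also be read from /proc/partitions (in decimal there):

# list major, minor, #blocks and name for the three member partitions
grep -E ' sd[abd]1$' /proc/partitions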


Until I find a solution, I am manually running:

mdadm --re-add /dev/md0 /dev/sdd1 -vvv
mdadm: re-added /dev/sdd1

or

mdadm --add /dev/md0 /dev/sdd1 -vvv
mdadm: re-added /dev/sdd1


Which then gives me:

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd1[3] sda1[0] sdb1[1]
      1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery =  0.1% (1222156/976758784) finish=622.3min speed=26126K/sec

unused devices: <none>

QUESTION: Here it seems sdd1 is given drive number 3, not 2. Is that a problem? (i.e. sdd1[3] vs. the expected sdd1[2])
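
A rough sketch of what I could drop into a local boot script to automate that re-add (assuming the third disk keeps enumerating as /dev/sdd1; a band-aid, not a fix):

#!/bin/sh
# Band-aid: if md0 is not fully healthy, try to bring /dev/sdd1 back in.
# (mdadm --detail --test exits non-zero for a degraded array, as far as I can tell.)
if ! mdadm --detail --test /dev/md0 >/dev/null 2>&1; then
    mdadm /dev/md0 --re-add /dev/sdd1 || mdadm /dev/md0 --add /dev/sdd1
fi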


I am also certain that the mdadm.conf on my file system is in sync with the copy in my initramfs, for all installed kernels.
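
For the record, this is roughly how I compared the two copies (lsinitramfs ships with newer initramfs-tools; the path of mdadm.conf inside the image is what I see on this Squeeze box and may differ elsewhere):

# regenerate the initramfs for every installed kernel
update-initramfs -u -k all

# confirm the config is inside the current initrd and diff it against the one on disk
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf
zcat /boot/initrd.img-$(uname -r) | \
    cpio -i --quiet --to-stdout '*etc/mdadm/mdadm.conf' | diff - /etc/mdadm/mdadm.conf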


cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=7d8a7c68:95a230d0:0a8f6e74:4c8f81e9 name=XEN-HOST:0
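
(For reference, an ARRAY line like that can be regenerated with either of the scans below and pasted into mdadm.conf; as far as I know both forms give equivalent results.)

# from the currently assembled array
mdadm --detail --scan

# or straight from the superblocks on the member devices
mdadm --examine --scan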



In trying to fix the problem, I attempted to change the preferred minor of the MD (RAID) array by following these instructions:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    # you need to manually assemble the array to change the preferred minor
    # if you manually assemble, the superblock will be updated to reflect
    # the preferred minor as you indicate with the assembly.
    # for example, to set the preferred minor to 4:
    mdadm --assemble /dev/md4 /dev/sd[abc]1

    # this only works on 2.6 kernels, and only for RAID levels of 1 and above.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

mdadm --assemble /dev/md0 /dev/sd{a,b,d}1 -vvv
mdadm: looking for devices for /dev/md0
mdadm: /dev/sda1 is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdb1 is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd1 is identified as a member of /dev/md0, slot 2.
mdadm: added /dev/sdb1 to /dev/md0 as 1
mdadm: added /dev/sdd1 to /dev/md0 as 2
mdadm: added /dev/sda1 to /dev/md0 as 0
mdadm: /dev/md0 has been started with 2 drives (out of 3) and 1 rebuilding.


So because I specified all the drives, I assume this is the same thing as assembling the RAID degraded and then manually re-adding the last one (/dev/sdd1).
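
In other words, my assumption is that the single assemble above amounts to the same thing as this two-step version (just my mental model, not verified):

# step 1: start the array degraded with the two members whose event counts agree
mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdb1

# step 2: put the stale member back, which should kick off the same rebuild
mdadm /dev/md0 --re-add /dev/sdd1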


-M






* Impact of missing parameter during mdadm create
@ 2011-03-01  3:19 Mike Viau
  2011-03-01  3:59 ` Mike Viau
  0 siblings, 1 reply; 10+ messages in thread
From: Mike Viau @ 2011-03-01  3:19 UTC (permalink / raw)
  To: linux-raid


Hello mdadm hackers,

I was wondering: what impact (if any) would creating an array with the 'missing' parameter have on subsequent assemblies of an mdadm array?

When the array was created, I used a command like:

mdadm --create -l5 -n3 /dev/md0 /dev/sda1 missing /dev/sdb1

I then loaded some initial data onto the md0 array from /dev/sdd1, after which I zeroed out /dev/sdd1 and added it to the array.
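
Roughly, re-purposing /dev/sdd1 once the data had been copied off went like this (reconstructed from memory, so the exact wipe step may have been different):

# clear any leftover metadata on the donor disk before reusing it
mdadm --zero-superblock /dev/sdd1

# add it as the third member of the degraded array and let md rebuild onto it
mdadm --add /dev/md0 /dev/sdd1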

Details on each drive show that they all belong to the same Array UUID, but when the array is (re)assembled (at boot or manually), only /dev/sd{a,b}1 are added to the array automatically; /dev/sdd1 must be re-added by hand.


> mdadm --examine /dev/sd{a,b,d}1
> /dev/sda1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
> Name : XEN-HOST:0 (local to host XEN-HOST)
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Raid Devices : 3
> 
> Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
> Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : 25f4baf0:9a378d2c:16a87f0c:ff89b2c8
> 
> Update Time : Fri Feb 18 16:32:19 2011
> Checksum : 37383bee - correct
> Events : 32184
> 
> Layout : left-symmetric
> Chunk Size : 512K
> 
> Device Role : Active device 0
> Array State : AAA ('A' == active, '.' == missing)
> /dev/sdb1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x0
> Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
> Name : XEN-HOST:0 (local to host XEN-HOST)
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Raid Devices : 3
> 
> Avail Dev Size : 1953517954 (931.51 GiB 1000.20 GB)
> Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : f20ab5fd:1f141cae:e0547278:d6cf063e
> 
> Update Time : Fri Feb 18 16:32:19 2011
> Checksum : a70821e2 - correct
> Events : 32184
> 
> Layout : left-symmetric
> Chunk Size : 512K
> 
> Device Role : Active device 1
> Array State : AAA ('A' == active, '.' == missing)
> /dev/sdd1:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x2
> Array UUID : 7d8a7c68:95a230d0:0a8f6e74:4c8f81e9
> Name : XEN-HOST:0 (local to host XEN-HOST)
> Creation Time : Mon Dec 20 09:48:07 2010
> Raid Level : raid5
> Raid Devices : 3
> 
> Avail Dev Size : 1953521072 (931.51 GiB 1000.20 GB)
> Array Size : 3907035136 (1863.02 GiB 2000.40 GB)
> Used Dev Size : 1953517568 (931.51 GiB 1000.20 GB)
> Data Offset : 2048 sectors
> Super Offset : 8 sectors
> Recovery Offset : 610474280 sectors
> State : clean
> Device UUID : 33d70114:ffdc4fcc:2c8d65ba:ab50bab2
> 
> Update Time : Fri Feb 18 16:32:19 2011
> Checksum : b692957e - correct
> Events : 32184
> 
> Layout : left-symmetric
> Chunk Size : 512K
> 
> Device Role : Active device 2
> Array State : AAA ('A' == active, '.' == missing)






-M

Thread overview: 10+ messages
     [not found] <BAY148-w59357147C486E15E514067EFC50@phx.gbl>
2011-03-05  6:37 ` Impact of missing parameter during mdadm create Ken Drummond
2011-03-05 14:47   ` Mike Viau
2011-03-05 15:02     ` John Robinson
2011-03-01 18:38 Mike Viau
2011-03-03 11:06 ` Ken Drummond
2011-03-04  4:55   ` Mike Viau
2011-03-04  5:01     ` Mike Viau
2011-03-04  7:36       ` Ken Drummond
  -- strict thread matches above, loose matches on Subject: below --
2011-03-01  3:19 Mike Viau
2011-03-01  3:59 ` Mike Viau
