* How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Mike Viau @ 2010-11-14 6:50 UTC
To: neilb; +Cc: linux-raid, debian-user
> On Sun, 14 Nov 2010 06:36:00 +1100 <neilb@suse.de> wrote:
>> cat /proc/mdstat (showing what mdadm shows/discovers)
>>
>> Personalities :
>> md127 : inactive sda[1](S) sdb[0](S)
>> 4514 blocks super external:imsm
>>
>> unused devices:
>
> As imsm can have several arrays described by one set of metadata, mdadm
> creates an inactive array just like this which just holds the set of
> devices, and then should create other arrays made from different regions
> of those devices.
> It looks like mdadm hasn't done that for you. You can ask it to with:
>
> mdadm -I /dev/md/imsm0
>
> That should create the real raid1 array in /dev/md/something.
>
> NeilBrown
>
Thanks for this information; I feel like I am getting closer to getting this working properly. After running the command above (mdadm -I /dev/md/imsm0), the real raid1 array did appear as /dev/md/*.
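(As an aside, for anyone recreating such a setup from scratch: my understanding of the container model is that one would first create the imsm container and then a RAID1 member volume inside it, roughly:
mdadm -C /dev/md/imsm0 -e imsm -n 2 /dev/sda /dev/sdb
mdadm -C /dev/md/OneTB-RAID1-PV -l 1 -n 2 /dev/md/imsm0
I have not tested this pair of commands here, and they would overwrite existing metadata, so treat them as a sketch only.)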
ls -al /dev/md
total 0
drwxr-xr-x 2 root root 80 Nov 14 00:53 .
drwxr-xr-x 21 root root 3480 Nov 14 00:53 ..
lrwxrwxrwx 1 root root 8 Nov 14 00:50 imsm0 -> ../md127
lrwxrwxrwx 1 root root 8 Nov 14 00:53 OneTB-RAID1-PV -> ../md126
---------------
And the kernel messages:
[ 4652.315650] md: bind<sdb>
[ 4652.315866] md: bind<sda>
[ 4652.341862] raid1: md126 is not clean -- starting background reconstruction
[ 4652.341958] raid1: raid set md126 active with 2 out of 2 mirrors
[ 4652.342025] md126: detected capacity change from 0 to 1000202043392
[ 4652.342400] md126: p1
[ 4652.528448] md: md126 switched to read-write mode.
[ 4652.529387] md: resync of RAID array md126
[ 4652.529424] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
[ 4652.529464] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 4652.529525] md: using 128k window, over a total of 976759940 blocks.
---------------
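(While the resync runs, its progress can be followed with e.g. watch cat /proc/mdstat.)
---------------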
fdisk -ul /dev/md/OneTB-RAID1-PV
Disk /dev/md/OneTB-RAID1-PV: 1000.2 GB, 1000202043392 bytes
255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/md/OneTB-RAID1-PV1 63 1953503999 976751968+ 8e Linux LVM
---------------
pvscan
PV /dev/sdc7 VG XENSTORE-VG lvm2 [46.56 GiB / 0 free]
PV /dev/md126p1 VG OneTB-RAID1-VG lvm2 [931.50 GiB / 0 free]
Total: 2 [978.06 GiB] / in use: 2 [978.06 GiB] / in no VG: 0 [0 ]
---------------
pvdisplay
--- Physical volume ---
PV Name /dev/md126p1
VG Name OneTB-RAID1-VG
PV Size 931.50 GiB / not usable 3.34 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238464
Free PE 0
Allocated PE 238464
PV UUID hvxXR3-tV9B-CMBW-nZn2-N2zH-N1l6-sC9m9i
----------------
vgscan
Reading all physical volumes. This may take a while...
Found volume group "XENSTORE-VG" using metadata type lvm2
Found volume group "OneTB-RAID1-VG" using metadata type lvm2
-------------
vgdisplay
--- Volume group ---
VG Name OneTB-RAID1-VG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 931.50 GiB
PE Size 4.00 MiB
Total PE 238464
Alloc PE / Size 238464 / 931.50 GiB
Free PE / Size 0 / 0
VG UUID nCBsU2-VpgR-EcZj-lA15-oJGL-rYOw-YxXiC8
--------------------
vgchange -a y OneTB-RAID1-VG
1 logical volume(s) in volume group "OneTB-RAID1-VG" now active
--------------------
lvdisplay
--- Logical volume ---
LV Name /dev/OneTB-RAID1-VG/OneTB-RAID1-LV
VG Name OneTB-RAID1-VG
LV UUID R3TYWb-PJo1-Xzbm-vJwu-YpgP-ohZW-Vf1kHJ
LV Write Access read/write
LV Status available
# open 0
LV Size 931.50 GiB
Current LE 238464
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4
------------------------
fdisk -ul /dev/OneTB-RAID1-VG/OneTB-RAID1-LV
Disk /dev/OneTB-RAID1-VG/OneTB-RAID1-LV: 1000.2 GB, 1000190509056 bytes
255 heads, 63 sectors/track, 121599 cylinders, total 1953497088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbda8e40b
Device Boot Start End Blocks Id System
/dev/OneTB-RAID1-VG/OneTB-RAID1-LV1 63 1953487934 976743936 83 Linux
-----------------------
mount -t ext4 /dev/OneTB-RAID1-VG/OneTB-RAID1-LV /mnt
mount
/dev/sdc5 on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sdc1 on /boot type ext2 (rw)
xenfs on /proc/xen type xenfs (rw)
/dev/mapper/OneTB--RAID1--VG-OneTB--RAID1--LV on /mnt type ext4 (rw)
-----------------
ls /mnt (and files are visible)
-------------------
Also, when the array is running after manually executing the command above, the error when updating the initramfs for the kernels is gone:
update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
-----------------
But the issue that remains now is that mdadm does not start the real raid1 array on reboot, and the initramfs errors come right back unfortunately (verbosity enabled):
1) update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
2) dpkg-reconfigure --priority=low mdadm [leaving all defaults]
Stopping MD monitoring service: mdadm --monitor.
Generating array device nodes... done.
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
Starting MD monitoring service: mdadm --monitor.
Generating udev events for MD arrays...done.
3) update-initramfs -u -k all [again]
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
-----------------
ls -al /dev/md/
total 0
drwxr-xr-x 2 root root 60 Nov 14 01:22 .
drwxr-xr-x 21 root root 3440 Nov 14 01:23 ..
lrwxrwxrwx 1 root root 8 Nov 14 01:23 imsm0 -> ../md127
-----------------
How does one fix the problem of the array not starting at boot?
The files/configuration I have now:
find /etc -type f | grep mdadm
./logcheck/ignore.d.server/mdadm
./logcheck/violations.d/mdadm
./default/mdadm
./init.d/mdadm
./init.d/mdadm-raid
./cron.daily/mdadm
./cron.d/mdadm
./mdadm/mdadm.conf
find /etc/rc?.d/ | grep mdadm
/etc/rc0.d/K01mdadm
/etc/rc0.d/K10mdadm-raid
/etc/rc1.d/K01mdadm
/etc/rc2.d/S02mdadm
/etc/rc3.d/S02mdadm
/etc/rc4.d/S02mdadm
/etc/rc5.d/S02mdadm
/etc/rc6.d/K01mdadm
/etc/rc6.d/K10mdadm-raid
/etc/rcS.d/S03mdadm-raid
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
# This file was auto-generated on Fri, 05 Nov 2010 16:29:48 -0400
# by mkconf 3.1.4-1+8efb9d1
--------------------
Again, how does one fix the problem of the array not starting at boot?
Thanks.
-M
* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Neil Brown @ 2010-11-15 5:21 UTC
To: Mike Viau; +Cc: linux-raid, debian-user
On Sun, 14 Nov 2010 01:50:42 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:
> Again, how does one fix the problem of the array not starting at boot?
>
To be able to answer that one would need to know exactly what is in the
initramfs. And unfortunately all distros are different and I'm not
particularly familiar with Ubuntu.
Maybe if you
mkdir /tmp/initrd
cd /tmp/initrd
zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
and then have a look around and particularly report etc/mdadm/mdadm.conf
and anything else that might be interesting.
If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
*should* work.
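Something like
diff -u /etc/mdadm/mdadm.conf etc/mdadm/mdadm.conf
run from inside /tmp/initrd should make any difference obvious (assuming the initrd keeps the file at that usual path).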
NeilBrown
* RE: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Mike Viau @ 2010-11-17 1:02 UTC
To: neilb; +Cc: linux-raid, debian-user
> On Mon, 15 Nov 2010 16:21:22 +1100 <neilb@suse.de> wrote:
> > On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> >
> > How does one fix the problem of the array not starting at boot?
> >
>
> To be able to answer that one would need to know exactly what is in the
> initramfs. And unfortunately all distros are different and I'm not
> particularly familiar with Ubuntu.
>
> Maybe if you
> mkdir /tmp/initrd
> cd /tmp/initrd
> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
>
> and then have a look around and particularly report etc/mdadm/mdadm.conf
> and anything else that might be interesting.
>
> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> *should* work.
>
Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
The initramfs's copy contains:
DEVICE partitions
HOMEHOST <system>
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
CREATE owner=root group=disk mode=0660 auto=yes
and
MAILADDR root
were not carried over by the update-initramfs command.
Given your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?
My diff findings between the local copy of mdadm.conf and the initramfs's copy are pasted at:
http://debian.pastebin.com/5VNnd9g1
Thanks for your help.
-M
* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Neil Brown @ 2010-11-17 1:26 UTC
To: Mike Viau; +Cc: linux-raid, debian-user
On Tue, 16 Nov 2010 20:02:17 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:
>
> > On Mon, 15 Nov 2010 16:21:22 +1100 <neilb@suse.de> wrote:
> > > On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> > >
> > > How does one fix the problem of the array not starting at boot?
> > >
> >
> > To be able to answer that one would need to know exactly what is in the
> > initramfs. And unfortunately all distros are different and I'm not
> > particularly familiar with Ubuntu.
> >
> > Maybe if you
> > mkdir /tmp/initrd
> > cd /tmp/initrd
> > zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
> >
> > and then have a look around and particularly report etc/mdadm/mdadm.conf
> > and anything else that might be interesting.
> >
> > If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> > *should* work.
> >
>
> Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
>
> The initramfs's copy contains:
>
> DEVICE partitions
> HOMEHOST <system>
> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
>
> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
>
> CREATE owner=root group=disk mode=0660 auto=yes
>
> and
>
> MAILADDR root
>
> were not carried over by the update-initramfs command.
>
>
> Given your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?
No, those differences couldn't explain it not working.
I would really expect that mdadm.conf file to successfully assemble the
RAID1.
As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
by:
mdadm -Ss
to stop all md arrays, then
mdadm -Asvv
to auto-start everything in mdadm.conf and be verbose about what is happening.
If that fails to start the raid1, then the messages it produces will be
helpful in understanding why.
If it succeeds, then there must be something wrong with the initrd...
Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
mdadm -As
(or equivalently: mdadm --assemble --scan)
but does something else. To determine what, you would need to search for
'mdadm' in all the scripts in the initrd and see what turns up.
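Something like
grep -r mdadm scripts/ etc/ conf/
run from the unpacked /tmp/initrd should turn up the relevant places (those directory names are a guess at the usual initramfs-tools layout).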
NeilBrown
>
>
> My diff findings between the local copy of mdadm.conf and the initramfs's copy are pasted at:
> http://debian.pastebin.com/5VNnd9g1
>
>
> Thanks for your help.
>
>
> -M
>
* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: John Robinson @ 2010-11-17 1:39 UTC
To: Neil Brown; +Cc: Mike Viau, linux-raid, debian-user
On 17/11/2010 01:26, Neil Brown wrote:
> On Tue, 16 Nov 2010 20:02:17 -0500
> Mike Viau <viaum@sheridanc.on.ca> wrote:
[...]
>> DEVICE partitions
>> HOMEHOST <system>
>> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
>> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
[...]
> I would really expect that mdadm.conf file to successfully assemble the
> RAID1.
The only thing that strikes me is that "DEVICE partitions" line - surely
imsm containers don't live in partitions?
Cheers,
John.
* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Neil Brown @ 2010-11-17 1:53 UTC
To: John Robinson; +Cc: Mike Viau, linux-raid, debian-user
On Wed, 17 Nov 2010 01:39:39 +0000
John Robinson <john.robinson@anonymous.org.uk> wrote:
> On 17/11/2010 01:26, Neil Brown wrote:
> > On Tue, 16 Nov 2010 20:02:17 -0500
> > Mike Viau <viaum@sheridanc.on.ca> wrote:
> [...]
> >> DEVICE partitions
> >> HOMEHOST <system>
> >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> [...]
> > I would really expect that mdadm.conf file to successfully assemble the
> > RAID1.
>
> The only thing that strikes me is that "DEVICE partitions" line - surely
> imsm containers don't live in partitions?
No, they don't.
But "DEVICE partitions" actually means "any devices listed
in /proc/partitions", and that includes whole devices.
:-(
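(If you ever wanted to restrict the scan, mdadm.conf(5) accepts explicit device patterns instead, e.g. something like
DEVICE /dev/sda /dev/sdb
though for imsm the whole-device matching of "DEVICE partitions" is exactly what is wanted here.)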
NeilBrown
>
> Cheers,
>
> John.
* RE: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Mike Viau @ 2010-11-17 2:27 UTC
To: neilb, john.robinson; +Cc: linux-raid, debian-user
> On Wed, 17 Nov 2010 12:53:37 +1100 <neilb@suse.de> wrote:
> On Wed, 17 Nov 2010 01:39:39 +0000
> John Robinson wrote:
>
> > On 17/11/2010 01:26, Neil Brown wrote:
> > > On Tue, 16 Nov 2010 20:02:17 -0500
> > > Mike Viau wrote:
> > [...]
> > >> DEVICE partitions
> > >> HOMEHOST <system>
> > >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> > >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> > [...]
> > > I would really expect that mdadm.conf file to successfully assemble the
> > > RAID1.
> >
> > The only thing that strikes me is that "DEVICE partitions" line - surely
> > imsm containers don't live in partitions?
>
> No, they don't.
>
> But "DEVICE partitions" actually means "any devices listed
> in /proc/partitions", and that includes whole devices.
> :-(
>
I noticed that both /dev/sda and /dev/sdb (the drives which make up the raid1 array) do not appear to be recognized as having a valid container when one is required. The output of mdadm -Asvv shows:
mdadm -Asvv
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.
and cat /proc/partitions shows:
major minor #blocks name
8 0 976762584 sda
8 16 976762584 sdb
8 32 78125000 sdc
8 33 487424 sdc1
8 34 1 sdc2
8 37 20995072 sdc5
8 38 7811072 sdc6
8 39 48826368 sdc7
7 0 4388218 loop0
254 0 10485760 dm-0
254 1 10485760 dm-1
254 2 10485760 dm-2
254 3 17367040 dm-3
* RE: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Mike Viau @ 2010-11-17 2:44 UTC
To: neilb; +Cc: linux-raid, debian-user
> On Wed, 17 Nov 2010 12:26:47 +1100 <neilb@suse.de> wrote:
>>
>>> On Mon, 15 Nov 2010 16:21:22 +1100 <neilb@suse.de> wrote:
>>>> On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
>>>>
>>>> How does one fix the problem of the array not starting at boot?
>>>>
>>>
>>> To be able to answer that one would need to know exactly what is in the
>>> initramfs. And unfortunately all distros are different and I'm not
>>> particularly familiar with Ubuntu.
>>>
>>> Maybe if you
>>> mkdir /tmp/initrd
>>> cd /tmp/initrd
>>> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
>>>
>>> and then have a look around and particularly report etc/mdadm/mdadm.conf
>>> and anything else that might be interesting.
>>>
>>> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
>>> *should* work.
>>>
>>
>> Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
>>
>> The initramfs's copy contains:
>>
>> DEVICE partitions
>> HOMEHOST <system>
>> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
>> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
>>
>> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
>>
>> CREATE owner=root group=disk mode=0660 auto=yes
>>
>> and
>>
>> MAILADDR root
>>
>> were not carried over by the update-initramfs command.
>>
>>
>> Given your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?
>
> No, those differences couldn't explain it not working.
>
> I would really expect that mdadm.conf file to successfully assemble the
> RAID1.
>
> As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
> by:
>
> mdadm -Ss
>
> to stop all md arrays, then
>
> mdadm -Asvv
>
> to auto-start everything in mdadm.conf and be verbose about what is happening.
>
> If that fails to start the raid1, then the messages it produces will be
> helpful in understanding why.
> If it succeeds, then there must be something wrong with the initrd...
> Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
> mdadm -As
> (or equivalently: mdadm --assemble --scan)
> but does something else. To determine what, you would need to search for
> 'mdadm' in all the scripts in the initrd and see what turns up.
>
Using mdadm -Ss stops the array:
mdadm: stopped /dev/md127
Where /dev/md127 is the imsm0 device and not the OneTB-RAID1-PV device.
Then executing mdadm -Asvv shows:
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.
So I am not really sure whether that succeeded or not, but it doesn't look like it has, because there is no /dev/md/OneTB-RAID1-PV:
ls -al /dev/md/
total 0
drwxr-xr-x 2 root root 60 Nov 16 21:08 .
drwxr-xr-x 21 root root 3440 Nov 16 21:08 ..
lrwxrwxrwx 1 root root 8 Nov 16 21:08 imsm0 -> ../md127
But after mdadm -Ivv /dev/md/imsm0:
mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
mdadm: match found for member 0
mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices
Then ls -al /dev/md/ reveals /dev/md/OneTB-RAID1-PV:
total 0
drwxr-xr-x 2 root root 80 Nov 16 21:40 .
drwxr-xr-x 21 root root 3480 Nov 16 21:40 ..
lrwxrwxrwx 1 root root 8 Nov 16 21:08 imsm0 -> ../md127
lrwxrwxrwx 1 root root 8 Nov 16 21:40 OneTB-RAID1-PV -> ../md126
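(As a stopgap until boot-time assembly is sorted out, I suppose one could run that incremental call from a late boot script, e.g. adding to /etc/rc.local something like:
mdadm -I /dev/md/imsm0 || true
though that would only paper over the real problem, and would run too late for any filesystem on the array that is mounted from fstab at boot.)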
Regardless, here are some initramfs findings:
pwd
/tmp/initrd
Then:
find . -type f | grep md | grep -v amd
./lib/udev/rules.d/64-md-raid.rules
./scripts/local-top/mdadm
./etc/mdadm/mdadm.conf
./conf/conf.d/md
./sbin/mdadm
./lib/udev/rules.d/64-md-raid.rules
http://paste.debian.net/100016/
./scripts/local-top/mdadm
http://paste.debian.net/100017/
./etc/mdadm/mdadm.conf
http://paste.debian.net/100018/
./conf/conf.d/md
http://paste.debian.net/100019/
./sbin/mdadm
{which of course is a binary}
-M
* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
From: Neil Brown @ 2010-11-17 3:15 UTC
To: Mike Viau; +Cc: linux-raid, debian-user
On Tue, 16 Nov 2010 21:44:10 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:
>
> > On Wed, 17 Nov 2010 12:26:47 +1100 <neilb@suse.de> wrote:
> >>
> >>> On Mon, 15 Nov 2010 16:21:22 +1100 <neilb@suse.de> wrote:
> >>>> On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> >>>>
> >>>> How does one fix the problem of the array not starting at boot?
> >>>>
> >>>
> >>> To be able to answer that one would need to know exactly what is in the
> >>> initramfs. And unfortunately all distros are different and I'm not
> >>> particularly familiar with Ubuntu.
> >>>
> >>> Maybe if you
> >>> mkdir /tmp/initrd
> >>> cd /tmp/initrd
> >>> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
> >>>
> >>> and then have a look around and particularly report etc/mdadm/mdadm.conf
> >>> and anything else that might be interesting.
> >>>
> >>> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> >>> *should* work.
> >>>
> >>
> >> Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
> >>
> >> The initramfs's copy contains:
> >>
> >> DEVICE partitions
> >> HOMEHOST <system>
> >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> >>
> >> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
> >>
> >> CREATE owner=root group=disk mode=0660 auto=yes
> >>
> >> and
> >>
> >> MAILADDR root
> >>
> >> were not carried over by the update-initramfs command.
> >>
> >>
> >> Given your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?
> >
> > No, those differences couldn't explain it not working.
> >
> > I would really expect that mdadm.conf file to successfully assemble the
> > RAID1.
> >
> > As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
> > by:
> >
> > mdadm -Ss
> >
> > to stop all md arrays, then
> >
> > mdadm -Asvv
> >
> > to auto-start everything in mdadm.conf and be verbose about what is happening.
> >
> > If that fails to start the raid1, then the messages it produces will be
> > helpful in understanding why.
> > If it succeeds, then there must be something wrong with the initrd...
> > Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
> > mdadm -As
> > (or equivalently: mdadm --assemble --scan)
> > but does something else. To determine what, you would need to search for
> > 'mdadm' in all the scripts in the initrd and see what turns up.
> >
>
> Using mdadm -Ss stops the array:
>
> mdadm: stopped /dev/md127
>
>
> Where /dev/md127 is the imsm0 device and not the OneTB-RAID1-PV device.
>
>
> Then executing mdadm -Asvv shows:
>
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-2
> mdadm: /dev/dm-2 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-1
> mdadm: /dev/dm-1 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-0
> mdadm: /dev/dm-0 has wrong uuid.
> mdadm: no RAID superblock on /dev/loop0
> mdadm: /dev/loop0 has wrong uuid.
> mdadm: cannot open device /dev/sdc7: Device or resource busy
> mdadm: /dev/sdc7 has wrong uuid.
> mdadm: cannot open device /dev/sdc6: Device or resource busy
> mdadm: /dev/sdc6 has wrong uuid.
> mdadm: cannot open device /dev/sdc5: Device or resource busy
> mdadm: /dev/sdc5 has wrong uuid.
> mdadm: no RAID superblock on /dev/sdc2
> mdadm: /dev/sdc2 has wrong uuid.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm: /dev/sdc1 has wrong uuid.
> mdadm: cannot open device /dev/sdc: Device or resource busy
> mdadm: /dev/sdc has wrong uuid.
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm: /dev/sdb has wrong uuid.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm: /dev/sda has wrong uuid.
This looks wrong. mdadm should be looking for the container as listed in
mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
Can you:
mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
so I can compare the uuids?
Thanks,
NeilBrown
> mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
> mdadm: no recogniseable superblock on /dev/dm-3
> mdadm/dev/dm-3 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-2
> mdadm/dev/dm-2 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-1
> mdadm/dev/dm-1 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-0
> mdadm/dev/dm-0 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/loop0
> mdadm/dev/loop0 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc7: Device or resource busy
> mdadm/dev/sdc7 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc6: Device or resource busy
> mdadm/dev/sdc6 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc5: Device or resource busy
> mdadm/dev/sdc5 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/sdc2
> mdadm/dev/sdc2 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm/dev/sdc1 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc: Device or resource busy
> mdadm/dev/sdc is not a container, and one is required.
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm/dev/sdb is not a container, and one is required.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm/dev/sda is not a container, and one is required.
>
>
> So I am not really sure whether that succeeded or not, but it doesn't look like it has, because there is no /dev/md/OneTB-RAID1-PV:
>
> ls -al /dev/md/
>
> total 0
> drwxr-xr-x 2 root root 60 Nov 16 21:08 .
> drwxr-xr-x 21 root root 3440 Nov 16 21:08 ..
> lrwxrwxrwx 1 root root 8 Nov 16 21:08 imsm0 -> ../md127
>
>
> But after mdadm -Ivv /dev/md/imsm0:
>
>
> mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
> mdadm: match found for member 0
> mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices
>
>
> Then ls -al /dev/md/ reveals /dev/md/OneTB-RAID1-PV:
>
> total 0
> drwxr-xr-x 2 root root 80 Nov 16 21:40 .
> drwxr-xr-x 21 root root 3480 Nov 16 21:40 ..
> lrwxrwxrwx 1 root root 8 Nov 16 21:08 imsm0 -> ../md127
> lrwxrwxrwx 1 root root 8 Nov 16 21:40 OneTB-RAID1-PV -> ../md126
>
>
>
> Regardless, here are some initramfs findings:
>
> pwd
>
> /tmp/initrd
>
> Then:
>
> find . -type f | grep md | grep -v amd
>
> ./lib/udev/rules.d/64-md-raid.rules
> ./scripts/local-top/mdadm
> ./etc/mdadm/mdadm.conf
> ./conf/conf.d/md
> ./sbin/mdadm
>
>
>
>
> ./lib/udev/rules.d/64-md-raid.rules
> http://paste.debian.net/100016/
>
> ./scripts/local-top/mdadm
> http://paste.debian.net/100017/
>
> ./etc/mdadm/mdadm.conf
> http://paste.debian.net/100018/
>
> ./conf/conf.d/md
> http://paste.debian.net/100019/
>
> ./sbin/mdadm
> {which of course is a binary}
>
>
> -M
>
>