linux-raid.vger.kernel.org archive mirror
* How to recreate a dmraid RAID array with mdadm (was: no subject)
@ 2010-11-14  6:50 Mike Viau
  2010-11-15  5:21 ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-14  6:50 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user


> On Sun, 14 Nov 2010 06:36:00 +1100 <neilb@suse.de> wrote:
>> cat /proc/mdstat (showing what mdadm shows/discovers)
>>
>> Personalities :
>> md127 : inactive sda[1](S) sdb[0](S)
>> 4514 blocks super external:imsm
>>
>> unused devices:
>
> As imsm can have several arrays described by one set of metadata, mdadm
> creates an inactive array just like this which just holds the set of
> devices, and then should create other arrays made from different regions
> of those devices.
> It looks like mdadm hasn't done that for you. You can ask it to with:
>
> mdadm -I /dev/md/imsm0
>
> That should create the real raid1 array in /dev/md/something.
>
> NeilBrown
>

Thanks for this information; I feel like I am getting closer to getting this working properly. After running the command above (mdadm -I /dev/md/imsm0), the real RAID1 array did appear under /dev/md:

ls -al /dev/md
total 0
drwxr-xr-x  2 root root   80 Nov 14 00:53 .
drwxr-xr-x 21 root root 3480 Nov 14 00:53 ..
lrwxrwxrwx  1 root root    8 Nov 14 00:50 imsm0 -> ../md127
lrwxrwxrwx  1 root root    8 Nov 14 00:53 OneTB-RAID1-PV -> ../md126

---------------

And the kernel messages:

[ 4652.315650] md: bind<sdb>
[ 4652.315866] md: bind<sda>
[ 4652.341862] raid1: md126 is not clean -- starting background reconstruction
[ 4652.341958] raid1: raid set md126 active with 2 out of 2 mirrors
[ 4652.342025] md126: detected capacity change from 0 to 1000202043392
[ 4652.342400]  md126: p1
[ 4652.528448] md: md126 switched to read-write mode.
[ 4652.529387] md: resync of RAID array md126
[ 4652.529424] md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
[ 4652.529464] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 4652.529525] md: using 128k window, over a total of 976759940 blocks.
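Those log lines also bound how long the background resync can take: 976759940 one-KiB blocks at the 200000 KB/sec idle ceiling versus the 1000 KB/sec guaranteed floor. A quick back-of-the-envelope check (plain shell arithmetic, no md involvement):

```shell
blocks_kib=976759940   # total resync size from the kernel log, in 1 KiB blocks
max_rate=200000        # KB/sec ceiling used when the array is otherwise idle
min_rate=1000          # KB/sec/disk guaranteed minimum

# integer division is fine for a rough bound
echo "at idle ceiling: $((blocks_kib / max_rate / 60)) minutes"
echo "at guaranteed floor: $((blocks_kib / min_rate / 86400)) days"
```

So a quiet system should finish the mirror resync in well under two hours, while a fully loaded one could be throttled to the order of days.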
 
---------------

fdisk -ul /dev/md/OneTB-RAID1-PV 

Disk /dev/md/OneTB-RAID1-PV: 1000.2 GB, 1000202043392 bytes
255 heads, 63 sectors/track, 121600 cylinders, total 1953519616 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

                 Device Boot      Start         End      Blocks   Id  System
/dev/md/OneTB-RAID1-PV1              63  1953503999   976751968+  8e  Linux LVM

---------------

pvscan 

  PV /dev/sdc7      VG XENSTORE-VG      lvm2 [46.56 GiB / 0    free]
  PV /dev/md126p1   VG OneTB-RAID1-VG   lvm2 [931.50 GiB / 0    free]
  Total: 2 [978.06 GiB] / in use: 2 [978.06 GiB] / in no VG: 0 [0   ]

---------------

pvdisplay 

 --- Physical volume ---
  PV Name               /dev/md126p1
  VG Name               OneTB-RAID1-VG
  PV Size               931.50 GiB / not usable 3.34 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              238464
  Free PE               0
  Allocated PE          238464
  PV UUID               hvxXR3-tV9B-CMBW-nZn2-N2zH-N1l6-sC9m9i
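The pvdisplay figures are self-consistent, which is a good sign the PV was picked up intact: 238464 extents of 4 MiB each should equal the reported 931.50 GiB. Checking with shell arithmetic:

```shell
total_pe=238464   # Total PE from pvdisplay above
pe_mib=4          # PE Size in MiB

size_mib=$((total_pe * pe_mib))
whole=$((size_mib / 1024))
frac=$(( (size_mib % 1024) * 100 / 1024 ))   # two-digit fractional GiB
echo "${size_mib} MiB = ${whole}.${frac} GiB"
```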

----------------

vgscan 

  Reading all physical volumes.  This may take a while...
  Found volume group "XENSTORE-VG" using metadata type lvm2
  Found volume group "OneTB-RAID1-VG" using metadata type lvm2

-------------

vgdisplay

--- Volume group ---
  VG Name               OneTB-RAID1-VG
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               931.50 GiB
  PE Size               4.00 MiB
  Total PE              238464
  Alloc PE / Size       238464 / 931.50 GiB
  Free  PE / Size       0 / 0   
  VG UUID               nCBsU2-VpgR-EcZj-lA15-oJGL-rYOw-YxXiC8

--------------------

vgchange -a y OneTB-RAID1-VG

  1 logical volume(s) in volume group "OneTB-RAID1-VG" now active

--------------------

lvdisplay 

--- Logical volume ---
  LV Name                /dev/OneTB-RAID1-VG/OneTB-RAID1-LV
  VG Name                OneTB-RAID1-VG
  LV UUID                R3TYWb-PJo1-Xzbm-vJwu-YpgP-ohZW-Vf1kHJ
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                931.50 GiB
  Current LE             238464
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4

------------------------

fdisk -ul /dev/OneTB-RAID1-VG/OneTB-RAID1-LV 

Disk /dev/OneTB-RAID1-VG/OneTB-RAID1-LV: 1000.2 GB, 1000190509056 bytes
255 heads, 63 sectors/track, 121599 cylinders, total 1953497088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xbda8e40b

                             Device Boot      Start         End      Blocks   Id  System
/dev/OneTB-RAID1-VG/OneTB-RAID1-LV1              63  1953487934   976743936   83  Linux

-----------------------

mount -t ext4 /dev/OneTB-RAID1-VG/OneTB-RAID1-LV /mnt
mount
/dev/sdc5 on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sdc1 on /boot type ext2 (rw)
xenfs on /proc/xen type xenfs (rw)
/dev/mapper/OneTB--RAID1--VG-OneTB--RAID1--LV on /mnt type ext4 (rw)

-----------------

ls /mnt (and files are visible)

-------------------

Also, with the array running after manually issuing the command above, the error when updating the initramfs for each kernel is gone:

update-initramfs -u -k all
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64


-----------------

But the remaining issue is that mdadm does not start the real RAID1 array on reboot, and the init ramdisk errors come right back, unfortunately (verbosity enabled):

1) update-initramfs -u -k all

update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.


2) dpkg-reconfigure --priority=low mdadm [leaving all defaults]

Stopping MD monitoring service: mdadm --monitor.
Generating array device nodes... done.
update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
Starting MD monitoring service: mdadm --monitor.
Generating udev events for MD arrays...done.


3) update-initramfs -u -k all [again]

update-initramfs: Generating /boot/initrd.img-2.6.34.7-xen
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-amd64
I: mdadm: using configuration file: /etc/mdadm/mdadm.conf
mdadm: cannot open /dev/md/OneTB-RAID1-PV: No such file or directory
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.
-----------------

ls -al /dev/md/
total 0
drwxr-xr-x  2 root root   60 Nov 14 01:22 .
drwxr-xr-x 21 root root 3440 Nov 14 01:23 ..
lrwxrwxrwx  1 root root    8 Nov 14 01:23 imsm0 -> ../md127

-----------------


How does one fix the problem of the array not starting at boot?

The files/configuration I have now:

find /etc -type f | grep mdadm
./logcheck/ignore.d.server/mdadm
./logcheck/violations.d/mdadm
./default/mdadm
./init.d/mdadm
./init.d/mdadm-raid
./cron.daily/mdadm
./cron.d/mdadm
./mdadm/mdadm.conf

find /etc/rc?.d/ | grep mdadm
/etc/rc0.d/K01mdadm
/etc/rc0.d/K10mdadm-raid
/etc/rc1.d/K01mdadm
/etc/rc2.d/S02mdadm
/etc/rc3.d/S02mdadm
/etc/rc4.d/S02mdadm
/etc/rc5.d/S02mdadm
/etc/rc6.d/K01mdadm
/etc/rc6.d/K10mdadm-raid
/etc/rcS.d/S03mdadm-raid


cat /etc/mdadm/mdadm.conf 
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

# This file was auto-generated on Fri, 05 Nov 2010 16:29:48 -0400
# by mkconf 3.1.4-1+8efb9d1
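One structural property worth checking in a container-style config like this: the member ARRAY's container= value must match the container ARRAY's UUID exactly, or mdadm cannot tie the two lines together. A trivial consistency check against a scratch copy of the two lines (pure text processing; the temp file is illustrative, not the live config):

```shell
conf=$(mktemp)
cat > "$conf" <<'EOF'
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
EOF

# field 3 is UUID=... on the container line, container=... on the member line
container=$(awk '$2 == "metadata=imsm" { sub("UUID=", "", $3); print $3 }' "$conf")
member=$(awk '$3 ~ /^container=/ { sub("container=", "", $3); print $3 }' "$conf")
[ "$container" = "$member" ] && echo "container reference is consistent"
```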

--------------------


Again, how does one fix the problem of the array not starting at boot?



Thanks.
 

-M
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
  2010-11-14  6:50 How to recreate a dmraid RAID array with mdadm (was: no subject) Mike Viau
@ 2010-11-15  5:21 ` Neil Brown
  2010-11-17  1:02   ` Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-15  5:21 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Sun, 14 Nov 2010 01:50:42 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:

> Again, how does one fix the problem of the array not starting at boot?
> 

To be able to answer that one would need to know exactly what is in the
initramfs.  And unfortunately all distros are different and I'm not
particularly familiar with Ubuntu.

Maybe if you 
  mkdir /tmp/initrd
  cd /tmp/initrd
  zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv

 and then have a look around and particularly report etc/mdadm/mdadm.conf
 and anything else that might be interesting.

If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
*should* work.


NeilBrown


* RE: How to recreate a dmraid RAID array with mdadm (was: no subject)
  2010-11-15  5:21 ` Neil Brown
@ 2010-11-17  1:02   ` Mike Viau
  2010-11-17  1:26     ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-17  1:02 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user


> On Mon, 15 Nov 2010 16:21:22 +1100 <neilb@suse.de> wrote:
> > On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> > 
> > How does one fix the problem of the array not starting at boot?
> >
>
> To be able to answer that one would need to know exactly what is in the
> initramfs. And unfortunately all distros are different and I'm not
> particularly familiar with Ubuntu.
>
> Maybe if you
> mkdir /tmp/initrd
> cd /tmp/initrd
> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
>
> and then have a look around and particularly report etc/mdadm/mdadm.conf
> and anything else that might be interesting.
>
> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> *should* work.
>

Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.

The initramfs's copy contains:

DEVICE partitions
HOMEHOST <system>
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but

CREATE owner=root group=disk mode=0660 auto=yes

and

MAILADDR root

were not carried over by the update-initramfs command.


Given your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf for the array to be created at boot? If so, how can one ensure that the line is added whenever a new initramfs is created for a kernel?


My diff findings between the local copy of mdadm.conf and the initramfs's copy pasted at:
http://debian.pastebin.com/5VNnd9g1


Thanks for your help.


-M


* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
  2010-11-17  1:02   ` Mike Viau
@ 2010-11-17  1:26     ` Neil Brown
  2010-11-17  1:39       ` John Robinson
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-17  1:26 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Tue, 16 Nov 2010 20:02:17 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> > On Mon, 15 Nov 2010 16:21:22 +1100 <neilb@suse.de> wrote:
> > > On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> > > 
> > > How does one fix the problem of the array not starting at boot?
> > >
> >
> > To be able to answer that one would need to know exactly what is in the
> > initramfs. And unfortunately all distros are different and I'm not
> > particularly familiar with Ubuntu.
> >
> > Maybe if you
> > mkdir /tmp/initrd
> > cd /tmp/initrd
> > zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
> >
> > and then have a look around and particularly report etc/mdadm/mdadm.conf
> > and anything else that might be interesting.
> >
> > If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> > *should* work.
> >
> 
> Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
> 
> The initramfs's copy contains:
> 
> DEVICE partitions
> HOMEHOST <system>
> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> 
> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
> 
> CREATE owner=root group=disk mode=0660 auto=yes
> 
> and
> 
> MAILADDR root
> 
> were not carried over on the update-initramfs command.
> 
> 
> Given your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf for the array to be created at boot? If so, how can one ensure that the line is added whenever a new initramfs is created for a kernel?

No, those differences couldn't explain it not working.

I would really expect that mdadm.conf file to successfully assemble the
RAID1.

As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
by:

 mdadm -Ss

to stop all md arrays, then

 mdadm -Asvv

to auto-start everything in mdadm.conf and be verbose about what is happening.

If that fails to start the raid1, then the messages it produces will be
helpful in understanding why.
If it succeeds, then there must be something wrong with the initrd...
Maybe '/sbin/mdmon' is missing...  Or maybe it doesn't run
  mdadm -As
(or equivalently:  mdadm --assemble --scan)
but does something else instead.  To determine what, you would need to search for
'mdadm' in all the scripts in the initrd and see what turns up.
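That search is just a recursive grep over the unpacked tree. As a sketch, run here against a mock tree (the script body below is invented for illustration, not what any distribution actually ships):

```shell
set -e
tree=$(mktemp -d)

# mock up an unpacked initrd with one boot script that invokes mdadm
mkdir -p "$tree/scripts/local-top"
cat > "$tree/scripts/local-top/mdadm" <<'EOF'
#!/bin/sh
# hypothetical assembly step, for illustration only
/sbin/mdadm --assemble --scan --run
EOF

# the actual search: every script line that mentions mdadm, with file:line
grep -rn 'mdadm' "$tree/scripts"
```

On a real extracted initrd the interesting hits are usually under scripts/ and conf/; anything that assembles by explicit device name rather than --scan would explain an array being skipped.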

NeilBrown




> 
> 
> My diff findings between the local copy of mdadm.conf and the initramfs's copy pasted at:
> http://debian.pastebin.com/5VNnd9g1
> 
> 
> Thanks for your help.
> 
> 
> -M
>  		 	   		  


* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
  2010-11-17  1:26     ` Neil Brown
@ 2010-11-17  1:39       ` John Robinson
  2010-11-17  1:53         ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: John Robinson @ 2010-11-17  1:39 UTC (permalink / raw)
  To: Neil Brown; +Cc: Mike Viau, linux-raid, debian-user

On 17/11/2010 01:26, Neil Brown wrote:
> On Tue, 16 Nov 2010 20:02:17 -0500
> Mike Viau <viaum@sheridanc.on.ca> wrote:
[...]
>> DEVICE partitions
>> HOMEHOST <system>
>> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
>> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
[...]
> I would really expect that mdadm.conf file to successfully assemble the
> RAID1.

The only thing that strikes me is that "DEVICE partitions" line - surely 
imsm containers don't live in partitions?

Cheers,

John.



* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
  2010-11-17  1:39       ` John Robinson
@ 2010-11-17  1:53         ` Neil Brown
  2010-11-17  2:27           ` Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-17  1:53 UTC (permalink / raw)
  To: John Robinson; +Cc: Mike Viau, linux-raid, debian-user

On Wed, 17 Nov 2010 01:39:39 +0000
John Robinson <john.robinson@anonymous.org.uk> wrote:

> On 17/11/2010 01:26, Neil Brown wrote:
> > On Tue, 16 Nov 2010 20:02:17 -0500
> > Mike Viau <viaum@sheridanc.on.ca> wrote:
> [...]
> >> DEVICE partitions
> >> HOMEHOST <system>
> >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> [...]
> > I would really expect that mdadm.conf file to successfully assemble the
> > RAID1.
> 
> The only thing that strikes me is that "DEVICE partitions" line - surely 
> imsm containers don't live in partitions?

No, they don't.

But "DEVICE partitions" actually means "any devices listed
in /proc/partitions", and that includes whole devices.
:-(

NeilBrown


> 
> Cheers,
> 
> John.



* RE: How to recreate a dmraid RAID array with mdadm (was: no subject)
  2010-11-17  1:53         ` Neil Brown
@ 2010-11-17  2:27           ` Mike Viau
  0 siblings, 0 replies; 21+ messages in thread
From: Mike Viau @ 2010-11-17  2:27 UTC (permalink / raw)
  To: neilb, john.robinson; +Cc: linux-raid, debian-user


> On Wed, 17 Nov 2010 12:53:37 +1100 <neilb@suse.de> wrote:
> On Wed, 17 Nov 2010 01:39:39 +0000
> John Robinson  wrote:
>
> > On 17/11/2010 01:26, Neil Brown wrote:
> > > On Tue, 16 Nov 2010 20:02:17 -0500
> > > Mike Viau wrote:
> > [...]
> > >> DEVICE partitions
> > >> HOMEHOST <system>
> > >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> > >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> > [...]
> > > I would really expect that mdadm.conf file to successfully assemble the
> > > RAID1.
> >
> > The only thing that strikes me is that "DEVICE partitions" line - surely
> > imsm containers don't live in partitions?
>
> No, they don't.
>
> But "DEVICE partitions" actually means "any devices listed
> in /proc/partitions", and that includes whole devices.
> :-(
>

I noticed that both /dev/sda and /dev/sdb (the drives which make up the RAID1 array) do not appear to be recognized as having a valid container when one is required. The output of mdadm -Asvv shows:

mdadm -Asvv
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.


and cat /proc/partitions shows:

major minor  #blocks  name

   8        0  976762584 sda
   8       16  976762584 sdb
   8       32   78125000 sdc
   8       33     487424 sdc1
   8       34          1 sdc2
   8       37   20995072 sdc5
   8       38    7811072 sdc6
   8       39   48826368 sdc7
   7        0    4388218 loop0
 254        0   10485760 dm-0
 254        1   10485760 dm-1
 254        2   10485760 dm-2
 254        3   17367040 dm-3
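Neil's point about "DEVICE partitions" can be seen directly from that listing: mdadm considers every entry in the name column, whole disks included, so sda and sdb are eligible even though they carry no partition table. Parsing a scratch copy of the listing (sample rows pasted from above):

```shell
sample=$(mktemp)
cat > "$sample" <<'EOF'
major minor  #blocks  name

   8        0  976762584 sda
   8       16  976762584 sdb
   8       32   78125000 sdc
   8       33     487424 sdc1
EOF

# "DEVICE partitions" means: every name listed here is a candidate device,
# whether it is a whole disk (sda) or a partition (sdc1)
awk 'NF == 4 && $4 != "name" { print $4 }' "$sample"
```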



* RE: How to recreate a dmraid RAID array with mdadm (was: no subject)
@ 2010-11-17  2:44 Mike Viau
  2010-11-17  3:15 ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-17  2:44 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user


> On Wed, 17 Nov 2010 12:26:47 +1100  wrote:
>>
>>> On Mon, 15 Nov 2010 16:21:22 +1100  wrote:
>>>> On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
>>>>
>>>> How does one fix the problem of the array not starting at boot?
>>>>
>>>
>>> To be able to answer that one would need to know exactly what is in the
>>> initramfs. And unfortunately all distros are different and I'm not
>>> particularly familiar with Ubuntu.
>>>
>>> Maybe if you
>>> mkdir /tmp/initrd
>>> cd /tmp/initrd
>>> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
>>>
>>> and then have a look around and particularly report etc/mdadm/mdadm.conf
>>> and anything else that might be interesting.
>>>
>>> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
>>> *should* work.
>>>
>>
>> Thanks again Neil. I got a chance to examine my system's initramfs and discovered two differences between the local copy of mdadm.conf and the initramfs's copy.
>>
>> The initramfs's copy contains:
>>
>> DEVICE partitions
>> HOMEHOST <system>
>> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
>> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
>>
>> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
>>
>> CREATE owner=root group=disk mode=0660 auto=yes
>>
>> and
>>
>> MAILADDR root
>>
>> were not carried over on the update-initramfs command.
>>
>>
>> Given your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf for the array to be created at boot? If so, how can one ensure that the line is added whenever a new initramfs is created for a kernel?
>
> No, those differences couldn't explain it not working.
>
> I would really expect that mdadm.conf file to successfully assemble the
> RAID1.
>
> As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
> by:
>
> mdadm -Ss
>
> to stop all md arrays, then
>
> mdadm -Asvv
>
> to auto-start everything in mdadm.conf and be verbose about what is happening.
>
> If that fails to start the raid1, then the messages it produces will be
> helpful in understanding why.
> If it succeeds, then there must be something wrong with the initrd...
> Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
> mdadm -As
> (or equivalently: mdadm --assemble --scan)
> but does something else instead. To determine what, you would need to search for
> 'mdadm' in all the scripts in the initrd and see what turns up.
>

Using mdadm -Ss stops the array:

mdadm: stopped /dev/md127


Where /dev/md127 is the imsm0 device and not the OneTB-RAID1-PV device.


Then executing mdadm -Asvv shows:

mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.


So I am not really sure whether that succeeded, but it doesn't look like it did, because there is no /dev/md/OneTB-RAID1-PV:

ls -al /dev/md/

total 0
drwxr-xr-x  2 root root   60 Nov 16 21:08 .
drwxr-xr-x 21 root root 3440 Nov 16 21:08 ..
lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127


But after mdadm -Ivv /dev/md/imsm0:


mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
mdadm: match found for member 0
mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices


Then ls -al /dev/md/ reveals /dev/md/OneTB-RAID1-PV:

total 0
drwxr-xr-x  2 root root   80 Nov 16 21:40 .
drwxr-xr-x 21 root root 3480 Nov 16 21:40 ..
lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127
lrwxrwxrwx  1 root root    8 Nov 16 21:40 OneTB-RAID1-PV -> ../md126



Regardless, some initramfs findings:

pwd

/tmp/initrd

Then:

find . -type f | grep md | grep -v amd

./lib/udev/rules.d/64-md-raid.rules
./scripts/local-top/mdadm
./etc/mdadm/mdadm.conf
./conf/conf.d/md
./sbin/mdadm




./lib/udev/rules.d/64-md-raid.rules
http://paste.debian.net/100016/

./scripts/local-top/mdadm
http://paste.debian.net/100017/

./etc/mdadm/mdadm.conf
http://paste.debian.net/100018/

./conf/conf.d/md
http://paste.debian.net/100019/

./sbin/mdadm
{of course, a binary}


-M



* Re: How to recreate a dmraid RAID array with mdadm (was: no subject)
  2010-11-17  2:44 How to recreate a dmraid RAID array with mdadm (was: no subject) Mike Viau
@ 2010-11-17  3:15 ` Neil Brown
  2010-11-17 22:36   ` How to recreate a dmraid RAID array with mdadm Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-17  3:15 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Tue, 16 Nov 2010 21:44:10 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> > On Wed, 17 Nov 2010 12:26:47 +1100  wrote:
> >>
> >>> On Mon, 15 Nov 2010 16:21:22 +1100  wrote:
> >>>> On Sun, 14 Nov 2010 01:50:42 -0500 Mike wrote:
> >>>>
> >>>> How does one fix the problem of the array not starting at boot?
> >>>>
> >>>
> >>> To be able to answer that one would need to know exactly what is in the
> >>> initramfs. And unfortunately all distros are different and I'm not
> >>> particularly familiar with Ubuntu.
> >>>
> >>> Maybe if you
> >>> mkdir /tmp/initrd
> >>> cd /tmp/initrd
> >>> zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -idv
> >>>
> >>> and then have a look around and particularly report etc/mdadm/mdadm.conf
> >>> and anything else that might be interesting.
> >>>
> >>> If the mdadm.conf in the initrd is the same as in /etc/mdadm, then it
> >>> *should* work.
> >>>
> >>
> >> Thanks again Neil. I got a chance to examine my systems initramfs to discover two differences in the local copy of mdadm.conf and the initramfs's copy.
> >>
> >> The initramfs's copy contains:
> >>
> >> DEVICE partitions
> >> HOMEHOST
> >> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> >>
> >> So both ARRAY lines got copied over to the initramfs's copy of mdadm.conf, but
> >>
> >> CREATE owner=root group=disk mode=0660 auto=yes
> >>
> >> and
> >>
> >> MAILADDR root
> >>
> >> were not carried over on the update-initramfs command.
> >>
> >>
> >> To your clearly better understanding of all this, does the CREATE stanza NEED to be present in the initramfs's copy of mdadm.conf in order for the array to be created on boot? If so, how can one accomplish this, so that the line is added whenever a new initramfs is created for the kernel?
> >
> > No, those differences couldn't explain it not working.
> >
> > I would really expect that mdadm.conf file to successfully assemble the
> > RAID1.
> >
> > As you have the same in /etc/mdadm/mdadm.conf you could see what is happening
> > by:
> >
> > mdadm -Ss
> >
> > to stop all md arrays, then
> >
> > mdadm -Asvv
> >
> > to auto-start everything in mdadm.conf and be verbose about that is happening.
> >
> > If that fails to start the raid1, then the messages it produces will be
> > helpful in understanding why.
> > If it succeeds, then there must be something wrong with the initrd...
> > Maybe '/sbin/mdmon' is missing... Or maybe it doesn't run
> > mdadm -As
> > (or equivalently: mdadm --assemble --scan)
> > but does something else. To determine what, you would need to search for
> > 'mdadm' in all the scripts in the initrd and see what turns up.
> >
> 
> Using mdadm -Ss stops the array:
> 
> mdadm: stopped /dev/md127
> 
> 
> Where /dev/md127 is the imsm0 device and not the OneTB-RAID1-PV device.
> 
> 
> Then executing mdadm -Asvv shows:
> 
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-2
> mdadm: /dev/dm-2 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-1
> mdadm: /dev/dm-1 has wrong uuid.
> mdadm: no RAID superblock on /dev/dm-0
> mdadm: /dev/dm-0 has wrong uuid.
> mdadm: no RAID superblock on /dev/loop0
> mdadm: /dev/loop0 has wrong uuid.
> mdadm: cannot open device /dev/sdc7: Device or resource busy
> mdadm: /dev/sdc7 has wrong uuid.
> mdadm: cannot open device /dev/sdc6: Device or resource busy
> mdadm: /dev/sdc6 has wrong uuid.
> mdadm: cannot open device /dev/sdc5: Device or resource busy
> mdadm: /dev/sdc5 has wrong uuid.
> mdadm: no RAID superblock on /dev/sdc2
> mdadm: /dev/sdc2 has wrong uuid.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm: /dev/sdc1 has wrong uuid.
> mdadm: cannot open device /dev/sdc: Device or resource busy
> mdadm: /dev/sdc has wrong uuid.
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm: /dev/sdb has wrong uuid.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm: /dev/sda has wrong uuid.

This looks wrong.  mdadm should be looking for the container as listed in
mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.

Can you:

 mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf

so I can compare the uuids?

Thanks,

NeilBrown




> mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
> mdadm: no recogniseable superblock on /dev/dm-3
> mdadm: /dev/dm-3 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-2
> mdadm: /dev/dm-2 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-1
> mdadm: /dev/dm-1 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/dm-0
> mdadm: /dev/dm-0 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/loop0
> mdadm: /dev/loop0 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc7: Device or resource busy
> mdadm: /dev/sdc7 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc6: Device or resource busy
> mdadm: /dev/sdc6 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc5: Device or resource busy
> mdadm: /dev/sdc5 is not a container, and one is required.
> mdadm: no recogniseable superblock on /dev/sdc2
> mdadm: /dev/sdc2 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc1: Device or resource busy
> mdadm: /dev/sdc1 is not a container, and one is required.
> mdadm: cannot open device /dev/sdc: Device or resource busy
> mdadm: /dev/sdc is not a container, and one is required.
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm: /dev/sdb is not a container, and one is required.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm: /dev/sda is not a container, and one is required.
> 
> 
> So I am not really sure whether that succeeded, but it doesn't look like it has, because there is no /dev/md/OneTB-RAID1-PV:
> 
> ls -al /dev/md/
> 
> total 0
> drwxr-xr-x  2 root root   60 Nov 16 21:08 .
> drwxr-xr-x 21 root root 3440 Nov 16 21:08 ..
> lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127
> 
> 
> But after mdadm -Ivv /dev/md/imsm0:
> 
> 
> mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
> mdadm: match found for member 0
> mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices
> 
> 
> Then ls -al /dev/md/ reveals /dev/md/OneTB-RAID1-PV:
> 
> total 0
> drwxr-xr-x  2 root root   80 Nov 16 21:40 .
> drwxr-xr-x 21 root root 3480 Nov 16 21:40 ..
> lrwxrwxrwx  1 root root    8 Nov 16 21:08 imsm0 -> ../md127
> lrwxrwxrwx  1 root root    8 Nov 16 21:40 OneTB-RAID1-PV -> ../md126
> 
> 
> 
> Regardless, some initramfs findings:
> 
> pwd
> 
> /tmp/initrd
> 
> Then:
> 
> find . -type f | grep md | grep -v amd
> 
> ./lib/udev/rules.d/64-md-raid.rules
> ./scripts/local-top/mdadm
> ./etc/mdadm/mdadm.conf
> ./conf/conf.d/md
> ./sbin/mdadm
> 
> 
> 
> 
> ./lib/udev/rules.d/64-md-raid.rules
> http://paste.debian.net/100016/
> 
> ./scripts/local-top/mdadm
> http://paste.debian.net/100017/
> 
> ./etc/mdadm/mdadm.conf
> http://paste.debian.net/100018/
> 
> ./conf/conf.d/md
> http://paste.debian.net/100019/
> 
> ./sbin/mdadm
> {of course is a binary}
> 
> 
> -M
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: How to recreate a dmraid RAID array with mdadm
  2010-11-17  3:15 ` Neil Brown
@ 2010-11-17 22:36   ` Mike Viau
  2010-11-18  0:11     ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-17 22:36 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user


> On Wed, 17 Nov 2010 14:15:14 +1100 <neilb@suse.de> wrote:
>
> This looks wrong. mdadm should be looking for the container as listed in
> mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
>
> Can you:
>
> mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
>
> so I can compare the uuids?
>

Sure.

# definitions of existing MD arrays ( So you don't have to scroll down :P )


ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383

ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

mdadm -E /dev/sda /dev/sdb

/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 601eee02
         Family : 601eee02
     Generation : 00001187
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
       Checksum : 2f91ce06 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : STF604MH0J34LB
          State : active
             Id : 00020000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[OneTB-RAID1-PV]:
           UUID : ae4a1598:72267ed7:3b34867b:9c56497a
     RAID Level : 1
        Members : 2
          Slots : [UU]
      This Slot : 0
     Array Size : 1953519616 (931.51 GiB 1000.20 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : STF604MH0PN2YB
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 601eee02
         Family : 601eee02
     Generation : 00001187
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
       Checksum : 2f91ce06 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : STF604MH0PN2YB
          State : active
             Id : 00030000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

[OneTB-RAID1-PV]:
           UUID : ae4a1598:72267ed7:3b34867b:9c56497a
     RAID Level : 1
        Members : 2
          Slots : [UU]
      This Slot : 1
     Array Size : 1953519616 (931.51 GiB 1000.20 GB)
   Per Dev Size : 1953519880 (931.51 GiB 1000.20 GB)
  Sector Offset : 0
    Num Stripes : 7630936
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk00 Serial : STF604MH0J34LB
          State : active
             Id : 00020000
    Usable Size : 1953520654 (931.51 GiB 1000.20 GB)

----------------------------------
cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

# This file was auto-generated on Fri, 05 Nov 2010 16:29:48 -0400
# by mkconf 3.1.4-1+8efb9d1
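
The UUID comparison Neil asked for can also be scripted. The sketch below uses sample excerpts of the `mdadm -E` output and mdadm.conf ARRAY line quoted above (not the live devices) to pull out the container UUID from each source and check that they agree:

```shell
# Sketch using sample excerpts from this thread: extract the IMSM container
# UUID from `mdadm -E` output and from the ARRAY line in mdadm.conf,
# then verify that they match.
cat > /tmp/examine.txt <<'EOF'
/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
           UUID : 084b969a:0808f5b8:6c784fb7:62659383
EOF
cat > /tmp/mdadm.conf.sample <<'EOF'
ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
EOF
# First "UUID : ..." line in the examine output is the container UUID.
examined=$(awk '$1 == "UUID" {print $3; exit}' /tmp/examine.txt)
# The container ARRAY line carries the same UUID in mdadm.conf.
configured=$(sed -n 's/^ARRAY metadata=imsm UUID=//p' /tmp/mdadm.conf.sample)
if [ "$examined" = "$configured" ]; then
    echo "container UUID matches: $examined"
else
    echo "container UUID mismatch: $examined vs $configured"
fi
```

On the sample data both values are 084b969a:0808f5b8:6c784fb7:62659383, consistent with Neil's conclusion below that the UUIDs are all correct.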


-M

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: How to recreate a dmraid RAID array with mdadm
  2010-11-17 22:36   ` How to recreate a dmraid RAID array with mdadm Mike Viau
@ 2010-11-18  0:11     ` Neil Brown
  2010-11-18  0:56       ` Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-18  0:11 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Wed, 17 Nov 2010 17:36:23 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> > On Wed, 17 Nov 2010 14:15:14 +1100 <neilb@suse.de> wrote:
> >
> > This looks wrong. mdadm should be looking for the container as listed in
> > mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
> >
> > Can you:
> >
> > mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
> >
> > so I can compare the uuids?
> >
> 
> Sure.
> 
> # definitions of existing MD arrays ( So you don't have to scroll down :P )
> 
> 
> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> 
> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> 
....
>            UUID : 084b969a:0808f5b8:6c784fb7:62659383
> [OneTB-RAID1-PV]:
>            UUID : ae4a1598:72267ed7:3b34867b:9c56497a
....
> # definitions of existing MD arrays
> ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a

Yes, the uuids are definitely all correct.
This really should work.  I just tested a similar config and it worked
exactly as expected.
Weird.

What version of mdadm are you running?
Can you try getting the latest (3.1.4) from
     http://www.kernel.org/pub/linux/utils/raid/mdadm/

and see how that works.
Just
    make
    ./mdadm -Asvv

NeilBrown

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: How to recreate a dmraid RAID array with mdadm
  2010-11-18  0:11     ` Neil Brown
@ 2010-11-18  0:56       ` Mike Viau
  2010-11-18  1:28         ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-18  0:56 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user

[-- Attachment #1: Type: text/plain, Size: 4746 bytes --]


> On Thu, 18 Nov 2010 11:11:49 +1100 <neilb@suse.de> wrote:
>
> > On Wed, 17 Nov 2010 17:36:23 -0500 Mike Viau  wrote:
> >
> > > On Wed, 17 Nov 2010 14:15:14 +1100
> > >
> > > This looks wrong. mdadm should be looking for the container as listed in
> > > mdadm.conf and it should find a matching uuid on sda and sdb, but it doesn't.
> > >
> > > Can you:
> > >
> > > mdadm -E /dev/sda /dev/sdb ; cat /etc/mdadm/mdadm.conf
> > >
> > > so I can compare the uuids?
> > >
> >
> > Sure.
> >
> > # definitions of existing MD arrays ( So you don't have to scroll down :P )
> >
> > ARRAY metadata=imsm UUID=084b969a:0808f5b8:6c784fb7:62659383
> >
> > ARRAY /dev/md/OneTB-RAID1-PV container=084b969a:0808f5b8:6c784fb7:62659383 member=0 UUID=ae4a1598:72267ed7:3b34867b:9c56497a
> >
> ....
> >            UUID : 084b969a:0808f5b8:6c784fb7:62659383
> > [OneTB-RAID1-PV]:
> >            UUID : ae4a1598:72267ed7:3b34867b:9c56497a
> ....
>
> Yes, the uuids are definitely all correct.
> This really should work. I just tested a similar config and it worked
> exactly as expected.
> Weird.
>
> What version of mdadm are you running?
> Can you try getting the latest (3.1.4) from
> http://www.kernel.org/pub/linux/utils/raid/mdadm/

I am running the same version, from a Debian Squeeze package which I presume is the same.

mdadm -V

mdadm - v3.1.4 - 31st August 2010

>
> and see how that works.
> Just
> make
> ./mdadm -Asvv

Regardless, I did recompile (attached is the make output -- no errors) and got similar mdadm output:

./mdadm -Asvv
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/md126p1
mdadm: /dev/md126p1 has wrong uuid.
mdadm: no RAID superblock on /dev/md/OneTB-RAID1-PV
mdadm: /dev/md/OneTB-RAID1-PV has wrong uuid.
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb has wrong uuid.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda has wrong uuid.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/md126p1
mdadm: /dev/md126p1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/md/OneTB-RAID1-PV
mdadm: /dev/md/OneTB-RAID1-PV is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-3
mdadm: /dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm: /dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm: /dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm: /dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm: /dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm: /dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm: /dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: /dev/sda is not a container, and one is required.


So what could this mean?


-M

[-- Attachment #2: mdadm_compile.txt --]
[-- Type: text/plain, Size: 11148 bytes --]

make

gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o mdadm.o mdadm.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o config.o config.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o mdstat.o mdstat.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o ReadMe.o ReadMe.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o util.o util.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Manage.o Manage.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Assemble.o Assemble.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Build.o Build.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Create.o Create.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Detail.o Detail.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Examine.o Examine.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Grow.o Grow.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Monitor.o Monitor.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o dlink.o dlink.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Kill.o Kill.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Query.o Query.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o Incremental.o Incremental.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o mdopen.o mdopen.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o super0.o super0.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o super1.o super1.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o super-ddf.o super-ddf.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o super-intel.o super-intel.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o bitmap.o bitmap.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o restripe.o restripe.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o sysfs.o sysfs.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS -DHAVE_STDINT_H -o sha1.o -c sha1.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o mapfile.o mapfile.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o crc32.o crc32.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o sg_io.o sg_io.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o msg.o msg.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o platform-intel.o platform-intel.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o probe_roms.o probe_roms.c
gcc  -o mdadm mdadm.o config.o mdstat.o  ReadMe.o util.o Manage.o Assemble.o Build.o Create.o Detail.o Examine.o Grow.o Monitor.o dlink.o Kill.o Query.o Incremental.o mdopen.o super0.o super1.o super-ddf.o super-intel.o bitmap.o restripe.o sysfs.o sha1.o mapfile.o crc32.o sg_io.o msg.o platform-intel.o probe_roms.o 
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o mdmon.o mdmon.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o monitor.o monitor.c
gcc -Wall -Werror -Wstrict-prototypes -Wextra -Wno-unused-parameter -ggdb -DSendmail=\""/usr/sbin/sendmail -t"\" -DCONFFILE=\"/etc/mdadm.conf\" -DCONFFILE2=\"/etc/mdadm/mdadm.conf\" -DMAP_DIR=\"/dev/.mdadm\" -DMAP_FILE=\"map\" -DMDMON_DIR=\"/dev/.mdadm\" -DUSE_PTHREADS   -c -o managemon.o managemon.c
gcc  -pthread -z now -o mdmon mdmon.o monitor.o managemon.o util.o mdstat.o sysfs.o config.o Kill.o sg_io.o dlink.o ReadMe.o super0.o super1.o super-intel.o super-ddf.o sha1.o crc32.o msg.o bitmap.o platform-intel.o probe_roms.o 
sed -e 's/{DEFAULT_METADATA}/1.2/g' mdadm.8.in > mdadm.8
nroff -man mdadm.8 > mdadm.man
nroff -man md.4 > md.man
nroff -man mdadm.conf.5 > mdadm.conf.man
nroff -man mdmon.8 > mdmon.man


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: How to recreate a dmraid RAID array with mdadm
  2010-11-18  0:56       ` Mike Viau
@ 2010-11-18  1:28         ` Neil Brown
  2010-11-18  2:05           ` Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-18  1:28 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Wed, 17 Nov 2010 19:56:10 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:


> I am running the same version, from a Debian Squeeze package which I presume is the same.
> 
> mdadm -V
> 
> mdadm - v3.1.4 - 31st August 2010

Yes, should be identical to what I am running.
> 
> >
> > and see how that works.
> > Just
> > make
> > ./mdadm -Asvv
> 
> Regardless, I did recompile (attached is the make output -- no errors) and got similar mdadm output:
> 
> ./mdadm -Asvv
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/md126p1
> mdadm: /dev/md126p1 has wrong uuid.
> mdadm: no RAID superblock on /dev/md/OneTB-RAID1-PV
> mdadm: /dev/md/OneTB-RAID1-PV has wrong uuid
....
> mdadm: cannot open device /dev/sdb: Device or resource busy
> mdadm: /dev/sdb has wrong uuid.
> mdadm: cannot open device /dev/sda: Device or resource busy
> mdadm: /dev/sda has wrong uuid.

The arrays are clearly currently assembled.  Trying to assemble them again is
not likely to produce a good result :-)  I should have said to "./mdadm -Ss"
first.

Could you apply this patch and then test again with:

 ./mdadm -Ss
 ./mdadm -Asvvv

Thanks,
NeilBrown

diff --git a/Assemble.c b/Assemble.c
index afd4e60..11323fa 100644
--- a/Assemble.c
+++ b/Assemble.c
@@ -344,9 +344,14 @@ int Assemble(struct supertype *st, char *mddev,
 		if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
 		    (!tst || !tst->sb ||
 		     same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
-			if (report_missmatch)
+			if (report_missmatch) {
+				char buf[200];
 				fprintf(stderr, Name ": %s has wrong uuid.\n",
 					devname);
+				fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
+				fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
+				fprintf(stderr, " metadata=%s\n", tst->ss->name);
+			}
 			goto loop;
 		}
 		if (ident->name[0] && (!update || strcmp(update, "name")!= 0) &&

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* RE: How to recreate a dmraid RAID array with mdadm
  2010-11-18  1:28         ` Neil Brown
@ 2010-11-18  2:05           ` Mike Viau
  2010-11-18  2:32             ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-18  2:05 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user


> On Thu, 18 Nov 2010 12:28:47 +1100 <neilb@suse.de> wrote:
> >
> > I am running the same version, from a Debian Squeeze package which I presume is the same.
> >
> > mdadm -V
> >
> > mdadm - v3.1.4 - 31st August 2010
>
> Yes, should be identical to what I am running.
> >
> > >
> > > and see how that works.
> > > Just
> > > make
> > > ./mdadm -Asvv
> >
> > Regardless, I did recompile (attached is the make output -- no errors) and got similar mdadm output:
> >
> > ./mdadm -Asvv
> > mdadm: looking for devices for further assembly
> > mdadm: no RAID superblock on /dev/md126p1
> > mdadm: /dev/md126p1 has wrong uuid.
> > mdadm: no RAID superblock on /dev/md/OneTB-RAID1-PV
> > mdadm: /dev/md/OneTB-RAID1-PV has wrong uuid
> ....
> > mdadm: cannot open device /dev/sdb: Device or resource busy
> > mdadm: /dev/sdb has wrong uuid.
> > mdadm: cannot open device /dev/sda: Device or resource busy
> > mdadm: /dev/sda has wrong uuid.
>
> The arrays are clearly currently assembled. Trying to assemble them again is
> not likely to produce a good result :-) I should have said to "./mdadm -Ss"
> first.
>
> Could you apply this patch and then test again with:
>
> ./mdadm -Ss
> ./mdadm -Asvvv
>

Applied the patch:

if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
    (!tst || !tst->sb ||
     same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
	if (report_missmatch) {
		char buf[200];
		fprintf(stderr, Name ": %s has wrong uuid.\n",
			devname);
		fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
		fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
		fprintf(stderr, " metadata=%s\n", tst->ss->name);
	}
	goto loop;
}


And got:

./mdadm -Ss

mdadm: stopped /dev/md127


./mdadm -Asvvv

mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
Segmentation fault


I took the liberty of extending the char buffer to 2000 bytes and then to 64K (1<<16), but got the same segfault each time.


-M
 		 	   		  --
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: How to recreate a dmraid RAID array with mdadm
  2010-11-18  2:05           ` Mike Viau
@ 2010-11-18  2:32             ` Neil Brown
  2010-11-18  3:03               ` Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-18  2:32 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Wed, 17 Nov 2010 21:05:40 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:


> ./mdadm -Ss
> 
> mdadm: stopped /dev/md127
> 
> 
> ./mdadm -Asvvv
> 
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
>  want UUID-084b969a:0808f5b8:6c784fb7:62659383
> Segmentation fault

Try this patch instead please.

NeilBrown

diff --git a/Assemble.c b/Assemble.c
index afd4e60..11e6238 100644
--- a/Assemble.c
+++ b/Assemble.c
@@ -344,9 +344,17 @@ int Assemble(struct supertype *st, char *mddev,
 		if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
 		    (!tst || !tst->sb ||
 		     same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
-			if (report_missmatch)
+			if (report_missmatch) {
+				char buf[200];
 				fprintf(stderr, Name ": %s has wrong uuid.\n",
 					devname);
+				fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
+				fprintf(stderr, " tst=%p sb=%p\n", tst, tst?tst->sb:NULL);
+				if (tst) {
+					fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
+					fprintf(stderr, " metadata=%s\n", tst->ss->name);
+				}
+			}
 			goto loop;
 		}
 		if (ident->name[0] && (!update || strcmp(update, "name")!= 0) &&

 		 	   		  

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* RE: How to recreate a dmraid RAID array with mdadm
  2010-11-18  2:32             ` Neil Brown
@ 2010-11-18  3:03               ` Mike Viau
  2010-11-18  3:17                 ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-18  3:03 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user


> On Thu, 18 Nov 2010 13:32:47 +1100 <neilb@suse.de> wrote:
> > ./mdadm -Ss
> >
> > mdadm: stopped /dev/md127
> >
> >
> > ./mdadm -Asvvv
> >
> > mdadm: looking for devices for further assembly
> > mdadm: no RAID superblock on /dev/dm-3
> > mdadm: /dev/dm-3 has wrong uuid.
> >  want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > Segmentation fault
>
> Try this patch instead please.

Applied new patch and got:

./mdadm -Ss

mdadm: stopped /dev/md127


./mdadm -Asvvv
mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x10dd010 sb=(nil)
Segmentation fault


Again tried various buffer sizes (segfault above was with char buf[200];)


if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
    (!tst || !tst->sb ||
     same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
	if (report_missmatch) {
		char buf[1<<16];
		fprintf(stderr, Name ": %s has wrong uuid.\n",
			devname);
		fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
		fprintf(stderr, " tst=%p sb=%p\n", tst, tst?tst->sb:NULL);
		if (tst) {
			fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
			fprintf(stderr, " metadata=%s\n", tst->ss->name);
		}
	}
	goto loop;
}


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: How to recreate a dmraid RAID array with mdadm
  2010-11-18  3:03               ` Mike Viau
@ 2010-11-18  3:17                 ` Neil Brown
  2010-11-18  5:10                   ` Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-18  3:17 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Wed, 17 Nov 2010 22:03:41 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> > On Thu, 18 Nov 2010 13:32:47 +1100 <neilb@suse.de> wrote:
> > > ./mdadm -Ss
> > >
> > > mdadm: stopped /dev/md127
> > >
> > >
> > > ./mdadm -Asvvv
> > >
> > > mdadm: looking for devices for further assembly
> > > mdadm: no RAID superblock on /dev/dm-3
> > > mdadm: /dev/dm-3 has wrong uuid.
> > >  want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > Segmentation fault
> >
> > Try this patch instead please.
> 
> Applied new patch and got:
> 
> ./mdadm -Ss
> 
> mdadm: stopped /dev/md127
> 
> 
> ./mdadm -Asvvv
> mdadm: looking for devices for further assembly
> mdadm: no RAID superblock on /dev/dm-3
> mdadm: /dev/dm-3 has wrong uuid.
>  want UUID-084b969a:0808f5b8:6c784fb7:62659383
>  tst=0x10dd010 sb=(nil)
> Segmentation fault

Sorry... I guess I should have tested it myself.

The
   if (tst) {

Should be
 
   if (tst && content) {

NeilBrown


> 
> 
> Again tried various buffer sizes (segfault above was with char buf[200];)
> 
> 
> if (ident->uuid_set && (!update || strcmp(update, "uuid")!= 0) &&
>     (!tst || !tst->sb ||
>      same_uuid(content->uuid, ident->uuid, tst->ss->swapuuid)==0)) {
>         if (report_missmatch) {
>              char buf[1<<16];
>                 fprintf(stderr, Name ": %s has wrong uuid.\n",
>                         devname);
>           fprintf(stderr, " want %s\n", __fname_from_uuid(ident->uuid, 0, buf, ':'));
>              fprintf(stderr, " tst=%p sb=%p\n", tst, tst?tst->sb:NULL);
>              if (tst) {
>                        fprintf(stderr, " have %s\n", __fname_from_uuid(content->uuid, 0, buf, ':'));
>                         fprintf(stderr, " metadata=%s\n", tst->ss->name);
>              }
>     }
>         goto loop;
> }
> 
>  		 	   		  

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: How to recreate a dmraid RAID array with mdadm
  2010-11-18  3:17                 ` Neil Brown
@ 2010-11-18  5:10                   ` Mike Viau
  2010-11-18  5:38                     ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-18  5:10 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user


> On Thu, 18 Nov 2010 14:17:18 +1100 <neilb@suse.de> wrote:
> >
> > > On Thu, 18 Nov 2010 13:32:47 +1100  wrote:
> > > > ./mdadm -Ss
> > > >
> > > > mdadm: stopped /dev/md127
> > > >
> > > >
> > > > ./mdadm -Asvvv
> > > >
> > > > mdadm: looking for devices for further assembly
> > > > mdadm: no RAID superblock on /dev/dm-3
> > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > Segmentation fault
> > >
> > > Try this patch instead please.
> >
> > Applied new patch and got:
> >
> > ./mdadm -Ss
> >
> > mdadm: stopped /dev/md127
> >
> >
> > ./mdadm -Asvvv
> > mdadm: looking for devices for further assembly
> > mdadm: no RAID superblock on /dev/dm-3
> > mdadm: /dev/dm-3 has wrong uuid.
> >  want UUID-084b969a:0808f5b8:6c784fb7:62659383
> >  tst=0x10dd010 sb=(nil)
> > Segmentation fault
>
> Sorry... I guess I should have tested it myself..
>
> The
> if (tst) {
>
> Should be
>
> if (tst && content) {
>

Applied the update and got:

mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
mdadm: added /dev/sda to /dev/md/imsm0 as -1
mdadm: added /dev/sdb to /dev/md/imsm0 as -1
mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV


Full output at: http://paste.debian.net/100103/ (expires 2010-11-21 06:07:30)
-M


 		 	   		  

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: How to recreate a dmraid RAID array with mdadm
  2010-11-18  5:10                   ` Mike Viau
@ 2010-11-18  5:38                     ` Neil Brown
  2010-11-22 18:07                       ` Mike Viau
  0 siblings, 1 reply; 21+ messages in thread
From: Neil Brown @ 2010-11-18  5:38 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user

On Thu, 18 Nov 2010 00:10:50 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> > On Thu, 18 Nov 2010 14:17:18 +1100 <neilb@suse.de> wrote:
> > >
> > > > On Thu, 18 Nov 2010 13:32:47 +1100  wrote:
> > > > > ./mdadm -Ss
> > > > >
> > > > > mdadm: stopped /dev/md127
> > > > >
> > > > >
> > > > > ./mdadm -Asvvv
> > > > >
> > > > > mdadm: looking for devices for further assembly
> > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > Segmentation fault
> > > >
> > > > Try this patch instead please.
> > >
> > > Applied new patch and got:
> > >
> > > ./mdadm -Ss
> > >
> > > mdadm: stopped /dev/md127
> > >
> > >
> > > ./mdadm -Asvvv
> > > mdadm: looking for devices for further assembly
> > > mdadm: no RAID superblock on /dev/dm-3
> > > mdadm: /dev/dm-3 has wrong uuid.
> > >  want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > >  tst=0x10dd010 sb=(nil)
> > > Segmentation fault
> >
> > Sorry... I guess I should have tested it myself..
> >
> > The
> > if (tst) {
> >
> > Should be
> >
> > if (tst && content) {
> >
> 
> Apply update and got:
> 
> mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
> mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
> mdadm: added /dev/sda to /dev/md/imsm0 as -1
> mdadm: added /dev/sdb to /dev/md/imsm0 as -1
> mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
> mdadm: looking for devices for /dev/md/OneTB-RAID1-PV

So just to clarify.

With the Debian mdadm, which is 3.1.4, if you

 mdadm -Ss
 mdadm -Asvv

it says (among other things) that /dev/sda has wrong uuid.
and doesn't start the array.

But with the mdadm you compiled yourself, which is also 3.1.4,
if you

  mdadm -Ss
  mdadm -Asvv

then it doesn't give that message, and it works.

That is very strange.   It seems that the Debian mdadm is broken somehow, but
I'm fairly sure Debian hardly changes anything - they are *very* good at
getting their changes upstream first.

I don't suppose you have an /etc/mdadm.conf as well as /etc/mdadm/mdadm.conf
do you?  If you did and the two were different, the Debian's mdadm would
behave a bit differently to upstream (they prefer different config files) but
I very much doubt that is the problem.
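
A quick way to check is to list both locations (a sketch; Debian's mdadm prefers /etc/mdadm/mdadm.conf while upstream prefers /etc/mdadm.conf):

```shell
# Report which of the two mdadm config locations exist on this host.
check_mdadm_conf() {
    for f in /etc/mdadm.conf /etc/mdadm/mdadm.conf; do
        if [ -e "$f" ]; then
            echo "present: $f"
        else
            echo "absent:  $f"
        fi
    done
}

check_mdadm_conf
```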

But I guess if the self-compiled one works (even when you take the patch
out), then just
   make install

and be happy.

NeilBrown


> 
> 
> Full output at: http://paste.debian.net/100103/
> expires: 
> 
> 2010-11-21 06:07:30
> -M
> 
> 
>  		 	   		  

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: How to recreate a dmraid RAID array with mdadm
  2010-11-18  5:38                     ` Neil Brown
@ 2010-11-22 18:07                       ` Mike Viau
  2010-11-22 23:11                         ` Neil Brown
  0 siblings, 1 reply; 21+ messages in thread
From: Mike Viau @ 2010-11-22 18:07 UTC (permalink / raw)
  To: neilb; +Cc: linux-raid, debian-user

[-- Attachment #1: Type: text/plain, Size: 4614 bytes --]


> On Thu, 18 Nov 2010 16:38:49 +1100 <neilb@suse.de> wrote:
> > > On Thu, 18 Nov 2010 14:17:18 +1100  wrote:
> > > >
> > > > > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > > > > ./mdadm -Ss
> > > > > >
> > > > > > mdadm: stopped /dev/md127
> > > > > >
> > > > > >
> > > > > > ./mdadm -Asvvv
> > > > > >
> > > > > > mdadm: looking for devices for further assembly
> > > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > > Segmentation fault
> > > > >
> > > > > Try this patch instead please.
> > > >
> > > > Applied new patch and got:
> > > >
> > > > ./mdadm -Ss
> > > >
> > > > mdadm: stopped /dev/md127
> > > >
> > > >
> > > > ./mdadm -Asvvv
> > > > mdadm: looking for devices for further assembly
> > > > mdadm: no RAID superblock on /dev/dm-3
> > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > tst=0x10dd010 sb=(nil)
> > > > Segmentation fault
> > >
> > > Sorry... I guess I should have tested it myself..
> > >
> > > The
> > > if (tst) {
> > >
> > > Should be
> > >
> > > if (tst && content) {
> > >
> >
> > Apply update and got:
> >
> > mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
> > mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
> > mdadm: added /dev/sda to /dev/md/imsm0 as -1
> > mdadm: added /dev/sdb to /dev/md/imsm0 as -1
> > mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
> > mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
>
> So just to clarify.
>
> With the Debian mdadm, which is 3.1.4, if you
>
> mdadm -Ss
> mdadm -Asvv
>
> it says (among other things) that /dev/sda has wrong uuid.
> and doesn't start the array.

Actually, neither the compiled nor the Debian mdadm starts the array; at least, neither creates the /dev/md/OneTB-RAID1-PV device the way running mdadm -I /dev/md/imsm0 does.

You are right about seeing a message somewhere that /dev/sda has the wrong uuid, though. I went back to look at my output from the Debian mailing list and saw that the mdadm output has changed slightly since this thread began.

The old output was copied verbatim on http://lists.debian.org/debian-user/2010/11/msg01234.html and says (among other things) that /dev/sda has wrong uuid.

The "/dev/sd[ab] has wrong uuid" messages are now missing from the mdadm -Asvv output, but...

./mdadm -Ivv /dev/md/imsm0 
mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
mdadm: match found for member 0
mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices


I still get this UUID message when using the mdadm -I command.


I'll attach the output of both mdadm commands above as they run now on the system. I also noticed, in the same thread linked above, that in the old output both /dev/sda and /dev/sdb (the drives which make up the RAID1 array) did not appear to be recognized as having a valid container when one is required.

What is your take on GeraldCC's (gcsgcatling@bigpond.com) suggestion that /dev/sd[ab] contain an 8e (LVM) partition type rather than the fd type that denotes RAID autodetect? If that were the magical fix (which I am not saying it can't be), why is mdadm -I /dev/md/imsm0 able to bring up the array for use as a physical volume for LVM?



>
> But with the mdadm you compiled yourself, which is also 3.1.4,
> if you
>
> mdadm -Ss
> mdadm -Asvv
>
> then it doesn't give that message, and it works.

Again: neither the compiled nor the Debian mdadm starts the array; at least, neither creates the /dev/md/OneTB-RAID1-PV device the way running mdadm -I /dev/md/imsm0 does.

>
> That is very strange. It seems that the Debian mdadm is broken somehow, but
> I'm fairly sure Debian hardly changes anything - they are *very* good at
> getting their changes upstream first.
>
> I don't suppose you have an /etc/mdadm.conf as well as /etc/mdadm/mdadm.conf
> do you? If you did and the two were different, the Debian's mdadm would
> behave a bit differently to upstream (they prefer different config files) but
> I very much doubt that is the problem.
>

There is no /etc/mdadm.conf on the filesystem only /etc/mdadm/mdadm.conf


> But I guess if the self-compiled one works (even when you take the patch
> out), then just
> make install

I wish that were the case...

>
> and be happy.
>
> NeilBrown
>
>
> >
> >
> > Full output at: http://paste.debian.net/100103/
> > expires:
> >
> > 2010-11-21 06:07:30

Thanks

-M
 		 	   		  

[-- Attachment #2: Compiled version.txt --]
[-- Type: text/plain, Size: 5077 bytes --]

Compiled version

./mdadm -Ss

mdadm: stopped /dev/md127

===

./mdadm -Asvv

mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x982010 sb=(nil)
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x982120 sb=(nil)
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x9821b0 sb=(nil)
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x9919a0 sb=(nil)
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x991a30 sb=(nil)
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x991ac0 sb=(nil)
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x991b50 sb=(nil)
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x991be0 sb=(nil)
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x991c70 sb=(nil)
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x991d00 sb=(nil)
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
 want UUID-084b969a:0808f5b8:6c784fb7:62659383
 tst=0x991d90 sb=(nil)
mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
mdadm: added /dev/sda to /dev/md/imsm0 as -1
mdadm: added /dev/sdb to /dev/md/imsm0 as -1
mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.


[-- Attachment #3: DEBIAN version.txt --]
[-- Type: text/plain, Size: 4299 bytes --]

DEBIAN mdadm

mdadm -Ss

mdadm: stopped /dev/md127

===

mdadm -Asvv

mdadm: looking for devices for further assembly
mdadm: no RAID superblock on /dev/dm-3
mdadm: /dev/dm-3 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-2
mdadm: /dev/dm-2 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-1
mdadm: /dev/dm-1 has wrong uuid.
mdadm: no RAID superblock on /dev/dm-0
mdadm: /dev/dm-0 has wrong uuid.
mdadm: no RAID superblock on /dev/loop0
mdadm: /dev/loop0 has wrong uuid.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm: /dev/sdc7 has wrong uuid.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm: /dev/sdc6 has wrong uuid.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm: /dev/sdc5 has wrong uuid.
mdadm: no RAID superblock on /dev/sdc2
mdadm: /dev/sdc2 has wrong uuid.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm: /dev/sdc1 has wrong uuid.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: /dev/sdc has wrong uuid.
mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
mdadm: added /dev/sda to /dev/md/imsm0 as -1
mdadm: added /dev/sdb to /dev/md/imsm0 as -1
mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.
mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
mdadm: no recogniseable superblock on /dev/dm-3
mdadm/dev/dm-3 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-2
mdadm/dev/dm-2 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-1
mdadm/dev/dm-1 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/dm-0
mdadm/dev/dm-0 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/loop0
mdadm/dev/loop0 is not a container, and one is required.
mdadm: cannot open device /dev/sdc7: Device or resource busy
mdadm/dev/sdc7 is not a container, and one is required.
mdadm: cannot open device /dev/sdc6: Device or resource busy
mdadm/dev/sdc6 is not a container, and one is required.
mdadm: cannot open device /dev/sdc5: Device or resource busy
mdadm/dev/sdc5 is not a container, and one is required.
mdadm: no recogniseable superblock on /dev/sdc2
mdadm/dev/sdc2 is not a container, and one is required.
mdadm: cannot open device /dev/sdc1: Device or resource busy
mdadm/dev/sdc1 is not a container, and one is required.
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm/dev/sdc is not a container, and one is required.
mdadm: cannot open device /dev/sdb: Device or resource busy
mdadm/dev/sdb is not a container, and one is required.
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm/dev/sda is not a container, and one is required.


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: How to recreate a dmraid RAID array with mdadm
  2010-11-22 18:07                       ` Mike Viau
@ 2010-11-22 23:11                         ` Neil Brown
  0 siblings, 0 replies; 21+ messages in thread
From: Neil Brown @ 2010-11-22 23:11 UTC (permalink / raw)
  To: Mike Viau; +Cc: linux-raid, debian-user


I see the problem now.  And John Robinson was nearly there.

The problem is that after assembling the container /dev/md/imsm,
mdadm needs to assemble the RAID1, but doesn't find the
container /dev/md/imsm to assemble it from.
That is because of the
  DEVICE partitions
line.
A container is not a partition - it does not appear in /proc/partitions.
You need

  DEVICE partitions containers

which is the default if you don't have a DEVICE line (and I didn't have a
device line in my testing).

I think all the "wrong uuid" messages were because the device was busy (and
so it didn't read a uuid), probably because you didn't "mdadm -Ss" first.

So just remove the "DEVICE partitions" line, or add " containers" to it, and 
all should be happy.
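
Concretely, the one-line change looks like this (sketched against a scratch copy rather than the live /etc/mdadm/mdadm.conf):

```shell
# A container is not a partition, so it is invisible to "DEVICE partitions"
# alone; adding "containers" (part of the default when there is no DEVICE
# line at all) lets mdadm consider assembled containers as member devices.
cat > /tmp/mdadm.conf.sketch <<'EOF'
DEVICE partitions containers
EOF

grep '^DEVICE' /tmp/mdadm.conf.sketch
```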

NeilBrown



On Mon, 22 Nov 2010 13:07:10 -0500
Mike Viau <viaum@sheridanc.on.ca> wrote:

> 
> > On Thu, 18 Nov 2010 16:38:49 +1100 <neilb@suse.de> wrote:
> > > > On Thu, 18 Nov 2010 14:17:18 +1100  wrote:
> > > > >
> > > > > > On Thu, 18 Nov 2010 13:32:47 +1100 wrote:
> > > > > > > ./mdadm -Ss
> > > > > > >
> > > > > > > mdadm: stopped /dev/md127
> > > > > > >
> > > > > > >
> > > > > > > ./mdadm -Asvvv
> > > > > > >
> > > > > > > mdadm: looking for devices for further assembly
> > > > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > > > Segmentation fault
> > > > > >
> > > > > > Try this patch instead please.
> > > > >
> > > > > Applied new patch and got:
> > > > >
> > > > > ./mdadm -Ss
> > > > >
> > > > > mdadm: stopped /dev/md127
> > > > >
> > > > >
> > > > > ./mdadm -Asvvv
> > > > > mdadm: looking for devices for further assembly
> > > > > mdadm: no RAID superblock on /dev/dm-3
> > > > > mdadm: /dev/dm-3 has wrong uuid.
> > > > > want UUID-084b969a:0808f5b8:6c784fb7:62659383
> > > > > tst=0x10dd010 sb=(nil)
> > > > > Segmentation fault
> > > >
> > > > Sorry... I guess I should have tested it myself..
> > > >
> > > > The
> > > > if (tst) {
> > > >
> > > > Should be
> > > >
> > > > if (tst && content) {
> > > >
> > >
> > > Apply update and got:
> > >
> > > mdadm: /dev/sdb is identified as a member of /dev/md/imsm0, slot -1.
> > > mdadm: /dev/sda is identified as a member of /dev/md/imsm0, slot -1.
> > > mdadm: added /dev/sda to /dev/md/imsm0 as -1
> > > mdadm: added /dev/sdb to /dev/md/imsm0 as -1
> > > mdadm: Container /dev/md/imsm0 has been assembled with 2 drives
> > > mdadm: looking for devices for /dev/md/OneTB-RAID1-PV
> >
> > So just to clarify.
> >
> > With the Debian mdadm, which is 3.1.4, if you
> >
> > mdadm -Ss
> > mdadm -Asvv
> >
> > it says (among other things) that /dev/sda has wrong uuid.
> > and doesn't start the array.
> 
> Actually, neither the self-compiled nor the Debian mdadm starts the array; at least, neither creates the /dev/md/OneTB-RAID1-PV device the way running mdadm -I /dev/md/imsm0 does.
> 
> You are right about seeing a message somewhere about /dev/sda having a wrong uuid, though. I went back to look at my output from the Debian mailing list and saw that the mdadm output has changed slightly since this thread began.
> 
> The old output was copied verbatim on http://lists.debian.org/debian-user/2010/11/msg01234.html and says (among other things) that /dev/sda has wrong uuid.
> 
> The "/dev/sd[ab] has wrong uuid" messages are now missing from the mdadm -Asvv output, but....
> 
> ./mdadm -Ivv /dev/md/imsm0 
> mdadm: UUID differs from /dev/md/OneTB-RAID1-PV.
> mdadm: match found for member 0
> mdadm: Started /dev/md/OneTB-RAID1-PV with 2 devices
> 
> 
> I still get this UUID message when using the mdadm -I command.
> 
> 
> I'll attach the output of both mdadm commands above as they run now on the system. I also noticed, in the same thread linked above, that in the old output I was inquiring why both /dev/sda and /dev/sdb (the drives which make up the RAID1 array) do not appear to be recognized as having a valid container when one is required.
> 
> What is your take on GeraldCC's (gcsgcatling@bigpond.com) suggestion about /dev/sd[ab] carrying an 8e (LVM) partition type rather than the fd type that denotes RAID autodetect? If that were the magical fix (which I am not saying it can't be), why is mdadm -I /dev/md/imsm0 able to bring up the array for use as a physical volume for LVM?
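For what it's worth, the partition type is likely a red herring here: IMSM container metadata lives on the raw disks, not inside any partition, so the MBR type byte (8e vs fd) should not enter into it; fd "raid autodetect" only matters for in-kernel autostart of old 0.90-superblock arrays. A quick read-only check, using the device names from this thread:

```shell
# Read-only checks; nothing here modifies the disks.
mdadm --examine /dev/sda        # should show the Intel (IMSM) container metadata
sfdisk --print-id /dev/sda 1    # MBR type byte of partition 1 (8e, fd, ...)
```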
> 
> 
> 
> >
> > But with the mdadm you compiled yourself, which is also 3.1.4,
> > if you
> >
> > mdadm -Ss
> > mdadm -Asvv
> >
> > then it doesn't give that message, and it works.
> 
> Again, neither the self-compiled nor the Debian mdadm starts the array; at
> least, neither creates the /dev/md/OneTB-RAID1-PV device the way running
> mdadm -I /dev/md/imsm0 does.
> 
> >
> > That is very strange. It seems that the Debian mdadm is broken somehow, but
> > I'm fairly sure Debian hardly changes anything - they are *very* good at
> > getting their changes upstream first.
> >
> > I don't suppose you have an /etc/mdadm.conf as well as /etc/mdadm/mdadm.conf,
> > do you? If you did, and the two were different, Debian's mdadm would
> > behave a bit differently from upstream (they prefer different config files),
> > but I very much doubt that is the problem.
> >
> 
> There is no /etc/mdadm.conf on the filesystem only /etc/mdadm/mdadm.conf
> 
> 
> > But I guess if the self-compiled one works (even when you take the patch
> > out), then just
> > make install
> 
> I wish this was the case...
> 
> >
> > and be happy.
> >
> > NeilBrown
> >
> >
> > >
> > >
> > > Full output at: http://paste.debian.net/100103/
> > > expires:
> > >
> > > 2010-11-21 06:07:30
> 
> Thanks
> 
> -M


Thread overview: 21+ messages (newest: 2010-11-22 23:11 UTC)
2010-11-17  2:44 How to recreate a dmraid RAID array with mdadm (was: no subject) Mike Viau
2010-11-17  3:15 ` Neil Brown
2010-11-17 22:36   ` How to recreate a dmraid RAID array with mdadm Mike Viau
2010-11-18  0:11     ` Neil Brown
2010-11-18  0:56       ` Mike Viau
2010-11-18  1:28         ` Neil Brown
2010-11-18  2:05           ` Mike Viau
2010-11-18  2:32             ` Neil Brown
2010-11-18  3:03               ` Mike Viau
2010-11-18  3:17                 ` Neil Brown
2010-11-18  5:10                   ` Mike Viau
2010-11-18  5:38                     ` Neil Brown
2010-11-22 18:07                       ` Mike Viau
2010-11-22 23:11                         ` Neil Brown
  -- strict thread matches above, loose matches on Subject: below --
2010-11-14  6:50 How to recreate a dmraid RAID array with mdadm (was: no subject) Mike Viau
2010-11-15  5:21 ` Neil Brown
2010-11-17  1:02   ` Mike Viau
2010-11-17  1:26     ` Neil Brown
2010-11-17  1:39       ` John Robinson
2010-11-17  1:53         ` Neil Brown
2010-11-17  2:27           ` Mike Viau
