linux-raid.vger.kernel.org archive mirror
* Using mdadm instead of dmraid for BIOS-RAID root volume
@ 2013-10-08 12:17 Brian Candler
  2013-10-08 14:19 ` Brian Candler
  2013-10-08 14:37 ` Jes Sorensen
  0 siblings, 2 replies; 10+ messages in thread
From: Brian Candler @ 2013-10-08 12:17 UTC (permalink / raw)
  To: linux-raid

I have a number of systems with Ubuntu 12.04 and Intel BIOS-RAID 
mirrored pairs for the boot disk. These come up as dmraid, with the root 
filesystem on /dev/mapper/isw_XXXXXXXXXX_Volume0p1.

I would like to convert them to use mdadm instead (so that, for example,
I can monitor them with /proc/mdstat).
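
For illustration, the sort of monitoring I have in mind (the md device
name below is hypothetical):

cat /proc/mdstat                      # quick overview of all arrays
mdadm --detail /dev/md126             # full state of a single array
mdadm --monitor --scan --daemonise    # background alerts on failure events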

The mdadm version is 3.2.5, and the manpage says that ddf and imsm 
metadata are supported. mdadm --examine confirms this:

# mdadm --examine /dev/sda
/dev/sda:
           Magic : Intel Raid ISM Cfg Sig.
         Version : 1.1.00
     Orig Family : 8bc6b015
          Family : 8bc6b015
      Generation : 00000010
      Attributes : All supported
            UUID : 2ff8e106:XXXXXXXX:XXXXXXXX:XXXXXXXX
        Checksum : b92d117e correct
     MPB Sectors : 1
           Disks : 2
    RAID Devices : 1

   Disk00 Serial : S14CNEAXXXXXXXX
           State : active
              Id : 00000000
     Usable Size : 234435342 (111.79 GiB 120.03 GB)

[Volume0]:
            UUID : a7fb0d20:XXXXXXXX:XXXXXXXX:XXXXXXXX
      RAID Level : 1
         Members : 2
           Slots : [UU]
     Failed disk : none
       This Slot : 0
      Array Size : 222715904 (106.20 GiB 114.03 GB)
    Per Dev Size : 222716168 (106.20 GiB 114.03 GB)
   Sector Offset : 0
     Num Stripes : 869984
      Chunk Size : 64 KiB
        Reserved : 0
   Migrate State : idle
       Map State : uninitialized
     Dirty State : clean

   Disk01 Serial : S14CNEAXXXXXXXX
           State : active
              Id : 00000001
     Usable Size : 234435342 (111.79 GiB 120.03 GB)

(ditto for /dev/sdb)

Since this machine is going to need a reinstall for an unrelated reason 
anyway, I thought as an experiment I'd first try to convert it to use 
mdadm at boot.

What I did was:

1. apt-get remove dmraid; apt-get autoremove

This gets rid of:

/lib/udev/rules.d/97-dmraid.rules
/usr/share/initramfs-tools/scripts/local-top/dmraid
/usr/share/initramfs-tools/hooks/dmraid

but we still have:

/lib/udev/rules.d/64-md-raid.rules
/usr/share/initramfs-tools/hooks/mdadm
/usr/share/initramfs-tools/scripts/mdadm-functions
/usr/share/initramfs-tools/scripts/local-premount/mdadm
/usr/share/initramfs-tools/scripts/init-premount/mdadm
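
For reference, one way to check what actually ends up inside the
generated initramfs (lsinitramfs ships with Ubuntu's initramfs-tools;
the image path below is just the usual default):

# expect to see mdadm and its config in there, and no dmraid pieces
lsinitramfs /boot/initrd.img-$(uname -r) | grep -E 'mdadm|dmraid'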

2. /usr/share/mdadm/mkconf >/etc/mdadm/mdadm.conf

This generated a config containing:

ARRAY metadata=imsm UUID=2ff8...
ARRAY /dev/md/Volume0 container=2ff8... member=0 UUID=...


However, after a reboot, the machine came up with /dev/sdb1 as its root. 
I was hoping for /dev/md/Volume0.

3. Reboot with "raid=autodetect md=1"

No difference, although it did come up with /dev/sda1 as its root 
(possibly a coincidental change).

4. After a few more failed experiments with kernel command-line options, 
I tried to put it back using "apt-get install dmraid". However, after 
that, further boots failed - the kernel panicked because it couldn't 
find the root partition with the given UUID. I guess this means that 
neither dmraid nor mdadm was able to assemble the volume.

The screen shows:

md: Scanned 0 and added 0 devices.
md: autorun ..
md: ... autorun DONE.
VFS: Cannot open root device "UUID=..." or unknown-block(0,0)
Please append a correct "root=" boot option; here are the available 
partitions
(then shows sda, sda1, sdb, sdb1 and a backtrace)

Unfortunately I cannot scroll back to see previous lines. I *can* boot 
the system in a crippled way with root=/dev/sda1 though.

Back at the shell, it looks like the dmraid volume is still present:

# dmraid -r
/dev/sdb: isw, "isw_XXXXXXXXXX", GROUP, ok, 234441646 sectors, data@ 0
/dev/sda: isw, "isw_XXXXXXXXXX", GROUP, ok, 234441646 sectors, data@ 0
# dmraid -s
*** Group superset isw_XXXXXXXXXX
--> Subset
name   : isw_XXXXXXXXXX_Volume0
size   : 222716160
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
# ls /dev/mapper
control
#

The machine is probably toast anyway, as it has written to /dev/sdb1 and 
/dev/sda1 independently.

Anyway, I'm not so worried about having broken this machine, as it 
needed a reinstall anyway, but I do wonder what would have been the 
correct way to get mdraid instead of dmraid at boot time for this root 
volume?

Thanks,

Brian.


* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-08 12:17 Using mdadm instead of dmraid for BIOS-RAID root volume Brian Candler
@ 2013-10-08 14:19 ` Brian Candler
  2013-10-08 18:36   ` Martin Wilck
  2013-10-08 14:37 ` Jes Sorensen
  1 sibling, 1 reply; 10+ messages in thread
From: Brian Candler @ 2013-10-08 14:19 UTC (permalink / raw)
  To: linux-raid

> Anyway, I'm not so worried about having broken this machine, as it 
> needed a reinstall anyway, but I do wonder what would have been the 
> correct way to get mdraid instead of dmraid at boot time for this root 
> volume?

After some more searching, it looks like the udev rules were nobbled to 
disable this in
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1030292

A possible way to re-enable is here:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1054948/comments/9
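
Roughly speaking, the idea seems to be restoring an incremental-assembly
udev rule along these lines (illustrative only - the exact change is in
the linked comment; isw_raid_member is what blkid reports for IMSM
member disks):

SUBSYSTEM=="block", ACTION=="add", ENV{ID_FS_TYPE}=="isw_raid_member", RUN+="/sbin/mdadm --incremental $env{DEVNAME}"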

I'm a bit concerned about the issues around clean shutdown, and hence 
whether this is really production-ready yet.

Regards,

Brian.



* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-08 12:17 Using mdadm instead of dmraid for BIOS-RAID root volume Brian Candler
  2013-10-08 14:19 ` Brian Candler
@ 2013-10-08 14:37 ` Jes Sorensen
  1 sibling, 0 replies; 10+ messages in thread
From: Jes Sorensen @ 2013-10-08 14:37 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid

Brian Candler <b.candler@pobox.com> writes:
> I have a number of systems with Ubuntu 12.04 and Intel BIOS-RAID
> mirrored pairs for the boot disk. These come up as dmraid, with the
> root filesystem on /dev/mapper/isw_XXXXXXXXXX_Volume0p1.
>
> I would like to convert them to use mdadm instead (so for example I
> can monitor them with /proc/mdstat)
>
> The mdadm version is 3.2.5, and the manpage says that ddf and imsm
> metadata is supported. mdadm --examine confirms this:
>
> |# mdadm --examine /dev/sda
> /dev/sda:
>           Magic : Intel Raid ISM Cfg Sig.
>         Version : 1.1.00
>     Orig Family : 8bc6b015
>          Family : 8bc6b015
>      Generation : 00000010
>      Attributes : All supported
>            UUID : 2ff8e106:XXXXXXXX:XXXXXXXX:XXXXXXXX
>        Checksum : b92d117e correct
>     MPB Sectors : 1
>           Disks : 2
>    RAID Devices : 1
>
>   Disk00 Serial : S14CNEAXXXXXXXX
>           State : active
>              Id : 00000000
>     Usable Size : 234435342 (111.79 GiB 120.03 GB)
>
> [Volume0]:
>            UUID : a7fb0d20:XXXXXXXX:XXXXXXXX:XXXXXXXX
>      RAID Level : 1
>         Members : 2
>           Slots : [UU]
>     Failed disk : none
>       This Slot : 0
>      Array Size : 222715904 (106.20 GiB 114.03 GB)
>    Per Dev Size : 222716168 (106.20 GiB 114.03 GB)
>   Sector Offset : 0
>     Num Stripes : 869984
>      Chunk Size : 64 KiB
>        Reserved : 0
>   Migrate State : idle
>       Map State : uninitialized
>     Dirty State : clean
>
>   Disk01 Serial : S14CNEAXXXXXXXX
>           State : active
>              Id : 00000001
>     Usable Size : 234435342 (111.79 GiB 120.03 GB)
>
> |(ditto for /dev/sdb)
>
> Since this machine is going to need a reinstall for an unrelated
> reason anyway, I thought as an experiment I'd first try to convert it
> to use mdadm at boot.
>
> What I did was:
>
> 1. apt-get remove dmraid; apt-get autoremove
>
> This gets rid of:
>
> /lib/udev/rules.d/97-dmraid.rules
> /usr/share/initramfs-tools/scripts/local-top/dmraid
> /usr/share/initramfs-tools/hooks/dmraid
>
> but we still have:
>
> /lib/udev/rules.d/64-md-raid.rules
> /usr/share/initramfs-tools/hooks/mdadm
> /usr/share/initramfs-tools/scripts/mdadm-functions
> /usr/share/initramfs-tools/scripts/local-premount/mdadm
> /usr/share/initramfs-tools/scripts/init-premount/mdadm
>
> 2. /usr/share/mdadm/mkconf >/etc/mdadm/mdadm.conf
>
> ARRAY metadata=imsm UUID=2ff8...
> ARRAY /dev/md/Volume0 container=2ff8... member=0 UUID=...

You may need

AUTO +imsm +1.x -all

or similar in your /etc/mdadm.conf

Afterwards you may also need to recreate your initramfs or whatever it
is that Ubuntu uses. Boot devices are assembled in the initramfs (at
least in Fedora), so if you still have the old initramfs sitting around,
it will not see your changes.
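
Something along these lines, as a sketch (I am going from memory on the
Ubuntu side, so paths and commands may need adjusting) - in
/etc/mdadm/mdadm.conf:

AUTO +imsm +1.x -all
ARRAY metadata=imsm UUID=2ff8...
ARRAY /dev/md/Volume0 container=2ff8... member=0 UUID=...

and then rebuild the initramfs so the new config actually gets included:

update-initramfs -u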

Cheers,
Jes


* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-08 14:19 ` Brian Candler
@ 2013-10-08 18:36   ` Martin Wilck
  2013-10-10  8:11     ` Brian Candler
  2013-10-11 13:03     ` Brian Candler
  0 siblings, 2 replies; 10+ messages in thread
From: Martin Wilck @ 2013-10-08 18:36 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid

On 10/08/2013 04:19 PM, Brian Candler wrote:
>  > Anyway, I'm not so worried about having broken this machine, as it 
> needed a reinstall anyway, but I do wonder what would have been the 
> correct way to get mdraid instead of dmraid at boot time for this root 
> volume?
> 
> After some more searching, it looks like the udev rules were nobbled to 
> disable this in
> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1030292
> 
> A possible way to re-enable is here:
> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1054948/comments/9
> 
> I'm a bit concerned about the issues around clean shutdown, and hence 
> whether this is really production-ready yet.

In general, this works. I have seen it work with CentOS, Fedora, and
various SUSE distributions. It may require some work on Ubuntu's side.

 1. The udev rules for incremental mdadm autoassembly need to be in
place. The upstream rules should be fine. They can normally coexist with
the rules for dmraid.
 2. For shutdown, the distro must take care not to kill mdmon before file
systems are unmounted, and to run mdadm --wait-clean after any write
access to file systems is finished (rough sketch below).
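
As a rough sketch of that ordering (the actual shutdown hook is
distro-specific, so treat this as illustrative only):

# after all file systems have been unmounted or remounted read-only:
mdadm --wait-clean --scan    # wait for the array metadata to be marked clean
# ... and only after this may mdmon be killed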

Martin


* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-08 18:36   ` Martin Wilck
@ 2013-10-10  8:11     ` Brian Candler
  2013-10-11 13:03     ` Brian Candler
  1 sibling, 0 replies; 10+ messages in thread
From: Brian Candler @ 2013-10-10  8:11 UTC (permalink / raw)
  To: Martin Wilck; +Cc: linux-raid

On 08/10/2013 19:36, Martin Wilck wrote:
> On 10/08/2013 04:19 PM, Brian Candler wrote:
>>   > Anyway, I'm not so worried about having broken this machine, as it
>> needed a reinstall anyway, but I do wonder what would have been the
>> correct way to get mdraid instead of dmraid at boot time for this root
>> volume?
>>
>> After some more searching, it looks like the udev rules were nobbled to
>> disable this in
>> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1030292
>>
>> A possible way to re-enable is here:
>> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1054948/comments/9
>>
>> I'm a bit concerned about the issues around clean shutdown, and hence
>> whether this is really production-ready yet.
> In general, this works. I have seen it work with CentOS, Fedora, and
> various SUSE distributions. It may require some work on Ubuntu's side.
>
>   1 The udev rules for incremental mdadm autoassembly need to be in
> place. The upstream rules should be fine. They can normally coexist with
> the rules for dmraid.
>   2 For shutdown, the distro must take care not to kill mdmon before file
> systems are unmounted, and to run mdadm --wait-clean after any write
> access to file systems is finished.
Details of what I tried for Ubuntu, and the results, are at
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1054948/comments/16

Although it kind of works, there are a lot of rough edges: most 
seriously, it locks up during shutdown. Unfortunately this isn't ready 
for production.

Also, I tried booting Ubuntu 13.10beta2, and it is still using dmraid:
https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1054948/comments/17

Regards,

Brian.



* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-08 18:36   ` Martin Wilck
  2013-10-10  8:11     ` Brian Candler
@ 2013-10-11 13:03     ` Brian Candler
  2013-10-11 18:13       ` Martin Wilck
  1 sibling, 1 reply; 10+ messages in thread
From: Brian Candler @ 2013-10-11 13:03 UTC (permalink / raw)
  To: Martin Wilck; +Cc: linux-raid

On 08/10/2013 19:36, Martin Wilck wrote:
> On 10/08/2013 04:19 PM, Brian Candler wrote:
>>   > Anyway, I'm not so worried about having broken this machine, as it
>> needed a reinstall anyway, but I do wonder what would have been the
>> correct way to get mdraid instead of dmraid at boot time for this root
>> volume?
>>
>> After some more searching, it looks like the udev rules were nobbled to
>> disable this in
>> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1030292
>>
>> A possible way to re-enable is here:
>> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1054948/comments/9
>>
>> I'm a bit concerned about the issues around clean shutdown, and hence
>> whether this is really production-ready yet.
> In general, this works. I have seen it work with CentOS, Fedora, and
> various SUSE distributions.
FWIW, I found problems on stock CentOS 6.4.

I had created two RAID volumes within the BIOS:
"BOOT" (4GB)
"LVM" (rest of disk)

These are correctly detected and come up as md0 (container), md125 
(BOOT), md126 (LVM). In a previous install using Debian I had set these 
up as ext4 and LVM respectively.

In the CentOS graphical installer, they are shown as:

V Hard Drives
     md125    4096    ext4
     md126    902246 vg   Physical volume (LVM)

(md0 is not shown)

Problem: if I double-click on md125, or select md125 and click Edit..., 
nothing happens. Therefore I cannot mark it as being used for the /boot 
filesystem.

However, if I double-click on md126, it correctly pops up "You cannot 
edit this drive: This device is part of the LVM volume group 'vg'."

Then, after I deleted all the logical volumes and the volume group, I 
expected to be able to edit md126 - but I can't. Again, nothing happens 
when I double-click on it. I can neither partition it nor change it to 
ext4 and mount it.

So it looks like there's work remaining to make this usable in CentOS too.

Regards,

Brian.



* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-11 13:03     ` Brian Candler
@ 2013-10-11 18:13       ` Martin Wilck
  2013-10-11 19:49         ` Brian Candler
  0 siblings, 1 reply; 10+ messages in thread
From: Martin Wilck @ 2013-10-11 18:13 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid

On 10/11/2013 03:03 PM, Brian Candler wrote:
> On 08/10/2013 19:36, Martin Wilck wrote:
>> On 10/08/2013 04:19 PM, Brian Candler wrote:
>>>   > Anyway, I'm not so worried about having broken this machine, as it
>>> needed a reinstall anyway, but I do wonder what would have been the
>>> correct way to get mdraid instead of dmraid at boot time for this root
>>> volume?
>>>
>>> After some more searching, it looks like the udev rules were nobbled to
>>> disable this in
>>> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1030292
>>>
>>> A possible way to re-enable is here:
>>> https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1054948/comments/9
>>>
>>> I'm a bit concerned about the issues around clean shutdown, and hence
>>> whether this is really production-ready yet.
>> In general, this works. I have seen it work with CentOS, Fedora, and
>> various SUSE distributions.
> FWIW, I found problems on stock CentOS 6.4.
> 
> I had created two RAID volumes within the BIOS:
> "BOOT" (4GB)
> "LVM" (rest of disk)
> 
> These are correctly detected and come up as md0 (container), md125 
> (BOOT), md126 (LVM). In a previous install using Debian I had set these 
> up as ext4 and LVM respectively.
> 
> In the CentOS graphical installer, they are shown as:
> 
> V Hard Drives
>      md125    4096    ext4
>      md126    902246 vg   Physical volume (LVM)
> 
> (md0 is not shown)
> 
> Problem: if I double-click on md125, or select md125 and click Edit..., 
> nothing happens. Therefore I cannot mark it as being used for the /boot 
> filesystem.
> 
> However, if I double-click on md126, it correctly pops up "You cannot 
> edit this drive: This device is part of the LVM volume group 'vg'."
> 
> Then if I delete all the logical volumes, and the volume group, I 
> expected to be able to edit md126 - but I can't. Again, just nothing 
> happens when I double-click on it. I can neither partition it, nor 
> change it to ext4 and mount it.
> 
> So it looks like there's work remaining to make this usable in CentOS too.

Maybe, but your setup is pretty unusual. These MD arrays are "disks" for
the installer, and thus would need to be partitioned. I believe it would
work better that way. I have never tried an LVM PV on a whole disk.
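
For example (device names purely illustrative), partitioning the array
and putting the PV inside a partition:

parted -s /dev/md126 mklabel msdos               # treat the array like a disk
parted -s /dev/md126 mkpart primary 1MiB 100%
pvcreate /dev/md126p1                            # PV inside the partition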

Martin

> 
> Regards,
> 
> Brian.
> 



* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-11 18:13       ` Martin Wilck
@ 2013-10-11 19:49         ` Brian Candler
  2013-10-13 16:44           ` Martin Wilck
  0 siblings, 1 reply; 10+ messages in thread
From: Brian Candler @ 2013-10-11 19:49 UTC (permalink / raw)
  To: Martin Wilck; +Cc: linux-raid

> Maybe, but your setup is pretty unusual. These MD arrays are "disks" 
> for the installer, and thus would need to be partitioned. I believe it 
> would work better that way. I have never tried an LVM PV on a whole disk.

It wouldn't let me partition md125 either. It just did nothing when I 
clicked "Edit".



* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-11 19:49         ` Brian Candler
@ 2013-10-13 16:44           ` Martin Wilck
  2013-10-13 18:12             ` Brian Candler
  0 siblings, 1 reply; 10+ messages in thread
From: Martin Wilck @ 2013-10-13 16:44 UTC (permalink / raw)
  To: Brian Candler; +Cc: linux-raid

On 10/11/2013 09:49 PM, Brian Candler wrote:
>  > Maybe, but your setup is pretty unusual. These MD arrays are "disks" 
> for the installer, and thus would need to be partitioned. I believe it 
> would work better that way. I have never tried an LVM PV on a whole disk.
> 
> It wouldn't let me partition md125 either. It just did nothing when I 
> clicked "Edit".

I assume that that has something to do with the pre-existing PV. But
that's just a guess. I have successfully installed CentOS on BIOS RAID
on several occasions. In any case, what you describe is "just" an
anaconda problem. Annoying of course, but you should get in touch with
the anaconda people to solve it.

Martin




* Re: Using mdadm instead of dmraid for BIOS-RAID root volume
  2013-10-13 16:44           ` Martin Wilck
@ 2013-10-13 18:12             ` Brian Candler
  0 siblings, 0 replies; 10+ messages in thread
From: Brian Candler @ 2013-10-13 18:12 UTC (permalink / raw)
  To: Martin Wilck; +Cc: linux-raid

On 13/10/2013 17:44, Martin Wilck wrote:
> On 10/11/2013 09:49 PM, Brian Candler wrote:
>>   > Maybe, but your setup is pretty unusual. These MD arrays are "disks"
>> for the installer, and thus would need to be partitioned. I believe it
>> would work better that way. I have never tried an LVM PV on a whole disk.
>>
>> It wouldn't let me partition md125 either. It just did nothing when I
>> clicked "Edit".
> I assume that that has something to do with the pre-existing PV.
There were two md volumes: one was an existing PV; the other held an 
existing ext4 filesystem directly on the device, and was neither 
partitioned nor a PV.

The one which was ext4 did nothing when I tried to partition it (by 
clicking the "Edit..." button). The one which was a PV gave an error 
when I tried to edit it (fair enough). But then after I removed all the 
LVs and the VG, clicking "Edit..." on that one no longer gave an error, 
but it just did nothing.

> what you describe is "just" an
> anaconda problem.
True enough.


