Subject: debian dist-upgrade etch -> squeeze broke my mdadm RAID1
From: Doug
Date: 2010-08-11  5:27 UTC
To: linux-raid


I had a PC running a Xen 2.6.18 kernel with two 320 GB disks
configured as RAID1, with an ext3 dom0 root on /dev/md0 (5 GB) and
the remainder allocated to /dev/md1, managed with LVM for my
Xen domU guest OSes.

I needed a recent kernel and drivers to access some new hardware in
dom0, so I ran apt-get dist-upgrade (after some initial trouble with
incompatible dpkg versions, but that's irrelevant here).
After the upgrade and installation of a new 2.6.32 kernel, the kernel
wouldn't boot because it couldn't find the root partition - it 
dropped into the BusyBox/initramfs prompt.

The md devices were /dev/md/imsm0, symlinked to /dev/md127.
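For the record, that is what poking around at the BusyBox prompt
showed. Something like the following works there (a sketch; it
assumes the initramfs includes the mdadm binary and the usual
applets):

$ cat /proc/mdstat                     # what, if anything, got assembled
$ ls -l /dev/md/                       # device nodes/symlinks that were created
$ mdadm --assemble --scan --verbose    # retry assembly and watch the errors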

Fortunately I was able to reboot into my original 2.6.18 kernel from
GRUB and run:


$ dpkg-reconfigure mdadm

Generating array device nodes... done.
update-initramfs: Generating /boot/initrd.img-2.6.32-5-xen-686
W: mdadm: unchecked configuration file: /etc/mdadm/mdadm.conf
W: mdadm: please read /usr/share/doc/mdadm/README.upgrading-2.5.3.gz .
I: mdadm: auto-generated temporary mdadm.conf configuration file.
I: mdadm: will start all available MD arrays from the initial ramdisk.
I: mdadm: use `dpkg-reconfigure --priority=low mdadm` to change this.

Basically, this runs the script /usr/share/mdadm/mkconf, which
evaluates the expression

  if ! $MDADM --examine --scan --config=partitions; then

and appends the scan output to /etc/mdadm/mdadm.conf.
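Incidentally, the same can be done by hand instead of going through
dpkg-reconfigure -- a rough sketch (the image name is taken from the
update-initramfs line above):

$ /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf   # regenerate the config
$ update-initramfs -u -k 2.6.32-5-xen-686           # rebuild the initrd with it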

$ mdadm --examine --scan
ARRAY metadata=imsm UUID=9a1dfb7e:5783ddc7:a614552a:eb9c1135
ARRAY /dev/md/Volume0 container=9a1dfb7e:5783ddc7:a614552a:eb9c1135 member=0 UUID=42f9906a:f4c0a788:5a5a74eb:154a22ef
ARRAY /dev/md0 UUID=74553564:5d83fae3:8520d040:09f92738
ARRAY /dev/md1 UUID=f5971c98:57f6fca7:5a7f0c3a:4f19787a

I am guessing that at boot, the 2.6.32 initramfs doesn't understand
the /dev/md/Volume0 container line, and its built-in mdadm aborts
before assembling the /dev/md0 and /dev/md1 arrays.
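One way to check what the initramfs actually embedded, instead of
guessing -- a sketch, assuming a gzip-compressed image and GNU cpio:

$ zcat /boot/initrd.img-2.6.32-5-xen-686 | cpio -t | grep mdadm
$ zcat /boot/initrd.img-2.6.32-5-xen-686 | cpio -i --to-stdout etc/mdadm/mdadm.conf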

If I replace --examine with --detail in /usr/share/mdadm/mkconf, i.e.

  if ! $MDADM --detail --scan --config=partitions; then

(--examine reads superblocks off the component devices, so it also
reports the on-disk IMSM container metadata; --detail only describes
arrays that are currently assembled and running)

$ mdadm --detail --scan
ARRAY /dev/md0 metadata=0.90 UUID=74553564:5d83fae3:8520d040:09f92738
ARRAY /dev/md1 metadata=0.90 UUID=f5971c98:57f6fca7:5a7f0c3a:4f19787a

and rerun 
$ dpkg-reconfigure mdadm

Now /dev/md0 is assembled and mounted as root, and my 2.6.32 kernel
boots. Problem sussed. But what was really going wrong, and why?
I read that devices with version 0.90 superblocks (metadata=0.90) are
automatically assembled by the kernel at boot time, but apparently that
didn't happen.
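
For what it's worth, my understanding is that in-kernel autodetect
only runs for 0.90-superblock arrays on partitions of type 0xfd, and
only when the md driver is built into the kernel rather than loaded
as a module; if the Debian 2.6.32 kernel ships md as a module (which
I believe it does), assembly is entirely up to mdadm in the initramfs,
and a bad mdadm.conf there would explain it. A rough check of both
conditions:

$ fdisk -l /dev/sda | grep -i raid                 # type fd = Linux raid autodetect
$ grep BLK_DEV_MD /boot/config-2.6.32-5-xen-686    # =y built in, =m module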


Thanks
Doug

P.S.

ii  linux-image-2.6.18-6-xen-686             2.6.18.dfsg.1-26etch2 
ii  linux-image-2.6.32-5-xen-686             2.6.32-18 
ii  mdadm                                    3.0.3-2
ii  initramfs-tools                          0.97.2
ii  lvm2                                     2.02.66-2


$ mdadm --examine /dev/sda
/dev/sda:                                 
          Magic : Intel Raid ISM Cfg Sig. 
        Version : 1.1.00                  
    Orig Family : 00000000                
         Family : 828006a6                
     Generation : 00000d7f                
           UUID : 9a1dfb7e:5783ddc7:a614552a:eb9c1135
       Checksum : 4b2aa46c correct                   
    MPB Sectors : 2                                  
          Disks : 2                                  
   RAID Devices : 1                                  

  Disk00 Serial : 9SZ0EQJ2
          State : active  
             Id : 00000000
    Usable Size : 625137934 (298.09 GiB 320.07 GB)

[Volume0]:
           UUID : 42f9906a:f4c0a788:5a5a74eb:154a22ef
     RAID Level : 1
        Members : 2
      This Slot : 0
     Array Size : 625137664 (298.09 GiB 320.07 GB)
   Per Dev Size : 625137664 (298.09 GiB 320.07 GB)
  Sector Offset : 0
    Num Stripes : 2441944
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : migrating: repair
      Map State : normal <-- normal
    Dirty State : dirty

  Disk01 Serial : 9SZ0EH35
          State : active
             Id : 00000100
    Usable Size : 625137934 (298.09 GiB 320.07 GB)
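
Presumably that whole-disk Intel/IMSM signature is what makes
mdadm --examine --scan emit the container and Volume0 lines in the
first place: the disks carry BIOS fakeraid metadata in addition to
the 0.90 superblocks on the partitions. mdadm 3.x can also report
whether the platform actually supports IMSM (a sketch; I haven't
needed it here):

$ mdadm --detail-platform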






Subject: Re: debian dist-upgrade etch -> squeeze broke my mdadm RAID1
From: Tim Small
Date: 2010-08-11  8:16 UTC
To: Doug; +Cc: linux-raid

On 11/08/10 06:27, Doug wrote:
> After the upgrade and installation of a new 2.6.32 kernel, the kernel
> wouldn't boot because it couldn't find the root partition

I believe Debian only supports upgrading between consecutive major
versions, so you should have gone 4.0.x (etch) -> 5.0.x (lenny) ->
6.0-pre (squeeze). If you didn't, that would explain the problem
you're seeing, which is really Debian-specific and relates (I believe)
to the change in device naming with the switch to udev that occurred
between etch and lenny (and is presumably handled when upgrading
between those two releases).

HTH,

Tim.

-- 
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53  http://seoss.co.uk/ +44-(0)1273-808309


