* RE: Broken RAID1 boot arrays
[not found] <1273616411.5140.25.camel@localhost.localdomain>
@ 2010-05-11 23:59 ` Leslie Rhorer
2010-05-12 0:13 ` Leslie Rhorer
0 siblings, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-11 23:59 UTC (permalink / raw)
To: linux-raid
> > remote access. This is a headless system, and I really can't effectively
> > work with a local console, plus I need to mount the /boot array in order to
> > properly edit the initrd. I suppose I could mount the drive as a non-array
> > and then force a sync to the second drive, but I'd rather not.
>
> The debian install discs should do most of that,
Yeah, but ssh is a bit of a pain to work with from a live CD when
one must repeatedly reboot the system. I'm using an Ubuntu live CD right
now, but I have to run `sudo apt-get install openssh-server` on the
server every time I boot (and as I mentioned, any console access on the
machine is rather difficult), and then I have to run `ssh -o
UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ubuntu@backup`
from one of my workstations. Once or twice isn't all that bad, but more
than that gets old in a hurry.
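For repeated boots of the same live CD, an ssh_config stanza on the
workstation saves retyping those options each time. A small sketch, assuming
OpenSSH on the workstation and that the live system keeps coming up as host
"backup" with user "ubuntu":
# ~/.ssh/config on the workstation ("backup-live" is a hypothetical alias)
Host backup-live
    HostName backup
    User ubuntu
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
After that, `ssh backup-live` is equivalent to the long command above.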
> and lenny or later
> allows you to setup a ssh server if you select expert mode.
"Lenny" isn't really a good option. I'd much rather have a later
kernel. By "Setup", do you mean install the OS, or enter recovery mode? I
don't want to try to install the OS: that could be a disaster. BTW, the
2.6.32 kernel is moving the IDE disks all the way from /hda and /hdb to /sdj
and /sdk. I think that's part of why it's breaking: the existing mdadm.conf
doesn't scan that high.
> If that's not complete enough then I believe http://www.sysresccd.org/
> is what you are after.
Thanks. I may check it out if I don't get this working in a couple
more boot cycles.
* RE: Broken RAID1 boot arrays
2010-05-11 23:59 ` Broken RAID1 boot arrays Leslie Rhorer
@ 2010-05-12 0:13 ` Leslie Rhorer
2010-05-13 1:31 ` Leslie Rhorer
0 siblings, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-12 0:13 UTC (permalink / raw)
To: linux-raid
> don't want to try to install the OS: that could be a disaster. BTW, the
> 2.6.32 kernel is moving the IDE disks all the way from /hda and /hdb to /sdj
> and /sdk. I think that's part of why it's breaking: the existing mdadm.conf
> doesn't scan that high.
OK, maybe not. I re-arranged things so the boot drives are /dev/sda
and /dev/sdb, but it still isn't working. When I boot the Ubuntu live CD
and install mdadm, it creates the following mdadm.conf:
ubuntu@ubuntu:~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2
UUID=d6a2c60b:7345e957:05aefe0b:f8d1527f name='Backup':1
ARRAY /dev/md/2 level=raid1 metadata=1.2 num-devices=2
UUID=d45ff663:9e53774c:6fcf9968:21692025 name='Backup':2
ARRAY /dev/md/3 level=raid1 metadata=1.2 num-devices=2
UUID=3615c4a2:33786b6d:b13863d9:458cd054 name='Backup':3
ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=8
UUID=940ae4e4:04057ffc:5e92d2fb:63e3efb7 name='Backup':0
# This file was auto-generated on Tue, 11 May 2010 23:45:16 +0000
# by mkconf $Id$
If I try to auto-assemble the arrays, it fails:
ubuntu@ubuntu:~$ sudo mdadm --assemble --scan
mdadm: no devices found for /dev/md/1
mdadm: no devices found for /dev/md/2
mdadm: no devices found for /dev/md/3
mdadm: no devices found for /dev/md/0
Yet I can manually assemble them with no issues:
ubuntu@ubuntu:~$ sudo mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2
mdadm: /dev/md2 has been started with 2 drives.
I think this failure to auto-assemble lies at the root of the reason
the arrays are not coming up (and consequently the system is not booting),
but why is this happening? Below is the result of the --examine for
/dev/md2:
ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sda2
/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : d45ff663:9e53774c:6fcf9968:21692025
Name : 'Backup':2
Creation Time : Sun Dec 20 04:59:43 2009
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 554884828 (264.59 GiB 284.10 GB)
Array Size : 554884828 (264.59 GiB 284.10 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : b77a6eb8:c07a50f5:3bff3afb:846652a2
Internal Bitmap : 8 sectors from superblock
Update Time : Tue May 11 06:27:13 2010
Checksum : 96af22ac - correct
Events : 14920
Array Slot : 2 (failed, 1, 0)
Array State : Uu 1 failed
ubuntu@ubuntu:~$
ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdb2
/dev/sdb2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : d45ff663:9e53774c:6fcf9968:21692025
Name : 'Backup':2
Creation Time : Sun Dec 20 04:59:43 2009
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 554884828 (264.59 GiB 284.10 GB)
Array Size : 554884828 (264.59 GiB 284.10 GB)
Data Offset : 272 sectors
Super Offset : 8 sectors
State : clean
Device UUID : bc74e25e:14e6562a:f136cf70:a2f6c6ac
Internal Bitmap : 8 sectors from superblock
Update Time : Tue May 11 06:27:13 2010
Checksum : f2324fd6 - correct
Events : 14920
Array Slot : 1 (failed, 1, 0)
Array State : uU 1 failed
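If the generated ARRAY lines are somehow not matching, one hedged
workaround is to assemble by array UUID (taken from the --examine output
above), which sidesteps the name= field entirely:
sudo mdadm --assemble /dev/md2 --uuid=d45ff663:9e53774c:6fcf9968:21692025
sudo mdadm --detail /dev/md2    # confirm both members came up
That is only a sketch of a workaround; it does not explain why the scan fails.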
* RE: Broken RAID1 boot arrays
2010-05-12 0:13 ` Leslie Rhorer
@ 2010-05-13 1:31 ` Leslie Rhorer
2010-05-13 4:15 ` Daniel Reurich
0 siblings, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-13 1:31 UTC (permalink / raw)
To: linux-raid
Hello? Anyone? I'm flummoxed, here. I tried to write in a manual
assembly of the arrays in the initrd, but so far I haven't been able to get
it to work. One way or another, it just hangs when running
/scripts/local-top/mdadm in the initrd. Even `ls -1 /dev/sd*` returns an
error.
It's also really odd that I can assemble and mount the root and boot
arrays, but under Ubuntu I can't even assemble the swap array. It complains
that the first member of the array is busy and refuses to start /dev/md3.
The results of --examine look identical to those listed below, except of
course for the partition specific entries (size, drive and array UUID,
events, etc).
I really need to get this machine back on line, and any suggestions
are greatly appreciated.
> > don't want to try to install the OS: that could be a disaster. BTW, the
> > 2.6.32 kernel is moving the IDE disks all the way from /hda and /hdb to
> > /sdj
> > and /sdk. I think that's part of why it's breaking: the existing
> > mdadm.conf
> > doesn't scan that high.
>
> OK, maybe not. I re-arranged things so the boot drives are /dev/sda
> and /dev/sdb, but it still isn't working. When I boot the Ubuntu live CD
> and install mdadm, it creates the following mdadm.conf:
>
> ubuntu@ubuntu:~$ cat /etc/mdadm/mdadm.conf
> # mdadm.conf
> #
> # Please refer to mdadm.conf(5) for information about this file.
> #
>
> # by default, scan all partitions (/proc/partitions) for MD superblocks.
> # alternatively, specify devices to scan, using wildcards if desired.
> DEVICE partitions
>
> # auto-create devices with Debian standard permissions
> CREATE owner=root group=disk mode=0660 auto=yes
>
> # automatically tag new arrays as belonging to the local system
> HOMEHOST <system>
>
> # instruct the monitoring daemon where to send mail alerts
> MAILADDR root
>
> # definitions of existing MD arrays
> ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2
> UUID=d6a2c60b:7345e957:05aefe0b:f8d1527f name='Backup':1
> ARRAY /dev/md/2 level=raid1 metadata=1.2 num-devices=2
> UUID=d45ff663:9e53774c:6fcf9968:21692025 name='Backup':2
> ARRAY /dev/md/3 level=raid1 metadata=1.2 num-devices=2
> UUID=3615c4a2:33786b6d:b13863d9:458cd054 name='Backup':3
> ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=8
> UUID=940ae4e4:04057ffc:5e92d2fb:63e3efb7 name='Backup':0
>
> # This file was auto-generated on Tue, 11 May 2010 23:45:16 +0000
> # by mkconf $Id$
>
> If I try to auto-assemble the arrays, it fails:
>
> ubuntu@ubuntu:~$ sudo mdadm --assemble --scan
> mdadm: no devices found for /dev/md/1
> mdadm: no devices found for /dev/md/2
> mdadm: no devices found for /dev/md/3
> mdadm: no devices found for /dev/md/0
>
> Yet I can manually assemble them with no issues:
>
> ubuntu@ubuntu:~$ sudo mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2
> mdadm: /dev/md2 has been started with 2 drives.
>
> I think this failure to auto-assemble lies at the root of the reason
> the arrays are not coming up (and consequently the system is not booting),
> but why is this happening? Below is the result of the --examine for
> /dev/md2:
>
> ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sda2
> /dev/sda2:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : d45ff663:9e53774c:6fcf9968:21692025
> Name : 'Backup':2
> Creation Time : Sun Dec 20 04:59:43 2009
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 554884828 (264.59 GiB 284.10 GB)
> Array Size : 554884828 (264.59 GiB 284.10 GB)
> Data Offset : 272 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : b77a6eb8:c07a50f5:3bff3afb:846652a2
>
> Internal Bitmap : 8 sectors from superblock
> Update Time : Tue May 11 06:27:13 2010
> Checksum : 96af22ac - correct
> Events : 14920
>
>
> Array Slot : 2 (failed, 1, 0)
> Array State : Uu 1 failed
> ubuntu@ubuntu:~$
> ubuntu@ubuntu:~$ sudo mdadm --examine /dev/sdb2
> /dev/sdb2:
> Magic : a92b4efc
> Version : 1.2
> Feature Map : 0x1
> Array UUID : d45ff663:9e53774c:6fcf9968:21692025
> Name : 'Backup':2
> Creation Time : Sun Dec 20 04:59:43 2009
> Raid Level : raid1
> Raid Devices : 2
>
> Avail Dev Size : 554884828 (264.59 GiB 284.10 GB)
> Array Size : 554884828 (264.59 GiB 284.10 GB)
> Data Offset : 272 sectors
> Super Offset : 8 sectors
> State : clean
> Device UUID : bc74e25e:14e6562a:f136cf70:a2f6c6ac
>
> Internal Bitmap : 8 sectors from superblock
> Update Time : Tue May 11 06:27:13 2010
> Checksum : f2324fd6 - correct
> Events : 14920
>
>
> Array Slot : 1 (failed, 1, 0)
> Array State : uU 1 failed
>
* RE: Broken RAID1 boot arrays
2010-05-13 1:31 ` Leslie Rhorer
@ 2010-05-13 4:15 ` Daniel Reurich
2010-05-13 4:39 ` Daniel Reurich
` (2 more replies)
0 siblings, 3 replies; 29+ messages in thread
From: Daniel Reurich @ 2010-05-13 4:15 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
On Wed, 2010-05-12 at 20:31 -0500, Leslie Rhorer wrote:
> Hello? Anyone? I'm flummoxed, here. I tried to write in a manual
> assembly of the arrays in the initrd, but so far I haven't been able to get
> it to work. One way or another, it just hangs when running
> /scripts/local-top/mdadm in the initrd. Even `ls -1 /dev/sd*` returns an
> error.
>
Ok.
1) Get the business-card image from the link below, burn it to CD, and
boot off it.
http://www.debian.org/devel/debian-installer/
2) Select Advanced Options then expert install.
3) Set Language etc.
4) When it asks to select installer components select "Network Console"
and continue.
5) Configure the network (if you haven't already),
6) In the menu select "Continue installation remotely using SSH", then
follow the instructions to connect in via ssh from your desired
workstation and continue.
7) Select exit to shell
8) insert the appropriate raid modules: 'modprobe raidX', once for each
raid level you use.
9) use mdadm to manually assemble the necessary root, /boot and /var
arrays.
10) If your root fs is in LVM do: "modprobe dm_mod" followed by
"vgchange -ay"
11) make a target directory: "mkdir /target"
12) mount the root filesystem on /target: mount /dev/<rootfs> /target
13) bind mount the dev sys and proc virtual filesystems:
"mount -o bind /dev /target/dev"
"mount -o bind /sys /target/sys"
"mount -o bind /proc /target/proc"
14) Chroot: chroot /target /bin/bash
15) mount /boot /usr /var as needed.
16) update your mdadm.conf and /etc/fstab etc. (ideally use labels or fs
UUIDs for root and boot), and any other stuff like installing the
latest mdadm (apt|aptitude should work fine if you're internet-connected).
***See my notes below.
17) update your grub config, and run update-grub.
18) update your initrd image: "update-initramfs -u -k all"
19) unmount the fs's you mounted in the chroot
20) umount /target/proc /target/sys and /target/dev.
21) reboot and try it out.
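Condensing steps 8 through 20 into one hedged command sketch (the device
names and filesystem types are the ones reported elsewhere in this thread,
root on ext3 /dev/md2 and /boot on ext2 /dev/md1, so adjust to suit):
modprobe raid1                                  # step 8: raid1 personality
modprobe ext3                                   # fs module, if not already loaded
mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2   # step 9: root array
mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1   #         /boot array
mkdir /target                                   # step 11
mount -t ext3 /dev/md2 /target                  # step 12: root filesystem
mount -o bind /dev  /target/dev                 # step 13
mount -o bind /sys  /target/sys
mount -o bind /proc /target/proc
chroot /target /bin/bash                        # step 14
mount /boot                                     # step 15: uses the chroot's fstab
# edit /etc/mdadm/mdadm.conf and /etc/fstab here (step 16), then:
update-grub                                     # step 17
update-initramfs -u -k all                      # step 18, the usual Debian spelling
exit                                            # leave the chroot
umount /target/proc /target/sys /target/dev     # steps 19-20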
*** You might want to post your real mdadm.conf at this point. If you're
not sure about what the issue is, then perhaps IRC (does linux-raid have
a channel?) might be the best bet.
> It's also really odd that I can assemble and mount the root and boot
> arrays, but under Ubuntu I can't even assemble the swap array. It complains
> that the first member of the array is busy and refuses to start /dev/md3.
> The results of --examine look identical to those listed below, except of
> course for the partition specific entries (size, drive and array UUID,
> events, etc).
>
This is because ubuntu probably picks up the first swap partition it
finds and uses it.
> I really need to get this machine back on line, and any suggestions
> are greatly appreciated.
>
> > > don't want to try to install the OS: that could be a disaster. BTW, the
> > > 2.6.32 kernel is moving the IDE disks all the way from /hda and /hdb to
> > > /sdj
> > > and /sdk. I think that's part of why it's breaking: the existing
> > > mdadm.conf
> > > doesn't scan that high.
mdadm shouldn't care unless you've changed the "DEVICE partitions" line
to something else.
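For comparison, an explicit DEVICE line that would be sensitive to the disks
being renumbered looks something like the sketch below (an illustration, not
taken from the poster's config):
# only scan the listed nodes for superblocks; misses anything named beyond them
DEVICE /dev/sd[a-h]* /dev/hd*
# versus the default, which scans everything listed in /proc/partitions
DEVICE partitions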
> > OK, maybe not. I re-arranged things so the boot drives are /dev/sda
> > and /dev/sdb, but it still isn't working. When I boot the Ubuntu live CD
> > and install mdadm, it creates the following mdadm.conf:
> >
> > ubuntu@ubuntu:~$ cat /etc/mdadm/mdadm.conf
> > # mdadm.conf
> > #
> > # Please refer to mdadm.conf(5) for information about this file.
> > #
> >
> > # by default, scan all partitions (/proc/partitions) for MD superblocks.
> > # alternatively, specify devices to scan, using wildcards if desired.
> > DEVICE partitions
> >
> > # auto-create devices with Debian standard permissions
> > CREATE owner=root group=disk mode=0660 auto=yes
> >
> > # automatically tag new arrays as belonging to the local system
> > HOMEHOST <system>
> >
> > # instruct the monitoring daemon where to send mail alerts
> > MAILADDR root
> >
> > # definitions of existing MD arrays
> > ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2
> > UUID=d6a2c60b:7345e957:05aefe0b:f8d1527f name='Backup':1
> > ARRAY /dev/md/2 level=raid1 metadata=1.2 num-devices=2
> > UUID=d45ff663:9e53774c:6fcf9968:21692025 name='Backup':2
> > ARRAY /dev/md/3 level=raid1 metadata=1.2 num-devices=2
> > UUID=3615c4a2:33786b6d:b13863d9:458cd054 name='Backup':3
> > ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=8
> > UUID=940ae4e4:04057ffc:5e92d2fb:63e3efb7 name='Backup':0
> >
> > # This file was auto-generated on Tue, 11 May 2010 23:45:16 +0000
> > # by mkconf $Id$
> >
> > If I try to auto-assemble the arrays, it fails:
> >
> > ubuntu@ubuntu:~$ sudo mdadm --assemble --scan
> > mdadm: no devices found for /dev/md/1
> > mdadm: no devices found for /dev/md/2
> > mdadm: no devices found for /dev/md/3
> > mdadm: no devices found for /dev/md/0
> >
It seems odd to me that all the raid volumes are named "Backup".
Perhaps mdadm doesn't like the name collision.
Perhaps you need to recreate some of them with a different name. I'd
suggest recreating the raid1 volumes with different names and the
--assume-clean flag (except the swap one which won't be since the ubuntu
live cd's been messing with one of those component partitions).
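If recreation is the route taken, a cautious sketch (level, metadata version
and member devices are the ones shown earlier in the thread; --assume-clean
only skips the initial resync, a newer mdadm may choose a different data
offset for 1.2 metadata than the original 272 sectors, and "root" is just an
example name, so compare --examine output before and after):
mdadm --stop /dev/md2
mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=1.2 \
      --name=root --assume-clean /dev/sda2 /dev/sdb2
fsck -n /dev/md2        # read-only check before trusting the recreated array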
I hope this helps.
Regards,
--
Daniel Reurich.
Centurion Computer Technology (2005) Ltd
Mobile +64 21 797 722
* RE: Broken RAID1 boot arrays
2010-05-13 4:15 ` Daniel Reurich
@ 2010-05-13 4:39 ` Daniel Reurich
2010-05-13 23:30 ` Leslie Rhorer
2010-05-15 7:23 ` Leslie Rhorer
2 siblings, 0 replies; 29+ messages in thread
From: Daniel Reurich @ 2010-05-13 4:39 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
On Thu, 2010-05-13 at 16:15 +1200, Daniel Reurich wrote:
> On Wed, 2010-05-12 at 20:31 -0500, Leslie Rhorer wrote:
> > Hello? Anyone? I'm flummoxed, here. I tried to write in a manual
> > assembly of the arrays in the initrd, but so far I haven't been able to get
> > it to work. One way or another, it just hangs when running
> > /scripts/local-top/mdadm in the initrd. Even `ls -1 /dev/sd*` returns an
> > error.
> >
> Ok.
>
> 1) Get business card image from the link provided and burn to CD and
> boot of it.
>
> http://www.debian.org/devel/debian-installer/
>
> 2) Select Advanced Options then expert install.
> 3) Set Language etc.
> 4) When it asks to select installer components select "Network Console"
> and continue.
> 5) Configure the network (if you haven't already),
> 6) In the menu select "Continue installation remotely using ssh and
> follow the instructions to connect in via ssh from your desired
> workstation and continue.
> 7) Select exit to shell
> 8) insert the appropriate raid modules: 'modprobe raidX' where X is the
> raid levels you use for each raid level you use.
> 9) use mdadm to manually assemble the necessary root, /boot and /var
> arrays.
> 10) If your root fs is in LVM do: "modprobe dm_mod" followed by
> "vgchange -ay"
> 11) make a target directory: "mkdir /target"
> 12) mount the root filesystem on /target: mount /dev/<rootfs> /target
> 13) bind mount the dev sys and proc virtual filesystems:
> "mount -o bind /dev /target/dev"
> "mount -o bind /sys /target/sys"
> "mount -o bind /proc /target/proc"
> 14) Chroot: chroot /target /bin/bash
> 15) mount /boot /usr /var as needed.
> 16) update your mdadm.conf and /etc/fstab etc (ideally use labels for
> root and boot or fs UUID's), and any other stuff like installing the
> latest mdadm (apt|aptitude should work fine if your internet connected).
> ***See my notes below.
> 17) update your grub config, and run update-grub.
> 18) update your initrd image: "mkinitramfs -k all"
> 19) unmount the fs's you mounted in the chroot
> 20) umount /target/proc /target/sys and /target/dev.
> 21) reboot and try it out.
>
> *** You might want to post your real mdadm.conf at this point. If your
> not sure about what the issue is, then perhaps IRC (does linux-raid have
> a channel?) might be the best bet.
>
>
>
> > It's also really odd that I can assemble and mount the root and boot
> > arrays, but under Ubuntu I can't even assemble the swap array. It complains
> > that the first member of the array is busy and refuses to start /dev/md3.
> > The results of --examine look identical to those listed below, except of
> > course for the partition specific entries (size, drive and array UUID,
> > events, etc).
> >
> This is because ubuntu probably picks up the first swap partition it
> finds and uses it.
>
> > I really need to get this machine back on line, and any suggestions
> > are greatly appreciated.
> >
> > > > don't want to try to install the OS: that could be a disaster. BTW, the
> > > > 2.6.32 kernel is moving the IDE disks all the way from /hda and /hdb to
> > > > /sdj
> > > > and /sdk. I think that's part of why it's breaking: the existing
> > > > mdadm.conf
> > > > doesn't scan that high.
> mdadm shouldn't care unless you've changed the "DEVICE partitions" line
> to something else.
>
> > > OK, maybe not. I re-arranged things so the boot drives are /dev/sda
> > > and /dev/sdb, but it still isn't working. When I boot the Ubuntu live CD
> > > and install mdadm, it creates the following mdadm.conf:
> > >
> > > ubuntu@ubuntu:~$ cat /etc/mdadm/mdadm.conf
> > > # mdadm.conf
> > > #
> > > # Please refer to mdadm.conf(5) for information about this file.
> > > #
> > >
> > > # by default, scan all partitions (/proc/partitions) for MD superblocks.
> > > # alternatively, specify devices to scan, using wildcards if desired.
> > > DEVICE partitions
> > >
> > > # auto-create devices with Debian standard permissions
> > > CREATE owner=root group=disk mode=0660 auto=yes
> > >
> > > # automatically tag new arrays as belonging to the local system
> > > HOMEHOST <system>
> > >
> > > # instruct the monitoring daemon where to send mail alerts
> > > MAILADDR root
> > >
> > > # definitions of existing MD arrays
> > > ARRAY /dev/md/1 level=raid1 metadata=1.0 num-devices=2
> > > UUID=d6a2c60b:7345e957:05aefe0b:f8d1527f name='Backup':1
> > > ARRAY /dev/md/2 level=raid1 metadata=1.2 num-devices=2
> > > UUID=d45ff663:9e53774c:6fcf9968:21692025 name='Backup':2
> > > ARRAY /dev/md/3 level=raid1 metadata=1.2 num-devices=2
> > > UUID=3615c4a2:33786b6d:b13863d9:458cd054 name='Backup':3
> > > ARRAY /dev/md/0 level=raid5 metadata=1.2 num-devices=8
> > > UUID=940ae4e4:04057ffc:5e92d2fb:63e3efb7 name='Backup':0
> > >
> > > # This file was auto-generated on Tue, 11 May 2010 23:45:16 +0000
> > > # by mkconf $Id$
> > >
> > > If I try to auto-assemble the arrays, it fails:
> > >
> > > ubuntu@ubuntu:~$ sudo mdadm --assemble --scan
> > > mdadm: no devices found for /dev/md/1
> > > mdadm: no devices found for /dev/md/2
> > > mdadm: no devices found for /dev/md/3
> > > mdadm: no devices found for /dev/md/0
> > >
> It seems odd to me that all the raid volumes are named "Backup".
> Perhaps mdadm doesn't like the name collision.
>
> Perhaps you need to recreate some of them with a different name. I'd
> suggest recreating the raid1 volumes with different names and the
> --assume-clean flag (except the swap one which won't be
*clean* since the ubuntu live cd's been using one of those component
partitions for its swap).
> I hope this helps.
>
> Regards,
>
>
--
Daniel Reurich.
Centurion Computer Technology (2005) Ltd
Mobile 021 797 722
* RE: Broken RAID1 boot arrays
2010-05-13 4:15 ` Daniel Reurich
2010-05-13 4:39 ` Daniel Reurich
@ 2010-05-13 23:30 ` Leslie Rhorer
2010-05-14 0:16 ` Daniel Reurich
2010-05-15 7:23 ` Leslie Rhorer
2 siblings, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-13 23:30 UTC (permalink / raw)
To: 'Daniel Reurich'; +Cc: linux-raid
Thank you for your response. My hat is off to you. Few people
return such thorough and detailed posts.
> > Hello? Anyone? I'm flummoxed, here. I tried to write in a manual
> > assembly of the arrays in the initrd, but so far I haven't been able to get
> > it to work. One way or another, it just hangs when running
> > /scripts/local-top/mdadm in the initrd. Even `ls -1 /dev/sd*` returns an
> > error.
> >
> Ok.
>
> 1) Get business card image from the link provided and burn to CD and
> boot of it.
>
> http://www.debian.org/devel/debian-installer/
>
> 2) Select Advanced Options then expert install.
> 3) Set Language etc.
> 4) When it asks to select installer components select "Network Console"
> and continue.
> 5) Configure the network (if you haven't already),
> 6) In the menu select "Continue installation remotely using ssh and
> follow the instructions to connect in via ssh from your desired
> workstation and continue.
> 7) Select exit to shell
> 8) insert the appropriate raid modules: 'modprobe raidX' where X is the
> raid levels you use for each raid level you use.
> 9) use mdadm to manually assemble the necessary root, /boot and /var
> arrays.
/var is just part of the main array. Only /boot and the swap area
have their own partitions. Interestingly enough, the installer kernel shows
the drives to be /dev/hda and /dev/hdb, again. Apparently the installer
uses an older kernel? Oh, and it can assemble the third array (the swap
area) just fine, or at least it says it can:
~ # mdadm -Dt /dev/md3
/dev/md3:
Version : 1.02
Creation Time : Sun Dec 20 05:05:08 2009
Raid Level : raid1
Array Size : 204796548 (195.31 GiB 209.71 GB)
Used Dev Size : 204796548 (195.31 GiB 209.71 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon May 10 01:08:00 2010
State : active
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Name : 'Backup':3
UUID : 3615c4a2:33786b6d:b13863d9:458cd054
Events : 66
Number Major Minor RaidDevice State
2 3 3 0 active sync /dev/hda3
1 3 67 1 active sync /dev/hdb3
> 10) If your root fs is in LVM do: "modprobe dm_mod" followed by
> "vgchange -ay"
> 11) make a target directory: "mkdir /target"
> 12) mount the root filesystem on /target: mount /dev/<rootfs> /target
'No joy:
~ # mount -o -v /dev/md1 /target
mount: mounting /dev/md1 on /target failed: Invalid argument
So now, what? I can mount the arrays just fine under the Ubuntu
live CD, but not this one.
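One hedged guess at the "Invalid argument" from the installer's BusyBox
mount: the rescue environment may simply not have the ext3 module loaded yet,
and BusyBox is less willing than util-linux mount to guess filesystem types,
so being explicit sometimes helps:
modprobe ext3                     # ext2 for the /boot array
mount -t ext3 /dev/md2 /target    # /dev/md2 is the root array in this thread
dmesg | tail                      # the kernel usually logs why a mount failed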
> 13) bind mount the dev sys and proc virtual filesystems:
> "mount -o bind /dev /target/dev"
> "mount -o bind /sys /target/sys"
> "mount -o bind /proc /target/proc"
> 14) Chroot: chroot /target /bin/bash
> 15) mount /boot /usr /var as needed.
> 16) update your mdadm.conf and /etc/fstab etc (ideally use labels for
> root and boot or fs UUID's), and any other stuff like installing the
> latest mdadm (apt|aptitude should work fine if your internet connected).
Uh-uh, again. Neither apt-get nor aptitude seem to be on the CD, at
least not when installing this way.
> > It's also really odd that I can assemble and mount the root and boot
> > arrays, but under Ubuntu I can't even assemble the swap array. It complains
> > that the first member of the array is busy and refuses to start /dev/md3.
> > The results of --examine look identical to those listed below, except of
> > course for the partition specific entries (size, drive and array UUID,
> > events, etc).
> >
> This is because ubuntu probably picks up the first swap partition it
> finds and uses it.
It doesn't mention it when I issue `mount` or lsof. What's more, it
gives the same error for both partitions. Also, as I mentioned, it doesn't
show any errors when I issue `sudo mdadm --examine [sda3|sdb3]`. Finally,
it assembles without complaint under the Debian live CD.
> It seems odd to me that all the raid volumes are named "Backup".
> Perhaps mdadm doesn't like the name collision.
First of all, isn't that the homehost name? If so, it is *SUPPOSED*
to be the same for all three. Secondly, it assembled just fine under the
old kernel and mdadm, as I mentioned. Thirdly, if it were the case, I would
expect it to assemble at least the first target without complaint. Finally,
the names aren't the same. They are 'Backup':1, 'Backup':2, and 'Backup':3
> Perhaps you need to recreate some of them with a different name. I'd
> suggest recreating the raid1 volumes with different names and the
> --assume-clean flag (except the swap one which won't be since the ubuntu
> live cd's been messing with one of those component partitions).
I think before I try something like that, I would just trash one
element of each array, assemble the arrays broken with just one element, and
copy over the files to the "new" partitions, and go from there.
> I hope this helps.
Well, I'm getting somewhere. I'm just not sure where, if I can't
get mount to work.
* RE: Broken RAID1 boot arrays
2010-05-13 23:30 ` Leslie Rhorer
@ 2010-05-14 0:16 ` Daniel Reurich
2010-05-14 2:58 ` Leslie Rhorer
2010-05-15 20:03 ` Leslie Rhorer
0 siblings, 2 replies; 29+ messages in thread
From: Daniel Reurich @ 2010-05-14 0:16 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
On Thu, 2010-05-13 at 18:30 -0500, Leslie Rhorer wrote:
> Thank you for your response. My hat is off to you. Few people
> return such thorough and detailed posts.
>
> > > Hello? Anyone? I'm flummoxed, here. I tried to write in a manual
> > > assembly of the arrays in the initrd, but so far I haven't been able to get
> > > it to work. One way or another, it just hangs when running
> > > /scripts/local-top/mdadm in the initrd. Even `ls -1 /dev/sd*` returns an
> > > error.
> > >
> > Ok.
> >
> > 1) Get business card image from the link provided and burn to CD and
> > boot of it.
> >
> > http://www.debian.org/devel/debian-installer/
> >
> > 2) Select Advanced Options then expert install.
> > 3) Set Language etc.
> > 4) When it asks to select installer components select "Network Console"
> > and continue.
> > 5) Configure the network (if you haven't already),
> > 6) In the menu select "Continue installation remotely using ssh and
> > follow the instructions to connect in via ssh from your desired
> > workstation and continue.
> > 7) Select exit to shell
> > 8) insert the appropriate raid modules: 'modprobe raidX' where X is the
> > raid levels you use for each raid level you use.
> > 9) use mdadm to manually assemble the necessary root, /boot and /var
> > arrays.
>
> /var is just part of the main array. Only /boot and the swap area
> have their own partitions. Interestingly enough, the installer kernel shows
> the drives to be /dev/hda and /dev/hdb, again. Apparently the installer
> uses an older kernel? Oh, and it can assemble the third array (the swap
> area) just fine, or at least it says it can:
>
> ~ # mdadm -Dt /dev/md3
> /dev/md3:
> Version : 1.02
> Creation Time : Sun Dec 20 05:05:08 2009
> Raid Level : raid1
> Array Size : 204796548 (195.31 GiB 209.71 GB)
> Used Dev Size : 204796548 (195.31 GiB 209.71 GB)
> Raid Devices : 2
> Total Devices : 2
> Persistence : Superblock is persistent
>
> Intent Bitmap : Internal
>
> Update Time : Mon May 10 01:08:00 2010
> State : active
> Active Devices : 2
> Working Devices : 2
> Failed Devices : 0
> Spare Devices : 0
>
> Name : 'Backup':3
> UUID : 3615c4a2:33786b6d:b13863d9:458cd054
> Events : 66
>
> Number Major Minor RaidDevice State
> 2 3 3 0 active sync /dev/hda3
> 1 3 67 1 active sync /dev/hdb3
>
> > 10) If your root fs is in LVM do: "modprobe dm_mod" followed by
> > "vgchange -ay"
> > 11) make a target directory: "mkdir /target"
> > 12) mount the root filesystem on /target: mount /dev/<rootfs> /target
>
> 'No joy:
>
> ~ # mount -o -v /dev/md1 /target
> mount: mounting /dev/md1 on /target failed: Invalid argument
> So now, what? I can mount the arrays just fine under the Ubuntu
> live CD, but not this one.
For a start, don't use -o unless you're specifying options like rw, bind,
etc.
What type of filesystem is it?
Try "mount -v /dev/md1 /"
>
> > 13) bind mount the dev sys and proc virtual filesystems:
> > "mount -o bind /dev /target/dev"
> > "mount -o bind /sys /target/sys"
> > "mount -o bind /proc /target/proc"
> > 14) Chroot: chroot /target /bin/bash
> > 15) mount /boot /usr /var as needed.
> > 16) update your mdadm.conf and /etc/fstab etc (ideally use labels for
> > root and boot or fs UUID's), and any other stuff like installing the
> > latest mdadm (apt|aptitude should work fine if your internet connected).
> Uh-uh, again. Neither apt-get nor aptitude seem to be on the CD, at
> least not when installing this way.
But you're in the chroot, and most of the normal tools in your system are
usable.
> > > It's also really odd that I can assemble and mount the root and boot
> > > arrays, but under Ubuntu I can't even assemble the swap array. It
> > complains
> > > that the first member of the array is busy and refuses to start
> > /dev/md3.
> > > The results of --examine look identical to those listed below, except of
> > > course for the partition specific entries (size, drive and array UUID,
> > > events, etc).
> > >
> > This is because ubuntu probably picks up the first swap partition it
> > finds and uses it.
>
> It doesn't mention it when I issue `mount` or lsof. What's more, it
> gives the same error for both partitions. Also, as I mentioned, it doesn't
> show any errors when I issue `sudo mdadm --examine [sda3|sdb3]`. Finally,
> it assembles without complaint under the Debian live CD.
>
> > It seems odd to me that all the raid volumes are named "Backup".
> > Perhaps mdadm doesn't like the name collision.
>
> First of all, isn't that the homehost name? If so, it is *SUPPOSED*
> to be the same for all three. Secondly, it assembled just fine under the
> old kernel and mdadm, as I mentioned. Thirdly, if it were the case, I would
> expect it to assemble at least the first target without complaint. Finally,
> the names aren't the same. They are 'Backup':1, 'Backup':2, and 'Backup':3
>
Nope. I suspect you've mistaken the mdadm option -N or --name for
--homehost.
The name should be specific to the individual array, and the homehost is for
saying these arrays belong to this host.
> > Perhaps you need to recreate some of them with a different name. I'd
> > suggest recreating the raid1 volumes with different names and the
> > --assume-clean flag (except the swap one which won't be since the ubuntu
> > live cd's been messing with one of those component partitions).
>
> I think before I try something like that, I would just trash one
> element of each array, assemble the arrays broken with just one element, and
> copy over the files to the "new" partitions, and go from there.
Alternatively, recreate the arrays with a missing drive and add it back once
you're satisfied the data is still there in the new array.
>
> > I hope this helps.
>
> Well, I'm getting somewhere. I'm just not sure where, if I can't
> get mount to work.
>
I hope I've solved that one for you.
Regards,
--
Daniel Reurich.
Centurion Computer Technology (2005) Ltd
Mobile 021 797 722
* RE: Broken RAID1 boot arrays
2010-05-14 0:16 ` Daniel Reurich
@ 2010-05-14 2:58 ` Leslie Rhorer
2010-05-14 6:54 ` Daniel Reurich
2010-05-14 7:08 ` Daniel Reurich
2010-05-15 20:03 ` Leslie Rhorer
1 sibling, 2 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-14 2:58 UTC (permalink / raw)
To: 'Daniel Reurich'; +Cc: linux-raid
> > ~ # mount -o -v /dev/md1 /target
> > mount: mounting /dev/md1 on /target failed: Invalid argument
>
> > So now, what? I can mount the arrays just fine under the Ubuntu
> > live CD, but not this one.
>
> For a start don't use -o unless your specifying options like rw,bind
> etc.
I misread the man page (it did seem rather odd), but it doesn't
matter. When I first tried, it was without any switches. I tried
specifying the fs type. I tried updating the fstab file and using `mount
-a`. It read the file just fine, but still gives me the same error.
> What type of filesystem is it?
>
> Try "mount -v /dev/md1 /"
It doesn't matter what switches I try, it always gives me that
error. The md1 array (/boot) is ext2, and the md2 array (/) is ext3.
> > > 13) bind mount the dev sys and proc virtual filesystems:
> > > "mount -o bind /dev /target/dev"
> > > "mount -o bind /sys /target/sys"
> > > "mount -o bind /proc /target/proc"
> > > 14) Chroot: chroot /target /bin/bash
> > > 15) mount /boot /usr /var as needed.
> > > 16) update your mdadm.conf and /etc/fstab etc (ideally use labels for
> > > root and boot or fs UUID's)
The mdadm.conf file already employs UUIDs for the RAID arrays. In
the man page, I don't see a way to specify the device by UUID, but by my
reading "DEVICE partitions" should work. It won't help to specify the array
UUID in fstab if mdadm won't assemble the arrays.
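For what it's worth, the UUID being suggested for fstab is the filesystem
UUID (what blkid reports for the assembled md device), which is a different
identifier from the array UUID that mdadm.conf matches on. A sketch, with a
made-up fstab value:
# array UUID, from mdadm --examine, belongs in /etc/mdadm/mdadm.conf:
ARRAY /dev/md2 metadata=1.2 UUID=d45ff663:9e53774c:6fcf9968:21692025
# filesystem UUID, from `blkid /dev/md2`, belongs in /etc/fstab
# (the value below is a placeholder, not from this system):
UUID=01234567-89ab-cdef-0123-456789abcdef  /  ext3  defaults,errors=remount-ro  0  1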
> > > , and any other stuff like installing the
> > > latest mdadm (apt|aptitude should work fine if your internet connected).
>
> > Uh-uh, again. Neither apt-get nor aptitude seem to be on the CD, at
> > least not when installing this way.
>
> But your in the chroot, and most of the normal tools in your system are
> use able.
No, I'm not. Remember? I can't mount /target (/dev/md2) so I can't
chroot to it:
~ # chroot /target /bin/bash
chroot: cannot execute /bin/bash: No such file or directory
Everything in your method requires me to be able to mount the / and
/boot file systems. Hmm. The only thing I can't do under the Ubuntu CD is
assemble and mount the swap, so this should work using the Ubuntu CD...
> > > > It's also really odd that I can assemble and mount the root
> and boot
> > > > arrays, but under Ubuntu I can't even assemble the swap array. It
> > > complains
> > > > that the first member of the array is busy and refuses to start
> > > /dev/md3.
> > > > The results of --examine look identical to those listed below,
> except of
> > > > course for the partition specific entries (size, drive and array
> UUID,
> > > > events, etc).
> > > >
> > > This is because ubuntu probably picks up the first swap partition it
> > > finds and uses it.
> >
> > It doesn't mention it when I issue `mount` or lsof. What's more, it
> > gives the same error for both partitions. Also, as I mentioned, it
> doesn't
> > show any errors when I issue `sudo mdadm --examine [sda3|sdb3]`.
> Finally,
> > it assembles without complaint under the Debian live CD.
> >
> > > It seems odd to me that all the raid volumes are named "Backup".
> > > Perhaps mdadm doesn't like the name collision.
> >
> > First of all, isn't that the homehost name? If so, it is *SUPPOSED*
> > to be the same for all three. Secondly, it assembled just fine under
> the
> > old kernel and mdadm, as I mentioned. Thirdly, if it were the case, I
> would
> > expect it to assemble at least the first target without complaint.
> Finally,
> > the names aren't the same. They are 'Backup':1, 'Backup':2, and
> 'Backup':3
> >
> Nope. I suspect you've mistaken the mdadm option -N or --name for
> --hostname.
No, I'm just reading what's in the superblock (via --examine) which
is what is used to populate the mdadm.conf file. I did not use the --name
option when I created the arrays, but the HOMEHOST <system> line was in
mdadm.conf when I created them.
> The name should be specific to the individual arrays and hostname is for
> saying these arrays belong to this host.
>
> > > Perhaps you need to recreate some of them with a different name. I'd
> > > suggest recreating the raid1 volumes with different names and the
> > > --assume-clean flag (except the swap one which won't be since the
> ubuntu
> > > live cd's been messing with one of those component partitions).
> >
> > I think before I try something like that, I would just trash one
> > element of each array, assemble the arrays broken with just one element,
> and
> > copy over the files to the "new" partitions, and go from there.
> Alternatively recreate the arrays with a missing drive and add that once
> your satisfied the data is still their in the new array.
>
> >
> > > I hope this helps.
> >
> > Well, I'm getting somewhere. I'm just not sure where, if I can't
> > get mount to work.
> >
> I hope I've solved that one for you.
You mean by mounting the device so I can chroot so that mount will
work? Uh... no. I can't fix the mount utility by doing anything which
first requires me to use the mount utility. If you mean not using the -o
option, then no, that doesn't make any difference, either. Nor does the -v
option appear to do anything. The `mount` command never returns anything
but "mounting xxxx on yyyy failed: Invalid argument", unless I issue:
~ # mount --help
BusyBox v1.14.2 (Debian 1:1.14.2-2) multi-call binary
Usage: mount [flags] DEVICE NODE [-o OPT,OPT]
which isn't really very helpful.
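A couple of low-risk checks the BusyBox shell can still run, which might
narrow down the "Invalid argument" (a sketch, assuming those proc files are
present in the installer environment):
cat /proc/filesystems    # filesystem types the running kernel can actually mount
cat /proc/mdstat         # confirm the array really is assembled and clean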
* RE: Broken RAID1 boot arrays
2010-05-14 2:58 ` Leslie Rhorer
@ 2010-05-14 6:54 ` Daniel Reurich
2010-05-14 12:18 ` Leslie Rhorer
2010-05-14 7:08 ` Daniel Reurich
1 sibling, 1 reply; 29+ messages in thread
From: Daniel Reurich @ 2010-05-14 6:54 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
On Thu, 2010-05-13 at 21:58 -0500, Leslie Rhorer wrote:
> > > ~ # mount -o -v /dev/md1 /target
> > > mount: mounting /dev/md1 on /target failed: Invalid argument
> >
>
> > What type of filesystem is it?
> >
> > Try "mount -v /dev/md1 /"
>
> It doesn't matter what switches I try, it always gives me that
> error. The md1 array (/boot) is ext2, and the md2 array (/) is ext3.
Is mdadm actually assembling the arrays?
What's the output of "cat /proc/mdstat"?
* RE: Broken RAID1 boot arrays
2010-05-14 6:54 ` Daniel Reurich
@ 2010-05-14 12:18 ` Leslie Rhorer
0 siblings, 0 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-14 12:18 UTC (permalink / raw)
To: 'Daniel Reurich'; +Cc: linux-raid
> > > What type of filesystem is it?
> > >
> > > Try "mount -v /dev/md1 /"
> >
> > It doesn't matter what switches I try, it always gives me that
> > error. The md1 array (/boot) is ext2, and the md2 array (/) is ext3.
>
> Is mdadm actually assembling the arrays?
It says it is. I was able to resync /dev/md3, and I can get the
details on all three arrays from mdadm. If it weren't assembling them, then
it should return an error when I query for details.
> what's the output of: "cat /proc/mdstat"
/dev # cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md3 : active raid1 hda3[2] hdb3[1]
204796548 blocks super 1.2 [2/2] [UU]
bitmap: 0/196 pages [0KB], 512KB chunk
md2 : active raid1 hda2[2] hdb2[1]
277442414 blocks super 1.2 [2/2] [UU]
bitmap: 0/265 pages [0KB], 512KB chunk
md1 : active raid1 hda1[2] hdb1[1]
6144816 blocks super 1.0 [2/2] [UU]
bitmap: 0/6 pages [0KB], 512KB chunk
unused devices: <none>
Note that last response is untrue. There are 8 drives belonging to
a RAID5 array that is not assembled. I presume, however, it is basing that
response on the (nonexistent) mdadm.conf file.
* RE: Broken RAID1 boot arrays
2010-05-14 2:58 ` Leslie Rhorer
2010-05-14 6:54 ` Daniel Reurich
@ 2010-05-14 7:08 ` Daniel Reurich
2010-05-14 12:43 ` Leslie Rhorer
1 sibling, 1 reply; 29+ messages in thread
From: Daniel Reurich @ 2010-05-14 7:08 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
On Thu, 2010-05-13 at 21:58 -0500, Leslie Rhorer wrote:
> > > ~ # mount -o -v /dev/md1 /target
> > > mount: mounting /dev/md1 on /target failed: Invalid argument
> >
> > > So now, what? I can mount the arrays just fine under the Ubuntu
> > > live CD, but not this one.
> >
> > For a start don't use -o unless your specifying options like rw,bind
> > etc.
>
> I misread the man page (it did seem rather odd), but it doesn't
> matter. When I first tried, it was without any switches. I tried
> specifying the fs type. I tired updating the fstab file and using `mount
> -a`. It read the file just fine, but still gives me the same error.
>
>
> > What type of filesystem is it?
> >
> > Try "mount -v /dev/md1 /"
Sorry. Should have been "mount -v /dev/md1 /target" - assuming /dev/md1
is your root filesystem.
>
> It doesn't matter what switches I try, it always gives me that
> error. The md1 array (/boot) is ext2, and the md2 array (/) is ext3.
you did "mkdir /target" didn't you? Can verify it is there?
>
> > > > 13) bind mount the dev sys and proc virtual filesystems:
> > > > "mount -o bind /dev /target/dev"
> > > > "mount -o bind /sys /target/sys"
> > > > "mount -o bind /proc /target/proc"
> > > > 14) Chroot: chroot /target /bin/bash
> > > > 15) mount /boot /usr /var as needed.
> > > > 16) update your mdadm.conf and /etc/fstab etc (ideally use labels for
> > > > root and boot or fs UUID's)
>
> The mdadm.conf file already employs UUIDs for the RAID arrays. In
> the man page, I don't see a way to specify the device by UUID, but by my
> reading "DEVICE partitions" should work. It won't help to specify the array
> UUID in fstab if mdadm won't assemble the arrays.
>
> , and any other stuff like installing the
> > > > latest mdadm (apt|aptitude should work fine if your internet
> > connected).
> >
> > > Uh-uh, again. Neither apt-get nor aptitude seem to be on the CD, at
> > > least not when installing this way.
> >
> > But your in the chroot, and most of the normal tools in your system are
> > use able.
>
> No, I'm not. Remember? I can't mount /target (/dev/md2) so I can't
> chroot to it:
>
> ~ # chroot /target /bin/bash
> chroot: cannot execute /bin/bash: No such file or directory
>
> Everything in your method requires me to be able to mount the / and
> /boot file systems. Hmm. The only thing I can't do under the Ubuntu CD is
> assemble and mount the swap, so this should work using the Ubuntu CD...
>
> > > > > It's also really odd that I can assemble and mount the root
> > and boot
> > > > > arrays, but under Ubuntu I can't even assemble the swap array. It
> > > > complains
> > > > > that the first member of the array is busy and refuses to start
> > > > /dev/md3.
> > > > > The results of --examine look identical to those listed below,
> > except of
> > > > > course for the partition specific entries (size, drive and array
> > UUID,
> > > > > events, etc).
> > > > >
> > > > This is because ubuntu probably picks up the first swap partition it
> > > > finds and uses it.
> > >
> > > It doesn't mention it when I issue `mount` or lsof. What's more, it
> > > gives the same error for both partitions. Also, as I mentioned, it
> > doesn't
> > > show any errors when I issue `sudo mdadm --examine [sda3|sdb3]`.
> > Finally,
> > > it assembles without complaint under the Debian live CD.
> > >
> > > > It seems odd to me that all the raid volumes are named "Backup".
> > > > Perhaps mdadm doesn't like the name collision.
> > >
> > > First of all, isn't that the homehost name? If so, it is *SUPPOSED*
> > > to be the same for all three. Secondly, it assembled just fine under
> > the
> > > old kernel and mdadm, as I mentioned. Thirdly, if it were the case, I
> > would
> > > expect it to assemble at least the first target without complaint.
> > Finally,
> > > the names aren't the same. They are 'Backup':1, 'Backup':2, and
> > 'Backup':3
> > >
> > Nope. I suspect you've mistaken the mdadm option -N or --name for
> > --hostname.
>
> No, I'm just reading what's in the superblock (via --examine) which
> is what is used to populate the mdadm.conf file. I did not use the --name
> option when I created the arrays, but the HOMEHOST <system> line was in
> mdadm.conf when I created them.
>
>
> > The name should be specific to the individual arrays and hostname is for
> > saying these arrays belong to this host.
> >
> > > > Perhaps you need to recreate some of them with a different name. I'd
> > > > suggest recreating the raid1 volumes with different names and the
> > > > --assume-clean flag (except the swap one which won't be since the
> > ubuntu
> > > > live cd's been messing with one of those component partitions).
> > >
> > > I think before I try something like that, I would just trash one
> > > element of each array, assemble the arrays broken with just one element,
> > and
> > > copy over the files to the "new" partitions, and go from there.
> > Alternatively recreate the arrays with a missing drive and add that once
> > your satisfied the data is still their in the new array.
> >
> > >
> > > > I hope this helps.
> > >
> > > Well, I'm getting somewhere. I'm just not sure where, if I can't
> > > get mount to work.
> > >
> > I hope I've solved that one for you.
>
> You mean by mounting the device so I can chroot so that mount will
> work? Uh... no. I can't fix the mount utility by doing anything which
> first requires me to use the mount utility. If you mean not using the -o
> option, then no, that doesn't make any difference, either. Nor does the -v
> option appear to do anything. The `mount` command never returns anything
> but "mounting xxxx on yyyy failed: Invalid argument", unless I issue:
>
> ~ # mount --help
> BusyBox v1.14.2 (Debian 1:1.14.2-2) multi-call binary
>
> Usage: mount [flags] DEVICE NODE [-o OPT,OPT]
>
> which isn't really very helpful.
>
* RE: Broken RAID1 boot arrays
2010-05-14 7:08 ` Daniel Reurich
@ 2010-05-14 12:43 ` Leslie Rhorer
0 siblings, 0 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-14 12:43 UTC (permalink / raw)
To: 'Daniel Reurich'; +Cc: linux-raid
> > > What type of filesystem is it?
> > >
> > > Try "mount -v /dev/md1 /"
>
> Sorry. Should have been "mount -v /dev/md1 /target" - assuming /dev/md1
> is your root filesystem.
Actually, /dev/md2 is root, but no, it wouldn't mount, no matter
what I tried. What's more, now that I assembled the arrays under the Debian
live CD, Ubuntu will no longer assemble any of them. Note the Ubuntu live
CD has an old version of mdadm (v2.6.7.1).
> > It doesn't matter what switches I try, it always gives me that
> > error. The md1 array (/boot) is ext2, and the md2 array (/) is ext3.
> you did "mkdir /target" didn't you? Can verify it is there?
Yes.
> > No, I'm not. Remember? I can't mount /target (/dev/md2) so I can't
> > chroot to it:
> >
> > ~ # chroot /target /bin/bash
> > chroot: cannot execute /bin/bash: No such file or directory
> >
> > Everything in your method requires me to be able to mount the / and
> > /boot file systems. Hmm. The only thing I can't do under the Ubuntu CD
> is
> > assemble and mount the swap, so this should work using the Ubuntu CD...
'So much for that effort. I think I'll try one of the other
suggested distros...
* RE: Broken RAID1 boot arrays
2010-05-14 0:16 ` Daniel Reurich
2010-05-14 2:58 ` Leslie Rhorer
@ 2010-05-15 20:03 ` Leslie Rhorer
2010-05-16 3:10 ` Leslie Rhorer
1 sibling, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-15 20:03 UTC (permalink / raw)
To: linux-raid
> > > It seems odd to me that all the raid volumes are named "Backup".
> > > Perhaps mdadm doesn't like the name collision.
> >
> > First of all, isn't that the homehost name? If so, it is *SUPPOSED*
> > to be the same for all three. Secondly, it assembled just fine under
> the
> > old kernel and mdadm, as I mentioned. Thirdly, if it were the case, I
> would
> > expect it to assemble at least the first target without complaint.
> Finally,
> > the names aren't the same. They are 'Backup':1, 'Backup':2, and
> 'Backup':3
> >
> Nope. I suspect you've mistaken the mdadm option -N or --name for
> --hostname.
>
> The name should be specific to the individual arrays and hostname is for
> saying these arrays belong to this host.
Here it is in the man page:
[--homehost=
This will override any HOMEHOST setting in the config file and
provides the identity of the host which should be considered the home for
any arrays. When creating an array, the homehost will be recorded in the
superblock. For version-1 superblocks, it will be prefixed to the array
name...]
Thus, the array names are 0, 1, 2, and 3, each prepended in the
superblock by 'Backup'. Since I did not specify a name when I created the
arrays, presumably mdadm simply used the minor number as the array name.
Thus, /dev/md0 became 'Backup':0, /dev/md1 became 'Backup':1, etc.
Also from the man page:
[-N, --name=
Set a name for the array. This is currently only effective
when creating an array with a version-1 superblock, or an array in a DDF
container. The name is a simple textual string that can be used to identify
array components when assembling. If name is needed but not specified, it
is taken from the basename of the device that is being created. e.g. when
creating /dev/md/home the name will default to home.]
I cannot help but believe this persistent failure to assemble the
arrays in the initrd is related to the fact mdadm will not auto-assemble the
arrays from mdadm.conf when one issues the command:
`mdadm --assemble --scan`
I have asked about this repeatedly in this forum, but have never
received any answer at all. After all, isn't this effectively what mdadm
does when booting in order to attempt to bring up the arrays for mounting,
in particular the / array, which must be present in order to transfer OS
activity from the RAM disk to the hard drive? I don't really know why it
worked under the old kernel - although it did issue a warning during the
boot process that it could not find any drives to assemble - but won't work
under the newer kernel.
Neil, I know you are busy, but you have been awfully silent in all
of this. Do you have any insights?
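One hedged way to see exactly what the scan is rejecting is to compare the
ARRAY lines mdadm derives from the superblocks against the ones in the config
file, and to let a verbose assemble say which devices it discarded and why:
mdadm --examine --scan       # ARRAY lines as mdadm reads them from the superblocks
cat /etc/mdadm/mdadm.conf    # ARRAY lines the scan is trying to match
mdadm --assemble --scan -v   # reports each rejected device and the reason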
* RE: Broken RAID1 boot arrays
2010-05-15 20:03 ` Leslie Rhorer
@ 2010-05-16 3:10 ` Leslie Rhorer
2010-05-29 18:51 ` Leslie Rhorer
0 siblings, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-16 3:10 UTC (permalink / raw)
To: linux-raid
I've tried everything of which I can think, and then some. This
system is still not bootable. I even tried explicitly assembling the arrays
from within /scripts/local-top/mdadm inside the RAM disk, but it still won't
assemble the arrays, or at least won't mount them. 'Something I did notice
a few days ago, but I didn't think it would matter: Inside the initrd, the
array members show up as /dev/hdxy, while inside a fully booted system I
think they show up as /dev/sdxy. Could it be there is a conflict between
the block device when mdadm is attempting to assemble the arrays prior to
mounting and when the system is attempting to mount the arrays?
* RE: Broken RAID1 boot arrays
2010-05-16 3:10 ` Leslie Rhorer
@ 2010-05-29 18:51 ` Leslie Rhorer
2010-05-29 19:34 ` Leslie Rhorer
0 siblings, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-29 18:51 UTC (permalink / raw)
To: linux-raid
I've found a little time to look into this a bit further. The
reports differ between "Lenny" and "Squeeze", but the end result is the
same (except, of course, that "Squeeze" won't boot at all from a RAID array).
Nothing I try will get mdadm to auto-assemble the arrays. First of all,
shouldn't the UUIDs of all the partitions appear in /dev/disk/by-uuid? None
of them are there for any unassembled partitions under "Squeeze" and none of
them are there at all under "Lenny". Why is "Lenny" reporting a wrong name
and saying the partitions are not built for the host? The hostname is
correct. Why is "Squeeze" reporting incorrect device UUIDs? Is it because
they do not exist in /dev/disk/by-uuid?
From "Squeeze":
root@Backup: /# mdadm --assemble -v --scan
<snip>
mdadm: no RAID superblock on /dev/hdb1
mdadm: /dev/hdb1 has wrong uuid.
<snip>
root@Backup:/dev/disk/by-uuid# ls -l
total 0
lrwxrwxrwx 1 root root 10 May 28 21:27 405a6da1-8040-49bc-8a6a-b5aa4faf79d3
-> ../../sda2
lrwxrwxrwx 1 root root 9 May 29 12:02 bba2eeda-2c8b-45ad-86d1-82b6cef84ee3
-> ../../md0
lrwxrwxrwx 1 root root 10 May 28 21:27 ea01cadd-1e17-49b4-a1c4-3ae2898eee01
-> ../../sda5
lrwxrwxrwx 1 root root 10 May 28 21:27 f8f1c3d3-0ea3-495d-9be8-9e4038aebb96
-> ../../sda1
root@Backup:/dev/disk/by-uuid# mdadm --examine /dev/hdb1
/dev/hdb1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : d6a2c60b:7345e957:05aefe0b:f8d1527f
Name : 'Backup':1
Creation Time : Sun Dec 20 01:13:59 2009
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 12289632 (5.86 GiB 6.29 GB)
Array Size : 12289632 (5.86 GiB 6.29 GB)
Super Offset : 12289640 sectors
State : clean
Device UUID : f32fd8b1:95b548af:13b79684:51213590
Internal Bitmap : 2 sectors from superblock
Update Time : Thu May 20 20:14:16 2010
Checksum : 3242a0f1 - correct
Events : 292
Device Role : Active device 1
Array State : AA ('A' == active, '.' == missing)
root@Backup:/dev/disk/by-uuid# hostname
Backup
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR lrhorer@satx.rr.com
PROGRAM /usr/bin/mdadm_notify
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 UUID=940ae4e4:04057ffc:5e92d2fb:63e3efb7
name='Backup':0
ARRAY /dev/md1 metadata=1.0 UUID=d6a2c60b:7345e957:05aefe0b:f8d1527f
name='Backup':1
ARRAY /dev/md2 metadata=1.2 UUID=d45ff663:9e53774c:6fcf9968:21692025
name='Backup':2
ARRAY /dev/md3 metadata=1.2 UUID=3615c4a2:33786b6d:b13863d9:458cd054
name='Backup':3
# This file was auto-generated on Thu, 20 May 2010 06:52:13 -0500
# by mkconf 3.0.3-2
From "Lenny":
RAID-Server:/RAID/Server-Main# mdadm --assemble --scan -v
<snip>
mdadm: /dev/hda1 has wrong name.
<snip>
mdadm: /dev/sda1 has wrong name.
<snip>
mdadm: /dev/hda1 is not built for host RAID-Server.
<snip>
mdadm: /dev/sda1 is not built for host RAID-Server.
<snip>
RAID-Server:/RAID/Server-Main# mdadm --examine /dev/hda1
/dev/hda1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 76e8e11d:e0183c3c:404cb86a:19a7cb3d
Name : 'RAID-Server':1
Creation Time : Wed Dec 23 23:46:28 2009
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 803160 (392.23 MiB 411.22 MB)
Array Size : 803160 (392.23 MiB 411.22 MB)
Super Offset : 803168 sectors
State : clean
Device UUID : 28fa09ed:07bf99e2:e3a3b396:9fe389d3
Internal Bitmap : 2 sectors from superblock
Update Time : Sat May 29 12:52:54 2010
Checksum : 9e33cfa4 - correct
Events : 252
Array Slot : 2 (failed, 1, 0)
Array State : Uu 1 failed
RAID-Server:/RAID/Server-Main# mdadm --examine /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 1.0
Feature Map : 0x1
Array UUID : 76e8e11d:e0183c3c:404cb86a:19a7cb3d
Name : 'RAID-Server':1
Creation Time : Wed Dec 23 23:46:28 2009
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 803160 (392.23 MiB 411.22 MB)
Array Size : 803160 (392.23 MiB 411.22 MB)
Super Offset : 803168 sectors
State : clean
Device UUID : 28212297:1d982d5d:ce41b6fe:03720159
Internal Bitmap : 2 sectors from superblock
Update Time : Sat May 29 12:52:54 2010
Checksum : b059fc07 - correct
Events : 252
Array Slot : 1 (failed, 1, 0)
Array State : uU 1 failed
RAID-Server:/dev/disk/by-uuid# ll
total 0
drwxr-xr-x 2 root root 120 2010-05-10 16:26 .
drwxr-xr-x 6 root root 120 2010-05-10 16:26 ..
lrwxrwxrwx 1 root root 9 2010-05-10 16:26
5cbe8269-fec8-42db-889d-a1d57b0a797e -> ../../md2
lrwxrwxrwx 1 root root 9 2010-05-10 16:26
5f41f190-d280-4f55-8b61-44f4abe981c1 -> ../../md0
lrwxrwxrwx 1 root root 9 2010-05-10 16:26
904e95cb-d0ad-4167-9dbf-51baf1867c00 -> ../../md3
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR lrhorer@satx.rr.com
# This file was auto-generated on Fri, 21 Nov 2008 22:35:30 -0600
# by mkconf $Id$
PROGRAM /usr/bin/mdadm_notify
ARRAY /dev/md0 level=raid6 metadata=1.2 num-devices=11
UUID=5ff10d73:a096195f:7a646bba:a68986ca name=RAID-Server:0
ARRAY /dev/md1 level=raid1 metadata=1.0 num-devices=2
UUID=76e8e11d:e0183c3c:404cb86a:19a7cb3d name=RAID-Server:1
ARRAY /dev/md2 level=raid1 metadata=1.2 num-devices=2
UUID=4b466602:fb81286c:4ad8dc5c:ad0bd065 name=RAID-Server:2
ARRAY /dev/md3 level=raid1 metadata=1.2 num-devices=2
UUID=5bc11cda:e1b4065f:fbf2fca5:8b12e0ba name=RAID-Server:3
^ permalink raw reply [flat|nested] 29+ messages in thread
* RE: Broken RAID1 boot arrays
2010-05-29 18:51 ` Leslie Rhorer
@ 2010-05-29 19:34 ` Leslie Rhorer
0 siblings, 0 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-29 19:34 UTC (permalink / raw)
To: linux-raid
OK, I figured it out, I think. Apparently mdadm does not like the
single quotes around the hostname. The odd thing is, mdadm *CREATED* those
hostnames automatically. It looks like I can get this to work (I hope) by
updating the homehost name in each array member's superblock. 'Fingers
crossed.
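If I'm reading the mdadm(8) man page correctly, the stored name/homehost can
be rewritten at assembly time, without re-creating anything, along these
lines (the member devices here are just examples; substitute whichever
partitions actually hold md1):

mdadm --assemble /dev/md1 --update=homehost --homehost=Backup /dev/sda1 /dev/sdb1

or --update=name together with --name to rewrite the name field itself. I'll
report back on whether that does the trick.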
^ permalink raw reply [flat|nested] 29+ messages in thread
* RE: Broken RAID1 boot arrays
2010-05-13 4:15 ` Daniel Reurich
2010-05-13 4:39 ` Daniel Reurich
2010-05-13 23:30 ` Leslie Rhorer
@ 2010-05-15 7:23 ` Leslie Rhorer
2 siblings, 0 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-15 7:23 UTC (permalink / raw)
To: 'Daniel Reurich'; +Cc: linux-raid
> 11) make a target directory: "mkdir /target"
> 12) mount the root filesystem on /target: mount /dev/<rootfs> /target
OK, I got this to work. I started the installer, did an sftp to one
of my other servers, and then copied the /bin/mount command and the
/lib/libselinux.so.1 library over to the temporary system. After that, I
was able to mount the partitions with no trouble.
> 13) bind mount the dev sys and proc virtual filesystems:
> "mount -o bind /dev /target/dev"
> "mount -o bind /sys /target/sys"
> "mount -o bind /proc /target/proc"
> 14) Chroot: chroot /target /bin/bash
> 15) mount /boot /usr /var as needed.
> 16) update your mdadm.conf and /etc/fstab etc (ideally use labels for
> root and boot or fs UUID's), and any other stuff like installing the
These should be OK, but then they should always have been OK,
AFAICT. They worked before:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR lrhorer@satx.rr.com
# definitions of existing MD arrays
# This file was auto-generated on Thu, 14 May 2009 20:25:57 -0500
# by mkconf $Id$
PROGRAM /usr/bin/mdadm_notify
ARRAY /dev/md0 level=raid5 num-devices=8 metadata=01.02 name='Backup':0
UUID=940ae4e4:04057ffc:5e92d2fb:63e3efb7
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=01.02 name='Backup':3
UUID=3615c4a2:33786b6d:b13863d9:458cd054
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=01.02 name='Backup':2
UUID=d45ff663:9e53774c:6fcf9968:21692025
ARRAY /dev/md1 level=raid1 num-devices=2 metadata=01.00 name='Backup':1
UUID=d6a2c60b:7345e957:05aefe0b:f8d1527f
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc              /proc           proc         defaults      0  0
/dev/md2          /               ext3         defaults      0  1
/dev/md1          /boot           ext2         defaults      0  2
/dev/md3          none            swap         sw            0  0
/dev/cdrom        /media/cdrom0   udf,iso9660  user,noauto   0  0
/dev/md0          /Backup         xfs          defaults      0  2
RAID-Server:/RAID /RAID           nfs          tcp           0  0
> latest mdadm (apt|aptitude should work fine if your internet is connected).
> ***See my notes below.
I ran `apt-get install mdadm`, and it responded that mdadm was already the
current version for "Squeeze" (3.0.3, as I recall).
> 17) update your grub config, and run update-grub.
> 18) update your initrd image: "mkinitramfs -k all"
Uh-uh, that failed. Firstly, "all" is not a valid switch for this
version of mkinitramfs. Secondly, the running version of the kernel
(2.6.30-2-amd64) is not the same as the installed version (2.6.32-3-amd64),
so it complained about various missing items. I seem to recall one can have
mkinitramfs roll up an image for a specific version of a kernel that is not running,
but I decided instead to just try to complete the apt-get upgrade. It's
running now. We'll see how it goes. It's getting a lot of log failures
because /dev/pts is not mounted, and mandb locale errors because $LC and
$LANG are not set. 'No shocker there, and it shouldn't be fatal. Hopefully
nothing fatal will pop up.
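For reference, if the upgrade doesn't regenerate the image on its own, I
believe the syntax for building an initrd for a kernel other than the running
one is roughly this (the version string being whatever apt actually installed):

mkinitramfs -o /boot/initrd.img-2.6.32-3-amd64 2.6.32-3-amd64

or, via the Debian wrapper,

update-initramfs -u -k 2.6.32-3-amd64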
^ permalink raw reply [flat|nested] 29+ messages in thread
* Broken RAID1 boot arrays
@ 2010-05-10 2:25 Leslie Rhorer
2010-05-10 9:17 ` John Robinson
2010-05-10 17:06 ` Bill Davidsen
0 siblings, 2 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-10 2:25 UTC (permalink / raw)
To: linux-raid
I was running a system under kernel 2.6.26.2-amd64, and it was
having some problems that seemed possibly due to the kernel (or not), so I
undertook to upgrade the kernel to 2.6.33-2-amd64. Now, there's a distro
upgrade "feature" which ordinarily prevents this upgrade, because udev won't
upgrade with the old kernel in place, and the kernel can't upgrade because
of unmet dependencies which require a newer udev version, among other
things. In any case, the work-around is to create the file
/etc/udev/kernel-upgrade, at which point udev can be upgraded and then the
kernel must be upgraded before rebooting. Now, I've done this before, and
it worked, but I've never tried it on a system which boots from an array.
This time, it broke.
As part of the upgrade, GRUB1 is supposed to chain load to GRUB2
which then continues to boot the system. This does not seem to be
happening. What's more, when linux begins to load, it doesn't seem to
recognize the arrays, so it can't find the root file system. There are two
drives, /dev/hda and /dev/hdb, each divvied up into three partitions:
/dev/hdx1 is formatted as ext2 and (supposed to be) mounted as /boot, and
/dev/hdx2 is formatted as ext3 and (supposed to be) mounted as /, and /dev/hdx3 is
configured as swap. In all three cases, the partitions are a pair of
members in a RAID1 array. The /dev/hdx1 partitions have 1.0 superblocks and
are assigned /dev/md1. The /dev/hdx2 partitions have 1.2 superblocks and
are assigned /dev/md2. The /dev/hdx3 partitions have 1.2 superblocks and
are assigned /dev/md3. All three have internal bitmaps.
GRUB can initially read the /dev/hda1 partition, because it does
bring up the GRUB menu, which is on /dev/hdx1.
If I boot to multiuser mode, I get a complaint about an address
space collision of a device. It then recognizes the /dev/hda1 partition as
ext2 and starts to load the initrd, but then unceremoniously hangs. After a
while, it aborts the boot sequence and informs the user it has given up
waiting for the root device. It announces it cannot find /dev/md2 and drops
to busybox. Busybox, however, complains about not being able to access tty,
and the system hangs for good.
If I boot to single user mode, raid1 and raid456 load
successfully, but then it complains that none of the arrays are assembled.
Afterwards, it waits for / to be available, and eventually times out with
the same errors as in multiuser mode.
I'm not sure where I should start looking. I suppose if initrd
doesn't have the image of /etc, it might cause md to fail to load the
arrays, but it certainly should contain /etc. What else could be causing
the failure? I did happen to notice that under the old kernel, when md
first tried, the arrays would not load, but then a bit later in the boot
process, they did load.
Has anyone else come across this problem? The upgrade is from
Debian "Lenny" to "Squeeze". Where should I start looking?
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: Broken RAID1 boot arrays
2010-05-10 2:25 Leslie Rhorer
@ 2010-05-10 9:17 ` John Robinson
2010-05-10 9:47 ` Tim Small
2010-05-11 2:37 ` Leslie Rhorer
2010-05-10 17:06 ` Bill Davidsen
1 sibling, 2 replies; 29+ messages in thread
From: John Robinson @ 2010-05-10 9:17 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
On 10/05/2010 03:25, Leslie Rhorer wrote:
> I was running a system under kernel 2.6.26.2-amd64, and it was
> having some problems that seemed possibly due to the kernel (or not), so I
> undertook to upgrade the kernel to 2.6.33-2-amd64. Now, there's a distro
> upgrade "feature" which ordinarily prevents this upgrade, because udev won't
> upgrade with the old kernel in place, and the kernel can't upgrade because
> of unmet dependencies which require a newer udev version, among other
> things. In any case, the work-around is to create the file
> /etc/udev/kernel-upgrade, at which point udev can be upgraded and then the
> kernel must be upgraded before rebooting. Now, I've done this before, and
> it worked, but I've never tried it on a system which boots from an array.
> This time, it broke.
>
> As part of the upgrade, GRUB1 is supposed to chain load to GRUB2
> which then continues to boot the system. This does not seem to be
> happening. What's more, when linux begins to load, it doesn't seem to
> recognize the arrays, so it can't find the root file system. There are two
> drives, /dev/hda and /dev/hdb, each divvied up into three partitions:
> /dev/hdx1 is formatted as ext2 and (supposed to be) mounted as /boot, and
> /dev/hdx2 formatted as ext3 and is (supposed to be) /, and /dev/hdx3 is
> configured as swap. In all three cases, the partitions are a pair of
> members in a RAID1 array. The /dev/hdx1 partitions have 1.0 superblocks and
> are assigned /dev/md1. The /dev/hdx2 partitions have 1.2 superblocks and
> are assigned /dev/md2. The /dev/hdx3 partitions have 1.2 superblocks and
> are assigned /dev/md3. All three have internal bitmaps.
>
> GRUB can initially read the /dev/hda1 partition, because it does
> bring up the GRUB menu, which is on /dev/hdx1.
>
> If I boot to multiuser mode, I get a complaint about an address
> space collision of a device. It then recognizes the /dev/hda1 partition as
> ext2 and starts to load the initrd, but then unceremoniously hangs. After a
> while, it aborts the boot sequence and informs the user it has given up
> waiting for the root device. It announces it cannot find /dev/md2 and drops
> to busybox. Busybox, however, complains about not being able to access tty,
> and the system hangs for good.
>
> If I boot to single user mode, then when raid1 and raid456 load
> successfully, but then it complains that none of the arrays are assembled.
> Afterwards, it waits for / to be available, and eventually times out with
> the same errors as the multiuser mode.
>
> I'm not sure where I should start looking. I suppose if initrd
> doesn't have the image of /etc, it might cause md to fail to load the
> arrays, but it certainly should contain /etc. What else could be causing
> the failure? I did happen to notice that under the old kernel, when md
> first tried, the arrays would not load, but then a bit later in the boot
> process, they did load.
>
> Has anyone else come across this problem? The upgrade is from
> Debian "Lenny" to "Squeeze". Where should I start looking?
Firstly, almost certainly by 2.6.32 your IDE drives will appear as
/dev/sdx rather than hdx so you may need to build the initrd for 2.6.32
with a different /etc/mdadm.conf. When you boot and get the complaint
about no / can you get a command line you can run mdadm -Evvs from?
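(Off the top of my head, and without a Debian box in front of me,
regenerating the config from what the kernel actually sees would be something
like

mdadm --examine --scan > /etc/mdadm/mdadm.conf.new

or Debian's /usr/share/mdadm/mkconf, then merging that into mdadm.conf and
rebuilding the initrd so it picks the new device names up.)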
Secondly, the idea of grub1 chain-loading grub2 sounds iffy to me; use
either grub1 or grub2 but not both.
Hope this gives you some places to look.
Cheers,
John.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: Broken RAID1 boot arrays
2010-05-10 9:17 ` John Robinson
@ 2010-05-10 9:47 ` Tim Small
2010-05-11 2:44 ` Leslie Rhorer
2010-05-11 2:37 ` Leslie Rhorer
1 sibling, 1 reply; 29+ messages in thread
From: Tim Small @ 2010-05-10 9:47 UTC (permalink / raw)
To: John Robinson; +Cc: Leslie Rhorer, linux-raid
On 10/05/10 10:17, John Robinson wrote:
>
> Secondly, the idea of grub1 chain-loading grub2 sounds iffy to me; use
> either grub1 or grub2 but not both.
This is Debian's upgrade-mechanism for Grub. It sets up grub1 to give
you the option of either:
1. Chainloading grub2, so that you can verify that it works correctly, and
then issue the command to remove grub1 and just use grub2 (see the command
below) if you're happy with it, or fall back to grub1 in case there's some
problem with grub2.
2. Booting the kernel directly from grub1 (then you can remove grub2
once the system has booted).
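If memory serves, once you're satisfied that grub2 boots the box, the command
to drop the chainloading step and take grub1 out of the picture is

upgrade-from-grub-legacy

though it's worth checking the grub-pc package documentation before running
it, since I'm going from memory here.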
The OP should probably report this as a bug against the
linux-image-2.6.xx package which they are using in Debian.
Tim.
--
South East Open Source Solutions Limited
Registered in England and Wales with company number 06134732.
Registered Office: 2 Powell Gardens, Redhill, Surrey, RH1 1TQ
VAT number: 900 6633 53 http://seoss.co.uk/ +44-(0)1273-808309
^ permalink raw reply [flat|nested] 29+ messages in thread
* RE: Broken RAID1 boot arrays
2010-05-10 9:47 ` Tim Small
@ 2010-05-11 2:44 ` Leslie Rhorer
2010-05-11 3:04 ` Leslie Rhorer
0 siblings, 1 reply; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-11 2:44 UTC (permalink / raw)
To: 'Tim Small', 'John Robinson'; +Cc: linux-raid
> On 10/05/10 10:17, John Robinson wrote:
> >
> > Secondly, the idea of grub1 chain-loading grub2 sounds iffy to me; use
> > either grub1 or grub2 but not both.
>
> This is Debian's upgrade-mechanism for Grub. It sets up grub1 to give
> you the option of either:
>
> 1. Chainloading grub2, so that you can verify that it works correctly,
> and then issue the command to remove grub1, an just-use grub2 if you're
> happy with it, or in case there's some problem with grub2.
> 2. Booting the kernel directly from grub1 (then you can remove grub2
> once the system has booted).
>
> The OP should probably report this as a bug against the
> linux-image-2.6.xx package which they are using in Debian.
I'm pretty sure it has been reported more than once. Google brings
up a ton of responses when one searches for the error produced when udev
tries to upgrade and can't because of the old kernel. I don't recall what
the error is, exactly, because I first ran into it several months ago. Note
it is only encountered when trying to upgrade from "Lenny" to "Squeeze", not
when loading "Squeeze" directly, but then "Squeeze" is still a testing
distro. It hasn't been released, yet, and probably won't be for several
more months. I would expect the problem to be resolved by the time
"Squeeze" enters stable status.
^ permalink raw reply [flat|nested] 29+ messages in thread
* RE: Broken RAID1 boot arrays
2010-05-11 2:44 ` Leslie Rhorer
@ 2010-05-11 3:04 ` Leslie Rhorer
2010-05-11 7:54 ` Luca Berra
2010-05-11 16:25 ` Bill Davidsen
0 siblings, 2 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-11 3:04 UTC (permalink / raw)
To: linux-raid
On a related note, does anyone know of a good Linux Live CD which
enables both network operations (especially telnetd and ftpd) along with
mdadm? I have a couple of Live CDs on hand, but they don't support mdadm or
remote access. This is a headless system, and I really can't effectively
work with a local console, plus I need to mount the /boot array in order to
properly edit the initrd. I suppose I could mount the drive as a non-array
and then force a sync to the second drive, but I'd rather not.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: Broken RAID1 boot arrays
2010-05-11 3:04 ` Leslie Rhorer
@ 2010-05-11 7:54 ` Luca Berra
2010-05-11 16:27 ` Bill Davidsen
2010-05-11 16:25 ` Bill Davidsen
1 sibling, 1 reply; 29+ messages in thread
From: Luca Berra @ 2010-05-11 7:54 UTC (permalink / raw)
To: linux-raid
On Mon, May 10, 2010 at 10:04:32PM -0500, Leslie Rhorer wrote:
>
> On a related note, does anyone know of a good Linux Live CD which
>enables both network operations (especially telnetd and ftpd) along with
>mdadm? I have a couple of Live CDs on hand, but they don't support mdadm or
http://www.sysresccd.org is nice, it has mdadm and ssh, no telnet and
ftp, but who needs those nowadays :P
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: Broken RAID1 boot arrays
2010-05-11 7:54 ` Luca Berra
@ 2010-05-11 16:27 ` Bill Davidsen
2010-05-12 6:28 ` Luca Berra
0 siblings, 1 reply; 29+ messages in thread
From: Bill Davidsen @ 2010-05-11 16:27 UTC (permalink / raw)
To: linux-raid
Luca Berra wrote:
> On Mon, May 10, 2010 at 10:04:32PM -0500, Leslie Rhorer wrote:
>>
>> On a related note, does anyone know of a good Linux Live CD which
>> enables both network operations (especially telnetd and ftpd) along with
>> mdadm? I have a couple of Live CDs on hand, but they don't support
>> mdadm or
> http://www.sysresccd.org is nice, it has mdadm and ssh, no telnet and
> ftp, but who needs those nowadays :P
>
What tool other than telnet do you use to open an interactive socket to
an arbitrary port?
--
Bill Davidsen <davidsen@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: Broken RAID1 boot arrays
2010-05-11 16:27 ` Bill Davidsen
@ 2010-05-12 6:28 ` Luca Berra
0 siblings, 0 replies; 29+ messages in thread
From: Luca Berra @ 2010-05-12 6:28 UTC (permalink / raw)
To: linux-raid
On Tue, May 11, 2010 at 12:27:28PM -0400, Bill Davidsen wrote:
> Luca Berra wrote:
>> On Mon, May 10, 2010 at 10:04:32PM -0500, Leslie Rhorer wrote:
>>>
>>> On a related note, does anyone know of a good Linux Live CD which
>>> enables both network operations (especially telnetd and ftpd) along with
>>> mdadm? I have a couple of Live CDs on hand, but they don't support mdadm
>>> or
>> http://www.sysresccd.org is nice, it has mdadm and ssh, no telnet and
>> ftp, but who needs those nowadays :P
>>
> What tool other than telnet do you use to open an interactive socket to an
> arbitrary port?
netcat or socat :)
but Leslie and I were talking about daemons in the specific environment
of a live cd, YMMV.
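for an interactive client session something like

nc <host> <port>
socat - TCP:<host>:<port>

does the job, host and port being whatever you want to poke at.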
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: Broken RAID1 boot arrays
2010-05-11 3:04 ` Leslie Rhorer
2010-05-11 7:54 ` Luca Berra
@ 2010-05-11 16:25 ` Bill Davidsen
1 sibling, 0 replies; 29+ messages in thread
From: Bill Davidsen @ 2010-05-11 16:25 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
Leslie Rhorer wrote:
> On a related note, does anyone know of a good Linux Live CD which
> enables both network operations (especially telnetd and ftpd) along with
> mdadm? I have a couple of Live CDs on hand, but they don't support mdadm or
> remote access. This is a headless system, and I really can't effectively
> work with a local console, plus I need to mount the /boot array in order to
> properly edit the initrd. I suppose I could mount the drive as a non-array
> and then force a sync to the second drive, but I'd rather not.
>
>
Fedora disks can be booted in rescue mode.
--
Bill Davidsen <davidsen@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein
^ permalink raw reply [flat|nested] 29+ messages in thread
* RE: Broken RAID1 boot arrays
2010-05-10 9:17 ` John Robinson
2010-05-10 9:47 ` Tim Small
@ 2010-05-11 2:37 ` Leslie Rhorer
1 sibling, 0 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-11 2:37 UTC (permalink / raw)
To: 'John Robinson', linux-raid
> >
> > GRUB can initially read the /dev/hda1 partition, because it does
> > bring up the GRUB menu, which is on /dev/hdx1.
> >
> > If I boot to multiuser mode, I get a complaint about an address
> > space collision of a device. It then recognizes the /dev/hda1 partition
> as
> > ext2 and starts to load the initrd, but then unceremoniously hangs.
> After a
> > while, it aborts the boot sequence and informs the user it has given up
> > waiting for the root device. It announces it cannot find /dev/md2 and
> drops
> > to busybox. Busybox, however, complains about not being able to access
> tty,
> > and the system hangs for good.
> >
> > If I boot to single user mode, then when raid1 and raid456 load
> > successfully, but then it complains that none of the arrays are
> assembled.
> > Afterwards, it waits for / to be available, and eventually times out
> with
> > the same errors as the multiuser mode.
> >
> > I'm not sure where I should start looking. I suppose if initrd
> > doesn't have the image of /etc, it might cause md to fail to load the
> > arrays, but it certainly should contain /etc. What else could be
> causing
> > the failure? I did happen to notice that under the old kernel, when md
> > first tried, the arrays would not load, but then a bit later in the boot
> > process, they did load.
> >
> > Has anyone else come across this problem? The upgrade is from
> > Debian "Lenny" to "Squeeze". Where should I start looking?
>
> Firstly, almost certainly by 2.6.32 your IDE drives will appear as
> /dev/sdx rather than hdx so you may need to build the initrd for 2.6.32
> with a different /etc/mdadm.conf.
Thanks! I did not know that. It's a great place to start.
> When you boot and get the complaint
> about no / can you get a command line you can run mdadm -Evvs from?
No, the system hangs. However, I can interrupt the boot at the GRUB
prompt, and I should be able to specify the correct target for /, which will
get me half the way there.
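(By that I mean editing the kernel line at the GRUB prompt to point at the
array directly, something along the lines of

kernel /vmlinuz-2.6.32-3-amd64 root=/dev/md2 ro rootdelay=10

assuming GRUB1 syntax, that /boot is its own partition, and that the root
array really is /dev/md2; the version string is just from memory.)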
> Secondly, the idea of grub1 chain-loading grub2 sounds iffy to me; use
> either grub1 or grub2 but not both.
It's done that way so that if GRUB2 fails, one can still interrupt
the boot at the GRUB1 prompt and fix things. Once GRUB2 is running
properly, there is a simple command which eliminates GRUB1 from the boot
process and boots directly from GRUB2.
> Hope this gives you some places to look.
It surely does! Thanks yet again. I'm too tired right now to dig
into it, but if indeed hda and hdb are now sda and sdb, I can fix this
pretty easily. Of course, I have backups, and I could just revert to the
backup image, but then I would be right back where I started. I'd much
rather get the broken system working with the new kernel and move forward.
^ permalink raw reply [flat|nested] 29+ messages in thread
* Re: Broken RAID1 boot arrays
2010-05-10 2:25 Leslie Rhorer
2010-05-10 9:17 ` John Robinson
@ 2010-05-10 17:06 ` Bill Davidsen
2010-05-11 2:50 ` Leslie Rhorer
1 sibling, 1 reply; 29+ messages in thread
From: Bill Davidsen @ 2010-05-10 17:06 UTC (permalink / raw)
To: Leslie Rhorer; +Cc: linux-raid
Leslie Rhorer wrote:
> I was running a system under kernel 2.6.26.2-amd64, and it was
> having some problems that seemed possibly due to the kernel (or not), so I
> undertook to upgrade the kernel to 2.6.33-2-amd64. Now, there's a distro
> upgrade "feature" which ordinarily prevents this upgrade, because udev won't
> upgrade with the old kernel in place, and the kernel can't upgrade because
> of unmet dependencies which require a newer udev version, among other
> things. In any case, the work-around is to create the file
> /etc/udev/kernel-upgrade, at which point udev can be upgraded and then the
> kernel must be upgraded before rebooting. Now, I've done this before, and
> it worked, but I've never tried it on a system which boots from an array.
> This time, it broke.
>
> As part of the upgrade, GRUB1 is supposed to chain load to GRUB2
> which then continues to boot the system. This does not seem to be
> happening. What's more, when linux begins to load, it doesn't seem to
> recognize the arrays, so it can't find the root file system. There are two
> drives, /dev/hda and /dev/hdb, each divvied up into three partitions:
> /dev/hdx1 is formatted as ext2 and (supposed to be) mounted as /boot, and
> /dev/hdx2 formatted as ext3 and is (supposed to be) /, and /dev/hdx3 is
> configured as swap. In all three cases, the partitions are a pair of
> members in a RAID1 array. The /dev/hdx1 partitions have 1.0 superblocks and
> are assigned /dev/md1. The /dev/hdx2 partitions have 1.2 superblocks and
> are assigned /dev/md2. The /dev/hdx3 partitions have 1.2 superblocks and
> are assigned /dev/md3. All three have internal bitmaps.
>
> GRUB can initially read the /dev/hda1 partition, because it does
> bring up the GRUB menu, which is on /dev/hdx1.
>
> If I boot to multiuser mode, I get a complaint about an address
> space collision of a device. It then recognizes the /dev/hda1 partition as
> ext2 and starts to load the initrd, but then unceremoniously hangs. After a
> while, it aborts the boot sequence and informs the user it has given up
> waiting for the root device. It announces it cannot find /dev/md2 and drops
> to busybox. Busybox, however, complains about not being able to access tty,
> and the system hangs for good.
>
> If I boot to single user mode, then when raid1 and raid456 load
> successfully, but then it complains that none of the arrays are assembled.
> Afterwards, it waits for / to be available, and eventually times out with
> the same errors as the multiuser mode.
>
> I'm not sure where I should start looking. I suppose if initrd
> doesn't have the image of /etc, it might cause md to fail to load the
> arrays, but it certainly should contain /etc. What else could be causing
> the failure? I did happen to notice that under the old kernel, when md
> first tried, the arrays would not load, but then a bit later in the boot
> process, they did load.
>
>
I have zero experience with Debian, but on several other distributions I
have noted that an upgrade to a very recent kernel from a fairly old
kernel (which you did) will make the LVM not work if you're using that.
If not, forget I said it, it's not related and I never chased it, I just
saw it a few times and muttered mighty oaths and moved on.
--
Bill Davidsen <davidsen@tmr.com>
"We can't solve today's problems by using the same thinking we
used in creating them." - Einstein
^ permalink raw reply [flat|nested] 29+ messages in thread
* RE: Broken RAID1 boot arrays
2010-05-10 17:06 ` Bill Davidsen
@ 2010-05-11 2:50 ` Leslie Rhorer
0 siblings, 0 replies; 29+ messages in thread
From: Leslie Rhorer @ 2010-05-11 2:50 UTC (permalink / raw)
To: 'Bill Davidsen'; +Cc: linux-raid
> I have zero experience with debian, but on several other distributions I
> have noted that an upgrade to a very recent kernel from a fairly old
> kernel (which you did) will make the LVM not work if you're using that.
By "that", you mean LVM? No, straight MD RAID 1.
> If not, forget I said it, it's not related and I never chased it, I just
> saw it a few times and muttered mighty oaths and moved on.
Oh, it's almost surely related to the new version of udev. John
Robinson's response makes it clear udev is assigning a different block
device (sdx vs hdx), which is probably why it's failing. I did try
different targets, and /dev/hda and /dev/hdb definitely do not exist. I'll
look for sda and sdb when I get the chance. 'Should be an easy fix if
that's all it is.
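The first things to run once I can get any sort of shell on it will probably
be

cat /proc/partitions
mdadm --examine --scan -v

which should show whether the superblocks are still being found under the new
sd* names (the -v just makes mdadm list the member devices it found).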
^ permalink raw reply [flat|nested] 29+ messages in thread
end of thread, other threads:[~2010-05-29 19:34 UTC | newest]
Thread overview: 29+ messages
[not found] <1273616411.5140.25.camel@localhost.localdomain>
2010-05-11 23:59 ` Broken RAID1 boot arrays Leslie Rhorer
2010-05-12 0:13 ` Leslie Rhorer
2010-05-13 1:31 ` Leslie Rhorer
2010-05-13 4:15 ` Daniel Reurich
2010-05-13 4:39 ` Daniel Reurich
2010-05-13 23:30 ` Leslie Rhorer
2010-05-14 0:16 ` Daniel Reurich
2010-05-14 2:58 ` Leslie Rhorer
2010-05-14 6:54 ` Daniel Reurich
2010-05-14 12:18 ` Leslie Rhorer
2010-05-14 7:08 ` Daniel Reurich
2010-05-14 12:43 ` Leslie Rhorer
2010-05-15 20:03 ` Leslie Rhorer
2010-05-16 3:10 ` Leslie Rhorer
2010-05-29 18:51 ` Leslie Rhorer
2010-05-29 19:34 ` Leslie Rhorer
2010-05-15 7:23 ` Leslie Rhorer
2010-05-10 2:25 Leslie Rhorer
2010-05-10 9:17 ` John Robinson
2010-05-10 9:47 ` Tim Small
2010-05-11 2:44 ` Leslie Rhorer
2010-05-11 3:04 ` Leslie Rhorer
2010-05-11 7:54 ` Luca Berra
2010-05-11 16:27 ` Bill Davidsen
2010-05-12 6:28 ` Luca Berra
2010-05-11 16:25 ` Bill Davidsen
2010-05-11 2:37 ` Leslie Rhorer
2010-05-10 17:06 ` Bill Davidsen
2010-05-11 2:50 ` Leslie Rhorer