* One partition degraded after every reboot
From: Thomas Andrews @ 2004-03-26 15:41 UTC (permalink / raw)
To: linux-raid
Hi All,
I've set up RAID-1 on a pair of disks recently. When I reboot I get this
in syslog, even though the partition was perfect & not degraded before:
kernel: /dev/ide/host2/bus0/target0/lun0: p1 < p5 p6 p7 p8 > p2 p3
kernel: /dev/ide/host2/bus1/target0/lun0: p1 < p5 p6 p7 p8 > p2 p3
kernel: [events: 0000001c]
kernel: md: bind<ide/host2/bus1/target0/lun0/part5,1>
kernel: md: ide/host2/bus1/target0/lun0/part5's event counter: 0000001c
kernel: md1: former device ide/host2/bus0/target0/lun0/part5 is unavailable, removing from array!
More specifically, of the 4 RAID-1 partitions, md1 (my root partition)
is in degraded mode. Here's a snippet of /proc/mdstat:
md1 : active raid1 ide/host2/bus1/target0/lun0/part5[0]
38957440 blocks [2/1] [U_]
All the RAID partitions are of type FD on both disks, and the disks are
brand new. I swapped out the 'offending' disk with another brand new
disk, but it made no difference.
This is a stock Debian/testing PC running a stock 2.4.24-1-686 kernel,
and I manage the arrays with mdadm. Initially I thought grub was messing
things up, but I'm booting from a floppy now and haven't bothered to
install grub on the newer disk.
To recommission the partition on the previous disk, I used:
mdadm --zero-superblock /dev/hde5
mdadm /dev/md1 -a /dev/hde5
This set things right, and there were no problems until the next reboot.
The process is totally repeatable.
What am I missing here? Why does the log say "device
ide/host2/bus0/target0/lun0/part5 is unavailable"?
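For anyone who wants to dig into this with me: comparing the md superblocks
on the two halves directly might show something. A sketch, assuming /dev/hde5
is the member that keeps getting dropped and /dev/hdg5 is the surviving one
(adjust the names to your setup):
# mdadm --examine /dev/hde5
# mdadm --examine /dev/hdg5
# cat /proc/mdstat
If the event counters in the two superblocks differ, or one member simply
isn't offered to md at assembly time, the kernel drops it and starts the
array degraded, which would match the message above.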
* Re: One partition degraded after every reboot
From: Philipp Gortan @ 2004-04-01 11:45 UTC (permalink / raw)
To: linux-raid
Thomas Andrews wrote:
> I've set up RAID-1 on a pair of disks recently. When I reboot I get this
> in syslog, even though the partition was perfect & not degraded before:
>
...
> kernel: md1: former device ide/host2/bus0/target0/lun0/part5 is unavailable, removing from array!
>
> More specifically, of the 4 RAID-1 partitions, md1 (my root partition)
> is in degraded mode. Here's a snippet of /proc/mdstat:
>
> md1 : active raid1 ide/host2/bus1/target0/lun0/part5[0]
> 38957440 blocks [2/1] [U_]
>
> All the RAID partitions are of type FD on both disks, and the disks are
> brand new
...
> This is a stock Debian/testing pc running a stock 2.4.24-1-686 kernel.
Hi Thomas,
I had the same problem today, with Debian/testing and both 2.4 and 2.6
kernels.
My root filesystem, a RAID-1 device, would come up degraded at every
reboot, even though it was clean at shutdown.
I solved the problem by creating a new initrd and adjusting the lilo
configuration. In my case, after re-adding the always-failing partition
to the array,
# mdadm -a /dev/md0 /dev/hda1
I updated my lilo.conf to:
...
boot=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdc
root=/dev/md0
...
and created a new initrd
# mkinitrd -k -r /dev/md0 -o /boot/initrd.img-2.6.3-1-k7
and ran lilo again
# lilo
Since that reboot, the array has come up complete.
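If you want to see what lilo would do with a configuration like that before
writing anything, a dry run is possible (this is plain lilo, nothing
raid-specific):
# lilo -t -v
The verbose test run shows which devices lilo would write boot records to
without actually modifying them, which is a handy sanity check when
raid-extra-boot is in play.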
hope that helps,
cu, philipp
--
When in doubt, use brute force.
-- Ken Thompson
* Re: One partition degraded after every reboot
From: Thomas Andrews @ 2004-04-01 17:52 UTC (permalink / raw)
To: Philipp Gortan, linux-raid
On Thu, Apr 01, 2004 at 01:45:39PM +0200, Philipp Gortan wrote:
> and created a new initrd
> # mkinitrd -k -r /dev/md0 -o /boot/initrd.img-2.6.3-1-k7
Yes, I found I had to do that from the start, because the initrd image
from before the RAID conversion was set up for a different root device,
so in my case that isn't the solution.
> and ran lilo again
I'm using grub. I wonder if that's a problem?
-Thomas
* Re: One partition degraded after every reboot
From: Philipp Gortan @ 2004-04-02 9:20 UTC (permalink / raw)
To: linux-raid; +Cc: Thomas Andrews
Thomas Andrews wrote:
>> # mkinitrd -k -r /dev/md0 -o /boot/initrd.img-2.6.3-1-k7
>
> Yes, I found I had to do that from the start, because the initrd image
> from before the RAID conversion was set up for a different root device,
> so in my case that isn't the solution.
Hi Thomas,
Well, I'd definitely give it another try now...
> I'm using grub. I wonder if that's a problem?
I'm no grub expert, but I didn't get it to work under grub either, so
I switched to lilo. Simply say
# apt-get install lilo
# liloconfig
Accept the defaults, then change the boot and root values in
/etc/lilo.conf to point at your root md device.
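For reference, a minimal lilo.conf along those lines might look like the
sketch below; the kernel and initrd paths are only placeholders, use
whatever your system actually boots:
boot=/dev/md0
root=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdc
image=/vmlinuz
        initrd=/initrd.img
        label=Linux
        read-only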
Then run lilo and you should be set...
cu,
philipp
* Re: One partition degraded after every reboot
From: Thomas Andrews @ 2004-04-09 13:09 UTC (permalink / raw)
To: linux-raid
On Thu, Apr 01, 2004 at 01:45:39PM +0200, Philipp Gortan wrote:
> > I've set up RAID-1 on a pair of disks recently. When I reboot I get this
> > in syslog, even though the partition was perfect & not degraded before:
> >
> > kernel: md1: former device ide/host2/bus0/target0/lun0/part5 is unavailable, removing from array!
> >
> > More specifically, of the 4 RAID-1 partitions, md1 (my root partition)
> > is in degraded mode.
[snip]
> I solved the problem by creating a new initrd and fiddling with the lilo
> configuration:
> for me,
> after re-adding the always-failing drive to the raid
> # mdadm -a /dev/md0 /dev/hda1
> i updated my lilo.conf to:
>
> ...
> boot=/dev/md0
> raid-extra-boot=/dev/hda,/dev/hdc
> root=/dev/md0
> ...
>
> and created a new initrd
> # mkinitrd -k -r /dev/md0 -o /boot/initrd.img-2.6.3-1-k7
>
> and ran lilo again
> # lilo
[snip]
Unfortunately, installing lilo in place of grub hasn't solved the
problem, so I'm still without a solution.
-Thomas
* Re: One partition degraded after every reboot
From: Thomas Andrews @ 2004-04-09 13:39 UTC (permalink / raw)
To: Thomas Andrews; +Cc: linux-raid
What can cause this to occur after a clean shutdown & reboot:
"md1: former device ide/host2/bus0/target0/lun0/part5 is unavailable, removing from array!"
I have looked in md.c and can see where the message is printed, but I
don't understand the code well enough to tell what triggers it.
-Thomas
From: Thomas Andrews @ 2004-04-11 19:55 UTC (permalink / raw)
To: linux-raid
On Fri, Apr 09, 2004 at 03:39:49PM +0200, Thomas Andrews wrote:
> What can cause this to occur after a clean shutdown & reboot:
> "md1: former device ide/host2/bus0/target0/lun0/part5 is unavailable, removing from array!"
Well, I finally solved it. The error above is somewhat misleading! I
originally created the initrd while the RAID-1 array was degraded. It had
to be done that way because I wanted to reboot using md1 as the root
device, but at the time /dev/hda5 was still the root.
mkinitrd created a script called "script" which contained the line:
mdadm -A /devfs/md/4 -R -u <magic_number> /dev/hdc5
But what it needs is:
mdadm -A /devfs/md/4 -R -u <magic_number> /dev/hdc5 /dev/hda5
The reason other people get it to work by "fiddling around" is that they
invariably re-run mkinitrd afterwards, and chances are that most of the
time they do it with all the devices in the array (i.e. not in degraded
mode). Creating the initrd with the RAID-1 array running in its final
desired state results in mkinitrd putting the correct command into
"script", so there you are.
It's also worth noting that the "broken" initrd only had /dev/hdc5,
whereas the working one has /dev/hdc5 *and* /dev/hda5.
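If you want to check your own initrd, the images Debian's initrd-tools
produced at the time are mountable filesystem images (cramfs, if I
remember right), so you can loop-mount one and read the script directly.
A sketch, with the image name only as an example:
# mkdir -p /mnt/initrd
# mount -o loop,ro /boot/initrd.img-2.4.24-1-686 /mnt/initrd
# cat /mnt/initrd/script
# umount /mnt/initrd
The mdadm -A line in there should name every member of the root array;
if it only names one partition, you get exactly the degraded assembly
described above.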
In summary, if you get the error above, get the complete raid array up
and running with all devices inserted, and then re-create your initrd:
mkinitrd -r /dev/md? -o /boot/initrd.img-?.?.??-raid
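Putting the recovery together, the whole sequence looks something like the
sketch below; the device names and image name are only examples from my
setup, so adjust them:
# mdadm /dev/md1 -a /dev/hde5
(let the resync finish; watch /proc/mdstat until md1 shows [2/2] [UU])
# mkinitrd -r /dev/md1 -o /boot/initrd.img-2.4.24-1-686-raid
(point your boot loader at the new image and, if you use lilo, re-run lilo)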
-Thomas