linux-raid.vger.kernel.org archive mirror
* Re: One partition degraded after every reboot
@ 2004-04-01 11:45 Philipp Gortan
  2004-04-01 17:52 ` Thomas Andrews
  2004-04-09 13:09 ` Thomas Andrews
  0 siblings, 2 replies; 7+ messages in thread
From: Philipp Gortan @ 2004-04-01 11:45 UTC (permalink / raw)
  To: linux-raid

Thomas Andrews wrote:

> I've set up RAID-1 on a pair of disks recently. When I reboot I get this
> in syslog, even though the partition was perfect & not degraded before:
> 
...
> kernel: md1: former device ide/host2/bus0/target0/lun0/part5 is unavailable, removing from array!
> 
> More specifically, of the 4 RAID-1 partitions, md1 (my root partition)
> is in degraded mode. Here's a snippet of /proc/mdstat:
> 
> md1 : active raid1 ide/host2/bus1/target0/lun0/part5[0]
>       38957440 blocks [2/1] [U_]
> 
> All the RAID partitions are of type FD on both disks, and the disks are
> brand new
...
> This is a stock Debian/testing pc running a stock 2.4.24-1-686 kernel.

Hi Thomas,
I had the same problem today, with Debian/testing and both 2.4 and 2.6 
kernels: my root filesystem, a RAID-1 device, would come up degraded at 
every reboot, even though it was clean on shutdown.

I solved the problem by creating a new initrd and adjusting the lilo 
configuration. For me, after re-adding the always-failing partition to 
the array

# mdadm -a /dev/md0 /dev/hda1

I updated my lilo.conf to:

...
boot=/dev/md0
raid-extra-boot=/dev/hda,/dev/hdc
root=/dev/md0
...


and created a new initrd
# mkinitrd -k -r /dev/md0 -o /boot/initrd.img-2.6.3-1-k7

and ran lilo again
# lilo

Since that reboot, the RAID has come up complete.
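A quick way to confirm the fix held after a reboot is to look for the underscore that marks a missing member in /proc/mdstat. A minimal sketch, assuming the mdstat format quoted above (the file path is a parameter so it can be tried on a saved sample first):

```shell
# Report arrays whose status brackets show a missing member ("_"),
# as in "[2/1] [U_]" for a degraded two-disk RAID-1.
check_degraded() {
    grep -E '\[[U_]*_[U_]*\]' "$1"
}

# Demo on a sample in /proc/mdstat format (md1 degraded, md2 healthy):
cat > /tmp/mdstat.sample <<'EOF'
md1 : active raid1 ide/host2/bus1/target0/lun0/part5[0]
      38957440 blocks [2/1] [U_]
md2 : active raid1 hda6[0] hdc6[1]
      1951744 blocks [2/2] [UU]
EOF
check_degraded /tmp/mdstat.sample   # prints md1's "[2/1] [U_]" line only
```

Run it against /proc/mdstat itself after rebooting; no output means every array has all its members.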

hope that helps,

cu, philipp

-- 
When in doubt, use brute force.

                                -- Ken Thompson

^ permalink raw reply	[flat|nested] 7+ messages in thread
* One partition degraded after every reboot
@ 2004-03-26 15:41 Thomas Andrews
  2004-04-09 13:39 ` Thomas Andrews
  0 siblings, 1 reply; 7+ messages in thread
From: Thomas Andrews @ 2004-03-26 15:41 UTC (permalink / raw)
  To: linux-raid

Hi All,

I've set up RAID-1 on a pair of disks recently. When I reboot I get this
in syslog, even though the partition was perfect & not degraded before:

kernel:  /dev/ide/host2/bus0/target0/lun0: p1 < p5 p6 p7 p8 > p2 p3
kernel:  /dev/ide/host2/bus1/target0/lun0: p1 < p5 p6 p7 p8 > p2 p3
kernel:  [events: 0000001c]
kernel: md: bind<ide/host2/bus1/target0/lun0/part5,1>
kernel: md: ide/host2/bus1/target0/lun0/part5's event counter: 0000001c
kernel: md1: former device ide/host2/bus0/target0/lun0/part5 is unavailable, removing from array!

More specifically, of the 4 RAID-1 partitions, md1 (my root partition)
is in degraded mode. Here's a snippet of /proc/mdstat:

md1 : active raid1 ide/host2/bus1/target0/lun0/part5[0]
      38957440 blocks [2/1] [U_]

All the RAID partitions are of type FD on both disks, and the disks are
brand new. I swapped out the 'offending' disk with another brand new
disk, but it made no difference.

This is a stock Debian/testing pc running a stock 2.4.24-1-686 kernel.
I use mdadm. Initially I thought that grub was messing things up, but
I'm booting from a floppy now, and I haven't bothered to install grub on
the newer disk.

To recommission the partition on the previous disk, I used:
mdadm --zero-superblock /dev/hde5
mdadm /dev/md1 -a /dev/hde5
This set things right, and there were no problems until the next reboot.
The process is totally repeatable.
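The two mdadm commands above can be wrapped in a small dry-run guard so the device names can be double-checked before anything is written. /dev/md1 and /dev/hde5 are taken from this message; DRY_RUN=1 (the default here) only prints the commands:

```shell
# Sketch: print (DRY_RUN=1) or execute (DRY_RUN=0, needs root and the
# real devices) each recovery step.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run mdadm --zero-superblock /dev/hde5   # wipe the stale RAID superblock
run mdadm /dev/md1 -a /dev/hde5         # hot-add the partition back into md1
```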

What am I missing here?

Why does the log say "device ide/host2/bus0/target0/lun0/part5 is
unavailable"?



end of thread, other threads:[~2004-04-11 19:55 UTC | newest]

Thread overview: 7+ messages
2004-04-01 11:45 One partition degraded after every reboot Philipp Gortan
2004-04-01 17:52 ` Thomas Andrews
2004-04-02  9:20   ` Philipp Gortan
2004-04-09 13:09 ` Thomas Andrews
  -- strict thread matches above, loose matches on Subject: below --
2004-03-26 15:41 Thomas Andrews
2004-04-09 13:39 ` Thomas Andrews
2004-04-11 19:55   ` Thomas Andrews
