* Raid 1 fails on every reboot?
From: John McMonagle @ 2005-05-01 2:59 UTC
To: linux-raid
I cloned a system with 2 drives and 2 RAID 1 partitions.
I took a drive out of the old system.
Recreated the RAIDs to get new UUIDs.
Partitioned a new drive and added the partitions.
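Roughly, the commands were along these lines (reconstructed from memory, so
the exact invocations and device names are approximate):

# re-create the arrays on the surviving drive so they get fresh UUIDs,
# leaving the second slot empty for the new disk:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 missing
# copy the partition table to the new drive, then attach its partitions:
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2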
There were a couple of other problems that I'm pretty sure were caused by a
bad SATA controller, but I think I have that under control.
So here is my problem.
Any time I reboot, only the partitions on the first drive are active.
If I switch drives, it's always the first one that is active.
Here is my mdadm.conf
DEVICE /dev/sd*
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=647381d7:fb84da16:f43e8c51:3695f234
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=90b9e6d2:c78b1827:77f469ad:0f8997ed
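(For reference, ARRAY lines like these can be regenerated after a re-create
with something along these lines; the config path is the Debian default and
is from memory:)

mdadm --detail --scan                              # prints ARRAY lines with the current UUIDs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # then trim the stale entries by hand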
This is all after reboot:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[1]
586240 blocks [2/1] [_U]
md1 : active raid1 sda2[1]
194771968 blocks [2/1] [_U]
mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 00.90.00
UUID : 647381d7:fb84da16:f43e8c51:3695f234
Creation Time : Sat Apr 30 06:02:47 2005
Raid Level : raid1
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Update Time : Sat Apr 30 21:47:22 2005
State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Checksum : b8e8e847 - correct
Events : 0.4509
Number Major Minor RaidDevice State
this 1 8 1 1 active sync /dev/sda1
0 0 0 0 0 removed
1 1 8 1 1 active sync /dev/sda1
mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : 647381d7:fb84da16:f43e8c51:3695f234
Creation Time : Sat Apr 30 06:02:47 2005
Raid Level : raid1
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Sat Apr 30 21:26:07 2005
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : b8e8e1d8 - correct
Events : 0.4303
Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 1 1 active sync /dev/sda1
After adding /dev/sdb1 to md0:
cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[0] sda1[1]
586240 blocks [2/2] [UU]
md1 : active raid1 sda2[1]
194771968 blocks [2/1] [_U]
unused devices: <none>
mdadm -E /dev/sda1
/dev/sda1:
Magic : a92b4efc
Version : 00.90.00
UUID : 647381d7:fb84da16:f43e8c51:3695f234
Creation Time : Sat Apr 30 06:02:47 2005
Raid Level : raid1
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Sat Apr 30 21:51:11 2005
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : b8e8e99a - correct
Events : 0.4551
Number Major Minor RaidDevice State
this 1 8 1 1 active sync /dev/sda1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 1 1 active sync /dev/sda1
mdadm -E /dev/sdb1
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : 647381d7:fb84da16:f43e8c51:3695f234
Creation Time : Sat Apr 30 06:02:47 2005
Raid Level : raid1
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Update Time : Sat Apr 30 21:51:43 2005
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Checksum : b8e8e9d0 - correct
Events : 0.4555
Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 1 1 active sync /dev/sda1
Any idea what is wrong?
John
* well, I know it's too late for most of us in the US -
From: berk walker @ 2005-05-01 3:25 UTC
Cc: linux-raid
My Seagate 120 SATA system has augured in, and NO, drattit, it wasn't
properly redundant. ANYWAY...
Anyone know of some place (open on Sunday, in the Philly/Princeton area)
where I can get a good deal on good drives? SCSI would be great, ~120GB;
PATA is 2nd choice (I've had a lot of problems with SATA connectors
[harness stiffness, I think], so I don't want to play that game again).
This time I'll RAID them 1st. BTW the other boxes are running fine.
Thanks
b-
* Re: Raid 1 fails on every reboot?
From: Laurent CARON @ 2005-05-01 10:27 UTC
To: John McMonagle; +Cc: linux-raid
John McMonagle wrote:
> I cloned a system with 2 drives and 2 RAID 1 partitions.
> [...]
> Any time I reboot, only the partitions on the first drive are active.
> If I switch drives, it's always the first one that is active.
> [...]
> Any idea what is wrong?
What does fdisk -l say?
--
Don't smoke the next cigarette. Repeat.
* Re: Raid 1 fails on every reboot?
From: John McMonagle @ 2005-05-01 13:48 UTC
To: Laurent CARON; +Cc: linux-raid
Laurent CARON wrote:
> John McMonagle wrote:
>
>> Any time I reboot, only the partitions on the first drive are active.
>> If I switch drives, it's always the first one that is active.
>> [...]
>
> What does fdisk -l say?
>
neebackup:~# fdisk -l
Disk /dev/sda: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          73      586341   fd  Linux raid autodetect
/dev/sda2              74       24321   194772060   fd  Linux raid autodetect

Disk /dev/sdb: 200.0 GB, 200049647616 bytes
255 heads, 63 sectors/track, 24321 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          73      586341   fd  Linux raid autodetect
/dev/sdb2              74       24321   194772060   fd  Linux raid autodetect
Also a little more info.
It's Debian sarge.
It's a SATA150 TX4 SATA controller that had been in use for months.
neebackup:~# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.01
Creation Time : Sat Apr 30 06:02:47 2005
Raid Level : raid1
Array Size : 586240 (572.50 MiB 600.31 MB)
Device Size : 586240 (572.50 MiB 600.31 MB)
Raid Devices : 2
Total Devices : 1
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sun May 1 08:40:40 2005
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
UUID : 647381d7:fb84da16:f43e8c51:3695f234
Events : 0.4875
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 0 0 - removed
There are no disk error messages.
I booted with a Fedora Core 3 rescue CD and md0 started properly, so I
assume it's something in the startup.
I rebuilt the initrd a couple of times. Is there anything in it that could
cause this problem?
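One thing I may try is peeking inside the initrd to see exactly what it runs
at boot; something like this should do it, assuming the usual loop-mountable
sarge image (the file name and the script path inside the image are guesses):

mkdir /tmp/initrd
mount -o loop -r /boot/initrd.img-2.6.11 /tmp/initrd   # add -t cramfs/ext2 if mount can't guess the type
less /tmp/initrd/linuxrc                               # or whichever script the image runs at boot
umount /tmp/initrd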
John
* Re: Raid 1 fails on every reboot?
From: John McMonagle @ 2005-05-01 16:57 UTC
To: linux-raid
I found the cause in the startup script in the initrd:
mdadm -A /devfs/md/1 -R -u 90b9e6d2:c78b1827:77f469ad:0f8997ed /dev/sda2
mkdir /devfs/vg1
mount_tmpfs /var
if [ -f /etc/lvm/lvm.conf ]; then
    cat /etc/lvm/lvm.conf > /var/lvm.conf
fi
mount_tmpfs /etc/lvm
if [ -f /var/lvm.conf ]; then
    cat /var/lvm.conf > /etc/lvm/lvm.conf
fi
mount -nt devfs devfs /dev
vgchange -a y vg1
umount /dev
umount -n /var
umount -n /etc/lvm
ROOT=/dev/md0
mdadm -A /devfs/md/0 -R -u 647381d7:fb84da16:f43e8c51:3695f234 /dev/sda1
This is Debian sarge with a 2.6.11 kernel.
I can make it work by adding the partitions to md0 and md1 and then
rebuilding the initrd.
While this gets me running, it doesn't seem ideal.
There is a good chance of devices changing without the initrd being rebuilt.
Is it possible to do the mdadm -A without specifying the devices, such as:
mdadm -A /devfs/md/0 -R -u 647381d7:fb84da16:f43e8c51:3695f234
If so, it's something to bug the Debian folks about.
Any other way it should be done?
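Something along these lines ought to work in principle, though I haven't
tried it in this initrd, so treat it as a sketch:

# Give mdadm a set of candidate partitions and let it pick out the ones whose
# superblock UUID matches, instead of hard-coding /dev/sda1 (whether the shell
# in the initrd expands this glob against devfs names is another question):
mdadm -A /devfs/md/0 -R -u 647381d7:fb84da16:f43e8c51:3695f234 /dev/sd*[0-9]
# Or, outside the initrd, assemble everything listed in mdadm.conf's
# DEVICE/ARRAY lines:
mdadm -A --scan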
Thanks
John
John McMonagle wrote:
> Laurent CARON wrote:
>
>> What does fdisk -l say?
>
> [...]
>
> I booted with a Fedora Core 3 rescue CD and md0 started properly, so I
> assume it's something in the startup.
> I rebuilt the initrd a couple of times. Is there anything in it that could
> cause this problem?
* Re: Raid 1 fails on every reboot?
From: Eric Wood @ 2005-05-02 1:59 UTC
To: linux-raid
----- Original Message -----
From: "John McMonagle"
> So here is my problem.
> Any time I reboot only the partitions on the first drive are active.
> If I switch drives it's always the first one that is active.
I'm not sure about SCSI drives, but for RAID-1 IDE drives I find that I have
to add a -h argument to the halt script.
The last lines of /etc/init.d/halt are:
HALTARGS="-i -d -h"
[ -f /poweroff -o ! -f /halt ] && HALTARGS="$HALTARGS -p"
exec $command $HALTARGS
What I have found is that by adding -h (man halt: "Put all hard drives on
the system in standby mode just before halt or poweroff"), this somehow
flushes the drives and I don't have rebuild problems upon reboot. However,
this works for FC 1 systems and I do not know if this is still necessary
for other distros.
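In other words, the change is just appending -h to the default argument
string; if I remember right (the stock value is from memory), the diff is:

# stock value, approximately:
#   HALTARGS="-i -d"
# with the extra flag so drives are spun down / flushed before power-off:
HALTARGS="-i -d -h"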
-Eric Wood