From: John McMonagle <johnm@advocap.org>
To: linux-raid@vger.kernel.org
Subject: Re: Raid 1 fails on every reboot?
Date: Sun, 01 May 2005 11:57:23 -0500
Message-ID: <42750A73.6060200@advocap.org>
In-Reply-To: <4274DE22.3@advocap.org>


You can see the cause in the startup script in the initrd:

mdadm -A /devfs/md/1 -R -u 90b9e6d2:c78b1827:77f469ad:0f8997ed /dev/sda2
mkdir /devfs/vg1
mount_tmpfs /var
if [ -f /etc/lvm/lvm.conf ]; then
cat /etc/lvm/lvm.conf > /var/lvm.conf
fi
mount_tmpfs /etc/lvm
if [ -f /var/lvm.conf ]; then
cat /var/lvm.conf > /etc/lvm/lvm.conf
fi
mount -nt devfs devfs /dev
vgchange -a y vg1
umount /dev
umount -n /var
umount -n /etc/lvm
ROOT=/dev/md0
mdadm -A /devfs/md/0 -R -u 647381d7:fb84da16:f43e8c51:3695f234 /dev/sda1

This is Debian sarge with a 2.6.11 kernel. Note that only the /dev/sda
partitions are ever passed to mdadm -A, so the /dev/sdb members never
get assembled.

I can make it work by adding the missing partitions back to md0 and md1
and then rebuilding the initrd.
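
Something along these lines (just a sketch; the mkinitrd invocation
assumes sarge's initrd-tools and the currently running kernel, so adjust
the image name and version for your setup):

mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md1 --add /dev/sdb2
mkinitrd -o /boot/initrd.img-$(uname -r) $(uname -r)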

While this gets me running, it doesn't seem ideal: there is a good
chance the devices will change at some point without the initrd being
rebuilt.

Is it possible to do the mdadm -A without specifying the member devices, such as:
mdadm -A /devfs/md/0 -R -u 647381d7:fb84da16:f43e8c51:3695f234
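
From the mdadm man page it looks like this could work as long as mdadm
has a list of devices to scan for that UUID, e.g. via --config=partitions
(which acts like a "DEVICE partitions" line) or by pointing -c at an
mdadm.conf containing the DEVICE line. Untested on sarge's mdadm, so
treat it as a sketch:

mdadm -A /devfs/md/0 -R -u 647381d7:fb84da16:f43e8c51:3695f234 -c partitions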

If so, it's something to bug the Debian folks about.

Any other way it should be done?

Thanks

John

John McMonagle wrote:

> Laurent CARON wrote:
>
>> John McMonagle wrote:
>>
>>> I cloned a system with 2 drives and 2 raid 1 partitions.
>>>
>>> I took a drive out of the old system.
>>> Recreated the RAIDs to get new UUIDs.
>>> Partitioned a new drive and added the partitions.
>>> There were a couple of other problems that I'm pretty sure were
>>> caused by a bad SATA controller, but I think I have that under control.
>>>
>>> So here is my problem.
>>> Any time I reboot only the partitions on the first drive are active.
>>> If I switch drives it's always the first one that is active.
>>>
>>> Here is my mdadm.conf
>>> DEVICE  /dev/sd*
>>> ARRAY /dev/md0 level=raid1 num-devices=2 
>>> UUID=647381d7:fb84da16:f43e8c51:3695f234
>>> ARRAY /dev/md1 level=raid1 num-devices=2 
>>> UUID=90b9e6d2:c78b1827:77f469ad:0f8997ed
>>>
>>> This is all after reboot:
>>> cat /proc/mdstat
>>> Personalities : [raid1]
>>> md0 : active raid1 sda1[1]
>>>      586240 blocks [2/1] [_U]
>>>
>>> md1 : active raid1 sda2[1]
>>>      194771968 blocks [2/1] [_U]
>>>
>>> mdadm -E /dev/sda1
>>> /dev/sda1:
>>>          Magic : a92b4efc
>>>        Version : 00.90.00
>>>           UUID : 647381d7:fb84da16:f43e8c51:3695f234
>>>  Creation Time : Sat Apr 30 06:02:47 2005
>>>     Raid Level : raid1
>>>   Raid Devices : 2
>>>  Total Devices : 1
>>> Preferred Minor : 0
>>>
>>>    Update Time : Sat Apr 30 21:47:22 2005
>>>          State : clean
>>> Active Devices : 1
>>> Working Devices : 1
>>> Failed Devices : 0
>>>  Spare Devices : 0
>>>       Checksum : b8e8e847 - correct
>>>         Events : 0.4509
>>>
>>>
>>>      Number   Major   Minor   RaidDevice State
>>> this     1       8        1        1      active sync   /dev/sda1
>>>
>>>   0     0       0        0        0      removed
>>>   1     1       8        1        1      active sync   /dev/sda1
>>>
>>> mdadm -E /dev/sdb1
>>> /dev/sdb1:
>>>          Magic : a92b4efc
>>>        Version : 00.90.00
>>>           UUID : 647381d7:fb84da16:f43e8c51:3695f234
>>>  Creation Time : Sat Apr 30 06:02:47 2005
>>>     Raid Level : raid1
>>>   Raid Devices : 2
>>>  Total Devices : 2
>>> Preferred Minor : 0
>>>
>>>    Update Time : Sat Apr 30 21:26:07 2005
>>>          State : clean
>>> Active Devices : 2
>>> Working Devices : 2
>>> Failed Devices : 0
>>>  Spare Devices : 0
>>>       Checksum : b8e8e1d8 - correct
>>>         Events : 0.4303
>>>
>>>
>>>      Number   Major   Minor   RaidDevice State
>>> this     0       8       17        0      active sync   /dev/sdb1
>>>
>>>   0     0       8       17        0      active sync   /dev/sdb1
>>>   1     1       8        1        1      active sync   /dev/sda1
>>>
>>> After adding /dev/sdb1 to md0:
>>>
>>> cat /proc/mdstat
>>> Personalities : [raid1]
>>> md0 : active raid1 sdb1[0] sda1[1]
>>>      586240 blocks [2/2] [UU]
>>>
>>> md1 : active raid1 sda2[1]
>>>      194771968 blocks [2/1] [_U]
>>>
>>> unused devices: <none>
>>> mdadm -E /dev/sda1
>>> /dev/sda1:
>>>          Magic : a92b4efc
>>>        Version : 00.90.00
>>>           UUID : 647381d7:fb84da16:f43e8c51:3695f234
>>>  Creation Time : Sat Apr 30 06:02:47 2005
>>>     Raid Level : raid1
>>>   Raid Devices : 2
>>>  Total Devices : 2
>>> Preferred Minor : 0
>>>
>>>    Update Time : Sat Apr 30 21:51:11 2005
>>>          State : clean
>>> Active Devices : 2
>>> Working Devices : 2
>>> Failed Devices : 0
>>>  Spare Devices : 0
>>>       Checksum : b8e8e99a - correct
>>>         Events : 0.4551
>>>
>>>
>>>      Number   Major   Minor   RaidDevice State
>>> this     1       8        1        1      active sync   /dev/sda1
>>>
>>>   0     0       8       17        0      active sync   /dev/sdb1
>>>   1     1       8        1        1      active sync   /dev/sda1
>>> mdadm -E /dev/sdb1
>>> /dev/sdb1:
>>>          Magic : a92b4efc
>>>        Version : 00.90.00
>>>           UUID : 647381d7:fb84da16:f43e8c51:3695f234
>>>  Creation Time : Sat Apr 30 06:02:47 2005
>>>     Raid Level : raid1
>>>   Raid Devices : 2
>>>  Total Devices : 2
>>> Preferred Minor : 0
>>>
>>>    Update Time : Sat Apr 30 21:51:43 2005
>>>          State : clean
>>> Active Devices : 2
>>> Working Devices : 2
>>> Failed Devices : 0
>>>  Spare Devices : 0
>>>       Checksum : b8e8e9d0 - correct
>>>         Events : 0.4555
>>>
>>>
>>>      Number   Major   Minor   RaidDevice State
>>> this     0       8       17        0      active sync   /dev/sdb1
>>>
>>>   0     0       8       17        0      active sync   /dev/sdb1
>>>   1     1       8        1        1      active sync   /dev/sda1
>>>
>>>
>>> Any idea what is wrong?
>>>
>>> John
>>>
>>>
>>
>> What does fdisk -l say?
>>
> neebackup:~# fdisk -l
>
> Disk /dev/sda: 200.0 GB, 200049647616 bytes
> 255 heads, 63 sectors/track, 24321 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *           1          73      586341   fd  Linux raid autodetect
> /dev/sda2              74       24321   194772060   fd  Linux raid autodetect
>
> Disk /dev/sdb: 200.0 GB, 200049647616 bytes
> 255 heads, 63 sectors/track, 24321 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
>   Device Boot      Start         End      Blocks   Id  System
> /dev/sdb1   *           1          73      586341   fd  Linux raid autodetect
> /dev/sdb2              74       24321   194772060   fd  Linux raid autodetect
>
> Also a little more info.
>
> It's Debian sarge.
>
> It's a SATA150 TX4 SATA controller that had been in use for months.
> neebackup:~# mdadm -D /dev/md0
> /dev/md0:
>        Version : 00.90.01
>  Creation Time : Sat Apr 30 06:02:47 2005
>     Raid Level : raid1
>     Array Size : 586240 (572.50 MiB 600.31 MB)
>    Device Size : 586240 (572.50 MiB 600.31 MB)
>   Raid Devices : 2
>  Total Devices : 1
> Preferred Minor : 0
>    Persistence : Superblock is persistent
>
>    Update Time : Sun May  1 08:40:40 2005
>          State : clean, degraded
> Active Devices : 1
> Working Devices : 1
> Failed Devices : 0
>  Spare Devices : 0
>
>           UUID : 647381d7:fb84da16:f43e8c51:3695f234
>         Events : 0.4875
>
>    Number   Major   Minor   RaidDevice State
>       0       8        1        0      active sync   /dev/sda1
>       1       0        0        -      removed
>
> There are no disk error messages.
>
> I booted with a Fedora 3 rescue CD and md0 started properly, so I
> assume it's something in the startup.
>
> I rebuilt the initrd a couple of times. Is there anything in it that
> could cause this problem?
>
> John
>
>



