* Problem with auto-assembling raid1 on system start
From: Tobias Gunkel @ 2009-05-08 14:11 UTC (permalink / raw)
To: linux-raid
Hello everyone!
After rebooting one of our Debian servers yesterday (under normal
conditions), mdadm was no longer able to assemble /dev/md0 automatically.
System: Debian Lenny, mdadm v2.5.6, kernel 2.6.26-preemptive-cpuset
(from Debian testing sources)
This is what I get during boot:
[...]
Begin: Mounting root file system... ...
Begin: Running /scripts/local-top ...
Begin: Loading MD modules ...
md: raid1 personality registered for level 1
Success: loaded module raid1.
Done.
Begin: Assembling all MD arrays ...
[...]
md: md0 stopped.
mdadm: no devices found for /dev/md0
Failure: failed to assemble all arrays.
[...]
Then the system drops to the BusyBox shell from the initramfs, because
the root fs (which is located on /dev/md0) could not be mounted.
From the initramfs shell, however, it is possible to cleanly assemble
and mount the md0 array:
(initramfs) mdadm -A /dev/md0 /dev/sda2 /dev/sdb2
md: md0 stopped.
md: bind<sdb2>
md: bind<sda2>
raid1: raid set md0 active with 2 out of 2 mirrors
mdadm: /dev/md0 has been started with 2 drives.
(initramfs) mount /dev/md0 root
kjournald starting. Commit interval 5 seconds
EXT3 FS on md0, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
After leaving the initramfs shell with 'exit', the system continues to
boot normally.
Strangely, /dev/md1 (swap), which is the first array in assembly order,
is assembled and started correctly.
I also played around with ROOTDELAY=60, but that did not change anything.
I'm grateful for any help.
Best regards, Tobias
PS: Some possibly helpful output (after starting the system as
described above):
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda2[0] sdb2[1]
      487331648 blocks [2/2] [UU]

md1 : active raid1 sda1[0] sdb1[1]
      1052160 blocks [2/2] [UU]

unused devices: <none>
$ mdadm --detail --scan
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c3838888:50dbed72:15a9bffb:d0e83d23
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=0d0a0c79:70adae03:f802952b:2b58c14d
$ grep -v ^# /etc/mdadm/mdadm.conf
DEVICE /dev/sd*[0-9] /dev/sd*[0-9]
CREATE owner=root group=disk mode=0660 auto=yes
HOMEHOST <system>
MAILADDR root
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c3838888:50dbed72:15a9bffb:d0e83d23
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=0d0a0c79:70adae03:f802952b:2b58c14d
$ mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Dec 11 14:18:44 2008
     Raid Level : raid1
     Array Size : 487331648 (464.76 GiB 499.03 GB)
    Device Size : 487331648 (464.76 GiB 499.03 GB)
   Raid Devices : 2
  Total Devices : 2
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri May  8 15:45:32 2009
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           UUID : 0d0a0c79:70adae03:f802952b:2b58c14d
         Events : 0.900

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
* Re: Problem with auto-assembling raid1 on system start
From: CoolCold @ 2009-05-08 18:36 UTC (permalink / raw)
To: Tobias Gunkel; +Cc: linux-raid
Does the mdadm.conf inside the initrd image contain valid UUIDs/array
names? (You can gunzip && extract the cpio archive to check this.)
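For example, something like this (a sketch; the initrd filename and the
path of mdadm.conf inside the image are assumptions and may differ on
your system):

# unpack the initrd into a scratch directory
mkdir /tmp/initrd && cd /tmp/initrd
gunzip -c /boot/initrd.img-2.6.26 | cpio -id
# locate the copy of mdadm.conf inside the image and compare it
# against the real config on the root fs
find . -name mdadm.conf
diff ./etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf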
On Fri, May 8, 2009 at 6:11 PM, Tobias Gunkel <tobias.gunkel@qumido.de> wrote:
> Hello everyone!
>
> After rebooting one of our Debian servers yesterday (under normal
> conditions), mdadm was no longer able to assemble /dev/md0 automatically.
> [...]
> md: md0 stopped.
> mdadm: no devices found for /dev/md0
> Failure: failed to assemble all arrays.
> [...]
--
Best regards,
[COOLCOLD-RIPN]
* Re: Problem with auto-assembling raid1 on system start
From: Tobias Gunkel @ 2009-05-08 20:09 UTC (permalink / raw)
To: linux-raid; +Cc: CoolCold
Great! That was exactly the problem. The UUID in the initrd's mdadm.conf
differed from the one reported by mdadm --detail --scan.
After rebuilding the initrd image with the fixed mdadm.conf, everything
is working fine again.
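For anyone hitting the same problem, the fix boils down to roughly this
on Debian (a sketch; it assumes the stock initramfs-tools/mdadm hooks,
which copy /etc/mdadm/mdadm.conf into the image):

# print ARRAY lines for the currently running arrays; merge them by hand
# into /etc/mdadm/mdadm.conf, keeping the existing DEVICE/MAILADDR lines
mdadm --detail --scan
# rebuild the initramfs for the running kernel so it picks up the fix
update-initramfs -u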
Thank you very much!
Best regards, Tobias
CoolCold wrote:
> Does the mdadm.conf inside the initrd image contain valid UUIDs/array
> names? (You can gunzip && extract the cpio archive to check this.)
>
> [...]
* Re: Problem with auto-assembling raid1 on system start
From: CoolCold @ 2009-05-09 23:35 UTC (permalink / raw)
To: Tobias Gunkel; +Cc: linux-raid
I had a lot of head-scratching after cloning Debian systems on raid1 :)
I still don't know a good solution for cloning one server to others:
besides the dump/restore itself, you need to update mdadm.conf && the
initrd image && grub. A rough checklist is sketched below.
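Something like this, run on the clone (a sketch; device names are
illustrative, and it assumes a Debian target with initramfs-tools and
grub already installed, e.g. reached via chroot from a rescue system):

# regenerate ARRAY lines for the arrays as the clone sees them, then
# merge them into /etc/mdadm/mdadm.conf by hand
mdadm --detail --scan
# rebuild the initramfs so it carries the corrected mdadm.conf
update-initramfs -u
# reinstall the boot loader on both mirror disks (example device names)
grub-install /dev/sda
grub-install /dev/sdb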
On Sat, May 9, 2009 at 12:09 AM, Tobias Gunkel <tobias.gunkel@qumido.de> wrote:
> Great! That was exactly the problem. The UUID in the initrd's mdadm.conf
> differed from the one reported by mdadm --detail --scan.
> After rebuilding the initrd image with the fixed mdadm.conf, everything
> is working fine again.
> [...]
--
Best regards,
[COOLCOLD-RIPN]