From: Andy Bailey <andy@hazlorealidad.com>
To: linux-raid@vger.kernel.org
Subject: Re: sdb failure - mdadm: no devices found for /dev/md0
Date: Tue, 28 Jul 2009 08:05:36 -0500	[thread overview]
Message-ID: <1248786336.13147.172.camel@localhost.localdomain> (raw)
In-Reply-To: <1248711579.13147.75.camel@localhost.localdomain>

More info:

In the initrd, the mdadm.conf has:
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90
UUID=392f9510:37d5d89a:d9143456:0dceb00d

In /etc/mdadm.conf (on that root partition):
ARRAY /dev/md0 level=raid1 num-devices=2
UUID=2d10ee45:1a407729:ec85deb0:7e9ea950

The rest of the UUID entries are identical in the two mdadm.conf files.
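
For reference, here is roughly how I plan to double-check which UUID the
on-disk superblocks actually carry (just a sketch; I'm assuming the md0
members are still sda1 and sdb1):

mdadm --examine --scan
mdadm --examine /dev/sda1 /dev/sdb1 | grep -i uuid

The first command prints ARRAY lines built from the superblocks
themselves, so whichever mdadm.conf matches that output should be the
one describing reality.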

So it's now clear what the problem was.

However, I still don't know how the problem happened.

What could have caused the UUID for the root filesystem to change?
Anaconda installed it to md0, and the only thing we had done was grow
the filesystem to 3 partitions and add a third partition, as I
mentioned in my previous email.
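
As far as I understand, growing an array or adding a member should not
touch the array UUID; only re-creating it with mdadm --create writes a
fresh one. So, as a sanity check (again assuming sda1 is a member of
md0), I was going to look at the superblock's creation time:

mdadm --examine /dev/sda1 | grep -E 'UUID|Creation Time'

If the Creation Time is newer than the original install, that would
suggest md0 was re-created at some point.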

Could a kernel upgrade installed by yum have changed the UUID?

Also, why didn't the kernel option md=0,/dev/sda1,/dev/sdb1 override the
settings in mdadm.conf?

Is the syntax correct?
If it's not, doesn't the kernel print an error message?
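
If I'm reading Documentation/md.txt correctly, the form for arrays with
persistent superblocks is

md=<md device no>,dev0,dev1,...,devn

e.g. md=0,/dev/sda1,/dev/sdb1, which looks like what we used. My
understanding (please correct me) is that this parameter is only
honoured when the md driver is built into the kernel, so on a stock
Fedora kernel, where md is a module and the arrays are assembled by
mdadm inside the initrd, it would simply be ignored.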

The kernel we were running was 2.6.27.24-170.2.68.fc10.x86_64

It also doesn't explain why mkinitrd didn't do anything when run from
the rescue disk: no delay, no message, no file.

On the new system it works as expected:

mkinitrd test 2.6.27.24-170.2.68.fc10.x86_64

creates the initrd "test" in the current directory, and if there's a
typo in the kernel version it reports:

No modules available for kernel ....
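
Once I'm sure which mdadm.conf is the right one, my rough plan (just a
sketch, paths to taste) is to regenerate /etc/mdadm.conf from the real
superblocks and then rebuild the initrd so the two stay in sync:

mdadm --examine --scan >> /etc/mdadm.conf
  (then hand-merge and drop the stale md0 line)
mkinitrd -f /boot/initrd-2.6.27.24-170.2.68.fc10.x86_64.img 2.6.27.24-170.2.68.fc10.x86_64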

Any ideas?

Thanks in advance

Andy Bailey

--------------------------------------------------------------------
[root@servidor cron.daily]# cd /mnt/boot/

[root@servidor boot]# zcat initrd-2.6.27.24-170.2.68.fc10.x86_64.img
> /tmp/initrd
[root@servidor boot]# cd /tmp/
[root@servidor tmp]# mkdir init
[root@servidor tmp]# file initrd 
initrd: ASCII cpio archive (SVR4 with no CRC)
[root@servidor tmp]# man cpio
[root@servidor tmp]# cd init
[root@servidor init]# cpio -i --make-directories < ../initrd 
15858 blocks
[root@servidor init]# cd etc

[root@servidor etc]# cat mdadm.conf 

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root

ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90
UUID=ed127569:52ff37b7:da473d79:03ebe556
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90
UUID=392f9510:37d5d89a:d9143456:0dceb00d
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90
UUID=c964f385:5b6e5d18:2d631c3c:e70b1b22
ARRAY /dev/md9 level=raid1 num-devices=2 metadata=0.90
UUID=1ab3d6b5:575a8a2b:dfa789fa:805a401f
ARRAY /dev/md8 level=raid1 num-devices=2 metadata=0.90
UUID=318d1ff4:dd2b9ccc:7effe40e:a3e412fc
ARRAY /dev/md7 level=raid1 num-devices=2 metadata=0.90
UUID=0114a14e:b6a45be8:3a719518:7cb31784
ARRAY /dev/md6 level=raid1 num-devices=2 metadata=0.90
UUID=03254e0d:86de6727:2ff70881:773bec64
ARRAY /dev/md5 level=raid1 num-devices=2 metadata=0.90
UUID=644a897a:15ef0e34:833f147a:71729df0
ARRAY /dev/md4 level=raid1 num-devices=2 metadata=0.90
UUID=988ecde9:cc998c1b:c50949cb:adc77570
ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90
UUID=e7b76f39:7c6d5185:3b70f7bd:4b6284f5

[root@servidor etc]# cat /mnt/etc/mdadm.conf 

# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root

ARRAY /dev/md2 level=raid1 num-devices=2
UUID=c964f385:5b6e5d18:2d631c3c:e70b1b22
ARRAY /dev/md4 level=raid1 num-devices=2
UUID=988ecde9:cc998c1b:c50949cb:adc77570
ARRAY /dev/md3 level=raid1 num-devices=2
UUID=e7b76f39:7c6d5185:3b70f7bd:4b6284f5
ARRAY /dev/md8 level=raid1 num-devices=2
UUID=318d1ff4:dd2b9ccc:7effe40e:a3e412fc
ARRAY /dev/md7 level=raid1 num-devices=2
UUID=0114a14e:b6a45be8:3a719518:7cb31784
ARRAY /dev/md6 level=raid1 num-devices=2
UUID=03254e0d:86de6727:2ff70881:773bec64
ARRAY /dev/md5 level=raid1 num-devices=2
UUID=644a897a:15ef0e34:833f147a:71729df0
ARRAY /dev/md0 level=raid1 num-devices=2
UUID=2d10ee45:1a407729:ec85deb0:7e9ea950
ARRAY /dev/md1 level=raid1 num-devices=2
UUID=ed127569:52ff37b7:da473d79:03ebe556
ARRAY /dev/md9 level=raid1 num-devices=2
UUID=1ab3d6b5:575a8a2b:dfa789fa:805a401f
[root@servidor etc]# cat /mnt/etc/mdadm.conf /tmp/init/etc/mdadm.conf |
sort

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90
UUID=392f9510:37d5d89a:d9143456:0dceb00d
ARRAY /dev/md0 level=raid1 num-devices=2
UUID=2d10ee45:1a407729:ec85deb0:7e9ea950

ARRAY /dev/md1 level=raid1 num-devices=2 metadata=0.90
UUID=ed127569:52ff37b7:da473d79:03ebe556
ARRAY /dev/md1 level=raid1 num-devices=2
UUID=ed127569:52ff37b7:da473d79:03ebe556

ARRAY /dev/md2 level=raid1 num-devices=2 metadata=0.90
UUID=c964f385:5b6e5d18:2d631c3c:e70b1b22
ARRAY /dev/md2 level=raid1 num-devices=2
UUID=c964f385:5b6e5d18:2d631c3c:e70b1b22

ARRAY /dev/md3 level=raid1 num-devices=2 metadata=0.90
UUID=e7b76f39:7c6d5185:3b70f7bd:4b6284f5
ARRAY /dev/md3 level=raid1 num-devices=2
UUID=e7b76f39:7c6d5185:3b70f7bd:4b6284f5

ARRAY /dev/md4 level=raid1 num-devices=2 metadata=0.90
UUID=988ecde9:cc998c1b:c50949cb:adc77570
ARRAY /dev/md4 level=raid1 num-devices=2
UUID=988ecde9:cc998c1b:c50949cb:adc77570

ARRAY /dev/md5 level=raid1 num-devices=2 metadata=0.90
UUID=644a897a:15ef0e34:833f147a:71729df0
ARRAY /dev/md5 level=raid1 num-devices=2
UUID=644a897a:15ef0e34:833f147a:71729df0

ARRAY /dev/md6 level=raid1 num-devices=2 metadata=0.90
UUID=03254e0d:86de6727:2ff70881:773bec64
ARRAY /dev/md6 level=raid1 num-devices=2
UUID=03254e0d:86de6727:2ff70881:773bec64

ARRAY /dev/md7 level=raid1 num-devices=2 metadata=0.90
UUID=0114a14e:b6a45be8:3a719518:7cb31784
ARRAY /dev/md7 level=raid1 num-devices=2
UUID=0114a14e:b6a45be8:3a719518:7cb31784

ARRAY /dev/md8 level=raid1 num-devices=2 metadata=0.90
UUID=318d1ff4:dd2b9ccc:7effe40e:a3e412fc
ARRAY /dev/md8 level=raid1 num-devices=2
UUID=318d1ff4:dd2b9ccc:7effe40e:a3e412fc

ARRAY /dev/md9 level=raid1 num-devices=2 metadata=0.90
UUID=1ab3d6b5:575a8a2b:dfa789fa:805a401f
ARRAY /dev/md9 level=raid1 num-devices=2
UUID=1ab3d6b5:575a8a2b:dfa789fa:805a401f


