* 2.6.0: Should I worry about "hdc12 has different UUID to hdc13" ?
From: Mark Smith @ 2003-12-24 8:58 UTC
To: linux-raid
Hi,
I recently upgraded from 2.4.22 to 2.6.0, and my MD RAID1 arrays seem to be working fine since the upgrade.
However, during boot, for each of my MD devices (I have 13 of them), I get a number of "hdcN has different UUID to hdcM" status or error messages. For example:
--
md: adding hdc12 ...
md: hdc11 has different UUID to hdc12
md: hdc10 has different UUID to hdc12
md: hdc9 has different UUID to hdc12
md: hdc8 has different UUID to hdc12
md: hdc7 has different UUID to hdc12
md: hdc6 has different UUID to hdc12
md: hdc5 has different UUID to hdc12
md: hdc3 has different UUID to hdc12
md: hdc2 has different UUID to hdc12
md: adding hda12 ...
md: hda11 has different UUID to hdc12
md: hda10 has different UUID to hdc12
md: hda9 has different UUID to hdc12
md: hda8 has different UUID to hdc12
md: hda7 has different UUID to hdc12
md: hda6 has different UUID to hdc12
md: hda5 has different UUID to hdc12
md: hda3 has different UUID to hdc12
md: hda2 has different UUID to hdc12
md: created md12
md: bind<hda12>
md: bind<hdc12>
md: running: <hdc12><hda12>
raid1: raid set md12 active with 2 out of 2 mirrors
--
I get thirteen sets of these.
Is there a problem with my RAID1 arrays, or is there something I can do to get rid of these messages? To me they don't seem to mean much: I'm guessing that a UUID is some sort of unique identifier for the component devices of an MD device, and that the message simply indicates the RAID subsystem is looking for matching RAID1 component pairs. All they end up doing is filling my kernel buffer before syslogd/klogd starts, so I lose some of the other boot status messages I would find more useful, e.g. early PCI setup, ACPI, etc.
Does anybody have any suggestions?
Thanks,
Mark.
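These messages are informational: during RAID autodetection the kernel reads the md superblock of every partition marked "Linux raid autodetect" and, while grouping partitions into arrays, logs one line for every candidate whose UUID does not match the partition it is currently considering. With 13 two-disk arrays that produces a lot of harmless chatter. Two things worth trying, as a sketch only (it assumes mdadm is installed and that the kernel honours the log_buf_len= boot parameter; neither is guaranteed on every setup):
--
# Confirm that both halves of one mirror really share a UUID
mdadm --examine /dev/hda12 | grep UUID
mdadm --examine /dev/hdc12 | grep UUID

# Enlarge the kernel ring buffer so the autodetect chatter does not push
# out the earlier PCI/ACPI messages: append something like
#   log_buf_len=262144
# to the kernel command line in lilo.conf or grub.conf and reboot.
--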
* hardware raid [_U] issue.
From: Vijay Kumar @ 2003-12-24 10:23 UTC
To: linux-raid
Hello,
I have a Red Hat box with hardware RAID. Below are the outputs of various files/commands. Please let me know whether there is anything wrong.
I think there might be some problem with md0, but I am not sure. Someone told me that since the box runs hardware RAID, the md status does not matter. What does the "[UU]" mean in mdstat? In md0, one U is missing. What should I do next?
Kindly help me.
Regards,
Vijay Kumar.
Below is the output of "cat /proc/mdstat":
-------------------------------------------
Personalities : [raid1]
read_ahead 1024 sectors
md2 : active raid1 hdc5[0] hdd5[1]
30716160 blocks [2/2] [UU]
md1 : active raid1 hdc3[0] hdd3[1]
30716160 blocks [2/2] [UU]
md0 : active raid1 hdd2[1]
93699008 blocks [2/1] [_U]
md3 : active raid1 hdc1[0] hdd1[1]
104320 blocks [2/2] [UU]
unused devices: <none>
--------------------------------
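To see exactly which slot of md0 is empty, something like the following can help (a sketch: mdadm may not be installed on an older raidtools-based Red Hat system, in which case lsraid from raidtools is a rough equivalent):
--
# mdadm view: the missing member shows up as removed/faulty in the device list
mdadm --detail /dev/md0

# raidtools alternative
lsraid -a /dev/md0
--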
Below is the content of /etc/raidtab:
raiddev /dev/md1
raid-level 1
nr-raid-disks 2
chunk-size 64k
persistent-superblock 1
nr-spare-disks 0
device /dev/hdc3
raid-disk 0
device /dev/hdd3
raid-disk 1
raiddev /dev/md3
raid-level 1
nr-raid-disks 2
chunk-size 64k
persistent-superblock 1
nr-spare-disks 0
device /dev/hdc1
raid-disk 0
device /dev/hdd1
raid-disk 1
raiddev /dev/md2
raid-level 1
nr-raid-disks 2
chunk-size 64k
persistent-superblock 1
nr-spare-disks 0
device /dev/hdc5
raid-disk 0
device /dev/hdd5
raid-disk 1
raiddev /dev/md0
raid-level 1
nr-raid-disks 2
chunk-size 64k
persistent-superblock 1
nr-spare-disks 0
device /dev/hdc2
raid-disk 0
device /dev/hdd2
raid-disk 1
Below is the content of /etc/fstab:
/dev/md1    /                       ext3    defaults            1 1
/dev/md3    /boot                   ext3    defaults            1 2
/dev/md2    /home/gforge/cvsroot    ext3    defaults            1 2
none        /dev/pts                devpts  gid=5,mode=620      0 0
/dev/md0    /home                   ext3    defaults            1 2
none        /proc                   proc    defaults            0 0
none        /dev/shm                tmpfs   defaults            0 0
/dev/hdc6   swap                    swap    defaults            0 0
/dev/fd0    /mnt/floppy             auto    noauto,owner,kudzu  0 0
Below is the output of "fdisk -l":
Disk /dev/hdc: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hdc1 * 1 13 104391 fd Linux raid autodetect
/dev/hdc2 14 11678 93699112+ fd Linux raid autodetect
/dev/hdc3 11679 15502 30716280 fd Linux raid autodetect
/dev/hdc4 15503 19457 31768537+ f Win95 Ext'd (LBA)
/dev/hdc5 15503 19326 30716248+ fd Linux raid autodetect
/dev/hdc6 19327 19457 1052226 82 Linux swap
Disk /dev/hdd: 160.0 GB, 160041885696 bytes
255 heads, 63 sectors/track, 19457 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hdd1 * 1 13 104391 fd Linux raid autodetect
/dev/hdd2 14 11678 93699112+ fd Linux raid autodetect
/dev/hdd3 11679 15502 30716280 fd Linux raid autodetect
/dev/hdd4 15503 19457 31768537+ f Win95 Ext'd (LBA)
/dev/hdd5 15503 19326 30716248+ fd Linux raid autodetect
Disk /dev/hda: 40.0 GB, 40020664320 bytes
255 heads, 63 sectors/track, 4865 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 6 48163+ 83 Linux
/dev/hda2 7 1918 15358140 83 Linux
/dev/hda3 1919 3193 10241437+ 83 Linux
/dev/hda4 3194 4865 13430340 f Win95 Ext'd (LBA)
/dev/hda5 3194 3257 514048+ 82 Linux swap
* Re: hardware raid [_U] issue.
From: H. Peter Anvin @ 2003-12-25 7:03 UTC
To: linux-raid
Followup to: <1072261391.21554.125.camel@localhost.localdomain>
By author: Vijay Kumar <vijay@calsoftinc.com>
In newsgroup: linux.dev.raid
>
> Hello,
>
> I have a Red Hat box with hardware RAID. Below are the outputs of various
> files/commands. Please let me know whether there is anything wrong.
> I think there might be some problem with md0, but I am not sure.
> Someone told me that since the box runs hardware RAID, the md status does
> not matter. What does the "[UU]" mean in mdstat? In md0, one U is missing.
> What should I do next?
>
U = working drive
_ = broken drive
You need to replace the broken drive and use "mdadm -a" to add it
back into the array.
If you believe the hardware is actually OK you can test it out
(e.g. using badblocks) and then add it back into the array.
-hpa
--
<hpa@transmeta.com> at work, <hpa@zytor.com> in private!
If you send me mail in HTML format I will assume it's spam.
"Unix gives you enough rope to shoot yourself in the foot."
Architectures needed: ia64 m68k mips64 ppc ppc64 s390 s390x sh v850 x86-64
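Applied to the output earlier in the thread, where md0 is running on hdd2 alone and hdc2 is the missing half, the recovery would look roughly like this (a sketch only: it assumes /dev/hdc is either healthy or has been physically replaced, and that mdadm is available; on a raidtools-only system, raidhotadd does the same job as the add step):
--
# Optional read-only surface test of the suspect partition
badblocks -sv /dev/hdc2

# If the disk was physically replaced, copy the partition table from the
# surviving disk first
sfdisk -d /dev/hdd | sfdisk /dev/hdc

# Re-add the partition to the degraded mirror and watch the resync
mdadm /dev/md0 --add /dev/hdc2
cat /proc/mdstat

# raidtools equivalent of the add step
raidhotadd /dev/md0 /dev/hdc2
--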