* RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Peter Rabbitson @ 2008-03-03 10:42 UTC
To: linux-raid
Hello,
Noticing the problems Tor Vestbø is having, I remembered that I have an array
in a similar state, which I never figured out. The array has been working
flawlessly for 3 months, the monthly 'check' runs come back with everything
being clean. However this is how the array looks through mdadm's eyes:
root@Thesaurus:~# mdadm -D /dev/md5
/dev/md5:
Version : 01.01.03
Creation Time : Tue Jan 22 03:52:42 2008
Raid Level : raid5
Array Size : 865081344 (825.01 GiB 885.84 GB)
Used Dev Size : 576720896 (275.00 GiB 295.28 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 5
Persistence : Superblock is persistent
Update Time : Mon Mar 3 11:34:36 2008
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 2048K
Name : Thesaurus:Crypta (local to host Thesaurus)
UUID : 1decb2d1:ebf16128:a240938a:669b0999
Events : 9090
Number Major Minor RaidDevice State
4 8 3 0 active sync /dev/sda3 <-- why at #4, not #0?
1 8 19 1 active sync /dev/sdb3
2 8 35 2 active sync /dev/sdc3
3 8 51 3 active sync /dev/sdd3
root@Thesaurus:~#
root@Thesaurus:~# mdadm -E /dev/sda3
/dev/sda3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : 1decb2d1:ebf16128:a240938a:669b0999
Name : Thesaurus:Crypta (local to host Thesaurus)
Creation Time : Tue Jan 22 03:52:42 2008
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 576733236 (275.01 GiB 295.29 GB)
Array Size : 1730162688 (825.01 GiB 885.84 GB)
Used Dev Size : 576720896 (275.00 GiB 295.28 GB)
Data Offset : 264 sectors <-- not sure why this is different from the rest
Super Offset : 0 sectors
State : active
Device UUID : 72d9ceb4:89a20f49:50d51384:9b164f72
Update Time : Mon Mar 3 11:34:26 2008
Checksum : 4bd3ae95 - correct
Events : 9090
Layout : left-symmetric
Chunk Size : 2048K
Array Slot : 4 (failed, 1, 2, 3, 0)
Array State : Uuuu 1 failed <-- why is that?
root@Thesaurus:~#
root@Thesaurus:~# mdadm -E /dev/sdb3
/dev/sdb3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : 1decb2d1:ebf16128:a240938a:669b0999
Name : Thesaurus:Crypta (local to host Thesaurus)
Creation Time : Tue Jan 22 03:52:42 2008
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 576733364 (275.01 GiB 295.29 GB)
Array Size : 1730162688 (825.01 GiB 885.84 GB)
Used Dev Size : 576720896 (275.00 GiB 295.28 GB)
Data Offset : 136 sectors
Super Offset : 0 sectors
State : active
Device UUID : 0ba427b2:6cc26a34:cc9838d9:1dc77772
Update Time : Mon Mar 3 11:39:01 2008
Checksum : 89d50e1f - correct
Events : 9090
Layout : left-symmetric
Chunk Size : 2048K
Array Slot : 1 (failed, 1, 2, 3, 0)
Array State : uUuu 1 failed <-- and that?
root@Thesaurus:~#
root@Thesaurus:~# mdadm -E /dev/sdc3
/dev/sdc3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : 1decb2d1:ebf16128:a240938a:669b0999
Name : Thesaurus:Crypta (local to host Thesaurus)
Creation Time : Tue Jan 22 03:52:42 2008
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 576733364 (275.01 GiB 295.29 GB)
Array Size : 1730162688 (825.01 GiB 885.84 GB)
Used Dev Size : 576720896 (275.00 GiB 295.28 GB)
Data Offset : 136 sectors
Super Offset : 0 sectors
State : active
Device UUID : ada93838:5e3fef07:fd478ba7:8fcf75e6
Update Time : Mon Mar 3 11:39:44 2008
Checksum : 25bb4882 - correct
Events : 9090
Layout : left-symmetric
Chunk Size : 2048K
Array Slot : 2 (failed, 1, 2, 3, 0)
Array State : uuUu 1 failed <-- same here
root@Thesaurus:~#
root@Thesaurus:~# mdadm -E /dev/sdd3
/dev/sdd3:
Magic : a92b4efc
Version : 1.1
Feature Map : 0x0
Array UUID : 1decb2d1:ebf16128:a240938a:669b0999
Name : Thesaurus:Crypta (local to host Thesaurus)
Creation Time : Tue Jan 22 03:52:42 2008
Raid Level : raid5
Raid Devices : 4
Avail Dev Size : 576733364 (275.01 GiB 295.29 GB)
Array Size : 1730162688 (825.01 GiB 885.84 GB)
Used Dev Size : 576720896 (275.00 GiB 295.28 GB)
Data Offset : 136 sectors
Super Offset : 0 sectors
State : active
Device UUID : be5f202f:c9097996:98d66036:9b6453ee
Update Time : Mon Mar 3 11:40:11 2008
Checksum : 41dfecc1 - correct
Events : 9090
Layout : left-symmetric
Chunk Size : 2048K
Array Slot : 3 (failed, 1, 2, 3, 0)
Array State : uuuU 1 failed <-- and here
root@Thesaurus:~#
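(The monthly 'check' mentioned above is the regular md scrub, i.e. roughly
what the distro cron job boils down to; device name as in the output above):

  echo check > /sys/block/md5/md/sync_action    # start a scrub of md5
  cat /proc/mdstat                              # watch its progress
  cat /sys/block/md5/md/mismatch_cnt            # reads 0 after a clean run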
* Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Tor Arne Vestbø @ 2008-03-03 11:44 UTC
To: linux-raid
Peter Rabbitson wrote:
> Hello,
>
> Noticing the problems Tor Vestbø is having, I remembered that I have an
> array in a similar state, which I never figured out.
[snip]
> Array Slot : 4 (failed, 1, 2, 3, 0)
> Array State : Uuuu 1 failed <-- why is that?
> root@Thesaurus:~#
I've seen this in my array too. Was expecting Uuu_, or Uuuu after
rebuild, but it also said "1 failed".
Tor Arne
* Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Bill Davidsen @ 2008-03-03 18:34 UTC
To: Peter Rabbitson; +Cc: linux-raid
Peter Rabbitson wrote:
> Hello,
>
> Noticing the problems Tor Vestbø is having, I remembered that I have
> an array in a similar state, which I never figured out. The array has
> been working flawlessly for 3 months, the monthly 'check' runs come
> back with everything being clean. However this is how the array looks
> through mdadm's eyes:
I'm in agreement that something is odd about the disk numbers here, and
I'm suspicious because I have never seen this with 0.90 superblocks.
That doesn't mean it couldn't have happened without my noticing, but it's
certainly odd that four drives wouldn't be numbered 0..3; in raid5 they
are all equally out of sync.
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismark
* Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Peter Rabbitson @ 2008-03-04 10:25 UTC
To: Bill Davidsen; +Cc: linux-raid
Bill Davidsen wrote:
> Peter Rabbitson wrote:
>> Hello,
>>
>> Noticing the problems Tor Vestbø is having, I remembered that I have
>> an array in a similar state, which I never figured out. The array has
>> been working flawlessly for 3 months, the monthly 'check' runs come
>> back with everything being clean. However this is how the array looks
>> through mdadm's eyes:
>
> I'm in agreement that something is odd about the disk numbers here, and
> I'm suspicious because I have never seen this with 0.90 superblocks.
> That doesn't mean it couldn't have happened without my noticing, but it's
> certainly odd that four drives wouldn't be numbered 0..3; in raid5 they
> are all equally out of sync.
>
After Tor Arne reported his success I figured I would simply fail/remove sda3,
scrape it clean, and add it back. I zeroed the superblocks beforehand and
also wrote zeros (dd if=/dev/zero) to the drive's start and end just to make
sure everything was gone. After the resync I am back at square one: the offset
of sda3 is different from everything else and the array has one failed drive.
If someone can shed some light on this, I have made snapshots of the
superblocks[1] along with the current output of mdadm at
http://rabbit.us/pool/md5_problem.tar.bz2.
[1] dd if=/dev/sdX3 of=sdX_sb count=<Data Offset> bs=512
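For completeness, the fail / wipe / re-add sequence was roughly the following
(the dd sizes are illustrative, the actual runs just covered the start and
end of the partition):

  mdadm /dev/md5 --fail /dev/sda3
  mdadm /dev/md5 --remove /dev/sda3
  mdadm --zero-superblock /dev/sda3
  SZ_MIB=$(( $(blockdev --getsz /dev/sda3) / 2048 ))        # partition size in MiB
  dd if=/dev/zero of=/dev/sda3 bs=1M count=16               # wipe the start
  dd if=/dev/zero of=/dev/sda3 bs=1M seek=$(( SZ_MIB - 16 )) count=16   # wipe the end
  mdadm /dev/md5 --add /dev/sda3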
Here is my system config:
root@Thesaurus:/arx/space/pool# fdisk -l /dev/sd[abcd]
Disk /dev/sda: 400.0 GB, 400088457216 bytes
255 heads, 63 sectors/track, 48641 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 1 7 56196 fd Linux raid autodetect
/dev/sda2 8 507 4016250 fd Linux raid autodetect
/dev/sda3 508 36407 288366750 83 Linux
/dev/sda4 36408 48641 98269605 83 Linux
Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 7 56196 fd Linux raid autodetect
/dev/sdb2 8 507 4016250 fd Linux raid autodetect
/dev/sdb3 508 36407 288366750 83 Linux
/dev/sdb4 36408 38913 20129445 83 Linux
Disk /dev/sdc: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 7 56196 fd Linux raid autodetect
/dev/sdc2 8 507 4016250 fd Linux raid autodetect
/dev/sdc3 508 36407 288366750 83 Linux
/dev/sdc4 36408 36483 610470 83 Linux
Disk /dev/sdd: 300.0 GB, 300090728448 bytes
255 heads, 63 sectors/track, 36483 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdd1 1 7 56196 fd Linux raid autodetect
/dev/sdd2 8 507 4016250 fd Linux raid autodetect
/dev/sdd3 508 36407 288366750 83 Linux
/dev/sdd4 36408 36483 610470 83 Linux
root@Thesaurus:/arx/space/pool#
root@Thesaurus:~# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md5 : active raid5 sda3[4] sdd3[3] sdc3[2] sdb3[1]
865081344 blocks super 1.1 level 5, 2048k chunk, algorithm 2 [4/4] [UUUU]
md1 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
56128 blocks [4/4] [UUUU]
md10 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
5353472 blocks 1024K chunks 3 far-copies [4/4] [UUUU]
unused devices: <none>
root@Thesaurus:~#
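For a quick side-by-side view of the offsets, something like this does it:

  for d in /dev/sd[abcd]3; do
          printf '%s: ' "$d"
          mdadm -E "$d" | grep 'Data Offset'
  done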
* Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Tor Arne Vestbø @ 2008-03-04 10:42 UTC
To: linux-raid
Peter Rabbitson wrote:
> After Tor Arne reported his success I figured I would simply fail/remove
> sda3, scrape it clean, and add it back. I zeroed the superblocks beforehand
> and also wrote zeros (dd if=/dev/zero) to the drive's start and end just to
> make sure everything was gone. After the resync I am back at square one:
> the offset of sda3 is different from everything else and the array has one
> failed drive. If someone can shed some light on this, I have made snapshots
> of the superblocks[1] along with the current output of mdadm at
> http://rabbit.us/pool/md5_problem.tar.bz2.
Not sure if this is at all related to your problem, but one of the
things I tried was to shred all the old drives in the system that were
not going to be part of the array.
/dev/sda system (250GB) <-- shred
/dev/sdb home (250GB) <-- shred
/dev/sdc raid (750GB)
/dev/sdd raid (750GB)
/dev/sde raid (750GB)
/dev/sdf raid (750GB)
The reason I did this was because /dev/sda and /dev/sdb used to be part
of a RAID1 array, but were now used as system disk and home disk
respectively. I was afraid that mdadm would pick up on some of the
lingering RAID superblocks on those disks when reporting, so I shredded
them both using 'shred -n 1' and reinstalled.
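(For clarity, that was just a single overwrite pass over each whole disk,
i.e. roughly:

  shred -n 1 -v /dev/sda /dev/sdb    # -v only adds progress output

with the device names as in the listing above.)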
Don't know if that affected anything at all for me, since the actual
problem was that I didn't wait for a full resync, but now you know :)
Tor Arne
>
> [1] dd if=/dev/sdX3 of=sdX_sb count=<Data Offset> bs=512
>
> Here is my system config:
>
> root@Thesaurus:/arx/space/pool# fdisk -l /dev/sd[abcd]
>
> Disk /dev/sda: 400.0 GB, 400088457216 bytes
> 255 heads, 63 sectors/track, 48641 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sda1 1 7 56196 fd Linux raid
> autodetect
> /dev/sda2 8 507 4016250 fd Linux raid
> autodetect
> /dev/sda3 508 36407 288366750 83 Linux
> /dev/sda4 36408 48641 98269605 83 Linux
>
> Disk /dev/sdb: 320.0 GB, 320072933376 bytes
> 255 heads, 63 sectors/track, 38913 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdb1 1 7 56196 fd Linux raid
> autodetect
> /dev/sdb2 8 507 4016250 fd Linux raid
> autodetect
> /dev/sdb3 508 36407 288366750 83 Linux
> /dev/sdb4 36408 38913 20129445 83 Linux
>
> Disk /dev/sdc: 300.0 GB, 300090728448 bytes
> 255 heads, 63 sectors/track, 36483 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdc1 1 7 56196 fd Linux raid
> autodetect
> /dev/sdc2 8 507 4016250 fd Linux raid
> autodetect
> /dev/sdc3 508 36407 288366750 83 Linux
> /dev/sdc4 36408 36483 610470 83 Linux
>
> Disk /dev/sdd: 300.0 GB, 300090728448 bytes
> 255 heads, 63 sectors/track, 36483 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
> Disk identifier: 0x00000000
>
> Device Boot Start End Blocks Id System
> /dev/sdd1 1 7 56196 fd Linux raid
> autodetect
> /dev/sdd2 8 507 4016250 fd Linux raid
> autodetect
> /dev/sdd3 508 36407 288366750 83 Linux
> /dev/sdd4 36408 36483 610470 83 Linux
> root@Thesaurus:/arx/space/pool#
>
> root@Thesaurus:~# cat /proc/mdstat
> Personalities : [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
> md5 : active raid5 sda3[4] sdd3[3] sdc3[2] sdb3[1]
> 865081344 blocks super 1.1 level 5, 2048k chunk, algorithm 2 [4/4]
> [UUUU]
>
> md1 : active raid1 sdd1[3] sdc1[2] sdb1[1] sda1[0]
> 56128 blocks [4/4] [UUUU]
>
> md10 : active raid10 sdd2[3] sdc2[2] sdb2[1] sda2[0]
> 5353472 blocks 1024K chunks 3 far-copies [4/4] [UUUU]
>
> unused devices: <none>
> root@Thesaurus:~#
>
>
>
* Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Peter Rabbitson @ 2008-03-04 10:52 UTC
To: Tor Arne Vestbø; +Cc: linux-raid
Tor Arne Vestbø wrote:
> The reason I did this was because /dev/sda and /dev/sdb used to be part
> of a RAID1 array, but were now used as system disk and home disk
> respectively. I was afraid that mdadm would pick up on some of the
> lingering RAID superblocks on those disks when reporting, so I shredded
> them both using 'shred -n 1' and reinstalled.
>
This is irrelevant to 1.x superblocks, and largely insignificant for 0.9
superblocks (barring some really bizarre cases). In either case mdadm
--zero-superblock /dev/XX (possibly executed multiple times) would save you a
lot of disk churning :)
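That is, something along the lines of (/dev/XX standing for whichever device
or partition used to carry the old superblock):

  mdadm --zero-superblock /dev/XX
  mdadm --zero-superblock /dev/XX   # a second run catches a superblock of another version, if any
  mdadm -E /dev/XX                  # should now report that no md superblock was found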
* Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Tor Arne Vestbø @ 2008-03-04 10:58 UTC
To: linux-raid
Peter Rabbitson wrote:
>> Tor Arne Vestbø wrote:
>> [...] so I shredded them both using 'shred -n 1' and reinstalled.
>
> This is irrelevant to 1.x superblocks, and largely insignificant for 0.9
> superblocks (barring some really bizarre cases). In either case mdadm
> --zero-superblock /dev/XX (possibly executed multiple times) would save
> you a lot of disk churning :)
The only way to learn is the hard way ;)
Tor Arne
* Re: RAID5 in sync does not populate slots sequentially, shows array as (somewhat) faulty
From: Rui Santos @ 2008-03-06 14:51 UTC
To: linux-raid; +Cc: Tor Arne Vestbø
Tor Arne Vestbø wrote:
> Not sure if this is at all related to your problem, but one of the
> things I tried was to shred all the old drives in the system that were
> not going to be part of the array.
>
> /dev/sda system (250GB) <-- shred
> /dev/sdb home (250GB) <-- shred
>
> /dev/sdc raid (750GB)
> /dev/sdd raid (750GB)
> /dev/sde raid (750GB)
> /dev/sdf raid (750GB)
>
> The reason I did this was because /dev/sda and /dev/sdb used to be
> part of a RAID1 array, but were now used as system disk and home disk
> respectively. I was afraid that mdadm would pick up on some of the
> lingering RAID superblocks on those disks when reporting, so I
> shredded them both using 'shred -n 1' and reinstalled.
>
> Don't know if that affected anything at all for me, since the actual
> problem was that I didn't wait for a full resync, but now you know :)
>
> Tor Arne
Hi all,
I have an identical problem, but it didn't go away with the
zero-superblock / shred procedure. I also have no extra disks.
Here is my config:
~# cat /proc/mdstat:
md0 : active raid1 sda1[0] sdc1[2] sdb1[1]
136508 blocks super 1.0 [3/3] [UUU]
bitmap: 0/9 pages [0KB], 8KB chunk
md1 : active raid5 sdb2[0] sda2[3] sdc2[1]
1060096 blocks super 1.0 level 5, 128k chunk, algorithm 2
[3/3] [UUU]
md2 : active raid5 sda3[0] sdc3[3] sdb3[1]
780083968 blocks super 1.0 level 5, 128k chunk, algorithm 2
[3/3] [UUU]
~# mdadm -D /dev/md{0,1,2}
/dev/md0:
Version : 01.00.03
Creation Time : Wed Feb 27 15:38:43 2008
Raid Level : raid1
Array Size : 136508 (133.33 MiB 139.78 MB)
Used Dev Size : 136508 (133.33 MiB 139.78 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Mar 6 14:38:08 2008
State : active
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Name : 0
UUID : c5e9420d:67e022ae:eaf9fc3e:4949a042
Events : 20
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
/dev/md1:
Version : 01.00.03
Creation Time : Thu Mar 6 13:14:36 2008
Raid Level : raid5
Array Size : 1060096 (1035.42 MiB 1085.54 MB)
Used Dev Size : 530048 (517.71 MiB 542.77 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 1
Persistence : Superblock is persistent
Update Time : Thu Mar 6 14:38:08 2008
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
Name : 1
UUID : 3e76c555:d423cd7d:b1454b79:f34e6322
Events : 4
Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2
3 8 2 2 active sync /dev/sda2
/dev/md2:
Version : 01.00.03
Creation Time : Wed Feb 27 15:38:46 2008
Raid Level : raid5
Array Size : 780083968 (743.95 GiB 798.81 GB)
Used Dev Size : 780083968 (371.97 GiB 399.40 GB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 2
Persistence : Superblock is persistent
Update Time : Thu Mar 6 14:38:08 2008
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 128K
Name : 2
UUID : a55c4f4e:8cba34b7:b5f70bb0:97fd1366
Events : 5070
Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 19 1 active sync /dev/sdb3
3 8 35 2 active sync /dev/sdc3
unused devices: <none>
~# mdadm -E /dev/sd{a1,b1,c1,a2,b2,c2,a3,b3,c3}
/dev/sda1:
Magic : a92b4efc
Version : 01
Feature Map : 0x1
Array UUID : c5e9420d:67e022ae:eaf9fc3e:4949a042
Name : 0
Creation Time : Wed Feb 27 15:38:43 2008
Raid Level : raid1
Raid Devices : 3
Used Dev Size : 273016 (133.33 MiB 139.78 MB)
Array Size : 273016 (133.33 MiB 139.78 MB)
Super Offset : 273024 sectors
State : clean
Device UUID : 678a316f:e7a2a641:14c1c3f0:b55d6aae
Internal Bitmap : 2 sectors from superblock
Update Time : Thu Mar 6 14:38:11 2008
Checksum : c5d6af8b - correct
Events : 20
Array Slot : 0 (0, 1, 2)
Array State : Uuu
/dev/sdb1:
Magic : a92b4efc
Version : 01
Feature Map : 0x1
Array UUID : c5e9420d:67e022ae:eaf9fc3e:4949a042
Name : 0
Creation Time : Wed Feb 27 15:38:43 2008
Raid Level : raid1
Raid Devices : 3
Used Dev Size : 273016 (133.33 MiB 139.78 MB)
Array Size : 273016 (133.33 MiB 139.78 MB)
Super Offset : 273024 sectors
State : clean
Device UUID : 17525ff0:fe48f81d:8f28e04c:34901f21
Internal Bitmap : 2 sectors from superblock
Update Time : Thu Mar 6 14:38:11 2008
Checksum : f227b74c - correct
Events : 20
Array Slot : 1 (0, 1, 2)
Array State : uUu
/dev/sdc1:
Magic : a92b4efc
Version : 01
Feature Map : 0x1
Array UUID : c5e9420d:67e022ae:eaf9fc3e:4949a042
Name : 0
Creation Time : Wed Feb 27 15:38:43 2008
Raid Level : raid1
Raid Devices : 3
Used Dev Size : 273016 (133.33 MiB 139.78 MB)
Array Size : 273016 (133.33 MiB 139.78 MB)
Super Offset : 273024 sectors
State : clean
Device UUID : c3e00260:dfa90f02:3c39380b:b090375e
Internal Bitmap : 2 sectors from superblock
Update Time : Thu Mar 6 14:38:11 2008
Checksum : 4152b803 - correct
Events : 20
Array Slot : 2 (0, 1, 2)
Array State : uuU
/dev/sda2:
Magic : a92b4efc
Version : 01
Feature Map : 0x0
Array UUID : 3e76c555:d423cd7d:b1454b79:f34e6322
Name : 1
Creation Time : Thu Mar 6 13:14:36 2008
Raid Level : raid5
Raid Devices : 3
Used Dev Size : 1060264 (517.79 MiB 542.86 MB)
Array Size : 2120192 (1035.42 MiB 1085.54 MB)
Used Size : 1060096 (517.71 MiB 542.77 MB)
Super Offset : 1060272 sectors
State : clean
Device UUID : 2bd8b81f:6e38a263:9c1a48f5:81c2cbc8
Update Time : Thu Mar 6 14:38:11 2008
Checksum : e9afe694 - correct
Events : 4
Layout : left-symmetric
Chunk Size : 128K
Array Slot : 3 (0, 1, failed, 2)
Array State : uuU 1 failed
/dev/sdb2:
Magic : a92b4efc
Version : 01
Feature Map : 0x0
Array UUID : 3e76c555:d423cd7d:b1454b79:f34e6322
Name : 1
Creation Time : Thu Mar 6 13:14:36 2008
Raid Level : raid5
Raid Devices : 3
Used Dev Size : 1060264 (517.79 MiB 542.86 MB)
Array Size : 2120192 (1035.42 MiB 1085.54 MB)
Used Size : 1060096 (517.71 MiB 542.77 MB)
Super Offset : 1060272 sectors
State : clean
Device UUID : 18c77f52:4bbbf090:31a3724c:b7cafa3c
Update Time : Thu Mar 6 14:38:11 2008
Checksum : 151ee926 - correct
Events : 4
Layout : left-symmetric
Chunk Size : 128K
Array Slot : 0 (0, 1, failed, 2)
Array State : Uuu 1 failed
/dev/sdc2:
Magic : a92b4efc
Version : 01
Feature Map : 0x0
Array UUID : 3e76c555:d423cd7d:b1454b79:f34e6322
Name : 1
Creation Time : Thu Mar 6 13:14:36 2008
Raid Level : raid5
Raid Devices : 3
Used Dev Size : 1060264 (517.79 MiB 542.86 MB)
Array Size : 2120192 (1035.42 MiB 1085.54 MB)
Used Size : 1060096 (517.71 MiB 542.77 MB)
Super Offset : 1060272 sectors
State : clean
Device UUID : 19fd5caf:ff6a3e82:95c84b1d:8ec60429
Update Time : Thu Mar 6 14:38:11 2008
Checksum : 202cf017 - correct
Events : 4
Layout : left-symmetric
Chunk Size : 128K
Array Slot : 1 (0, 1, failed, 2)
Array State : uUu 1 failed
/dev/sda3:
Magic : a92b4efc
Version : 01
Feature Map : 0x0
Array UUID : a55c4f4e:8cba34b7:b5f70bb0:97fd1366
Name : 2
Creation Time : Wed Feb 27 15:38:46 2008
Raid Level : raid5
Raid Devices : 3
Used Dev Size : 780083992 (371.97 GiB 399.40 GB)
Array Size : 1560167936 (743.95 GiB 798.81 GB)
Used Size : 780083968 (371.97 GiB 399.40 GB)
Super Offset : 780084248 sectors
State : active
Device UUID : 590ac9c2:e4ae82b3:1248d87a:d655dd7c
Update Time : Thu Mar 6 14:39:49 2008
Checksum : 4de7b99b - correct
Events : 5071
Layout : left-symmetric
Chunk Size : 128K
Array Slot : 0 (0, 1, failed, 2)
Array State : Uuu 1 failed
/dev/sdb3:
Magic : a92b4efc
Version : 01
Feature Map : 0x0
Array UUID : a55c4f4e:8cba34b7:b5f70bb0:97fd1366
Name : 2
Creation Time : Wed Feb 27 15:38:46 2008
Raid Level : raid5
Raid Devices : 3
Used Dev Size : 780083992 (371.97 GiB 399.40 GB)
Array Size : 1560167936 (743.95 GiB 798.81 GB)
Used Size : 780083968 (371.97 GiB 399.40 GB)
Super Offset : 780084248 sectors
State : active
Device UUID : 9589b278:38932876:414d8879:b9c70fe7
Update Time : Thu Mar 6 14:39:49 2008
Checksum : 2f59943e - correct
Events : 5071
Layout : left-symmetric
Chunk Size : 128K
Array Slot : 1 (0, 1, failed, 2)
Array State : uUu 1 failed
/dev/sdc3:
Magic : a92b4efc
Version : 01
Feature Map : 0x0
Array UUID : a55c4f4e:8cba34b7:b5f70bb0:97fd1366
Name : 2
Creation Time : Wed Feb 27 15:38:46 2008
Raid Level : raid5
Raid Devices : 3
Used Dev Size : 780083992 (371.97 GiB 399.40 GB)
Array Size : 1560167936 (743.95 GiB 798.81 GB)
Used Size : 780083968 (371.97 GiB 399.40 GB)
Super Offset : 780084248 sectors
State : active
Device UUID : a8a110fa:75d91ef2:5e0376a7:dad76b1a
Update Time : Thu Mar 6 14:39:49 2008
Checksum : 8df7b8ce - correct
Events : 5071
Layout : left-symmetric
Chunk Size : 128K
Array Slot : 3 (0, 1, failed, 2)
Array State : uuU 1 failed
As you can see, every RAID5 member superblock reports one failed device,
while the array itself reports as clean. You can also see the discrepancies
in the array slot composition.
This only happens when I use a 1.0 or 1.1 superblock (I didn't try 1.2); if
I use 0.9 superblocks all the problems go away. I can test with one of the
arrays if some test is needed or advised.
I'm using a 2.6.24 x86_64 kernel (SuSE) and mdadm v2.6.2 (I also tried
v2.6.4 with no success). Also, as stated, this setup has only three disks,
with three partitions (0xFD) each, used to build three RAID arrays.
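If a reproduction recipe helps, a loop-device test along these lines
(entirely illustrative; /dev/md9 and the /tmp file names are just
placeholders) should show whether the metadata version alone makes the
difference:

  for i in 0 1 2; do dd if=/dev/zero of=/tmp/md_test$i bs=1M count=64; done
  for i in 0 1 2; do losetup /dev/loop$i /tmp/md_test$i; done
  mdadm --create /dev/md9 --metadata=1.0 --level=5 --raid-devices=3 /dev/loop[0-2]
  # wait for the initial resync to finish (cat /proc/mdstat), then:
  mdadm -E /dev/loop0    # check Array Slot / Array State (1.x) or the device table (0.90)
  # repeat with --metadata=0.90 and 1.1 for comparison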
Thanks for your help,
Rui Santos