* Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
@ 2009-02-04 19:27 Thomas J. Baker
2009-02-04 20:50 ` Joe Landman
2009-02-06 5:14 ` Neil Brown
0 siblings, 2 replies; 19+ messages in thread
From: Thomas J. Baker @ 2009-02-04 19:27 UTC (permalink / raw)
To: linux-raid
Any help greatly appreciated. Here are the details:
[root@node002 ~]# ./examineRAIDDisks
mdadm -E /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1
/dev/sdb1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa806 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 17 0 active sync /dev/sdb1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 active sync
15 15 0 0 15 active sync
16 16 0 0 16 active sync
17 17 0 0 17 active sync
18 18 0 0 18 active sync
19 19 0 0 19 active sync
20 20 0 0 20 active sync
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 0 8 17 0 active sync /dev/sdb1
/dev/sdc1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa82d - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 8 33 1 active sync /dev/sdc1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 1 8 33 1 active sync /dev/sdc1
/dev/sdd1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa83f - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 49 2 active sync /dev/sdd1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 2 8 49 2 active sync /dev/sdd1
/dev/sde1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa851 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 65 3 active sync /dev/sde1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 3 8 65 3 active sync /dev/sde1
/dev/sdf1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa863 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 81 4 active sync /dev/sdf1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 4 8 81 4 active sync /dev/sdf1
/dev/sdg1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa875 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 5 8 97 5 active sync /dev/sdg1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 5 8 97 5 active sync /dev/sdg1
mdadm: No md superblock detected on /dev/sdh1.
/dev/sdi1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa899 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 7 8 129 7 active sync /dev/sdi1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 7 8 129 7 active sync /dev/sdi1
mdadm: No md superblock detected on /dev/sdj1.
/dev/sdk1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa8bd - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 9 8 161 9 active sync /dev/sdk1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 9 8 161 9 active sync /dev/sdk1
/dev/sdl1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa8cf - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 10 8 177 10 active sync /dev/sdl1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 10 8 177 10 active sync /dev/sdl1
/dev/sdm1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa8e1 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 11 8 193 11 active sync /dev/sdm1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 11 8 193 11 active sync /dev/sdm1
/dev/sdn1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa8f3 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 12 8 209 12 active sync /dev/sdn1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 12 8 209 12 active sync /dev/sdn1
/dev/sdo1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa905 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 13 8 225 13 active sync /dev/sdo1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 13 8 225 13 active sync /dev/sdo1
/dev/sdp1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa8ce - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 21 65 97 21 active sync /dev/sdw1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 21 65 97 21 active sync /dev/sdw1
/dev/sdq1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa8e0 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 22 65 113 22 active sync /dev/sdx1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 22 65 113 22 active sync /dev/sdx1
/dev/sdr1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa8f2 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 23 65 129 23 active sync /dev/sdy1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 23 65 129 23 active sync /dev/sdy1
mdadm: No md superblock detected on /dev/sds1.
/dev/sdt1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa916 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 25 65 161 25 active sync /dev/sdaa1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 25 65 161 25 active sync /dev/sdaa1
/dev/sdu1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 20
Working Devices : 21
Failed Devices : 7
Spare Devices : 1
Checksum : 2a4aa928 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 26 65 177 26 active sync /dev/sdab1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 0 0 14 faulty removed
15 15 0 0 15 faulty removed
16 16 0 0 16 faulty removed
17 17 0 0 17 faulty removed
18 18 0 0 18 faulty removed
19 19 0 0 19 faulty removed
20 20 0 0 20 faulty removed
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 26 65 177 26 active sync /dev/sdab1
mdadm: No md superblock detected on /dev/sdv1.
/dev/sdw1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Mon Jan 26 14:03:23 2009
State : clean
Active Devices : 27
Working Devices : 28
Failed Devices : 0
Spare Devices : 1
Checksum : 2a48da42 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 14 8 241 14 active sync /dev/sdp1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 8 241 14 active sync /dev/sdp1
15 15 65 1 15 active sync /dev/sdq1
16 16 65 17 16 active sync /dev/sdr1
17 17 65 33 17 active sync /dev/sds1
18 18 65 49 18 active sync /dev/sdt1
19 19 65 65 19 active sync /dev/sdu1
20 20 65 81 20 active sync /dev/sdv1
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 14 8 241 14 active sync /dev/sdp1
/dev/sdx1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Mon Jan 26 14:03:23 2009
State : clean
Active Devices : 27
Working Devices : 28
Failed Devices : 0
Spare Devices : 1
Checksum : 2a48d98d - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 15 65 1 15 active sync /dev/sdq1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 8 241 14 active sync /dev/sdp1
15 15 65 1 15 active sync /dev/sdq1
16 16 65 17 16 active sync /dev/sdr1
17 17 65 33 17 active sync /dev/sds1
18 18 65 49 18 active sync /dev/sdt1
19 19 65 65 19 active sync /dev/sdu1
20 20 65 81 20 active sync /dev/sdv1
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 15 65 1 15 active sync /dev/sdq1
/dev/sdy1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Mon Jan 26 14:03:23 2009
State : clean
Active Devices : 27
Working Devices : 28
Failed Devices : 0
Spare Devices : 1
Checksum : 2a48d99f - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 16 65 17 16 active sync /dev/sdr1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 8 241 14 active sync /dev/sdp1
15 15 65 1 15 active sync /dev/sdq1
16 16 65 17 16 active sync /dev/sdr1
17 17 65 33 17 active sync /dev/sds1
18 18 65 49 18 active sync /dev/sdt1
19 19 65 65 19 active sync /dev/sdu1
20 20 65 81 20 active sync /dev/sdv1
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 16 65 17 16 active sync /dev/sdr1
/dev/sdz1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Mon Jan 26 14:03:23 2009
State : clean
Active Devices : 27
Working Devices : 28
Failed Devices : 0
Spare Devices : 1
Checksum : 2a48d9b1 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 17 65 33 17 active sync /dev/sds1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 8 241 14 active sync /dev/sdp1
15 15 65 1 15 active sync /dev/sdq1
16 16 65 17 16 active sync /dev/sdr1
17 17 65 33 17 active sync /dev/sds1
18 18 65 49 18 active sync /dev/sdt1
19 19 65 65 19 active sync /dev/sdu1
20 20 65 81 20 active sync /dev/sdv1
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 17 65 33 17 active sync /dev/sds1
/dev/sdaa1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Mon Jan 26 14:03:23 2009
State : clean
Active Devices : 27
Working Devices : 28
Failed Devices : 0
Spare Devices : 1
Checksum : 2a48d9c3 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 18 65 49 18 active sync /dev/sdt1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 8 241 14 active sync /dev/sdp1
15 15 65 1 15 active sync /dev/sdq1
16 16 65 17 16 active sync /dev/sdr1
17 17 65 33 17 active sync /dev/sds1
18 18 65 49 18 active sync /dev/sdt1
19 19 65 65 19 active sync /dev/sdu1
20 20 65 81 20 active sync /dev/sdv1
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 18 65 49 18 active sync /dev/sdt1
/dev/sdab1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Mon Jan 26 14:03:23 2009
State : clean
Active Devices : 27
Working Devices : 28
Failed Devices : 0
Spare Devices : 1
Checksum : 2a48d9d5 - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 19 65 65 19 active sync /dev/sdu1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 8 241 14 active sync /dev/sdp1
15 15 65 1 15 active sync /dev/sdq1
16 16 65 17 16 active sync /dev/sdr1
17 17 65 33 17 active sync /dev/sds1
18 18 65 49 18 active sync /dev/sdt1
19 19 65 65 19 active sync /dev/sdu1
20 20 65 81 20 active sync /dev/sdv1
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 19 65 65 19 active sync /dev/sdu1
/dev/sdac1:
Magic : a92b4efc
Version : 00.90.00
UUID : c00b698c:0b2bc972:2fe4a5a3:e488dcec
Creation Time : Thu Jun 28 05:16:13 2007
Raid Level : raid6
Used Dev Size : 292961216 (279.39 GiB 299.99 GB)
Array Size : 7324030400 (6984.74 GiB 7499.81 GB)
Raid Devices : 27
Total Devices : 28
Preferred Minor : 0
Update Time : Tue Jan 27 23:12:33 2009
State : clean
Active Devices : 26
Working Devices : 27
Failed Devices : 1
Spare Devices : 1
Checksum : 2a4aabdf - correct
Events : 0.12
Chunk Size : 64K
Number Major Minor RaidDevice State
this 20 65 81 20 active sync /dev/sdv1
0 0 8 17 0 active sync /dev/sdb1
1 1 8 33 1 active sync /dev/sdc1
2 2 8 49 2 active sync /dev/sdd1
3 3 8 65 3 active sync /dev/sde1
4 4 8 81 4 active sync /dev/sdf1
5 5 8 97 5 active sync /dev/sdg1
6 6 8 113 6 active sync /dev/sdh1
7 7 8 129 7 active sync /dev/sdi1
8 8 8 145 8 active sync /dev/sdj1
9 9 8 161 9 active sync /dev/sdk1
10 10 8 177 10 active sync /dev/sdl1
11 11 8 193 11 active sync /dev/sdm1
12 12 8 209 12 active sync /dev/sdn1
13 13 8 225 13 active sync /dev/sdo1
14 14 8 241 14 active sync /dev/sdp1
15 15 0 0 15 faulty removed
16 16 65 17 16 active sync /dev/sdr1
17 17 65 33 17 active sync /dev/sds1
18 18 65 49 18 active sync /dev/sdt1
19 19 65 65 19 active sync /dev/sdu1
20 20 65 81 20 active sync /dev/sdv1
21 21 65 97 21 active sync /dev/sdw1
22 22 65 113 22 active sync /dev/sdx1
23 23 65 129 23 active sync /dev/sdy1
24 24 65 145 24 active sync /dev/sdz1
25 25 65 161 25 active sync /dev/sdaa1
26 26 65 177 26 active sync /dev/sdab1
27 20 65 81 20 active sync /dev/sdv1
[root@node002 ~]#
Thanks,
tjb
--
=======================================================================
| Thomas Baker email: tjb@unh.edu |
| Systems Programmer |
| Research Computing Center voice: (603) 862-4490 |
| University of New Hampshire fax: (603) 862-1761 |
| 332 Morse Hall |
| Durham, NH 03824 USA http://wintermute.sr.unh.edu/~tjb |
=======================================================================
^ permalink raw reply [flat|nested] 19+ messages in thread

* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-04 19:27 Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"? Thomas J. Baker
@ 2009-02-04 20:50 ` Joe Landman
2009-02-04 21:03 ` Thomas J. Baker
2009-02-06 5:14 ` Neil Brown
1 sibling, 1 reply; 19+ messages in thread
From: Joe Landman @ 2009-02-04 20:50 UTC (permalink / raw)
To: tjb; +Cc: linux-raid

Thomas J. Baker wrote:
> Any help greatly appreciated. Here are the details:
>
> [root@node002 ~]# ./examineRAIDDisks
> mdadm -E /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1
> /dev/sdb1:
> Magic : a92b4efc
> Version : 00.90.00

Hi Thomas:

Don't you need 1.0 or higher superblocks to make this work?

Joe

--
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman@scalableinformatics.com
web  : http://www.scalableinformatics.com
       http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-04 20:50 ` Joe Landman
@ 2009-02-04 21:03 ` Thomas J. Baker
2009-02-04 21:17 ` Thomas J. Baker
2009-02-05 18:49 ` Bill Davidsen
0 siblings, 2 replies; 19+ messages in thread
From: Thomas J. Baker @ 2009-02-04 21:03 UTC (permalink / raw)
To: landman; +Cc: linux-raid

On Wed, 2009-02-04 at 15:50 -0500, Joe Landman wrote:
> Thomas J. Baker wrote:
> > Any help greatly appreciated. Here are the details:
> >
> > [root@node002 ~]# ./examineRAIDDisks
> > mdadm -E /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1
> > /dev/sdb1:
> > Magic : a92b4efc
> > Version : 00.90.00
>
> Hi Thomas:
>
> Don't you need 1.0 or higher superblocks to make this work?
>
> Joe

The array was made probably two years ago and had been working fine until recently. In reading the documentation for mdadm, it did seem like it should have required me to use the higher version but it never complained when I made it and worked fine.

Thanks,

tjb

--
=======================================================================
| Thomas Baker email: tjb@unh.edu |
| Systems Programmer |
| Research Computing Center voice: (603) 862-4490 |
| University of New Hampshire fax: (603) 862-1761 |
| 332 Morse Hall |
| Durham, NH 03824 USA http://wintermute.sr.unh.edu/~tjb |
=======================================================================

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-04 21:03 ` Thomas J. Baker
@ 2009-02-04 21:17 ` Thomas J. Baker
2009-02-05 18:49 ` Bill Davidsen
1 sibling, 0 replies; 19+ messages in thread
From: Thomas J. Baker @ 2009-02-04 21:17 UTC (permalink / raw)
To: landman; +Cc: linux-raid

On Wed, 2009-02-04 at 16:03 -0500, Thomas J. Baker wrote:
> On Wed, 2009-02-04 at 15:50 -0500, Joe Landman wrote:
> > Thomas J. Baker wrote:
> > > Any help greatly appreciated. Here are the details:
> > >
> > > [root@node002 ~]# ./examineRAIDDisks
> > > mdadm -E /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1
> > > /dev/sdb1:
> > > Magic : a92b4efc
> > > Version : 00.90.00
> >
> > Hi Thomas:
> >
> > Don't you need 1.0 or higher superblocks to make this work?
> >
> > Joe
>
> The array was made probably two years ago and had been working fine until recently. In reading the documentation for mdadm, it did seem like it should have required me to use the higher version but it never complained when I made it and worked fine.
>
> Thanks,
>
> tjb

Actually, rereading it makes me think it's OK. There are only 27 devices in the array and one hot spare. Each device is only 300GB, less than the 2TB limit.

       0, 0.90, default
              Use the original 0.90 format superblock. This format limits
              arrays to 28 component devices and limits component devices
              of levels 1 and greater to 2 terabytes.

I'll admit I could be misunderstanding something here.

Thanks,

tjb

--
=======================================================================
| Thomas Baker email: tjb@unh.edu |
| Systems Programmer |
| Research Computing Center voice: (603) 862-4490 |
| University of New Hampshire fax: (603) 862-1761 |
| 332 Morse Hall |
| Durham, NH 03824 USA http://wintermute.sr.unh.edu/~tjb |
=======================================================================

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-04 21:03 ` Thomas J. Baker
2009-02-04 21:17 ` Thomas J. Baker
@ 2009-02-05 18:49 ` Bill Davidsen
2009-02-05 18:59 ` Thomas J. Baker
1 sibling, 1 reply; 19+ messages in thread
From: Bill Davidsen @ 2009-02-05 18:49 UTC (permalink / raw)
To: tjb; +Cc: landman, linux-raid

Thomas J. Baker wrote:
> The array was made probably two years ago and had been working fine until recently. In reading the documentation for mdadm, it did seem like it should have required me to use the higher version but it never complained when I made it and worked fine.
>
What have you changed lately? Are the drives all on a single controller? Are you using PARTITIONS in mdadm.conf and letting mdadm find things for itself?

--
Bill Davidsen <davidsen@tmr.com>
  "Woe unto the statesman who makes war without a reason that will still
  be valid when the war is over..." Otto von Bismark

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-05 18:49 ` Bill Davidsen
@ 2009-02-05 18:59 ` Thomas J. Baker
2009-02-05 23:57 ` Bill Davidsen
0 siblings, 1 reply; 19+ messages in thread
From: Thomas J. Baker @ 2009-02-05 18:59 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-raid

On Thu, 2009-02-05 at 13:49 -0500, Bill Davidsen wrote:
> Thomas J. Baker wrote:
> > The array was made probably two years ago and had been working fine until recently. In reading the documentation for mdadm, it did seem like it should have required me to use the higher version but it never complained when I made it and worked fine.
> >
> What have you changed lately? Are the drives all on a single controller? Are you using PARTITIONS in mdadm.conf and letting mdadm find things for itself?
>

The array is made up of two Dell PowerVault 220s in split bus configuration with two Adaptec 39160 Dual Channel SCSI controllers. Each half of each PowerVault (7 disks) is connected to one of the channels on the Adaptecs. Four channels in all.

As far as changing things, what do you mean? The cause of the failure is likely heat as we've had some AC issues recently.

I didn't use mdadm.conf at all. All disks are partitioned with one 'Linux raid autodetect' partition. mdadm had always found the array automatically at boot.

Thanks,

tjb

--
=======================================================================
| Thomas Baker email: tjb@unh.edu |
| Systems Programmer |
| Research Computing Center voice: (603) 862-4490 |
| University of New Hampshire fax: (603) 862-1761 |
| 332 Morse Hall |
| Durham, NH 03824 USA http://wintermute.sr.unh.edu/~tjb |
=======================================================================

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-05 18:59 ` Thomas J. Baker
@ 2009-02-05 23:57 ` Bill Davidsen
2009-02-06 0:08 ` Thomas Baker
0 siblings, 1 reply; 19+ messages in thread
From: Bill Davidsen @ 2009-02-05 23:57 UTC (permalink / raw)
To: tjb; +Cc: linux-raid

Thomas J. Baker wrote:
> On Thu, 2009-02-05 at 13:49 -0500, Bill Davidsen wrote:
>
>> Thomas J. Baker wrote:
>>
>>> The array was made probably two years ago and had been working fine until recently. In reading the documentation for mdadm, it did seem like it should have required me to use the higher version but it never complained when I made it and worked fine.
>>>
>> What have you changed lately? Are the drives all on a single controller? Are you using PARTITIONS in mdadm.conf and letting mdadm find things for itself?
>>
>>
>
> The array is made up of two Dell PowerVault 220s in split bus configuration with two Adaptec 39160 Dual Channel SCSI controllers. Each half of each PowerVault (7 disks) is connected to one of the channels on the Adaptecs. Four channels in all.
>
> As far as changing things, what do you mean? The cause of the failure is likely heat as we've had some AC issues recently.
>
>
Well that's change, but if you can read the drives at all it doesn't sound like the typical "fall down dead" heat issues, I would expect tons of hardware errors at a lower level from the device controller. Did you check the partition tables with fdisk or similar? Are the drives all in the same physical box? IBM split their boxes, running four drives off one power and four (or three+CD) off the other. They are likely to have something in common, if you can find it you might fix it.

> I didn't use mdadm.conf at all. All disks are partitioned with one 'Linux raid autodetect' partition. mdadm had always found the array automatically at boot.
>
No kernel update or utilities update lately?

Given the choice of identify in hopes of a fixable problem or reinstall, config, recover from backup, I'm trying to see if you can do the former in preference to the latter.

--
Bill Davidsen <davidsen@tmr.com>
  "Woe unto the statesman who makes war without a reason that will still
  be valid when the war is over..." Otto von Bismark

^ permalink raw reply [flat|nested] 19+ messages in thread
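An illustrative example of that kind of check (assumed commands, not taken from the thread): each member disk's partition table can be listed non-destructively, and every disk should show a single partition whose Id is fd, "Linux raid autodetect".

# Assumed device globs for the 28 member disks - adjust to the actual names.
for d in /dev/sd[b-z] /dev/sda[a-c]; do
    fdisk -l "$d"        # expect one partition of type fd (Linux raid autodetect)
done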
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-05 23:57 ` Bill Davidsen
@ 2009-02-06 0:08 ` Thomas Baker
0 siblings, 0 replies; 19+ messages in thread
From: Thomas Baker @ 2009-02-06 0:08 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-raid

On Feb 5, 2009, at 6:57 PM, Bill Davidsen wrote:

> Thomas J. Baker wrote:
>> On Thu, 2009-02-05 at 13:49 -0500, Bill Davidsen wrote:
>>
>>> Thomas J. Baker wrote:
>>>
>>>> The array was made probably two years ago and had been working fine until recently. In reading the documentation for mdadm, it did seem like it should have required me to use the higher version but it never complained when I made it and worked fine.
>>>>
>>> What have you changed lately? Are the drives all on a single controller? Are you using PARTITIONS in mdadm.conf and letting mdadm find things for itself?
>>>
>>>
>>
>> The array is made up of two Dell PowerVault 220s in split bus configuration with two Adaptec 39160 Dual Channel SCSI controllers. Each half of each PowerVault (7 disks) is connected to one of the channels on the Adaptecs. Four channels in all.
>>
>> As far as changing things, what do you mean? The cause of the failure is likely heat as we've had some AC issues recently.
>>
>>
> Well that's change, but if you can read the drives at all it doesn't sound like the typical "fall down dead" heat issues, I would expect tons of hardware errors at a lower level from the device controller. Did you check the partition tables with fdisk or similar? Are the drives all in the same physical box? IBM split their boxes, running four drives off one power and four (or three+CD) off the other. They are likely to have something in common, if you can find it you might fix it.
>
>> I didn't use mdadm.conf at all. All disks are partitioned with one 'Linux raid autodetect' partition. mdadm had always found the array automatically at boot.
>>
>
> No kernel update or utilities update lately?
>
> Given the choice of identify in hopes of a fixable problem or reinstall, config, recover from backup, I'm trying to see if you can do the former in preference to the latter.
>

Fdisk reports all drives look OK as far as partition table and partition type. I'm in the process of running a media verify from the Adaptec BIOS on each of the four to make sure nothing is really wrong with them. The PowerVaults house 14 drives so we have two boxes. A PowerVault is just a box for disks, essentially an external SCSI enclosure. As far as I can tell, the hardware seems fine now that the AC is fixed.

I did do a software update after the failure in hopes of it helping, which likely updated the kernel since it had been a month or two on that machine. CentOS5 so nothing major should have changed in terms of versions, etc.

The research group that uses the array is hoping for a fixable problem too as opposed to the longer remake/restore route. The only hope to me seems to be if mdadm can somehow recover/remake the md superblock on the four troublesome disks.

Thanks,

tjb

--
=======================================================================
| Thomas Baker email: tjb@unh.edu |
| Systems Programmer |
| Research Computing Center voice: (603) 862-4490 |
| University of New Hampshire fax: (603) 862-1761 |
| 332 Morse Hall |
| Durham, NH 03824 USA http://wintermute.sr.unh.edu/~tjb |
=======================================================================

^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
2009-02-04 19:27 Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"? Thomas J. Baker
2009-02-04 20:50 ` Joe Landman
@ 2009-02-06 5:14 ` Neil Brown
2009-02-06 20:32 ` Thomas J. Baker
2009-02-07 4:05 ` Mr. James W. Laferriere
1 sibling, 2 replies; 19+ messages in thread
From: Neil Brown @ 2009-02-06 5:14 UTC (permalink / raw)
To: tjb; +Cc: linux-raid

On Wednesday February 4, tjb@unh.edu wrote:
> Any help greatly appreciated. Here are the details:

Hmm.....

The limit on the number of devices in a 0.90 array is 27, despite the fact that the manual page says '28'.

And the only limit that is enforced is that the number of raid_disks is limited to 27. So when you added a hot spare to your array, bad things started happening.

I'd better fix that code and documentation.

But the issue at the moment is fixing your array.
It appears that all slots (0-26) are present except 6, 8, 24.

It seems likely that
  6 is on sdh1
  8 is on sdj1
  24 is on sdz1 ... or sds1. They seem to move around a bit.

If only 2 were missing you would be able to bring the array up. But with 3 missing - not.

So we will need to recreate the array. This should preserve all your old data.

The command you will need is

  mdadm --create /dev/md0 -l6 -n27 .... list of device names.....

Getting the correct list of device names is tricky, but quite possible if you exercise due care.

The final list should have 27 entries, 2 of which should be the word "missing".

When you do this it will create a degraded array. As the array is degraded, no resync will happen so the data on the arrays will not be changed, only the metadata.

So if the list of devices turns out to be wrong, it isn't the end of the world. Just stop the array and try again with a different list.

So: how to get the list.
Start with the output of

  ./examineRAIDDisks | grep -E '^(/dev|this)'

Based on your current output, the start of this will be:

vvv
/dev/sdb1:
this 0 8 17 0 active sync /dev/sdb1
/dev/sdc1:
this 1 8 33 1 active sync /dev/sdc1
/dev/sdd1:
this 2 8 49 2 active sync /dev/sdd1
/dev/sde1:
this 3 8 65 3 active sync /dev/sde1
/dev/sdf1:
this 4 8 81 4 active sync /dev/sdf1
/dev/sdg1:
this 5 8 97 5 active sync /dev/sdg1
/dev/sdi1:
this 7 8 129 7 active sync /dev/sdi1
/dev/sdk1:
this 9 8 161 9 active sync /dev/sdk1
^^^

however if you have rebooted and particularly if you have moved any drives, this could be different now.

The information that is important is the
  /dev/sdX1:
line and the 5th column of the other line, that I have highlighted. Ignore the device name at the end of the lines (column 8), that is just confusing.

The 5th column number tells you where in the array the /dev device should live.
So from the above information, the first few devices in your list would be

  /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 missing /dev/sdi1 missing /dev/sdk1

If you follow this process on the complete output of the run, you will get a list with 27 entries, 3 of which will be the word 'missing'. You need to replace one of the 'missings' with a device that is not listed, but probably goes at that place in the order, e.g. sdh1 in place of the first missing.

This command might help you

  ./examineRAIDDisks |
    grep -E '^(/dev|this)' | awk 'NF==1 {d=$1} NF==8 {print $5, d}' |
    sort -n | awk 'BEGIN {l=0} $1 != l+1 {print l+1, "missing" } {print; l = $1}'

If you use the --create command as described above to create the array you will probably have all your data accessible. Use "fsck" or whatever to check. Do *not* add any other drives to the array until you are sure that you are happy with the data that you have found. If it doesn't look right, try a different drive in place of the 'missing'.

When you are happy, add two more drives to the array to get redundancy back (it will have to recover the drives) but *do not* add any more spares. Leave it with a total of 27 devices. If you add a spare, you will have problems again.

If any of this isn't clear, please ask for clarification.

Good luck.

NeilBrown

^ permalink raw reply [flat|nested] 19+ messages in thread
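The list-building recipe above can also be scripted end to end. The sketch below is illustrative only and is not part of the thread: it assumes the examineRAIDDisks wrapper shown earlier and slot numbers 0-26, and it only prints the resulting mdadm --create command for review. The manual step of substituting the most likely device (e.g. sdh1) for one of the 'missing' entries still applies before anything is run.

./examineRAIDDisks | grep -E '^(/dev|this)' | awk '
    # "/dev/sdX1:" header line: remember the device name (strip the colon)
    NF == 1      { dev = $1; sub(/:$/, "", dev) }
    # "this" line: the 5th field is the RaidDevice slot that device occupies
    $1 == "this" { slot[$5] = dev }
    END {
        cmd = "mdadm --create /dev/md0 --level=6 --raid-devices=27"
        for (i = 0; i < 27; i++)
            cmd = cmd " " (i in slot ? slot[i] : "missing")
        # print only; check the list by hand before running the command
        print cmd
    }'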
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"? 2009-02-06 5:14 ` Neil Brown @ 2009-02-06 20:32 ` Thomas J. Baker 2009-02-06 21:01 ` NeilBrown 2009-02-07 4:05 ` Mr. James W. Laferriere 1 sibling, 1 reply; 19+ messages in thread From: Thomas J. Baker @ 2009-02-06 20:32 UTC (permalink / raw) To: Neil Brown; +Cc: linux-raid On Fri, 2009-02-06 at 16:14 +1100, Neil Brown wrote: > On Wednesday February 4, tjb@unh.edu wrote: > > Any help greately appreciated. Here are the details: > > Hmm..... > > The limit on the number of devices in a 0.90 array is 27, despite the > fact that the manual page says '28'. > > And the only limit that is enforced is that the number of raid_disks > is limited to 27. So when you added a hot spare to your array, bad > things started happening. > > I'd better fix that code and documentation. > > But the issue at the moment is fixing your array. > It appears that all slots (0-26) are present except > 6,8,24 > > It seems likely that > 6 is on sdh1 > 8 is on sdj1 > 24 is on sdz1 ... or sds1. They seem to move around a bit. > > If only 2 were missing you would be able to bring the array up. > But with 3 missing - not. > > So we will need to recreate the array. This should preserve all your > old data. > > The command you will need is > > mdadm --create /dev/md0 -l6 -n27 .... list of device names..... > > Getting the correct list of device names is tricky, but quite possible > if you exercise due care. > > The final list should have 27 entries, 2 of which should be the word > "missing". > > When you do this it will create a degraded array. As the array is > degraded, no resync will happen so the data on the arrays will not be > changed, only the metadata. > > So if the list of devices turns out to be wrong, it isn't the end of > the world. Just stop the array and try again with a different list. > > So: how to get the list. > Start with the output of > ./examinRAIDDisks | grep -E '^(/dev|this)' > > Based on your current output, the start of this will be: > > vvv > /dev/sdb1: > this 0 8 17 0 active sync /dev/sdb1 > /dev/sdc1: > this 1 8 33 1 active sync /dev/sdc1 > /dev/sdd1: > this 2 8 49 2 active sync /dev/sdd1 > /dev/sde1: > this 3 8 65 3 active sync /dev/sde1 > /dev/sdf1: > this 4 8 81 4 active sync /dev/sdf1 > /dev/sdg1: > this 5 8 97 5 active sync /dev/sdg1 > /dev/sdi1: > this 7 8 129 7 active sync /dev/sdi1 > /dev/sdk1: > this 9 8 161 9 active sync /dev/sdk1 > ^^^ > > however if you have rebooted and particularly if you have moved any > drives, this could be different now. > > The information that is important is the > /dev/sdX1: > line and the 5th column of the other line, that I have highlighted. > Ignore the device name at the end of the lines (column 8), that is > just confusing. > > The 5th column number tells you where in the array the /dev device > should live. > So from the above information, the first few devices in your list > would be > > /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 missing > /dev/sdi missing /dev/sdk1 > > If you follow this process on the complete output of the run, you will > get a list with 27 entries, 3 of which will be the word 'missing'. > You need to replace one of the 'missings' with a device that is not > listed, but probably goes at that place in the order > e.g. sdh1 in place of the first missing. 
>
> This command might help you
>
>  ./examineRAIDDisks |
>     grep -E '^(/dev|this)' | awk 'NF==1 {d=$1} NF==8 {print $5, d}' |
>     sort -n | awk 'BEGIN {l=0} $1 != l+1 {print l+1, "missing" } {print; l = $1}'
>
> If you use the --create command as described above to create the array
> you will probably have all your data accessible.  Use "fsck" or
> whatever to check.  Do *not* add any other drives to the array until
> you are sure that you are happy with the data that you have found.  If
> it doesn't look right, try a different drive in place of the 'missing'.
>
> When you are happy, add two more drives to the array to get redundancy
> back (it will have to recover the drives) but *do not* add any more
> spares.  Leave it with a total of 27 devices.  If you add a spare, you
> will have problems again.
>
> If any of this isn't clear, please ask for clarification.
>
> Good luck.
>
> NeilBrown

Thanks for the info. I think I follow everything. One last question
before really trying it - is this what is expected when I actually run
the command - the warnings about previous array, etc?

[root@node002 ~]# ./recoverRAID
mdadm --create /dev/md0 --verbose --level=6 --raid-devices=27 /dev/sdb1
/dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 missing /dev/sdi1
missing /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdw1
/dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1 /dev/sdp1
/dev/sdq1 /dev/sdr1 missing /dev/sdt1 /dev/sdu1
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 64K
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=-295395124K  mtime=Fri Nov 20 19:36:27 1931
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdc1 appears to contain an ext2fs file system
    size=-1265904192K  mtime=Tue Dec 23 15:07:10 2008
mdadm: /dev/sdc1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdd1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sde1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdf1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdg1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdi1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdk1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdl1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdm1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdn1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdo1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdw1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdx1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdy1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdz1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdaa1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdab1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdac1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdp1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdq1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdr1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdt1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: /dev/sdu1 appears to contain an ext2fs file system
    size=-1265903936K  mtime=Sun Mar  1 20:48:00 2009
mdadm: /dev/sdu1 appears to be part of a raid array:
    level=raid6 devices=27 ctime=Thu Jun 28 05:16:13 2007
mdadm: size set to 292961216K
Continue creating array? n
mdadm: create aborted.
[root@node002 ~]#

Thanks,

tjb
--
=======================================================================
| Thomas Baker                     email: tjb@unh.edu                |
| Systems Programmer                                                 |
| Research Computing Center        voice: (603) 862-4490             |
| University of New Hampshire      fax: (603) 862-1761               |
| 332 Morse Hall                                                     |
| Durham, NH 03824 USA             http://wintermute.sr.unh.edu/~tjb |
=======================================================================
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-06 20:32 ` Thomas J. Baker
@ 2009-02-06 21:01   ` NeilBrown
  2009-02-06 21:47     ` Thomas J. Baker
  0 siblings, 1 reply; 19+ messages in thread
From: NeilBrown @ 2009-02-06 21:01 UTC (permalink / raw)
To: tjb; +Cc: linux-raid

On Sat, February 7, 2009 7:32 am, Thomas J. Baker wrote:

> Thanks for the info. I think I follow everything. One last question
> before really trying it - is this what is expected when I actually run
> the command - the warnings about previous array, etc?

Yes, you would expect exactly those messages.

NeilBrown
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-06 21:01 ` NeilBrown
@ 2009-02-06 21:47   ` Thomas J. Baker
  2009-02-07  2:09     ` NeilBrown
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas J. Baker @ 2009-02-06 21:47 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid

On Sat, 2009-02-07 at 08:01 +1100, NeilBrown wrote:
> On Sat, February 7, 2009 7:32 am, Thomas J. Baker wrote:
>
> > Thanks for the info. I think I follow everything. One last question
> > before really trying it - is this what is expected when I actually run
> > the command - the warnings about previous array, etc?
>
> Yes, you would expect exactly those messages.
>
> NeilBrown
>

OK, first I put the drives back to their original configuration (I had
swapped two banks to make them appear physically as the kernel sees them
logically, to make it easier to track down the possibly faulty disks).
For clarity, I put them back to the original configuration before trying
anything.

Next I ran

mdadm --create /dev/md0 --verbose --level=6 --raid-devices=26 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 missing \
    /dev/sdi1 missing /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 \
    /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sdt1 /dev/sdu1 /dev/sdv1 \
    /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1

which completed, but when I fsck'd the disk it wasn't happy, complaining
about a missing superblock on /dev/md0. I then stopped the array. I'm
not sure what to do next. It seems like the examine information of all
the disks has been updated by the last create. I guess I should only go
by the previous examine? As for the missing statement, should that only
relate to drives with no md superblock?

Thanks for the help and sorry for the continued questions.
Here is how it currently sits: [root@node002 ~]# ./examineRAIDDisks mdadm -E /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1 /dev/sdb1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0bffbe - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 0 8 17 0 active sync /dev/sdb1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdc1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0bffd0 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 1 8 33 1 active sync /dev/sdc1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdd1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 
7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0bffe2 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 2 8 49 2 active sync /dev/sdd1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sde1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0bfff4 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 3 8 65 3 active sync /dev/sde1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdf1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0006 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 4 8 81 4 active sync /dev/sdf1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 
0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdg1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0018 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 5 8 97 5 active sync /dev/sdg1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 mdadm: No md superblock detected on /dev/sdh1. 
/dev/sdi1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c003c - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 7 8 129 7 active sync /dev/sdi1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 mdadm: No md superblock detected on /dev/sdj1. /dev/sdk1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0060 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 9 8 161 9 active sync /dev/sdk1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdl1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0072 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 10 8 177 
10 active sync /dev/sdl1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdm1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0084 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 11 8 193 11 active sync /dev/sdm1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdn1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0096 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 12 8 209 12 active sync /dev/sdn1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync 
/dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdo1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c00a8 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 13 8 225 13 active sync /dev/sdo1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdp1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c00ba - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 14 8 241 14 active sync /dev/sdp1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdq1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 
7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0005 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 15 65 1 15 active sync /dev/sdq1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdr1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0017 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 16 65 17 16 active sync /dev/sdr1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sds1: Magic : a92b4efc Version : 00.90.00 UUID : 216d848a:ec7c507e:d27b1c78:ee592a9e Creation Time : Fri Feb 6 16:06:02 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7324030400 (6984.74 GiB 7499.81 GB) Raid Devices : 27 Total Devices : 27 Preferred Minor : 0 Update Time : Fri Feb 6 16:06:02 2009 State : active Active Devices : 24 Working Devices : 24 Failed Devices : 3 Spare Devices : 0 Checksum : 1c7a06a4 - correct Events : 0.1 Chunk Size : 64K Number Major Minor RaidDevice State this 17 65 145 17 active sync /dev/sdz1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty 7 7 8 129 7 active sync /dev/sdi1 8 8 
0 0 8 faulty 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 65 97 14 active sync /dev/sdw1 15 15 65 113 15 active sync /dev/sdx1 16 16 65 129 16 active sync /dev/sdy1 17 17 65 145 17 active sync /dev/sdz1 18 18 65 161 18 active sync /dev/sdaa1 19 19 65 177 19 active sync /dev/sdab1 20 20 65 193 20 active sync /dev/sdac1 21 21 8 241 21 active sync /dev/sdp1 22 22 65 1 22 active sync /dev/sdq1 23 23 65 17 23 active sync /dev/sdr1 24 24 0 0 24 faulty 25 25 65 49 25 active sync /dev/sdt1 26 26 65 65 26 active sync /dev/sdu1 /dev/sdt1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0039 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 17 65 49 17 active sync /dev/sdt1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdu1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c004b - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 18 65 65 18 active sync /dev/sdu1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 
24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdv1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c005d - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 19 65 81 19 active sync /dev/sdv1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdw1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c006f - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 20 65 97 20 active sync /dev/sdw1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdx1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0081 - correct Events : 0.4 Chunk Size : 64K Number Major 
Minor RaidDevice State this 21 65 113 21 active sync /dev/sdx1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdy1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c0093 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 22 65 129 22 active sync /dev/sdy1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdz1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c00a5 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 23 65 145 23 active sync /dev/sdz1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active 
sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdaa1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c00b7 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 24 65 161 24 active sync /dev/sdaa1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 /dev/sdab1: Magic : a92b4efc Version : 00.90.00 UUID : ccf28b9d:ea58ca5b:6ec35d57:27415988 Creation Time : Fri Feb 6 16:26:44 2009 Raid Level : raid6 Used Dev Size : 292961216 (279.39 GiB 299.99 GB) Array Size : 7031069184 (6705.35 GiB 7199.81 GB) Raid Devices : 26 Total Devices : 24 Preferred Minor : 0 Update Time : Fri Feb 6 16:27:11 2009 State : clean Active Devices : 24 Working Devices : 24 Failed Devices : 2 Spare Devices : 0 Checksum : 9b0c00c9 - correct Events : 0.4 Chunk Size : 64K Number Major Minor RaidDevice State this 25 65 177 25 active sync /dev/sdab1 0 0 8 17 0 active sync /dev/sdb1 1 1 8 33 1 active sync /dev/sdc1 2 2 8 49 2 active sync /dev/sdd1 3 3 8 65 3 active sync /dev/sde1 4 4 8 81 4 active sync /dev/sdf1 5 5 8 97 5 active sync /dev/sdg1 6 6 0 0 6 faulty removed 7 7 8 129 7 active sync /dev/sdi1 8 8 0 0 8 faulty removed 9 9 8 161 9 active sync /dev/sdk1 10 10 8 177 10 active sync /dev/sdl1 11 11 8 193 11 active sync /dev/sdm1 12 12 8 209 12 active sync /dev/sdn1 13 13 8 225 13 active sync /dev/sdo1 14 14 8 241 14 active sync /dev/sdp1 15 15 65 1 15 active sync /dev/sdq1 16 16 65 17 16 active sync /dev/sdr1 17 17 65 49 17 active sync /dev/sdt1 18 18 65 65 18 active sync /dev/sdu1 19 19 65 81 19 active sync /dev/sdv1 20 20 65 97 20 active sync /dev/sdw1 21 21 65 113 21 active sync /dev/sdx1 22 22 65 129 22 active sync /dev/sdy1 23 23 65 145 23 active sync /dev/sdz1 24 24 65 161 24 active sync /dev/sdaa1 25 25 65 177 25 active sync /dev/sdab1 mdadm: No md superblock detected on /dev/sdac1. 
[root@node002 ~]#

Thanks,

tjb
--
=======================================================================
| Thomas Baker                     email: tjb@unh.edu                |
| Systems Programmer                                                 |
| Research Computing Center        voice: (603) 862-4490             |
| University of New Hampshire      fax: (603) 862-1761               |
| 332 Morse Hall                                                     |
| Durham, NH 03824 USA             http://wintermute.sr.unh.edu/~tjb |
=======================================================================
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-06 21:47 ` Thomas J. Baker
@ 2009-02-07  2:09   ` NeilBrown
  2009-02-09 14:48     ` Thomas J. Baker
  0 siblings, 1 reply; 19+ messages in thread
From: NeilBrown @ 2009-02-07 2:09 UTC (permalink / raw)
To: tjb; +Cc: linux-raid

On Sat, February 7, 2009 8:47 am, Thomas J. Baker wrote:
> On Sat, 2009-02-07 at 08:01 +1100, NeilBrown wrote:
>> On Sat, February 7, 2009 7:32 am, Thomas J. Baker wrote:
>>
>> > Thanks for the info. I think I follow everything. One last question
>> > before really trying it - is this what is expected when I actually run
>> > the command - the warnings about previous array, etc?
>>
>> Yes, you would expect exactly those messages.
>>
>> NeilBrown
>>
>
> OK, first I put the drives back to their original configuration (I had
> swapped two banks to make them appear physically as the kernel sees them
> logically to make it easier to track down the possibly faulty disks.)
> For clarity, I put them back to the original configuration before trying
> anything.
>
> Next I ran
>
> mdadm --create /dev/md0 --verbose --level=6 --raid-devices=26 \
>     /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 missing \
>     /dev/sdi1 missing /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 \
>     /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sdt1 /dev/sdu1 /dev/sdv1 \
>     /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1

You said "--raid-devices=26".  Why did you do that?
It should be '27'.

You need to have 25 real devices and 2 'missing' devices.
24 of those real devices you can identify the correct position for
by looking at the --examine information from before you tried to
recreate the array.
The 25th real device you have to guess which is likely to be
the right one.

> which completed but when I fsck'd the disk, it wasn't happy, complaining
> about a missing superblock on /dev/md0. I then stopped the array. I'm
> not sure what to do next. It seems like the examine information of all
> the disks has been updated by the last create. I guess I should only go
> by the previous examine? As for the missing statement, should that only
> relate to drives with no md superblock?

Yes, go by the previous examine information.

The 'missing' can be any two devices.  The important thing is the devices
that aren't 'missing'.  They need to be devices that you have reason to
believe belong to that slot in the array.

NeilBrown
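For anyone repeating this, the slot-to-device list can also be assembled
mechanically rather than counted out by hand.  The sketch below is not from
the thread: it assumes the pre-crash mdadm -E output was saved to a file
(old-examine.txt is an invented name; it must be the output taken before the
failed create, since that create rewrote the superblocks), and it only prints
the --create command so the list can be checked and hand-edited first.  One
of the printed "missing" entries still has to be replaced by hand with the
guessed 25th device (sdh1, sdj1 or sds1 in this thread), as Neil describes.

#!/bin/sh
# Build a 27-entry device list from a saved copy of the old "mdadm -E"
# output and print (do not run) the matching --create command.
grep -E '^(/dev|this)' old-examine.txt |
  awk 'NF==1 {d=$1}
       NF==8 {sub(":$", "", d); slot[$5] = d}      # map RaidDevice slot to partition
       END {
         cmd = "mdadm --create /dev/md0 --verbose --level=6 --raid-devices=27"
         for (i = 0; i < 27; i++)
           cmd = cmd " " ((i in slot) ? slot[i] : "missing")
         print cmd
       }'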
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-07  2:09 ` NeilBrown
@ 2009-02-09 14:48   ` Thomas J. Baker
  2009-02-10 16:58     ` Nagilum
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas J. Baker @ 2009-02-09 14:48 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid

On Sat, 2009-02-07 at 13:09 +1100, NeilBrown wrote:
> On Sat, February 7, 2009 8:47 am, Thomas J. Baker wrote:
> > On Sat, 2009-02-07 at 08:01 +1100, NeilBrown wrote:
> >> On Sat, February 7, 2009 7:32 am, Thomas J. Baker wrote:
> >>
> >> > Thanks for the info. I think I follow everything. One last question
> >> > before really trying it - is this what is expected when I actually run
> >> > the command - the warnings about previous array, etc?
> >>
> >> Yes, you would expect exactly those messages.
> >>
> >> NeilBrown
> >>
> >
> > OK, first I put the drives back to their original configuration (I had
> > swapped two banks to make them appear physically as the kernel sees them
> > logically to make it easier to track down the possibly faulty disks.)
> > For clarity, I put them back to the original configuration before trying
> > anything.
> >
> > Next I ran
> >
> > mdadm --create /dev/md0 --verbose --level=6 --raid-devices=26 \
> >     /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 missing \
> >     /dev/sdi1 missing /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 \
> >     /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sdt1 /dev/sdu1 /dev/sdv1 \
> >     /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1
>
> You said "--raid-devices=26".  Why did you do that?
> It should be '27'.
> You need to have 25 real devices and 2 'missing' devices.
> 24 of those real devices you can identify the correct position for
> by looking at the --examine information from before you tried to
> recreate the array.
> The 25th real device you have to guess which is likely to be
> the right one.
>
> > which completed but when I fsck'd the disk, it wasn't happy, complaining
> > about a missing superblock on /dev/md0. I then stopped the array. I'm
> > not sure what to do next. It seems like the examine information of all
> > the disks has been updated by the last create. I guess I should only go
> > by the previous examine? As for the missing statement, should that only
> > relate to drives with no md superblock?
>
> Yes, go by the previous examine information.
>
> The 'missing' can be any two devices.  The important thing is the devices
> that aren't 'missing'.  They need to be devices that you have reason to
> believe belong to that slot in the array.
>
> NeilBrown
>

OK, specifying 27 devices worked and I successfully mounted the RAID and
looked around.  Thanks again for the guidance!  I guess things look OK
(there are directories and files!)

I was wondering if I should fsck it before adding the other two disks
back in or after?  Any other words of experience?

Thanks again,

tjb
--
=======================================================================
| Thomas Baker                     email: tjb@unh.edu                |
| Systems Programmer                                                 |
| Research Computing Center        voice: (603) 862-4490             |
| University of New Hampshire      fax: (603) 862-1761               |
| 332 Morse Hall                                                     |
| Durham, NH 03824 USA             http://wintermute.sr.unh.edu/~tjb |
=======================================================================
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-09 14:48 ` Thomas J. Baker
@ 2009-02-10 16:58   ` Nagilum
  0 siblings, 0 replies; 19+ messages in thread
From: Nagilum @ 2009-02-10 16:58 UTC (permalink / raw)
To: tjb; +Cc: linux-raid

----- Message from tjb@unh.edu ---------
    Date: Mon, 09 Feb 2009 09:48:18 -0500
    From: "Thomas J. Baker" <tjb@unh.edu>
Reply-To: tjb@unh.edu
 Subject: Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
      To: NeilBrown <neilb@suse.de>
      Cc: linux-raid@vger.kernel.org

> OK, specifying 27 devices worked and I successfully mounted the RAID and
> looked around. Thanks again for the guidance! I guess things look OK
> (there are directories and files!)
>
> I was wondering if I should fsck it before adding the other two disks
> back in or after? Any other words of experience?

----- End message from tjb@unh.edu -----

All fsck commands should have a 'dry-run' or 'no-modify' mode in which
they just check without attempting to change anything.  This should be
your first mode of operation.  If everything checks out you can add your
spares and regain your redundancy.

Kind regards,
Alex.
========================================================================
#    _  __          _ __     http://www.nagilum.org/ \n icq://69646724 #
#   / |/ /__ ____ _(_) /_ ____ _  nagilum@nagilum.org \n +491776461165 #
#  /    / _ `/ _ `/ / / // /  ' \  Amiga (68k/PPC): AOS/NetBSD/Linux   #
# /_/|_/\_,_/\_, /_/_/\_,_/_/_/_/   Mac (PPC): MacOS-X / NetBSD /Linux #
#           /___/     x86: FreeBSD/Linux/Solaris/Win2k  ARM9: EPOC EV6 #
========================================================================
----------------------------------------------------------------
cakebox.homeunix.net  - all the machine one needs..
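In concrete terms, and assuming the filesystem on the array really is
ext2/ext3 (mdadm's "appears to contain an ext2fs file system" warnings
earlier in the thread suggest it is), that sequence would look roughly like
the sketch below.  The two partitions being re-added are examples only; use
whichever ones were left out of the --create.

# Read-only check first: -n answers "no" to every repair question, so
# nothing on /dev/md0 is modified.
fsck -n /dev/md0

# Once the dry run comes back clean, restore redundancy by re-adding the
# two devices that were left out of the create (example names):
mdadm /dev/md0 --add /dev/sdh1 /dev/sdj1

# Watch the recovery progress.
cat /proc/mdstat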
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-06  5:14 ` Neil Brown
  2009-02-06 20:32   ` Thomas J. Baker
@ 2009-02-07  4:05   ` Mr. James W. Laferriere
  2009-02-08 22:02     ` Thomas Baker
  1 sibling, 1 reply; 19+ messages in thread
From: Mr. James W. Laferriere @ 2009-02-07 4:05 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid maillist

	Hello Neil,  In this thread you mention a (I think) script named
examineRAIDDisks.  Is this available someplace?  I've searched the
archive & it does not appear to be mentioned anywhere but this thread.
	Tia, JimL
--
+------------------------------------------------------------------+
| James   W.   Laferriere | System    Techniques | Give me VMS     |
| Network&System Engineer | 2133    McCullam Ave | Give me Linux   |
| babydr@baby-dragons.com | Fairbanks, AK. 99701 | only on AXP     |
+------------------------------------------------------------------+
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-07  4:05 ` Mr. James W. Laferriere
@ 2009-02-08 22:02   ` Thomas Baker
  2009-02-09 11:47     ` Max Waterman
  0 siblings, 1 reply; 19+ messages in thread
From: Thomas Baker @ 2009-02-08 22:02 UTC (permalink / raw)
To: Mr. James W. Laferriere; +Cc: Neil Brown, linux-raid maillist

On Feb 6, 2009, at 11:05 PM, Mr. James W. Laferriere wrote:

> Hello Neil,  In this thread you mention a (I think) script named
> examineRAIDDisks.
> Is this available someplace?
> I've searched the archive & it does not appear to be mentioned
> anywhere but this thread.
> Tia, JimL

It's just a script I wrote that runs mdadm -E on all my disks so I
don't have to keep typing all those disk names:

#!/bin/csh -x

mdadm -E \
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 \
/dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 \
/dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 \
/dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1

Thanks,

tjb
--
=======================================================================
| Thomas Baker                     email: tjb@unh.edu                |
| Systems Programmer                                                 |
| Research Computing Center        voice: (603) 862-4490             |
| University of New Hampshire      fax: (603) 862-1761               |
| 332 Morse Hall                                                     |
| Durham, NH 03824 USA             http://wintermute.sr.unh.edu/~tjb |
=======================================================================
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-08 22:02 ` Thomas Baker
@ 2009-02-09 11:47   ` Max Waterman
  2009-02-10  8:55     ` Luca Berra
  0 siblings, 1 reply; 19+ messages in thread
From: Max Waterman @ 2009-02-09 11:47 UTC (permalink / raw)
To: linux-raid maillist

Thomas Baker wrote:
>
> On Feb 6, 2009, at 11:05 PM, Mr. James W. Laferriere wrote:
>
>> Hello Neil,  In this thread you mention a (I think) script named
>> examineRAIDDisks.
>> Is this available someplace?
>> I've searched the archive & it does not appear to be mentioned
>> anywhere but this thread.
>> Tia, JimL
>
> It's just a script I wrote that runs mdadm -E on all my disks so I
> don't have to keep typing all those disk names:
>
> #!/bin/csh -x
>
> mdadm -E \
> /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 \
> /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 \
> /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 \
> /dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1

I'm sure most people know this (and I already replied privately), but I
just love these features of 'the' (most) shell, so I figured I'd share...

I'd guess these produce identical arg lists:

The shell will expand the args, match them to files, and complain if one
doesn't exist...

$ echo /dev/sd[b-z]1 /dev/sda[a-c]1

The shell will just expand the args, not checking if a file exists with
the same name:

$ echo /dev/sd{b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z,aa,ab,ac}1
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
/dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1
/dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1
/dev/sdw1 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdaa1 /dev/sdab1 /dev/sdac1

i.e., this will also work:

$ echo max was {here,there,everywhere}

irrespective of any files that may or may not exist;

...but this might be a surprise:

$ echo /dev/sd[abcd]1
/dev/sda1
$ echo /dev/sd[bcd]1
/dev/sd[bcd]1

I only have the one drive (sda) on this computer, so 'sd[abcd]1' only
matched one drive.  The unexpected bit is that the shell passed on the
arg 'as is' when it didn't match any file (IIRC that's different for
csh, which I notice you're using).

As ever, YMMV.

Max.
* Re: Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"?
  2009-02-09 11:47 ` Max Waterman
@ 2009-02-10  8:55   ` Luca Berra
  0 siblings, 0 replies; 19+ messages in thread
From: Luca Berra @ 2009-02-10 8:55 UTC (permalink / raw)
To: linux-raid maillist

I'm going a bit OT, bear with me.

On Mon, Feb 09, 2009 at 01:47:23PM +0200, Max Waterman wrote:
> I only have the one drive (sda) on this computer, so 'sd[abcd]1' only
> matched one drive.  The unexpected bit is that the shell passed on the arg
> 'as is' when it didn't match any file (IIRC that's different for csh, which
> I notice you're using).

Filename generation (glob) syntax is common to all shells, IIRC.
Behaviour when globbing does not match varies:

ksh:  will pass the argument unmodified, which I believe is POSIX.

csh:  depends on the option nonomatch; if set, it leaves the argument
      unmodified, else (the default) it will print an error.

	csh% got light?
	No Match.

bash: has two options that control globbing; the default is as ksh.
      failglob: if set, prints an error, like csh
      nullglob: if set, just ignores the pattern (i.e. replaces it
      with nothing)

zsh:  rtfm

--
Luca Berra -- bluca@comedia.it
	Communication Media & Services S.r.l.
 /"\
 \ /     ASCII RIBBON CAMPAIGN
  X        AGAINST HTML MAIL
 / \
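A quick way to see the three bash behaviours side by side, using a pattern
that is assumed to match nothing on the system:

$ echo /dev/sdzz[0-9]1            # default: the unmatched pattern is passed through as-is
/dev/sdzz[0-9]1
$ shopt -s nullglob
$ echo /dev/sdzz[0-9]1 done       # nullglob: the pattern simply disappears
done
$ shopt -u nullglob; shopt -s failglob
$ echo /dev/sdzz[0-9]1            # failglob: bash reports an error and echo is not run at all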
end of thread, other threads: [~2009-02-10 16:58 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed
-- links below jump to the message on this page --)

2009-02-04 19:27 Any hope for a 27 disk RAID6+1HS array with four disks reporting "No md superblock detected"? Thomas J. Baker
2009-02-04 20:50 ` Joe Landman
2009-02-04 21:03 ` Thomas J. Baker
2009-02-04 21:17 ` Thomas J. Baker
2009-02-05 18:49 ` Bill Davidsen
2009-02-05 18:59 ` Thomas J. Baker
2009-02-05 23:57 ` Bill Davidsen
2009-02-06  0:08 ` Thomas Baker
2009-02-06  5:14 ` Neil Brown
2009-02-06 20:32 ` Thomas J. Baker
2009-02-06 21:01 ` NeilBrown
2009-02-06 21:47 ` Thomas J. Baker
2009-02-07  2:09 ` NeilBrown
2009-02-09 14:48 ` Thomas J. Baker
2009-02-10 16:58 ` Nagilum
2009-02-07  4:05 ` Mr. James W. Laferriere
2009-02-08 22:02 ` Thomas Baker
2009-02-09 11:47 ` Max Waterman
2009-02-10  8:55 ` Luca Berra