* (unknown)
@ 2011-06-08 14:24 Dragon
2011-06-08 14:39 ` SRaid with 13 Disks crashed Phil Turmel
0 siblings, 1 reply; 8+ messages in thread
From: Dragon @ 2011-06-08 14:24 UTC (permalink / raw)
To: linux-raid
SRaid with 13 Disks crashed
Hello,
This seems to be my last chance to get back all of the data on a software RAID5 with 12-13 disks.
I run Debian (kernel 2.6.32-bpo.5-amd64). I recently grew the array from 12 to 13 disks, for a total size of about 18 TB. After running mke2fs I discovered that ext4 only allows a maximum filesystem size of 16 TB. I then tried to shrink the array back to 12 disks, and now the raid is gone.
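The size mismatch can be checked with a little arithmetic (a sketch; the per-member size comes from the mdadm -E output later in the thread, and the 16 TiB figure assumes ext4's 32-bit block numbers with 4 KiB blocks, the limit of e2fsprogs at the time):

```python
# Why the grown array no longer fits in ext4: with 4 KiB blocks and
# 32-bit block numbers, ext4 tops out at 2^32 blocks = 16 TiB.
used_dev_size_kib = 1465138496        # per-member "Used Dev Size" (1 KiB units)
data_disks = 13 - 1                   # RAID5: one disk's worth of parity
array_size_kib = used_dev_size_kib * data_disks
print(array_size_kib)                 # 17581661952, matching "Array Size"

ext4_limit_bytes = (2 ** 32) * 4096   # 16 TiB
print(array_size_kib * 1024 > ext4_limit_bytes)   # True: too big for ext4
```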
I have tried various assemble and examine commands, without success.
Here is some information:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : inactive sdh[0](S) sda[13](S) sdg[12](S) sdf[11](S) sde[10](S) sdd[9](S) sdc[8](S) sdb[6](S) sdm[5](S) sdl[4](S) sdj[3](S) sdi[2](S)
17581661952 blocks
unused devices: <none>
mdadm --detail /dev/md0
mdadm: md device /dev/md0 does not appear to be active.
mdadm --assemble --force -v /dev/md0 /dev/sdh /dev/sda /dev/sdg /dev/sdf /dev/sde /dev/sdd /dev/sdc /dev/sdb /dev/sdm /dev/sdl /dev/sdj /dev/sdi --update=super-minor /dev/sdh
mdadm: looking for devices for /dev/md0
mdadm: updating superblock of /dev/sdh with minor number 0
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
mdadm: updating superblock of /dev/sda with minor number 0
mdadm: /dev/sda is identified as a member of /dev/md0, slot 13.
mdadm: updating superblock of /dev/sdg with minor number 0
mdadm: /dev/sdg is identified as a member of /dev/md0, slot 12.
mdadm: updating superblock of /dev/sdf with minor number 0
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 11.
mdadm: updating superblock of /dev/sde with minor number 0
mdadm: /dev/sde is identified as a member of /dev/md0, slot 10.
mdadm: updating superblock of /dev/sdd with minor number 0
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 9.
mdadm: updating superblock of /dev/sdc with minor number 0
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 8.
mdadm: updating superblock of /dev/sdb with minor number 0
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 6.
mdadm: updating superblock of /dev/sdm with minor number 0
mdadm: /dev/sdm is identified as a member of /dev/md0, slot 5.
mdadm: updating superblock of /dev/sdl with minor number 0
mdadm: /dev/sdl is identified as a member of /dev/md0, slot 4.
mdadm: updating superblock of /dev/sdj with minor number 0
mdadm: /dev/sdj is identified as a member of /dev/md0, slot 3.
mdadm: updating superblock of /dev/sdi with minor number 0
mdadm: /dev/sdi is identified as a member of /dev/md0, slot 2.
mdadm: updating superblock of /dev/sdh with minor number 0
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
mdadm: no uptodate device for slot 1 of /dev/md0
mdadm: added /dev/sdi to /dev/md0 as 2
mdadm: added /dev/sdj to /dev/md0 as 3
mdadm: added /dev/sdl to /dev/md0 as 4
mdadm: added /dev/sdm to /dev/md0 as 5
mdadm: added /dev/sdb to /dev/md0 as 6
mdadm: no uptodate device for slot 7 of /dev/md0
mdadm: added /dev/sdc to /dev/md0 as 8
mdadm: added /dev/sdd to /dev/md0 as 9
mdadm: added /dev/sde to /dev/md0 as 10
mdadm: added /dev/sdf to /dev/md0 as 11
mdadm: added /dev/sdg to /dev/md0 as 12
mdadm: added /dev/sda to /dev/md0 as 13
mdadm: added /dev/sdh to /dev/md0 as 0
mdadm: /dev/md0 assembled from 11 drives and 1 spare - not enough to start the array.
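The "not enough to start" outcome comes down to the Events counter in each member's superblock: members that lag the majority are treated as stale. A minimal sketch of that comparison (sample values mirror the mdadm -E dumps later in the thread, where one superblock shows 156606 against 156864 on the rest):

```python
import re

# Find members whose Events counter lags the newest -- these stale
# superblocks are what keep the array from assembling cleanly.
reports = {
    "/dev/sdh": "Update Time : Fri Jun  3 22:49:22 2011\n Events : 156606",
    "/dev/sdb": "Update Time : Fri Jun  3 23:47:53 2011\n Events : 156864",
    "/dev/sdc": "Update Time : Fri Jun  3 23:47:53 2011\n Events : 156864",
}

events = {dev: int(re.search(r"Events : (\d+)", text).group(1))
          for dev, text in reports.items()}
newest = max(events.values())
stale = sorted(dev for dev, ev in events.items() if ev < newest)
print(stale)   # ['/dev/sdh'] -- 258 events behind the rest
```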
mdadm.conf
#old=ARRAY /dev/md0 level=raid5 num-devices=13 metadata=0.90 UUID=975d6eb2:285eed11:021df236:c2d05073
ARRAY /dev/md0 UUID=975d6eb2:285eed11:021df236:c2d05073
Hope someone can help. Thanks.
* Re: SRaid with 13 Disks crashed
2011-06-08 14:24 (unknown) Dragon
@ 2011-06-08 14:39 ` Phil Turmel
0 siblings, 0 replies; 8+ messages in thread
From: Phil Turmel @ 2011-06-08 14:39 UTC (permalink / raw)
To: Dragon; +Cc: linux-raid
Hi Dragon,
On 06/08/2011 10:24 AM, Dragon wrote:
> SRaid with 13 Disks crashed
> Hello,
>
>
> This seems to be my last chance to get back all of the data on a software RAID5 with 12-13 disks.
> I run Debian (kernel 2.6.32-bpo.5-amd64). I recently grew the array from 12 to 13 disks, for a total size of about 18 TB. After running mke2fs I discovered that ext4 only allows a maximum filesystem size of 16 TB. I then tried to shrink the array back to 12 disks, and now the raid is gone.
Did you actually mean "mke2fs"? It destroys existing data. I hope you meant "resize2fs".
> I have tried various assemble and examine commands, without success.
>
> Here is some information:
> cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : inactive sdh[0](S) sda[13](S) sdg[12](S) sdf[11](S) sde[10](S) sdd[9](S) sdc[8](S) sdb[6](S) sdm[5](S) sdl[4](S) sdj[3](S) sdi[2](S)
> 17581661952 blocks
>
> unused devices: <none>
>
> mdadm --detail /dev/md0
> mdadm: md device /dev/md0 does not appear to be active.
>
> mdadm --assemble --force -v /dev/md0 /dev/sdh /dev/sda /dev/sdg /dev/sdf /dev/sde /dev/sdd /dev/sdc /dev/sdb /dev/sdm /dev/sdl /dev/sdj /dev/sdi --update=super-minor /dev/sdh
Was /dev/sdk supposed to be in this list?
> mdadm: looking for devices for /dev/md0
> mdadm: updating superblock of /dev/sdh with minor number 0
> mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
> mdadm: updating superblock of /dev/sda with minor number 0
> mdadm: /dev/sda is identified as a member of /dev/md0, slot 13.
This is suspicious. Looks like sda was added as a spare?
> mdadm: updating superblock of /dev/sdg with minor number 0
> mdadm: /dev/sdg is identified as a member of /dev/md0, slot 12.
> mdadm: updating superblock of /dev/sdf with minor number 0
> mdadm: /dev/sdf is identified as a member of /dev/md0, slot 11.
> mdadm: updating superblock of /dev/sde with minor number 0
> mdadm: /dev/sde is identified as a member of /dev/md0, slot 10.
> mdadm: updating superblock of /dev/sdd with minor number 0
> mdadm: /dev/sdd is identified as a member of /dev/md0, slot 9.
> mdadm: updating superblock of /dev/sdc with minor number 0
> mdadm: /dev/sdc is identified as a member of /dev/md0, slot 8.
> mdadm: updating superblock of /dev/sdb with minor number 0
> mdadm: /dev/sdb is identified as a member of /dev/md0, slot 6.
> mdadm: updating superblock of /dev/sdm with minor number 0
> mdadm: /dev/sdm is identified as a member of /dev/md0, slot 5.
> mdadm: updating superblock of /dev/sdl with minor number 0
> mdadm: /dev/sdl is identified as a member of /dev/md0, slot 4.
> mdadm: updating superblock of /dev/sdj with minor number 0
> mdadm: /dev/sdj is identified as a member of /dev/md0, slot 3.
> mdadm: updating superblock of /dev/sdi with minor number 0
> mdadm: /dev/sdi is identified as a member of /dev/md0, slot 2.
> mdadm: updating superblock of /dev/sdh with minor number 0
> mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
> mdadm: no uptodate device for slot 1 of /dev/md0
> mdadm: added /dev/sdi to /dev/md0 as 2
> mdadm: added /dev/sdj to /dev/md0 as 3
> mdadm: added /dev/sdl to /dev/md0 as 4
> mdadm: added /dev/sdm to /dev/md0 as 5
> mdadm: added /dev/sdb to /dev/md0 as 6
> mdadm: no uptodate device for slot 7 of /dev/md0
> mdadm: added /dev/sdc to /dev/md0 as 8
> mdadm: added /dev/sdd to /dev/md0 as 9
> mdadm: added /dev/sde to /dev/md0 as 10
> mdadm: added /dev/sdf to /dev/md0 as 11
> mdadm: added /dev/sdg to /dev/md0 as 12
> mdadm: added /dev/sda to /dev/md0 as 13
> mdadm: added /dev/sdh to /dev/md0 as 0
> mdadm: /dev/md0 assembled from 11 drives and 1 spare - not enough to start the array.
Indeed. Your problem is likely to be /dev/sda.
> mdadm.conf
> #old=ARRAY /dev/md0 level=raid5 num-devices=13 metadata=0.90 UUID=975d6eb2:285eed11:021df236:c2d05073
> ARRAY /dev/md0 UUID=975d6eb2:285eed11:021df236:c2d05073
>
> Hope someone can help. Thanks.
Please share the output of "mdadm -E /dev/sd[abcdefghijklm]"
Phil
* Re: SRaid with 13 Disks crashed
@ 2011-06-08 20:02 Dragon
2011-06-08 20:32 ` Phil Turmel
0 siblings, 1 reply; 8+ messages in thread
From: Dragon @ 2011-06-08 20:02 UTC (permalink / raw)
To: philip; +Cc: linux-raid
Hi Dragon,
On 06/08/2011 10:24 AM, Dragon wrote:
> SRaid with 13 Disks crashed
> Hello,
>
>
> This seems to be my last chance to get back all of the data on a software RAID5 with 12-13 disks.
> I run Debian (kernel 2.6.32-bpo.5-amd64). I recently grew the array from 12 to 13 disks, for a total size of about 18 TB. After running mke2fs I discovered that ext4 only allows a maximum filesystem size of 16 TB. I then tried to shrink the array back to 12 disks, and now the raid is gone.
Did you actually mean "mke2fs"? It destroys existing data. I hope you meant "resize2fs".
> I have tried various assemble and examine commands, without success.
>
> Here is some information:
> cat /proc/mdstat
> Personalities : [raid6] [raid5] [raid4]
> md0 : inactive sdh[0](S) sda[13](S) sdg[12](S) sdf[11](S) sde[10](S) sdd[9](S) sdc[8](S) sdb[6](S) sdm[5](S) sdl[4](S) sdj[3](S) sdi[2](S)
> 17581661952 blocks
>
> unused devices: <none>
>
> mdadm --detail /dev/md0
> mdadm: md device /dev/md0 does not appear to be active.
>
> mdadm --assemble --force -v /dev/md0 /dev/sdh /dev/sda /dev/sdg /dev/sdf /dev/sde /dev/sdd /dev/sdc /dev/sdb /dev/sdm /dev/sdl /dev/sdj /dev/sdi --update=super-minor /dev/sdh
Was /dev/sdk supposed to be in this list?
> mdadm: looking for devices for /dev/md0
> mdadm: updating superblock of /dev/sdh with minor number 0
> mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
> mdadm: updating superblock of /dev/sda with minor number 0
> mdadm: /dev/sda is identified as a member of /dev/md0, slot 13.
This is suspicious. Looks like sda was added as a spare?
> mdadm: updating superblock of /dev/sdg with minor number 0
> mdadm: /dev/sdg is identified as a member of /dev/md0, slot 12.
> mdadm: updating superblock of /dev/sdf with minor number 0
> mdadm: /dev/sdf is identified as a member of /dev/md0, slot 11.
> mdadm: updating superblock of /dev/sde with minor number 0
> mdadm: /dev/sde is identified as a member of /dev/md0, slot 10.
> mdadm: updating superblock of /dev/sdd with minor number 0
> mdadm: /dev/sdd is identified as a member of /dev/md0, slot 9.
> mdadm: updating superblock of /dev/sdc with minor number 0
> mdadm: /dev/sdc is identified as a member of /dev/md0, slot 8.
> mdadm: updating superblock of /dev/sdb with minor number 0
> mdadm: /dev/sdb is identified as a member of /dev/md0, slot 6.
> mdadm: updating superblock of /dev/sdm with minor number 0
> mdadm: /dev/sdm is identified as a member of /dev/md0, slot 5.
> mdadm: updating superblock of /dev/sdl with minor number 0
> mdadm: /dev/sdl is identified as a member of /dev/md0, slot 4.
> mdadm: updating superblock of /dev/sdj with minor number 0
> mdadm: /dev/sdj is identified as a member of /dev/md0, slot 3.
> mdadm: updating superblock of /dev/sdi with minor number 0
> mdadm: /dev/sdi is identified as a member of /dev/md0, slot 2.
> mdadm: updating superblock of /dev/sdh with minor number 0
> mdadm: /dev/sdh is identified as a member of /dev/md0, slot 0.
> mdadm: no uptodate device for slot 1 of /dev/md0
> mdadm: added /dev/sdi to /dev/md0 as 2
> mdadm: added /dev/sdj to /dev/md0 as 3
> mdadm: added /dev/sdl to /dev/md0 as 4
> mdadm: added /dev/sdm to /dev/md0 as 5
> mdadm: added /dev/sdb to /dev/md0 as 6
> mdadm: no uptodate device for slot 7 of /dev/md0
> mdadm: added /dev/sdc to /dev/md0 as 8
> mdadm: added /dev/sdd to /dev/md0 as 9
> mdadm: added /dev/sde to /dev/md0 as 10
> mdadm: added /dev/sdf to /dev/md0 as 11
> mdadm: added /dev/sdg to /dev/md0 as 12
> mdadm: added /dev/sda to /dev/md0 as 13
> mdadm: added /dev/sdh to /dev/md0 as 0
> mdadm: /dev/md0 assembled from 11 drives and 1 spare - not enough to start the array.
Indeed. Your problem is likely to be /dev/sda.
> mdadm.conf
> #old=ARRAY /dev/md0 level=raid5 num-devices=13 metadata=0.90 UUID=975d6eb2:285eed11:021df236:c2d05073
> ARRAY /dev/md0 UUID=975d6eb2:285eed11:021df236:c2d05073
>
> Hope someone can help. Thanks.
Please share the output of "mdadm -E /dev/sd[abcdefghijklm]"
Phil
-------------------
Sorry, my mistake; of course I meant "resize2fs".
"Was /dev/sdk supposed to be in this list?"
-> Yes, it was part of the raid, but I think something happened with the device naming, and it is now the system and boot disk. For clarification: the raid is stand-alone.
"This is suspicious. Looks like sda was added as a spare?"
-> Recently I wanted to shrink the raid from 13 disks to 12, and for that I took one disk and marked it as a spare, but it was sda. A UUID problem?
->
mdadm -E /dev/sda
/dev/sda:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee418e - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 13 8 0 13 spare /dev/sda
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4196 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 6 8 16 6 active sync /dev/sdb
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdc
/dev/sdc:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41aa - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 8 8 32 8 active sync /dev/sdc
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41bc - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 9 8 48 9 active sync /dev/sdd
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41ea - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 112 0 active sync /dev/sdh
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41fe - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 128 2 active sync /dev/sdi
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdg
/dev/sdg:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4210 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 144 3 active sync /dev/sdj
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdh
/dev/sdh:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 22:49:22 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee3313 - correct
Events : 156606
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 13 8 160 13 spare /dev/sdk
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 160 13 spare /dev/sdk
mdadm -E /dev/sdi
/dev/sdi:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4232 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 176 4 active sync /dev/sdl
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdj
/dev/sdj:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4244 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 5 8 192 5 active sync /dev/sdm
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdk
mdadm: No md superblock detected on /dev/sdk.
mdadm -E /dev/sdl
/dev/sdl:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41ce - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 10 8 64 10 active sync /dev/sde
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdm
/dev/sdm:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41e0 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 11 8 80 11 active sync /dev/sdf
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdn
/dev/sdn:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41f2 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 12 8 96 12 active sync /dev/sdg
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdo
mdadm: No md superblock detected on /dev/sdo.
nassrv01:~# fdisk -l |grep sd
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdf: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdj: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdk: 20.4 GB, 20409532416 bytes
/dev/sdk1 * 1 2372 19053058+ 83 Linux
/dev/sdk2 2373 2481 875542+ 5 Extended
/dev/sdk5 2373 2481 875511 82 Linux swap / Solaris
Disk /dev/sdl: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdm: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdn: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdo: 1500.3 GB, 1500301910016 bytes
/dev/sdo1 1 182401 1465136001 83 Linux
Hope that helps. Thanks.
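One way to see the device renaming in the -E dumps above (a sketch, assuming whole-disk sd devices, where the minor number is 16 times the disk index for the first 16 disks): decode the minor recorded on each superblock's "this" line and compare it with the device actually queried:

```python
# 0.90 superblocks record members by major/minor number. When the kernel
# renames disks, the "this" line no longer matches the device queried.
# Decode the recorded minor back to an sdX name (whole-disk devices:
# minor = 16 * index, valid for sda..sdp). Sample pairs mirror the -E
# dumps above (e.g. querying /dev/sde returned a superblock claiming sdh).
def sd_name(minor):
    return "/dev/sd" + chr(ord("a") + minor // 16)

queried_vs_recorded_minor = {"/dev/sde": 112, "/dev/sdf": 128, "/dev/sdb": 16}
for queried, minor in sorted(queried_vs_recorded_minor.items()):
    recorded = sd_name(minor)
    status = "matches" if recorded == queried else "was " + recorded
    print(queried + ": " + status)
```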
* Re: SRaid with 13 Disks crashed
2011-06-08 20:02 Dragon
@ 2011-06-08 20:32 ` Phil Turmel
0 siblings, 0 replies; 8+ messages in thread
From: Phil Turmel @ 2011-06-08 20:32 UTC (permalink / raw)
To: Dragon; +Cc: linux-raid
Hi Dragon,
On 06/08/2011 04:02 PM, Dragon wrote:
[...]
> Please share the output of "mdadm -E /dev/sd[abcdefghijklm]"
>
> Phil
> -------------------
> Sorry, my mistake; of course I meant "resize2fs".
> "Was /dev/sdk supposed to be in this list?"
> -> Yes, it was part of the raid, but I think something happened with the device naming, and it is now the system and boot disk. For clarification: the raid is stand-alone.
> "This is suspicious. Looks like sda was added as a spare?"
> -> Recently I wanted to shrink the raid from 13 disks to 12, and for that I took one disk and marked it as a spare, but it was sda. A UUID problem?
> ->
> mdadm -E /dev/sda
> /dev/sda:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee418e - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 13 8 0 13 spare /dev/sda
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdb
> /dev/sdb:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee4196 - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 6 8 16 6 active sync /dev/sdb
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdc
> /dev/sdc:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee41aa - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 8 8 32 8 active sync /dev/sdc
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdd
> /dev/sdd:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee41bc - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 9 8 48 9 active sync /dev/sdd
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
OK, something is odd here:
> mdadm -E /dev/sde
> /dev/sde:
^^^
You requested /dev/sde...
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee41ea - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 0 8 112 0 active sync /dev/sdh
We get a report for /dev/sdh (did you scramble the report?) ^^^
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
And again here, and for all the rest:
> mdadm -E /dev/sdf
> /dev/sdf:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee41fe - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 2 8 128 2 active sync /dev/sdi
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdg
> /dev/sdg:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee4210 - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 3 8 144 3 active sync /dev/sdj
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
This one is odd:
> mdadm -E /dev/sdh
> /dev/sdh:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 22:49:22 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee3313 - correct
> Events : 156606
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 13 8 160 13 spare /dev/sdk
You supposedly requested /dev/sdh, but this shows the report for /dev/sdk, which is not a raid device, yet appears here as a spare.
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 160 13 spare /dev/sdk
> mdadm -E /dev/sdi
> /dev/sdi:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee4232 - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 4 8 176 4 active sync /dev/sdl
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdj
> /dev/sdj:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee4244 - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 5 8 192 5 active sync /dev/sdm
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
Now, this is significant:
> mdadm -E /dev/sdk
> mdadm: No md superblock detected on /dev/sdk.
As we see below, this is not one of your 1.5T drives.
> mdadm -E /dev/sdl
> /dev/sdl:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee41ce - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 10 8 64 10 active sync /dev/sde
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdm
> /dev/sdm:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee41e0 - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 11 8 80 11 active sync /dev/sdf
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdn
> /dev/sdn:
> Magic : a92b4efc
> Version : 0.90.00
> UUID : 975d6eb2:285eed11:021df236:c2d05073
> Creation Time : Tue Oct 13 23:26:17 2009
> Raid Level : raid5
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Raid Devices : 13
> Total Devices : 12
> Preferred Minor : 0
>
> Update Time : Fri Jun 3 23:47:53 2011
> State : clean
> Active Devices : 11
> Working Devices : 12
> Failed Devices : 2
> Spare Devices : 1
> Checksum : 1dee41f2 - correct
> Events : 156864
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> Number Major Minor RaidDevice State
> this 12 8 96 12 active sync /dev/sdg
>
> 0 0 8 112 0 active sync /dev/sdh
> 1 1 0 0 1 faulty removed
> 2 2 8 128 2 active sync /dev/sdi
> 3 3 8 144 3 active sync /dev/sdj
> 4 4 8 176 4 active sync /dev/sdl
> 5 5 8 192 5 active sync /dev/sdm
> 6 6 8 16 6 active sync /dev/sdb
> 7 7 0 0 7 faulty removed
> 8 8 8 32 8 active sync /dev/sdc
> 9 9 8 48 9 active sync /dev/sdd
> 10 10 8 64 10 active sync /dev/sde
> 11 11 8 80 11 active sync /dev/sdf
> 12 12 8 96 12 active sync /dev/sdg
> 13 13 8 0 13 spare /dev/sda
> mdadm -E /dev/sdo
> mdadm: No md superblock detected on /dev/sdo.
> nassrv01:~# fdisk -l |grep sd
> Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdf: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdj: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdk: 20.4 GB, 20409532416 bytes
> /dev/sdk1 * 1 2372 19053058+ 83 Linux
> /dev/sdk2 2373 2481 875542+ 5 Extended
> /dev/sdk5 2373 2481 875511 82 Linux swap / Solaris
> Disk /dev/sdl: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdm: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdn: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdo: 1500.3 GB, 1500301910016 bytes
> /dev/sdo1 1 182401 1465136001 83 Linux
Uh, oh. /dev/sdo has a partition table. Please show "mdadm -E /dev/sdo1"
I'm guessing that drive letters have changed since you first put this together. Could you please get my "lsdrv" script from:
http://github.com/pturmel/lsdrv
and show us the output. This will help make sure we don't lose track of the roles of each of these drives.
My first impression from the above output is that the shrink from 13 back to 12 did not actually happen. Shrinking is dangerous enough that mdadm stays in test mode unless you set a sysfs variable with the new size before trying to shrink.
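On a healthy, running array, the sizes involved in such a shrink are visible read-only in sysfs. A minimal sketch, assuming the array is /dev/md0 (paths are the standard md sysfs attributes, not specific advice for this box):

```shell
# md exposes per-member and whole-array sizes under /sys; reading them is safe.
cat /sys/block/md0/md/component_size   # space used from each member device
cat /sys/block/md0/md/array_size       # "default", or an explicitly clamped size
```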
Another question: Do you have a backup?
Phil
* Re: SRaid with 13 Disks crashed
@ 2011-06-10 7:52 Dragon
2011-06-10 11:48 ` Phil Turmel
0 siblings, 1 reply; 8+ messages in thread
From: Dragon @ 2011-06-10 7:52 UTC (permalink / raw)
To: philip; +Cc: linux-raid
I haven't limited anything; that is all I get from the shell after executing the script. I am not aware of using a fast-boot kernel, but I saw that it came with kernel version 2.6.28. I think I upgraded to the latest kernel version after the raid crashed, in order to get the newer mdadm, ext, and whatever other tools help to recover the raid.
Am I right that to use the --backup-file option I must already have a backup file ;)? I didn't have one.
As far as I understand your advice, I should recreate the raid with the disks in the correct order. You chose them from the output of mdadm -E, which gives the actual number of each disk, and the variation for disks sdd and sdn is because both report the same number, right? I think I understand that. But why is disk "j" the last in the order? My output says that disk sdd is number 13, but as a spare for sda...?
Here is the state after rebooting this morning:
fdisk -l|grep sd
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdc: 20.4 GB, 20409532416 bytes
/dev/sdc1 * 1 2372 19053058+ 83 Linux
/dev/sdc2 2373 2481 875542+ 5 Extended
/dev/sdc5 2373 2481 875511 82 Linux swap / Solaris
Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdf: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdj: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdk: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdl: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdm: 1500.3 GB, 1500301910016 bytes
Disk /dev/sdn: 1500.3 GB, 1500301910016 bytes
mdadm -E /dev/sda
/dev/sda:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4232 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 176 4 active sync /dev/sdl
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdb
/dev/sdb:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4244 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 5 8 192 5 active sync /dev/sdm
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdd
/dev/sdd:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee418e - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 13 8 0 13 spare /dev/sda
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sde
/dev/sde:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4196 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 6 8 16 6 active sync /dev/sdb
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdf
/dev/sdf:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41aa - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 8 8 32 8 active sync /dev/sdc
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdg
/dev/sdg:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41bc - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 9 8 48 9 active sync /dev/sdd
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdh
/dev/sdh:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41ce - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 10 8 64 10 active sync /dev/sde
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdi
/dev/sdi:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41e0 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 11 8 80 11 active sync /dev/sdf
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdj
/dev/sdj:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41f2 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 12 8 96 12 active sync /dev/sdg
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdk
/dev/sdk:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41ea - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 112 0 active sync /dev/sdh
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdl
/dev/sdl:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee41fe - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 8 128 2 active sync /dev/sdi
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdm
/dev/sdm:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 23:47:53 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee4210 - correct
Events : 156864
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 3 8 144 3 active sync /dev/sdj
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 0 13 spare /dev/sda
mdadm -E /dev/sdn
/dev/sdn:
Magic : a92b4efc
Version : 0.90.00
UUID : 975d6eb2:285eed11:021df236:c2d05073
Creation Time : Tue Oct 13 23:26:17 2009
Raid Level : raid5
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Raid Devices : 13
Total Devices : 12
Preferred Minor : 0
Update Time : Fri Jun 3 22:49:22 2011
State : clean
Active Devices : 11
Working Devices : 12
Failed Devices : 2
Spare Devices : 1
Checksum : 1dee3313 - correct
Events : 156606
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 13 8 160 13 spare /dev/sdk
0 0 8 112 0 active sync /dev/sdh
1 1 0 0 1 faulty removed
2 2 8 128 2 active sync /dev/sdi
3 3 8 144 3 active sync /dev/sdj
4 4 8 176 4 active sync /dev/sdl
5 5 8 192 5 active sync /dev/sdm
6 6 8 16 6 active sync /dev/sdb
7 7 0 0 7 faulty removed
8 8 8 32 8 active sync /dev/sdc
9 9 8 48 9 active sync /dev/sdd
10 10 8 64 10 active sync /dev/sde
11 11 8 80 11 active sync /dev/sdf
12 12 8 96 12 active sync /dev/sdg
13 13 8 160 13 spare /dev/sdk
------
in short:
/dev/sda:
this 4 8 176 4 active sync /dev/sdl
/dev/sdb:
this 5 8 192 5 active sync /dev/sdm
/dev/sdd:
this 13 8 0 13 spare /dev/sda
/dev/sde:
this 6 8 16 6 active sync /dev/sdb
/dev/sdf:
this 8 8 32 8 active sync /dev/sdc
/dev/sdg:
this 9 8 48 9 active sync /dev/sdd
/dev/sdh:
this 10 8 64 10 active sync /dev/sde
/dev/sdi:
this 11 8 80 11 active sync /dev/sdf
/dev/sdj:
this 12 8 96 12 active sync /dev/sdg
/dev/sdk:
this 0 8 112 0 active sync /dev/sdh
/dev/sdl:
this 2 8 128 2 active sync /dev/sdi
/dev/sdm:
this 3 8 144 3 active sync /dev/sdj
/dev/sdn:
this 13 8 160 13 spare /dev/sdk
----
Given that, I would think the order got confused, because no disk matches the position given in the first line of its report. But I would do this:
mdadm -C /dev/md0 -l 5 -n 13 -e 0.90 -c 64 --assume-clean /dev/sd{k,l,m,a,b,e,?d?,f,g,h,i,j,n}
or
mdadm -C /dev/md0 -l 5 -n 13 -e 0.90 -c 64 --assume-clean /dev/sd{k,l,m,a,b,e,?n?,f,g,h,i,j,d}
Or what do you think? How should I handle the two spare disks, sda and sdk?
* Re: SRaid with 13 Disks crashed
2011-06-10 7:52 Dragon
@ 2011-06-10 11:48 ` Phil Turmel
0 siblings, 0 replies; 8+ messages in thread
From: Phil Turmel @ 2011-06-10 11:48 UTC (permalink / raw)
To: Dragon; +Cc: linux-raid
Hi Dragon,
On 06/10/2011 03:52 AM, Dragon wrote:
> I haven't limited anything; that is all I get from the shell after executing the script. I am not aware of using a fast-boot kernel, but I saw that it came with kernel version 2.6.28. I think I upgraded to the latest kernel version after the raid crashed, in order to get the newer mdadm, ext, and whatever other tools help to recover the raid.
Newer kernels have the option to probe drives in parallel. You got a newer kernel.
> Am I right that to use the --backup-file option I must already have a backup file ;)? I didn't have one.
Your attempt to shrink the array from 13 back to 12 would have needed the file, and you didn't supply one. mdadm almost certainly told you it wouldn't shrink the array.
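For reference, the shrink Dragon attempted would have needed an invocation roughly like the following. This is a hedged sketch (the backup-file path is made up), not something to run against the array in its current state:

```shell
# Reducing the member count of a raid5 rewrites data through a critical
# section, so mdadm demands a backup file it can use to recover if interrupted.
mdadm --grow /dev/md0 --raid-devices=12 --backup-file=/root/md0-shrink.bak
```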
> As far as I understand your advice, I should recreate the raid with the disks in the correct order. You chose them from the output of mdadm -E, which gives the actual number of each disk, and the variation for disks sdd and sdn is because both report the same number, right? I think I understand that. But why is disk "j" the last in the order? My output says that disk sdd is number 13, but as a spare for sda...?
> Here is the state after rebooting this morning:
Yes, I'm using the "RaidDevice" column to determine which drive is which. The drive names in the far right column are the original names, before your kernel changed. They aren't important. To reconstruct your 13-disk array, the devices from 0 to 12 must be listed in the create command in the correct order. Two of your devices think they are spares, reporting "RaidDevice" > 12. They must fit into the two slots that are reported as "faulty removed".
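Concretely: in each "this" line, the fifth field is the RaidDevice slot, and the trailing device name is only the name recorded when the superblock was written. A minimal parsing sketch (the sample line is copied verbatim from the /dev/sdh report above):

```python
# The "this" line of a 0.90-superblock `mdadm -E` report, copied verbatim
# from the /dev/sdh output above.
line = "this    10       8       64       10      active sync   /dev/sde"

fields = line.split()
slot = int(fields[4])      # the RaidDevice column: the authoritative position
stale_name = fields[-1]    # the device name recorded at creation time; ignore
print(slot, stale_name)    # -> 10 /dev/sde
```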
> fdisk -l|grep sd
> Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdc: 20.4 GB, 20409532416 bytes
> /dev/sdc1 * 1 2372 19053058+ 83 Linux
> /dev/sdc2 2373 2481 875542+ 5 Extended
> /dev/sdc5 2373 2481 875511 82 Linux swap / Solaris
> Disk /dev/sdd: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdf: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdg: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdh: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdi: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdj: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdk: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdl: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdm: 1500.3 GB, 1500301910016 bytes
> Disk /dev/sdn: 1500.3 GB, 1500301910016 bytes
[...]
> ------
> in short:
> /dev/sda:
> this 4 8 176 4 active sync /dev/sdl
>
> /dev/sdb:
> this 5 8 192 5 active sync /dev/sdm
>
> /dev/sdd:
> this 13 8 0 13 spare /dev/sda
>
> /dev/sde:
> this 6 8 16 6 active sync /dev/sdb
>
> /dev/sdf:
> this 8 8 32 8 active sync /dev/sdc
>
> /dev/sdg:
> this 9 8 48 9 active sync /dev/sdd
>
> /dev/sdh:
> this 10 8 64 10 active sync /dev/sde
>
> /dev/sdi:
> this 11 8 80 11 active sync /dev/sdf
>
> /dev/sdj:
> this 12 8 96 12 active sync /dev/sdg
>
> /dev/sdk:
> this 0 8 112 0 active sync /dev/sdh
>
> /dev/sdl:
> this 2 8 128 2 active sync /dev/sdi
>
> /dev/sdm:
> this 3 8 144 3 active sync /dev/sdj
>
> /dev/sdn:
> this 13 8 160 13 spare /dev/sdk
Good to summarize, but ignore the names on the right.
> After that I would think the order is confused, because no disk is in the right position per the first line. But I would do this:
>
> mdadm -C /dev/md0 -l 5 -n 13 -e 0.90 -c 64 --assume-clean /dev/sd{k,l,m,a,b,e,?d?,f,g,h,i,j,n}
> or
> mdadm -C /dev/md0 -l 5 -n 13 -e 0.90 -c 64 --assume-clean /dev/sd{k,l,m,a,b,e,?n?,f,g,h,i,j,d}
No. The devices that know where they belong are 0,2,3,4,5,6,8,9,10,11,12. That would be /dev/sd{k,*,l,m,a,b,e,*,f,g,h,i,j}. The devices that report 13, /dev/sdd & /dev/sdn, must fit into positions 1 & 7. Two possibilities:
/dev/sd{k,d,l,m,a,b,e,n,f,g,h,i,j} or /dev/sd{k,n,l,m,a,b,e,d,f,g,h,i,j}
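The two candidate create commands can be generated mechanically rather than typed by hand. A sketch, assuming the slot list above with the two holes (slots 1 and 7) marked `_`:

```shell
# Slots 0..12 in order; "_" marks the two holes (slots 1 and 7) that
# the devices reporting slot 13 (/dev/sdd, /dev/sdn) must fill.
known="k _ l m a b e _ f g h i j"

for pair in "d n" "n d"; do
    set -- $pair
    # sed without /g replaces only the first "_", so the first call
    # fills slot 1 and the second call fills slot 7.
    order=$(echo "$known" | sed "s/_/$1/" | sed "s/_/$2/")
    cmd="mdadm -C /dev/md0 -l 5 -n 13 -e 0.90 -c 64 --assume-clean"
    for s in $order; do cmd="$cmd /dev/sd$s"; done
    echo "$cmd"
done
```

This only prints the two commands; each would still be run by hand, followed by a read-only `fsck -n /dev/md0` to see which ordering produces a sane filesystem.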
HTH,
Phil
* (unknown)
@ 2011-06-10 13:06 Dragon
2011-06-10 14:01 ` SRaid with 13 Disks crashed Phil Turmel
0 siblings, 1 reply; 8+ messages in thread
From: Dragon @ 2011-06-10 13:06 UTC (permalink / raw)
To: philip; +Cc: linux-raid
You are right, the array starts at pos 0, so pos 1 and 7 are the right positions. The second try was perfect. fsck shows this:
fsck -n /dev/md0
fsck from util-linux-ng 2.17.2
e2fsck 1.41.12 (17-May-2010)
/dev/md0 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md0: 266872/1007288320 files (15.4% non-contiguous), 3769576927/4029130864 blocks
and:
mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Fri Jun 10 14:19:24 2011
Raid Level : raid5
Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
Raid Devices : 13
Total Devices : 13
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Fri Jun 10 14:19:24 2011
State : clean
Active Devices : 13
Working Devices : 13
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : 8c4d8438:42aa49f9:a6d866f6:b6ea6b93 (local to host nassrv01)
Events : 0.1
Number Major Minor RaidDevice State
0 8 160 0 active sync /dev/sdk
1 8 208 1 active sync /dev/sdn
2 8 176 2 active sync /dev/sdl
3 8 192 3 active sync /dev/sdm
4 8 0 4 active sync /dev/sda
5 8 16 5 active sync /dev/sdb
6 8 64 6 active sync /dev/sde
7 8 48 7 active sync /dev/sdd
8 8 80 8 active sync /dev/sdf
9 8 96 9 active sync /dev/sdg
10 8 112 10 active sync /dev/sdh
11 8 128 11 active sync /dev/sdi
12 8 144 12 active sync /dev/sdj
Normally I use fsck.ext4, or rather fsck.ext4dev. Is that a problem? What does 15.4% non-contiguous mean — the share of lost data? After that, do I shrink like this?
mdadm /dev/md0 --fail /dev/sdj
mdadm /dev/md0 --remove /dev/sdj
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Is that the right way? I assume the disk I take out of the raid is not the same one I added last? So I have to read out the serial number to find it among the hard drives?
many thx so far
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
* Re: SRaid with 13 Disks crashed
2011-06-10 13:06 (unknown) Dragon
@ 2011-06-10 14:01 ` Phil Turmel
0 siblings, 0 replies; 8+ messages in thread
From: Phil Turmel @ 2011-06-10 14:01 UTC (permalink / raw)
To: Dragon; +Cc: linux-raid
On 06/10/2011 09:06 AM, Dragon wrote:
> You are right, the array starts at pos 0, so pos 1 and 7 are the right positions. The second try was perfect. fsck shows this:
Yay!
> fsck -n /dev/md0
> fsck from util-linux-ng 2.17.2
> e2fsck 1.41.12 (17-May-2010)
> /dev/md0 was not cleanly unmounted, check forced.
> Pass 1: Checking inodes, blocks, and sizes
> Pass 2: Checking directory structure
> Pass 3: Checking directory connectivity
> Pass 4: Checking reference counts
> Pass 5: Checking group summary information
> /dev/md0: 266872/1007288320 files (15.4% non-contiguous), 3769576927/4029130864 blocks
>
> and:
> mdadm --detail /dev/md0
> /dev/md0:
> Version : 0.90
> Creation Time : Fri Jun 10 14:19:24 2011
> Raid Level : raid5
> Array Size : 17581661952 (16767.18 GiB 18003.62 GB)
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)
> Raid Devices : 13
> Total Devices : 13
> Preferred Minor : 0
> Persistence : Superblock is persistent
>
> Update Time : Fri Jun 10 14:19:24 2011
> State : clean
> Active Devices : 13
> Working Devices : 13
> Failed Devices : 0
> Spare Devices : 0
>
> Layout : left-symmetric
> Chunk Size : 64K
>
> UUID : 8c4d8438:42aa49f9:a6d866f6:b6ea6b93 (local to host nassrv01)
> Events : 0.1
>
> Number Major Minor RaidDevice State
> 0 8 160 0 active sync /dev/sdk
> 1 8 208 1 active sync /dev/sdn
> 2 8 176 2 active sync /dev/sdl
> 3 8 192 3 active sync /dev/sdm
> 4 8 0 4 active sync /dev/sda
> 5 8 16 5 active sync /dev/sdb
> 6 8 64 6 active sync /dev/sde
> 7 8 48 7 active sync /dev/sdd
> 8 8 80 8 active sync /dev/sdf
> 9 8 96 9 active sync /dev/sdg
> 10 8 112 10 active sync /dev/sdh
> 11 8 128 11 active sync /dev/sdi
> 12 8 144 12 active sync /dev/sdj
>
> Normally I use fsck.ext4, or rather fsck.ext4dev. Is that a problem? What does 15.4% non-contiguous mean — the share of lost data? After that, do I shrink like this?
fsck automatically calls fsck.ext4 when it sees an ext4 filesystem. 15.4% non-contiguous == 15.4% fragmented. No lost data.
Now that you have a good filesystem, mounting it and taking a backup would be a good idea. Or at least retrieve any files that are very important to you.
> mdadm /dev/md0 --fail /dev/sdj
> mdadm /dev/md0 --remove /dev/sdj
NO! You must use "mdadm --grow". Yes, "--grow" also does "shrink". Your fsck shows that the ext4 filesystem is still sized for the original 12-disk setup, so you don't have to shrink the filesystem. You do have to shrink the raid:
Step 1a: Tell mdadm the final size you are aiming for. MD will emulate this while you test that the new size works:
mdadm /dev/md0 --grow --array-size=16116523456k
(Please show "mdadm -D /dev/md0" at this point.)
Step 1b: Verify data integrity with another fsck -n
Step 2: Tell mdadm to really reshape to the 12-disk raid5
mdadm /dev/md0 --grow -n 12 --backup-file=/reshape.bak
When the reshape/shrink is done, "mdadm -D /dev/md0" will report "Raid Devices : 12" and "Spare Devices : 1", and one of them, almost certainly /dev/sdj, will be marked "spare".
At this point, I recommend converting to raid6, consuming the spare.
mdadm /dev/md0 --grow -n 13 -l 6 --backup-file=/reshape.bak
It might be possible to go directly to this layout (in place of step 2 above). It would save a lot of time. Maybe someone else on the list can answer that. Or you can just try it. I'm sure mdadm will complain if it's not possible ;).
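Phil's steps can be collected into a dry-run script that only prints the commands for review; nothing here touches the array. The backup-file path is Phil's example and should live on a disk outside the array:

```shell
# Phil's shrink-then-convert plan, printed for review before running.
# --array-size takes KiB by default; 16116523456 = Used Dev Size * 11.
plan="mdadm /dev/md0 --grow --array-size=16116523456
fsck -n /dev/md0
mdadm /dev/md0 --grow -n 12 --backup-file=/reshape.bak
mdadm -D /dev/md0
mdadm /dev/md0 --grow -n 13 -l 6 --backup-file=/reshape.bak"
printf '%s\n' "$plan"
```

Each line would be run by hand, checking the `mdadm -D` output between steps, rather than executing the whole plan blindly.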
> mdadm --detail --scan >> /etc/mdadm/mdadm.conf
Yes. Make sure you edit it afterwards to remove the old array's information.
> Is that the right way? I assume the disk I take out of the raid is not the same one I added last? So I have to read out the serial number to find it among the hard drives?
Yes, use lsdrv or "ls -l /dev/disk/by-id/" to make sure you remove the spare. Of course, if you convert to raid6, it won't be a spare :).
> many thx so far
You are welcome.
Phil
[parent not found: <20110610144429.298520@gmx.net>]
* Re: SRaid with 13 Disks crashed
[not found] <20110610144429.298520@gmx.net>
@ 2011-06-10 14:49 ` Phil Turmel
0 siblings, 0 replies; 8+ messages in thread
From: Phil Turmel @ 2011-06-10 14:49 UTC (permalink / raw)
To: Dragon; +Cc: linux-raid@vger.kernel.org
On 06/10/2011 10:44 AM, Dragon wrote:
> "fsck automatically calls fsck.ext4 when it sees an ext4 filesystem. 15.4% non-contiguous == 15.4% fragmented. No lost data."
> -> ok, puh
>
> Now that you have a good filesystem, mounting it and taking a backup would be a good idea. Or at least retrieve any files that are very important to you.
> -> ups
>
> NO! You must use "mdadm --grow". Yes, "--grow" also does "shrink". Your fsck shows that the ext4 filesystem is still sized for the original 12-disk setup, so you don't have to shrink the filesystem. You do have to shrink the raid:
>
> -> mount was successful and I can see all of my data ;) I'm so happy, but I still have no way to make a backup
Yay! :) and Oooo! :(
> Step 1a: Tell mdadm the final size you are aiming for. MD will emulate this while you test that the new size works:
> mdadm /dev/md0 --grow --array-size=16116523456k
> -> shows
> mdadm /dev/md0 --grow --array-size=16116523456k
> mdadm: invalid array size: 16116523456k
>
> Might 16896000k be better? (1500*1024*11)
No, it must be "Used Device Size" * 11 = 16116523456. Try it without the 'k'.
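Phil's arithmetic is easy to double-check with shell arithmetic, taking "Used Dev Size" from the `mdadm -D` output earlier in the thread:

```shell
# 13-disk raid5 shrunk to 12 disks leaves 11 data members; the target
# array size is 11 * "Used Dev Size" (1465138496 KiB per mdadm -D).
used_dev_size_k=1465138496
target=$(( used_dev_size_k * 11 ))
echo "$target"   # 16116523456
```

The user's 1500*1024*11 guess mixes up the marketing gigabytes of the drive with the actual per-member size mdadm uses, which is why it is rejected.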
Phil
end of thread, other threads:[~2011-06-10 14:49 UTC | newest]
Thread overview: 8+ messages
2011-06-08 14:24 (unknown) Dragon
2011-06-08 14:39 ` SRaid with 13 Disks crashed Phil Turmel
-- strict thread matches above, loose matches on Subject: below --
2011-06-08 20:02 Dragon
2011-06-08 20:32 ` Phil Turmel
2011-06-10 7:52 Dragon
2011-06-10 11:48 ` Phil Turmel
2011-06-10 13:06 (unknown) Dragon
2011-06-10 14:01 ` SRaid with 13 Disks crashed Phil Turmel
[not found] <20110610144429.298520@gmx.net>
2011-06-10 14:49 ` Phil Turmel