* looking for advice on raid0+raid5 array recovery with mdadm and sector offset
@ 2014-01-02 19:45 den Hoog
2014-01-06 1:41 ` NeilBrown
From: den Hoog @ 2014-01-02 19:45 UTC (permalink / raw)
To: linux-raid; +Cc: neilb
Hi Neil
I apologize if something went wrong with my first mail post; this is a retry.
I'm looking for advice on my plan to recover my raid5 volume with mdadm.
I was in a hurry and made a stupid mistake when upgrading the motherboard BIOS:
I forgot to turn the Intel SATA RAID option back on, and Windows recovery erased
the first of the four disks.
It is an Intel Matrix array of 4x4TB disks, holding one RAID0 volume and one RAID5 volume.
Although Windows reports the array as failed, 3 disks are still active; the fourth
is missing from the array and shows up as an available non-RAID disk.
In theory I should still be able to recover the RAID5 volume from the remaining
3 disks, but I guess I need to specify the correct sector offset.
I read many articles on this, but none of them address the difficulty of
recovering a specific volume when multiple volumes exist in one array.
Although I've some backups, I really would appreciate your help in
getting this recovered.
sda is the SSD
sdb is the 'missing' and erased drive (serial ending on P82C)
sdc is the second drive in the array
sdd is the 3rd drive in the array
sde is the 4th drive in the array
sdf is the usb stick I'm running Fedora live from
What I've done so far :
- Started Fedora 15 Live from a USB
- downloaded the mdadm source with data_offset support and compiled it
My plan to work with an offset to recover the [HitR5] volume:
- echo 1 > /sys/module/md_mod/parameters/start_dirty_degraded
- mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
- mdadm -C /dev/md0 -l5 -n4 -c 128 /dev/sdb:1073746184s
/dev/sdc:1073746184s /dev/sdd:1073746184s /dev/sde:1073746184s
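Before running any of this I plan a non-destructive sanity check of that offset on
one of the surviving disks, roughly like the sketch below (a hit would be
informative, but with RAID5 striping a miss would not be conclusive):
# look for a filesystem signature just past the supposed RAID5 data offset (read-only)
dd if=/dev/sdc bs=512 skip=1073746184 count=8192 2>/dev/null | file -
dd if=/dev/sdc bs=512 skip=1073746184 count=8192 2>/dev/null | hexdump -C | head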
I'm in doubt whether to start a degraded array first, using -C with 'missing' for
the first drive, or to use assemble with --auto, or to create the volume with an
offset as stated above.
Another thing I'm not certain of: do I need to build a new mdadm with data_offset
support, or is it already present in my 3.2.6 version?
When I built a new version from Neil's mdadm tree I ended up with a version
reporting 3.2.5 (18 May 2012).
As I guess I have only one shot at this, I have not executed anything yet.
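To reduce that one-shot risk I'm also considering running the experiments on
copy-on-write overlays rather than on the real disks, along the lines of this
sketch (overlay size and loop device number are just examples):
blockdev --setro /dev/sdc /dev/sdd /dev/sde    # keep the real disks read-only
truncate -s 4G /tmp/overlay-sdc                # sparse file to absorb any writes
losetup /dev/loop5 /tmp/overlay-sdc
dmsetup create sdc-ovl --table "0 $(blockdev --getsz /dev/sdc) snapshot /dev/sdc /dev/loop5 P 8"
# repeat for the other disks, then point mdadm at /dev/mapper/...-ovl instead of /dev/sdX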
Many thanks for your help, time and advice!
best regards Dennis
=======output mdadm -Evvvvs=============
[root@localhost ~]# mdadm -Evvvvs
mdadm: No md superblock detected on /dev/dm-1.
mdadm: No md superblock detected on /dev/dm-0.
/dev/sdf1:
MBR Magic : aa55
Partition[0] : 432871117 sectors at 3224498923 (type 07)
Partition[1] : 1953460034 sectors at 3272020941 (type 16)
Partition[3] : 924335794 sectors at 50200576 (type 00)
/dev/sdf:
MBR Magic : aa55
Partition[0] : 15769600 sectors at 2048 (type 0b)
/dev/sde:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.3.00
Orig Family : f3437c9b
Family : f3437c9d
Generation : 00002c5f
Attributes : All supported
UUID : 47b011c7:4a8531ea:7e94ab93:06034952
Checksum : 671f5f84 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 2
Disk03 Serial : PL1321LAG4RXEH
State : active
Id : 00000005
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
[HitR0]:
UUID : 73ecf2cf:fcfd2598:d6523381:71e57931
RAID Level : 0
Members : 4
Slots : [_UUU]
Failed disk : 1
This Slot : 3
Array Size : 4294967296 (2048.00 GiB 2199.02 GB)
Per Dev Size : 1073742088 (512.00 GiB 549.76 GB)
Sector Offset : 0
Num Stripes : 4194304
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
[HitR5]:
UUID : 71626250:b8fc1262:3545d952:69eb329e
RAID Level : 5
Members : 4
Slots : [_UU_]
Failed disk : 3
This Slot : 3 (out-of-sync)
Array Size : 20220831744 (9642.04 GiB 10353.07 GB)
Per Dev Size : 6740279304 (3214.02 GiB 3451.02 GB)
Sector Offset : 1073746184
Num Stripes : 26329208
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
Disk00 Serial : PL2311LAG1P82C:0
State : active
Id : ffffffff
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk01 Serial : PL1321LAG4NMEH
State : active
Id : 00000003
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk02 Serial : PL1321LAG4TH4H
State : active
Id : 00000004
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
/dev/sdd:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.3.00
Orig Family : f3437c9b
Family : f3437c9d
Generation : 00002c5f
Attributes : All supported
UUID : 47b011c7:4a8531ea:7e94ab93:06034952
Checksum : 671f5f84 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 2
Disk02 Serial : PL1321LAG4TH4H
State : active
Id : 00000004
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
[HitR0]:
UUID : 73ecf2cf:fcfd2598:d6523381:71e57931
RAID Level : 0
Members : 4
Slots : [_UUU]
Failed disk : 1
This Slot : 2
Array Size : 4294967296 (2048.00 GiB 2199.02 GB)
Per Dev Size : 1073742088 (512.00 GiB 549.76 GB)
Sector Offset : 0
Num Stripes : 4194304
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
[HitR5]:
UUID : 71626250:b8fc1262:3545d952:69eb329e
RAID Level : 5
Members : 4
Slots : [_UU_]
Failed disk : 3
This Slot : 2
Array Size : 20220831744 (9642.04 GiB 10353.07 GB)
Per Dev Size : 6740279304 (3214.02 GiB 3451.02 GB)
Sector Offset : 1073746184
Num Stripes : 26329208
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
Disk00 Serial : PL2311LAG1P82C:0
State : active
Id : ffffffff
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk01 Serial : PL1321LAG4NMEH
State : active
Id : 00000003
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk03 Serial : PL1321LAG4RXEH
State : active
Id : 00000005
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
/dev/sdc:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.3.00
Orig Family : f3437c9b
Family : f3437c9d
Generation : 00002c5f
Attributes : All supported
UUID : 47b011c7:4a8531ea:7e94ab93:06034952
Checksum : 671f5f84 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 2
Disk01 Serial : PL1321LAG4NMEH
State : active
Id : 00000003
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
[HitR0]:
UUID : 73ecf2cf:fcfd2598:d6523381:71e57931
RAID Level : 0
Members : 4
Slots : [_UUU]
Failed disk : 1
This Slot : 1
Array Size : 4294967296 (2048.00 GiB 2199.02 GB)
Per Dev Size : 1073742088 (512.00 GiB 549.76 GB)
Sector Offset : 0
Num Stripes : 4194304
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
[HitR5]:
UUID : 71626250:b8fc1262:3545d952:69eb329e
RAID Level : 5
Members : 4
Slots : [_UU_]
Failed disk : 3
This Slot : 1
Array Size : 20220831744 (9642.04 GiB 10353.07 GB)
Per Dev Size : 6740279304 (3214.02 GiB 3451.02 GB)
Sector Offset : 1073746184
Num Stripes : 26329208
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
Disk00 Serial : PL2311LAG1P82C:0
State : active
Id : ffffffff
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk02 Serial : PL1321LAG4TH4H
State : active
Id : 00000004
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk03 Serial : PL1321LAG4RXEH
State : active
Id : 00000005
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
mdadm: No md superblock detected on /dev/sdb1.
/dev/sdb:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
/dev/sda2:
MBR Magic : aa55
Partition[0] : 1816210284 sectors at 1920221984 (type 72)
Partition[1] : 1953653108 sectors at 1936028192 (type 6c)
Partition[3] : 447 sectors at 27722122 (type 00)
/dev/sda1:
MBR Magic : aa55
Partition[0] : 1816210284 sectors at 1920221984 (type 72)
Partition[1] : 1953653108 sectors at 1936028192 (type 6c)
Partition[3] : 447 sectors at 27722122 (type 00)
/dev/sda:
MBR Magic : aa55
Partition[0] : 716800 sectors at 2048 (type 07)
Partition[1] : 499396608 sectors at 718848 (type 07)
mdadm: No md superblock detected on /dev/loop4.
mdadm: No md superblock detected on /dev/loop3.
mdadm: No md superblock detected on /dev/loop2.
mdadm: No md superblock detected on /dev/loop1.
mdadm: No md superblock detected on /dev/loop0.
/dev/md127:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.3.00
Orig Family : f3437c9b
Family : f3437c9d
Generation : 00002c5f
Attributes : All supported
UUID : 47b011c7:4a8531ea:7e94ab93:06034952
Checksum : 671f5f84 correct
MPB Sectors : 2
Disks : 4
RAID Devices : 2
Disk02 Serial : PL1321LAG4TH4H
State : active
Id : 00000004
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
[HitR0]:
UUID : 73ecf2cf:fcfd2598:d6523381:71e57931
RAID Level : 0
Members : 4
Slots : [_UUU]
Failed disk : 1
This Slot : 2
Array Size : 4294967296 (2048.00 GiB 2199.02 GB)
Per Dev Size : 1073742088 (512.00 GiB 549.76 GB)
Sector Offset : 0
Num Stripes : 4194304
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
[HitR5]:
UUID : 71626250:b8fc1262:3545d952:69eb329e
RAID Level : 5
Members : 4
Slots : [_UU_]
Failed disk : 3
This Slot : 2
Array Size : 20220831744 (9642.04 GiB 10353.07 GB)
Per Dev Size : 6740279304 (3214.02 GiB 3451.02 GB)
Sector Offset : 1073746184
Num Stripes : 26329208
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : failed
Dirty State : clean
Disk00 Serial : PL2311LAG1P82C:0
State : active
Id : ffffffff
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk01 Serial : PL1321LAG4NMEH
State : active
Id : 00000003
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
Disk03 Serial : PL1321LAG4RXEH
State : active
Id : 00000005
Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-02 19:45 looking for advice on raid0+raid5 array recovery with mdadm and sector offset den Hoog
@ 2014-01-06 1:41 ` NeilBrown
2014-01-06 6:53 ` den Hoog
From: NeilBrown @ 2014-01-06 1:41 UTC (permalink / raw)
To: den Hoog; +Cc: linux-raid
On Thu, 2 Jan 2014 20:45:24 +0100 den Hoog <speedyden@gmail.com> wrote:
> Hi Neil
>
> I apologize if I made mistakes with the first mail post but probably
> something went wrong, so this is a retry.
>
> I'm looking for advice on my plan to recover my raid5 volume with mdadm.
>
> I was in a hurry and made a stupid mistake when upgrading the MB bios.
> Forgot to turn on the Intel SATA raid, and Windows recovery erased the
> first disk of 4.
>
> It is an array of 4x4TB, in a matrix, having 1 RAID0 volume, and 1 RAID5 vol.
> Although the array displays a failed array in Windows, 3 disks are
> active, and 1 is missing and showing as a non-raid array being
> available.
>
> In (my) theory, I should still be able to recover the raid5 vol. with
> the remaining 3 disks, however I should specify the specific sector
> offset I guess.
> I read many articles on this, but none of them address the
> 'difficulty' of recovering a specific volume when multiple exist in an
> array.
>
> Although I've some backups, I really would appreciate your help in
> getting this recovered.
> sda is the SSD
> sdb is the 'missing' and erased drive (serial ending on P82C)
> sdc is the second drive in the array
> sdd is the 3rd drive in the array
> sde is the 4th drive in the array
> sdf is the usb stick I'm running Fedora live from
>
> What I've done so far :
>
> - Started Fedora 15 Live from a USB
> - installed the mdadm package data_offset and compiled
>
>
> My plan to work with an offset to recover the [HitR5] volume:
>
> - echo 1 > /sys/module/md_mod/parameters/
> start_dirty_degraded
> - mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
> - mdadm -C /dev/md0 -l5 -n4 -c 128 /dev/sdb:1073746184s
> /dev/sdc:1073746184s /dev/sdd:1073746184s /dev/sde:1073746184s
This certainly won't work.
You need "--data-offset=variable" for the "NNNNs" suffixes to be recognised,
and even then it only works for 1.x metadata, not for imsm metadata.
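(For reference, with 1.x metadata the variable-offset form would look roughly like
this; it cannot be used with imsm:)
mdadm -C /dev/md0 -e 1.2 -l5 -n4 -c 128 --data-offset=variable \
    /dev/sdb:1073746184s /dev/sdc:1073746184s /dev/sdd:1073746184s /dev/sde:1073746184s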
There isn't much support for sticking together broken IMSM arrays at
present. Your best bet is to re-create the whole array.
So:
mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sd[bcde]
mdadm -C /dev/md0 -l0 -n4 -c 128K -z 512G /dev/md/imsm
then check that /dev/md0 looks OK for the RAID0 array.
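A read-only way to check, as a sketch, assuming the RAID0 held a mountable filesystem:
blkid /dev/md0               # should still report the old filesystem signature
mount -o ro /dev/md0 /mnt    # and the contents should look sane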
If it does then you can continue to create the raid5 array
mdadm -C /dev/md1 -l5 -n4 -c 128k --assume-clean /dev/md/imsm
That *should* then be correct.
If the RAID0 array doesn't look right, then possibly sdb really was cleared
rather than just having the metadata erased.
In this case the RAID0 is definitely gone and it will be a bit harder to
create the RAID5. It could be something like:
mdadm -C /dev/md1 -l5 -n4 -c 128k missing /dev/sd[cde]
but I'm not sure that 'missing' works for imsm.  If you need to go this
way I can try to make 'missing' work for imsm.  It shouldn't be too hard.
NeilBrown
>
>
> I'm in doubt about working with missing disks first to start a
> degraded array, with -C and missing for the first drive.
> Or choosing assemble --auto, or as stated above and create the volume with an
> offset.
>
> Another thing I'm not certain of: do I need to build a new mdadm with
> data_offset, or is it already present in my 3.2.6 version?
> When I built a new version with Neils mdadm I ended up with a 3.2.5 18May2012
> version.
>
> As I guess I have only one shot at this I have not executed anything yet.
>
> thanks many for your help, time and advice!
>
> best regards Dennis
>
>
> [quoted mdadm -Evvvvs output and list footer trimmed; identical to the original post above]
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-06 1:41 ` NeilBrown
@ 2014-01-06 6:53 ` den Hoog
2014-01-07 21:43 ` den Hoog
From: den Hoog @ 2014-01-06 6:53 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
Hi Neil, I hope you had a good holiday, and I appreciate your help!
Good to know that the offset path is useless for imsm.
I know for sure that the sdb disk was cleared by the Windows recovery, as it
created a new 2GB partition on it.
I will nevertheless try the re-create when I get home and keep you posted.
I will probably have to go the 'missing' route. Will that somehow figure
out that it needs an offset?
br Dennis
On Mon, Jan 6, 2014 at 2:41 AM, NeilBrown <neilb@suse.de> wrote:
> On Thu, 2 Jan 2014 20:45:24 +0100 den Hoog <speedyden@gmail.com> wrote:
>
>> Hi Neil
>>
>> I apologize if I made mistakes with the first mail post but probably
>> something went wrong, so this is a retry.
>>
>> I'm looking for advice on my plan to recover my raid5 volume with mdadm.
>>
>> I was in a hurry and made a stupid mistake when upgrading the MB bios.
>> Forgot to turn on the Intel SATA raid, and Windows recovery erased the
>> first disk of 4.
>>
>> It is an array of 4x4TB, in a matrix, having 1 RAID0 volume, and 1 RAID5 vol.
>> Although the array displays a failed array in Windows, 3 disks are
>> active, and 1 is missing and showing as a non-raid array being
>> available.
>>
>> In (my) theory, I should still be able to recover the raid5 vol. with
>> the remaining 3 disks, however I should specify the specific sector
>> offset I guess.
>> I read many articles on this, but none of them address the
>> 'difficulty' of recovering a specific volume when multiple exist in an
>> array.
>>
>> Although I've some backups, I really would appreciate your help in
>> getting this recovered.
>> sda is the SSD
>> sdb is the 'missing' and erased drive (serial ending on P82C)
>> sdc is the second drive in the array
>> sdd is the 3rd drive in the array
>> sde is the 4th drive in the array
>> sdf is the usb stick I'm running Fedora live from
>>
>> What I've done so far :
>>
>> - Started Fedora 15 Live from a USB
>> - installed the mdadm package data_offset and compiled
>>
>>
>> My plan to work with an offset to recover the [HitR5] volume:
>>
>> - echo 1 > /sys/module/md_mod/parameters/
>> start_dirty_degraded
>> - mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
>> - mdadm -C /dev/md0 -l5 -n4 -c 128 /dev/sdb:1073746184s
>> /dev/sdc:1073746184s /dev/sdd:1073746184s /dev/sde:1073746184s
>
> This certainly won't work.
> You need "--data-offset=variable" for the "NNNNs" suffixes to be recognised,
> and even then it only works for 1.x metadata, not for imsm metadata.
>
> There isn't much support for sticking together broken IMSM arrays at
> present. Your best bet is to re-create the whole array.
>
> So:
> mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sd[bcde]
> mdadm -C /dev/md0 -l0 -n4 -c 128K -z 512G /dev/md/imsm
>
> then check that /dev/md0 looks OK for the RAID0 array.
> If it does then you can continue to create the raid5 array
>
> mdadm -C /dev/md1 -l5 -n4 -c 128k --assume-clean /dev/md/imsm
>
> That *should* then be correct.
>
> If the RAID0 array doesn't look right, the possible sdb really was cleared
> rather than just having the metadata erased.
> In this case the RAID0 is definitely gone and it will be a bit harder to
> create the RAID5. It could be something like:
>
> mdadm -C /dev/md1 -l5 -n4 -c 128k missing /dev/sd[cde]
>
> but I'm not sure that 'missing' is works for imsm. If you need to go this
> way I can try to make 'missing' for for imsm. It shouldn't be too hard.
>
> NeilBrown
>
>
>>
>>
>> I'm in doubt about working with missing disks first to start a
>> degraded array, with -C and missing for the first drive.
>> Or choosing assemble --auto, or as stated above and create the volume with an
>> offset.
>>
>> Another thing I'm not certain of: do I need to build a new mdadm with
>> data_offset, or is it already present in my 3.2.6 version?
>> When I built a new version with Neils mdadm I ended up with a 3.2.5 18May2012
>> version.
>>
>> As I guess I have only one shot at this I have not executed anything yet.
>>
>> thanks many for your help, time and advice!
>>
>> best regards Dennis
>>
>>
>> [quoted mdadm -Evvvvs output and list footer trimmed; identical to the original post above]
>
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-06 6:53 ` den Hoog
@ 2014-01-07 21:43 ` den Hoog
2014-01-16 21:03 ` den Hoog
From: den Hoog @ 2014-01-07 21:43 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
Hi Neil,
Some delay due to illness, but as I expected, all data on the first drive was lost.
I created the new container and re-created the RAID0; it looks identical
now, but it won't mount.
As expected, the RAID0 offset and sector counts are identical to the old volume.
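For reference, the comparison was along these lines (a sketch; device names as
before, with the re-created RAID0 assumed to be /dev/md0):
mdadm -E /dev/sdc | grep -E 'Sector Offset|Per Dev Size|Chunk Size'
blkid /dev/md0    # to see whether any filesystem signature survived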
If you would be willing to make a working imsm 'missing' version, I'd be
grateful and will give it a shot.
That would be the only way to trigger a rebuild of the first drive from the
parity, correct?
Will it detect the existing RAID5 at that specific offset, though?
Thanks again, Dennis
On Mon, Jan 6, 2014 at 7:53 AM, den Hoog <speedyden@gmail.com> wrote:
> Hi Neil, hope you have had good holidays and appreciate your help!
>
> good to know that it is useless to go on the offset path for imsm.
>
> I know for sure the sdb disk was cleared by the win recovery as it
> created a new 2GB partition on it
> I will however try to re-create when I get home and keep you posted
> Probably will have to go the 'missing' way. Will that somehow figure
> out that it needs an offset?
>
> br Dennis
>
> On Mon, Jan 6, 2014 at 2:41 AM, NeilBrown <neilb@suse.de> wrote:
>> On Thu, 2 Jan 2014 20:45:24 +0100 den Hoog <speedyden@gmail.com> wrote:
>>
>>> Hi Neil
>>>
>>> I apologize if I made mistakes with the first mail post but probably
>>> something went wrong, so this is a retry.
>>>
>>> I'm looking for advice on my plan to recover my raid5 volume with mdadm.
>>>
>>> I was in a hurry and made a stupid mistake when upgrading the MB bios.
>>> Forgot to turn on the Intel SATA raid, and Windows recovery erased the
>>> first disk of 4.
>>>
>>> It is an array of 4x4TB, in a matrix, having 1 RAID0 volume, and 1 RAID5 vol.
>>> Although the array displays a failed array in Windows, 3 disks are
>>> active, and 1 is missing and showing as a non-raid array being
>>> available.
>>>
>>> In (my) theory, I should still be able to recover the raid5 vol. with
>>> the remaining 3 disks, however I should specify the specific sector
>>> offset I guess.
>>> I read many articles on this, but none of them address the
>>> 'difficulty' of recovering a specific volume when multiple exist in an
>>> array.
>>>
>>> Although I've some backups, I really would appreciate your help in
>>> getting this recovered.
>>> sda is the SSD
>>> sdb is the 'missing' and erased drive (serial ending on P82C)
>>> sdc is the second drive in the array
>>> sdd is the 3rd drive in the array
>>> sde is the 4th drive in the array
>>> sdf is the usb stick I'm running Fedora live from
>>>
>>> What I've done so far :
>>>
>>> - Started Fedora 15 Live from a USB
>>> - installed the mdadm package data_offset and compiled
>>>
>>>
>>> My plan to work with an offset to recover the [HitR5] volume:
>>>
>>> - echo 1 > /sys/module/md_mod/parameters/
>>> start_dirty_degraded
>>> - mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
>>> - mdadm -C /dev/md0 -l5 -n4 -c 128 /dev/sdb:1073746184s
>>> /dev/sdc:1073746184s /dev/sdd:1073746184s /dev/sde:1073746184s
>>
>> This certainly won't work.
>> You need "--data-offset=variable" for the "NNNNs" suffixes to be recognised,
>> and even then it only works for 1.x metadata, not for imsm metadata.
>>
>> There isn't much support for sticking together broken IMSM arrays at
>> present. Your best bet is to re-create the whole array.
>>
>> So:
>> mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sd[bcde]
>> mdadm -C /dev/md0 -l0 -n4 -c 128K -z 512G /dev/md/imsm
>>
>> then check that /dev/md0 looks OK for the RAID0 array.
>> If it does then you can continue to create the raid5 array
>>
>> mdadm -C /dev/md1 -l5 -n4 -c 128k --assume-clean /dev/md/imsm
>>
>> That *should* then be correct.
>>
>> If the RAID0 array doesn't look right, the possible sdb really was cleared
>> rather than just having the metadata erased.
>> In this case the RAID0 is definitely gone and it will be a bit harder to
>> create the RAID5. It could be something like:
>>
>> mdadm -C /dev/md1 -l5 -n4 -c 128k missing /dev/sd[cde]
>>
>> but I'm not sure that 'missing' is works for imsm. If you need to go this
>> way I can try to make 'missing' for for imsm. It shouldn't be too hard.
>>
>> NeilBrown
>>
>>
>>>
>>>
>>> I'm in doubt about working with missing disks first to start a
>>> degraded array, with -C and missing for the first drive.
>>> Or choosing assemble --auto, or as stated above and create the volume with an
>>> offset.
>>>
>>> Another thing I'm not certain of: do I need to build a new mdadm with
>>> data_offset, or is it already present in my 3.2.6 version?
>>> When I built a new version with Neils mdadm I ended up with a 3.2.5 18May2012
>>> version.
>>>
>>> As I guess I have only one shot at this I have not executed anything yet.
>>>
>>> thanks many for your help, time and advice!
>>>
>>> best regards Dennis
>>>
>>>
>>> [quoted mdadm -Evvvvs output and list footer trimmed; identical to the original post above]
>>
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-07 21:43 ` den Hoog
@ 2014-01-16 21:03 ` den Hoog
2014-01-20 6:16 ` NeilBrown
From: den Hoog @ 2014-01-16 21:03 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
Hi Neil
Could you try to make 'missing' work for imsm?
It is probably not high on your list of things to do, but if you are willing
to help me out, I'd much appreciate it!
thanks, Dennis
On Tue, Jan 7, 2014 at 10:43 PM, den Hoog <speedyden@gmail.com> wrote:
> Hi Neil,
>
> some delay due to illness, but as I expected all data on the first
> drive was lost.
>
> I created the new container and recreated raid0, and looks identical
> now, but it won't mount.
> As expected the raid0 offset and sectors are identical to the old volume.
>
> If you would be willing to make a working imsm 'missing' version, I'd
> be grateful , and will give that a shot.
> It would be the only way to trigger a rebuild on the first drive with
> the parity, correct?
> Will it detect the existing r5 at that specific offset though?
>
> Thanks again, Dennis
>
>
>
> On Mon, Jan 6, 2014 at 7:53 AM, den Hoog <speedyden@gmail.com> wrote:
>> Hi Neil, hope you have had good holidays and appreciate your help!
>>
>> good to know that it is useless to go on the offset path for imsm.
>>
>> I know for sure the sdb disk was cleared by the win recovery as it
>> created a new 2GB partition on it
>> I will however try to re-create when I get home and keep you posted
>> Probably will have to go the 'missing' way. Will that somehow figure
>> out that it needs an offset?
>>
>> br Dennis
>>
>> On Mon, Jan 6, 2014 at 2:41 AM, NeilBrown <neilb@suse.de> wrote:
>>> On Thu, 2 Jan 2014 20:45:24 +0100 den Hoog <speedyden@gmail.com> wrote:
>>>
>>>> Hi Neil
>>>>
>>>> I apologize if I made mistakes with the first mail post but probably
>>>> something went wrong, so this is a retry.
>>>>
>>>> I'm looking for advice on my plan to recover my raid5 volume with mdadm.
>>>>
>>>> I was in a hurry and made a stupid mistake when upgrading the MB bios.
>>>> Forgot to turn on the Intel SATA raid, and Windows recovery erased the
>>>> first disk of 4.
>>>>
>>>> It is an array of 4x4TB, in a matrix, having 1 RAID0 volume, and 1 RAID5 vol.
>>>> Although the array displays a failed array in Windows, 3 disks are
>>>> active, and 1 is missing and showing as a non-raid array being
>>>> available.
>>>>
>>>> In (my) theory, I should still be able to recover the raid5 vol. with
>>>> the remaining 3 disks, however I should specify the specific sector
>>>> offset I guess.
>>>> I read many articles on this, but none of them address the
>>>> 'difficulty' of recovering a specific volume when multiple exist in an
>>>> array.
>>>>
>>>> Although I've some backups, I really would appreciate your help in
>>>> getting this recovered.
>>>> sda is the SSD
>>>> sdb is the 'missing' and erased drive (serial ending on P82C)
>>>> sdc is the second drive in the array
>>>> sdd is the 3rd drive in the array
>>>> sde is the 4th drive in the array
>>>> sdf is the usb stick I'm running Fedora live from
>>>>
>>>> What I've done so far :
>>>>
>>>> - Started Fedora 15 Live from a USB
>>>> - installed the mdadm package data_offset and compiled
>>>>
>>>>
>>>> My plan to work with an offset to recover the [HitR5] volume:
>>>>
>>>> - echo 1 > /sys/module/md_mod/parameters/
>>>> start_dirty_degraded
>>>> - mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
>>>> - mdadm -C /dev/md0 -l5 -n4 -c 128 /dev/sdb:1073746184s
>>>> /dev/sdc:1073746184s /dev/sdd:1073746184s /dev/sde:1073746184s
>>>
>>> This certainly won't work.
>>> You need "--data-offset=variable" for the "NNNNs" suffixes to be recognised,
>>> and even then it only works for 1.x metadata, not for imsm metadata.
>>>
>>> There isn't much support for sticking together broken IMSM arrays at
>>> present. Your best bet is to re-create the whole array.
>>>
>>> So:
>>> mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sd[bcde]
>>> mdadm -C /dev/md0 -l0 -n4 -c 128K -z 512G /dev/md/imsm
>>>
>>> then check that /dev/md0 looks OK for the RAID0 array.
>>> If it does then you can continue to create the raid5 array
>>>
>>> mdadm -C /dev/md1 -l5 -n4 -c 128k --assume-clean /dev/md/imsm
>>>
>>> That *should* then be correct.
>>>
>>> If the RAID0 array doesn't look right, the possible sdb really was cleared
>>> rather than just having the metadata erased.
>>> In this case the RAID0 is definitely gone and it will be a bit harder to
>>> create the RAID5. It could be something like:
>>>
>>> mdadm -C /dev/md1 -l5 -n4 -c 128k missing /dev/sd[cde]
>>>
>>> but I'm not sure that 'missing' is works for imsm. If you need to go this
>>> way I can try to make 'missing' for for imsm. It shouldn't be too hard.
>>>
>>> NeilBrown
>>>
>>>
>>>>
>>>>
>>>> I'm in doubt about working with missing disks first to start a
>>>> degraded array, with -C and missing for the first drive.
>>>> Or choosing assemble --auto, or as stated above and create the volume with an
>>>> offset.
>>>>
>>>> Another thing I'm not certain of: do I need to build a new mdadm with
>>>> data_offset, or is it already present in my 3.2.6 version?
>>>> When I built a new version with Neils mdadm I ended up with a 3.2.5 18May2012
>>>> version.
>>>>
>>>> As I guess I have only one shot at this I have not executed anything yet.
>>>>
>>>> thanks many for your help, time and advice!
>>>>
>>>> best regards Dennis
>>>>
>>>>
>>>> [quoted mdadm -Evvvvs output trimmed; identical to the original post above]
>>>>
>>>> UUID : 47b011c7:4a8531ea:7e94ab93:06034952
>>>>
>>>> Checksum : 671f5f84 correct
>>>>
>>>> MPB Sectors : 2
>>>>
>>>> Disks : 4
>>>>
>>>> RAID Devices : 2
>>>>
>>>>
>>>> Disk01 Serial : PL1321LAG4NMEH
>>>>
>>>> State : active
>>>>
>>>> Id : 00000003
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>>
>>>>
>>>> [HitR0]:
>>>>
>>>> UUID : 73ecf2cf:fcfd2598:d6523381:71e57931
>>>>
>>>> RAID Level : 0
>>>>
>>>> Members : 4
>>>>
>>>> Slots : [_UUU]
>>>>
>>>> Failed disk : 1
>>>>
>>>> This Slot : 1
>>>>
>>>> Array Size : 4294967296 (2048.00 GiB 2199.02 GB)
>>>>
>>>> Per Dev Size : 1073742088 (512.00 GiB 549.76 GB)
>>>>
>>>> Sector Offset : 0
>>>>
>>>> Num Stripes : 4194304
>>>>
>>>> Chunk Size : 128 KiB
>>>>
>>>> Reserved : 0
>>>>
>>>> Migrate State : idle
>>>>
>>>> Map State : failed
>>>>
>>>> Dirty State : clean
>>>>
>>>>
>>>> [HitR5]:
>>>>
>>>> UUID : 71626250:b8fc1262:3545d952:69eb329e
>>>>
>>>> RAID Level : 5
>>>>
>>>> Members : 4
>>>>
>>>> Slots : [_UU_]
>>>>
>>>> Failed disk : 3
>>>>
>>>> This Slot : 1
>>>>
>>>> Array Size : 20220831744 (9642.04 GiB 10353.07 GB)
>>>>
>>>> Per Dev Size : 6740279304 (3214.02 GiB 3451.02 GB)
>>>>
>>>> Sector Offset : 1073746184
>>>>
>>>> Num Stripes : 26329208
>>>>
>>>> Chunk Size : 128 KiB
>>>>
>>>> Reserved : 0
>>>>
>>>> Migrate State : idle
>>>>
>>>> Map State : failed
>>>>
>>>> Dirty State : clean
>>>>
>>>>
>>>> Disk00 Serial : PL2311LAG1P82C:0
>>>>
>>>> State : active
>>>>
>>>> Id : ffffffff
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>>
>>>>
>>>> Disk02 Serial : PL1321LAG4TH4H
>>>>
>>>> State : active
>>>>
>>>> Id : 00000004
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>>
>>>>
>>>> Disk03 Serial : PL1321LAG4RXEH
>>>>
>>>> State : active
>>>>
>>>> Id : 00000005
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>>
>>>> mdadm: No md superblock detected on /dev/sdb1.
>>>>
>>>> /dev/sdb:
>>>>
>>>> MBR Magic : aa55
>>>>
>>>> Partition[0] : 4294967295 sectors at 1 (type ee)
>>>>
>>>> /dev/sda2:
>>>>
>>>> MBR Magic : aa55
>>>>
>>>> Partition[0] : 1816210284 sectors at 1920221984 (type 72)
>>>>
>>>> Partition[1] : 1953653108 sectors at 1936028192 (type 6c)
>>>>
>>>> Partition[3] : 447 sectors at 27722122 (type 00)
>>>>
>>>> /dev/sda1:
>>>>
>>>> MBR Magic : aa55
>>>>
>>>> Partition[0] : 1816210284 sectors at 1920221984 (type 72)
>>>>
>>>> Partition[1] : 1953653108 sectors at 1936028192 (type 6c)
>>>>
>>>> Partition[3] : 447 sectors at 27722122 (type 00)
>>>>
>>>> /dev/sda:
>>>>
>>>> MBR Magic : aa55
>>>>
>>>> Partition[0] : 716800 sectors at 2048 (type 07)
>>>>
>>>> Partition[1] : 499396608 sectors at 718848 (type 07)
>>>>
>>>> mdadm: No md superblock detected on /dev/loop4.
>>>>
>>>> mdadm: No md superblock detected on /dev/loop3.
>>>>
>>>> mdadm: No md superblock detected on /dev/loop2.
>>>>
>>>> mdadm: No md superblock detected on /dev/loop1.
>>>>
>>>> mdadm: No md superblock detected on /dev/loop0.
>>>>
>>>> /dev/md127:
>>>>
>>>> Magic : Intel Raid ISM Cfg Sig.
>>>>
>>>> Version : 1.3.00
>>>>
>>>> Orig Family : f3437c9b
>>>>
>>>> Family : f3437c9d
>>>>
>>>> Generation : 00002c5f
>>>>
>>>> Attributes : All supported
>>>>
>>>> UUID : 47b011c7:4a8531ea:7e94ab93:06034952
>>>>
>>>> Checksum : 671f5f84 correct
>>>>
>>>> MPB Sectors : 2
>>>>
>>>> Disks : 4
>>>>
>>>> RAID Devices : 2
>>>>
>>>>
>>>> Disk02 Serial : PL1321LAG4TH4H
>>>>
>>>> State : active
>>>>
>>>> Id : 00000004
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>>
>>>>
>>>> [HitR0]:
>>>>
>>>> UUID : 73ecf2cf:fcfd2598:d6523381:71e57931
>>>>
>>>> RAID Level : 0
>>>>
>>>> Members : 4
>>>>
>>>> Slots : [_UUU]
>>>>
>>>> Failed disk : 1
>>>>
>>>> This Slot : 2
>>>>
>>>> Array Size : 4294967296 (2048.00 GiB 2199.02 GB)
>>>>
>>>> Per Dev Size : 1073742088 (512.00 GiB 549.76 GB)
>>>>
>>>> Sector Offset : 0
>>>>
>>>> Num Stripes : 4194304
>>>>
>>>> Chunk Size : 128 KiB
>>>>
>>>> Reserved : 0
>>>>
>>>> Migrate State : idle
>>>>
>>>> Map State : failed
>>>>
>>>> Dirty State : clean
>>>>
>>>>
>>>> [HitR5]:
>>>>
>>>> UUID : 71626250:b8fc1262:3545d952:69eb329e
>>>>
>>>> RAID Level : 5
>>>>
>>>> Members : 4
>>>>
>>>> Slots : [_UU_]
>>>>
>>>> Failed disk : 3
>>>>
>>>> This Slot : 2
>>>>
>>>> Array Size : 20220831744 (9642.04 GiB 10353.07 GB)
>>>>
>>>> Per Dev Size : 6740279304 (3214.02 GiB 3451.02 GB)
>>>>
>>>> Sector Offset : 1073746184
>>>>
>>>> Num Stripes : 26329208
>>>>
>>>> Chunk Size : 128 KiB
>>>>
>>>> Reserved : 0
>>>>
>>>> Migrate State : idle
>>>>
>>>> Map State : failed
>>>>
>>>> Dirty State : clean
>>>>
>>>>
>>>> Disk00 Serial : PL2311LAG1P82C:0
>>>>
>>>> State : active
>>>>
>>>> Id : ffffffff
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>>
>>>>
>>>> Disk01 Serial : PL1321LAG4NMEH
>>>>
>>>> State : active
>>>>
>>>> Id : 00000003
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>>
>>>>
>>>> Disk03 Serial : PL1321LAG4RXEH
>>>>
>>>> State : active
>>>>
>>>> Id : 00000005
>>>>
>>>> Usable Size : 7814030862 (3726.02 GiB 4000.78 GB)
>>>> --
>>>> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>>> the body of a message to majordomo@vger.kernel.org
>>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>>>
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-16 21:03 ` den Hoog
@ 2014-01-20 6:16 ` NeilBrown
2014-01-20 21:43 ` den Hoog
0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-01-20 6:16 UTC (permalink / raw)
To: den Hoog; +Cc: linux-raid
On Thu, 16 Jan 2014 22:03:00 +0100 den Hoog <speedyden@gmail.com> wrote:
> Hi Neil
>
> can you try to make 'missing' working for imsm?
>
> probably not on the short list of things to do, but if you are willing
> to help me out, I'll appreciate it much!
Hi Dennis,
it seems that "missing" already works with IMSM. I thought I remembered that
it didn't, but I was wrong.
So you should be able to create the raid0 using the first parts of the
devices, then create the RAID5.
When creating the RAID5, list the devices that should make up the RAID5 in
the correct order.
So something like:
mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sd[bcde]
mdadm -C /dev/md0 -l0 -n4 -c 128k -z 512G /dev/md/imsm
mdadm -C /dev/md1 -l5 -n4 -c 128k missing /dev/sdc /dev/sdd /dev/sde
Then /dev/md1 should be your RAID5 array.
You should double-check the above commands to make sure you agree with every
part, and ask if there is anything that doesn't seem right.
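Before writing anything to it, it is worth checking md1 read-only first, with
something along these lines (nothing here writes to the member disks, and the
fdisk check assumes the volume carried a partition table):
mdadm -D /dev/md1   # level, chunk size and device order should match the old [HitR5] volume
fdisk -l /dev/md1   # read-only; the old partition table should reappear if the re-create was right
A read-only mount of whatever filesystem was on it would do just as well.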
NeilBrown
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-20 6:16 ` NeilBrown
@ 2014-01-20 21:43 ` den Hoog
2014-01-20 22:11 ` NeilBrown
0 siblings, 1 reply; 10+ messages in thread
From: den Hoog @ 2014-01-20 21:43 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
Hello Neil,
thanks again for spending time on this when you're already swamped, I
appreciate it much
the commands look logical for my setup (just a typo on the chunk size,
it needs to be a capital K)
I did not dare to fire off the last command last time, but as missing
seems to be available for imsm, I re-processed all of the commands
again
the Raid0 is created without any trouble (only stating it is already
part of an array, confirmed with Y), but the raid5 array won't.
After it also stated sdc sdd sde are part of an array, and confirming
with Y for the create, it gives the following:
"mdadm: unable to add 'missing' disk to container"
I'm using mdadm - v3.2.6 - 25th October 2012.
Could it be the 3.2.6 does not support missing for imsm?
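For reference, the sequence I ran this time (with the capital K) was essentially:
mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sd[bcde]
mdadm -C /dev/md0 -l0 -n4 -c 128K -z 512G /dev/md/imsm
mdadm -C /dev/md1 -l5 -n4 -c 128K missing /dev/sdc /dev/sdd /dev/sde
and it is the last command that fails with the message above.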
best regards Dennis
On Mon, Jan 20, 2014 at 1:16 AM, NeilBrown <neilb@suse.de> wrote:
> On Thu, 16 Jan 2014 22:03:00 +0100 den Hoog <speedyden@gmail.com> wrote:
>
>> Hi Neil
>>
>> can you try to make 'missing' working for imsm?
>>
>> probably not on the short list of things to do, but if you are willing
>> to help me out, I'll appreciate it much!
>
> Hi Dennis,
>
> it seems that "missing" already works with IMSM. I thought I remembered that
> it didn't, but I was wrong.
>
> So you should be able to create the raid0 using the first parts of the
> devices, then create the RAID5.
> When creating the RAID5, list the devices that should make up the RAID5 in
> the correct order.
> So something like:
>
> mdadm -C /dev/md/imsm -e imsm -n 4 /dev/sd[bcde]
> mdadm -C /dev/md0 -l0 -n4 -c 128k -z 512G /dev/md/imsm
> mdadm -C /dev/md1 -l5 -n4 -c 128k missing /dev/sdc /dev/sdd /dev/sde
>
> Then /dev/md1 should be your RAID5 array.
>
> You should double-check the above commands to make sure you agree with every
> part, and ask if there is anything that doesn't seem right.
>
> NeilBrown
>
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-20 21:43 ` den Hoog
@ 2014-01-20 22:11 ` NeilBrown
2014-01-20 22:51 ` NeilBrown
0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-01-20 22:11 UTC (permalink / raw)
To: den Hoog; +Cc: linux-raid
On Mon, 20 Jan 2014 16:43:38 -0500 den Hoog <speedyden@gmail.com> wrote:
> Hello Neil,
>
> thanks again for spending time on this when you're already swamped, I
> appreciate it much
>
> the commands look logical for my setup (just a typo on the chunk size,
> it needs to be a capital K)
> I did not dare to fire off the last command last time, but as missing
> seems to be available for imsm, I re-processed all of the commands
> again
>
> the Raid0 is created without any trouble (only stating it is already
> part of an array, confirmed with Y), but the raid5 array won't.
> After it also stated sdc sdd sde are part of an array, and confirming
> with Y for the create, it gives the following:
>
> "mdadm: unable to add 'missing' disk to container"
> I'm using mdadm - v3.2.6 - 25th October 2012.
> Could it be the 3.2.6 does not support missing for imsm?
That's weird. I'm sure I tested it yesterday and it worked.
Today it doesn't in exactly the way you describe.
I'll have a poke and see what is happening.
Support for 'missing' with mdadm was supposedly added in 3.2.3...
NeilBrown
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-20 22:11 ` NeilBrown
@ 2014-01-20 22:51 ` NeilBrown
2014-01-24 23:13 ` den Hoog
0 siblings, 1 reply; 10+ messages in thread
From: NeilBrown @ 2014-01-20 22:51 UTC (permalink / raw)
To: den Hoog; +Cc: linux-raid
On Tue, 21 Jan 2014 09:11:44 +1100 NeilBrown <neilb@suse.de> wrote:
> On Mon, 20 Jan 2014 16:43:38 -0500 den Hoog <speedyden@gmail.com> wrote:
>
> > Hello Neil,
> >
> > thanks again for spending time on this when you're already swamped, I
> > appreciate it much
> >
> > the commands look logical for my setup (just a typo on the chunk size,
> > it needs to be a capital K)
> > I did not dare to fire off the last command last time, but as missing
> > seems to be available for imsm, I re-processed all of the commands
> > again
> >
> > the Raid0 is created without any trouble (only stating it is already
> > part of an array, confirmed with Y), but the raid5 array won't.
> > After it also stated sdc sdd sde are part of an array, and confirming
> > with Y for the create, it gives the following:
> >
> > "mdadm: unable to add 'missing' disk to container"
> > I'm using mdadm - v3.2.6 - 25th October 2012.
> > Could it be the 3.2.6 does not support missing for imsm?
>
> That's weird. I'm sure I tested it yesterday and it worked.
> Today it doesn't in exactly the way you describe.
> I'll have a poke and see what is happening.
OK I think I figured it out.
Firstly, to create an IMSM array with a missing device, every array in the
container must have the same device missing.
Now you cannot create the RAID0 with a missing device, so you need to create
a RAID5 in its place instead.
So:
mdadm -C /dev/md/imsm -e imsm -n 3 /dev/sd[cde]
mdadm -C /dev/md0 -l 5 -n 4 -c 128K -z 512G missing /dev/sd[cde]
then create the RAID5 you want:
mdadm -C /dev/md1 -l 5 -n 4 -c 128K missing /dev/sd[cde]
That creates the container with 3 devices, and two 4-device arrays each with
one device missing (every array in a container must have the same number of
devices, and must have the same number that are missing).
However the above will crash. That is the "secondly".
You need the following patch, or you can just collect the latest from
git://neil.brown.name/mdadm/
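If you take the git route, it is roughly:
git clone git://neil.brown.name/mdadm/
cd mdadm
make
./mdadm --version
and then run the create commands with the freshly built ./mdadm rather than
the packaged 3.2.6.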
NeilBrown
commit 1ca5c8e0c74946f4fcd74e97c5f48fba482d9596
Author: NeilBrown <neilb@suse.de>
Date: Tue Jan 21 09:40:02 2014 +1100
IMSM: don't crash when creating an array with missing devices.
'missing' devices are in a different list so when collection the
serial numbers of all devices we need to check both lists.
Signed-off-by: NeilBrown <neilb@suse.de>
diff --git a/super-intel.c b/super-intel.c
index c103ffdd2dd8..f0a7ab5ccc7a 100644
--- a/super-intel.c
+++ b/super-intel.c
@@ -5210,6 +5210,8 @@ static int create_array(struct supertype *st, int dev_idx)
int idx = get_imsm_disk_idx(dev, i, MAP_X);
disk = get_imsm_disk(super, idx);
+ if (!disk)
+ disk = get_imsm_missing(super, idx);
serialcpy(inf[i].serial, disk->serial);
}
append_metadata_update(st, u, len);
* Re: looking for advice on raid0+raid5 array recovery with mdadm and sector offset
2014-01-20 22:51 ` NeilBrown
@ 2014-01-24 23:13 ` den Hoog
0 siblings, 0 replies; 10+ messages in thread
From: den Hoog @ 2014-01-24 23:13 UTC (permalink / raw)
To: NeilBrown; +Cc: linux-raid
thanks for your effort Neil!
sorry for the delay, could not free serious time to do this thoroughly.
Ok, I patched mdadm, did a make after the git pull, and ended up with a
brand new v3.3-55 23-Jan 2014
After reviewing the commands I started executing them, but after the
creation of the container I had a PSU failure...
Long story short: I booted again, did the same, and created the 3-device
container, which gave the following:
"mdadm: container /dev/md/imsm prepared."
After that I tried to create the first raid5 with missing; it gave the message:
"mdadm: largest drive (/dev/sdc) exceeds size (536870912K) by more than 1%"
but it continued creating the array:
"mdadm: Creating array inside imsm container md127
mdadm: internal bitmaps not supported with imsm metadata"
I stopped the md127, re-issued the command and got the following:
"mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started."
So far so good. I examined the array and found the used device size to be
slightly less than the original raid0 volume:
Used Dev Size : 1073741824 (512.00 GiB 549.76 GB)
Hoping for the best I created the next raid5 volume, and got the following:
"mdadm: cannot open /dev/sdc: Device or resource busy"
so I stopped the md0, and re-issued the command for the 2nd L5 volume again.
"mdadm: array /dev/md1 started."
After examining the array again, I discovered that it did not put it
after md0, but created an L5 array at full capacity:
Avail Dev Size : 7813775024 (3725.90 GiB 4000.65 GB)
Array Size : 11720662272 (11177.69 GiB 12001.96 GB)
Used Dev Size : 7813774848 (3725.90 GiB 4000.65 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
I did an mdstat, but it did not appear to be doing anything; no resync,
fortunately:
"md1 : active raid5 sde[3] sdd[2] sdc[1]"
I reset it all again, back to just the container, but I figure that
although it did not start a resync, it does not look very promising.
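For completeness, the checks behind the numbers above were read-only ones,
roughly:
cat /proc/mdstat
mdadm -D /dev/md1
mdadm -E /dev/sde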
If you can find a minute, could you tell me where I went wrong, and whether you
still think it is worth giving it another try?
If all is lost, I'll have to live with the old partial backup I have, but
we gave it a try ;)
thanks again, Dennis
On Mon, Jan 20, 2014 at 5:51 PM, NeilBrown <neilb@suse.de> wrote:
> On Tue, 21 Jan 2014 09:11:44 +1100 NeilBrown <neilb@suse.de> wrote:
>
>> On Mon, 20 Jan 2014 16:43:38 -0500 den Hoog <speedyden@gmail.com> wrote:
>>
>> > Hello Neil,
>> >
>> > thanks again for spending time on this when you're already swamped, I
>> > appreciate it much
>> >
>> > the commands look logical for my setup (just a typo on the chunk size,
>> > it needs to be a capital K)
>> > I did not dare to fire off the last command last time, but as missing
>> > seems to be available for imsm, I re-processed all of the commands
>> > again
>> >
>> > the Raid0 is created without any trouble (only stating it is already
>> > part of an array, confirmed with Y), but the raid5 array won't.
>> > After it also stated sdc sdd sde are part of an array, and confirming
>> > with Y for the create, it gives the following:
>> >
>> > "mdadm: unable to add 'missing' disk to container"
>> > I'm using mdadm - v3.2.6 - 25th October 2012.
>> > Could it be the 3.2.6 does not support missing for imsm?
>>
>> That's weird. I'm sure I tested it yesterday and it worked.
>> Today it doesn't in exactly the way you describe.
>> I'll have a poke and see what is happening.
>
> OK I think I figured it out.
>
> Firstly, to create an IMSM array with a missing device, every array in the
> container must have the same device missing.
> Now you cannot create the RAID0 with a missing device, so you need to create
> a RAID5 in its place instead.
> So:
>
> mdadm -C /dev/md/imsm -e imsm -n 3 /dev/sd[cde]
> mdadm -C /dev/md0 -l 5 -n 4 -c 128K -z 512G missing /dev/sd[cde]
>
> then create the RAID5 you want:
>
> mdadm -C /dev/md1 -l 5 -n 4 -c 128K missing /dev/sd[cde]
>
> That creates the container with 3 devices, and two 4-device arrays each with
> one device missing (every array in a container must have the same number of
> devices, and must have the same number that are missing).
>
> However the above will crash. That is the "secondly".
>
> You need the following patch, or you can just collect the latest from
> git://neil.brown.name/mdadm/
>
> NeilBrown
>
> commit 1ca5c8e0c74946f4fcd74e97c5f48fba482d9596
> Author: NeilBrown <neilb@suse.de>
> Date: Tue Jan 21 09:40:02 2014 +1100
>
> IMSM: don't crash when creating an array with missing devices.
>
> 'missing' devices are in a different list so when collection the
> serial numbers of all devices we need to check both lists.
>
> Signed-off-by: NeilBrown <neilb@suse.de>
>
> diff --git a/super-intel.c b/super-intel.c
> index c103ffdd2dd8..f0a7ab5ccc7a 100644
> --- a/super-intel.c
> +++ b/super-intel.c
> @@ -5210,6 +5210,8 @@ static int create_array(struct supertype *st, int dev_idx)
> int idx = get_imsm_disk_idx(dev, i, MAP_X);
>
> disk = get_imsm_disk(super, idx);
> + if (!disk)
> + disk = get_imsm_missing(super, idx);
> serialcpy(inf[i].serial, disk->serial);
> }
> append_metadata_update(st, u, len);
end of thread
Thread overview: 10+ messages
2014-01-02 19:45 looking for advice on raid0+raid5 array recovery with mdadm and sector offset den Hoog
2014-01-06 1:41 ` NeilBrown
2014-01-06 6:53 ` den Hoog
2014-01-07 21:43 ` den Hoog
2014-01-16 21:03 ` den Hoog
2014-01-20 6:16 ` NeilBrown
2014-01-20 21:43 ` den Hoog
2014-01-20 22:11 ` NeilBrown
2014-01-20 22:51 ` NeilBrown
2014-01-24 23:13 ` den Hoog