linux-raid.vger.kernel.org archive mirror
* Recovery/Access of imsm raid via mdadm?
@ 2013-01-10 16:23 chris
  2013-01-10 17:09 ` Dave Jiang
  0 siblings, 1 reply; 18+ messages in thread
From: chris @ 2013-01-10 16:23 UTC (permalink / raw)
  To: Linux-RAID

Hello,

I have a machine which was running an imsm raid volume; its
motherboard failed and I do not have access to another system with
imsm. I remembered noticing some time ago that mdadm can recognize
these arrays, so I decided to attempt recovery in a spare machine using
the disks from the array.

I guess my questions are:
Is this the right forum for help with this?
Am I even going down a feasible path here, or is this array dependent
on the HBA in some way?
If it is feasible, any ideas on what else I can do to debug this further?

The original array was a raid 5 of 4x 2TB SATA disks.

When I examine the first disk, things look good:

mdadm --examine /dev/sdb
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 226cc5df
         Family : 226cc5df
     Generation : 000019dc
     Attributes : All supported
           UUID : b67ea029:aaea7da2:2540c1eb:ebe98af1
       Checksum : 651263bf correct
    MPB Sectors : 2
          Disks : 4
   RAID Devices : 1

  Disk02 Serial : Z1E1RPA9
          State : active
             Id : 00030000
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 51a415ba:dc9c8cd7:5b3ea8de:465b4630
     RAID Level : 5
        Members : 4
          Slots : [__U_]
    Failed disk : 0
      This Slot : 2
     Array Size : 11721072640 (5589.04 GiB 6001.19 GB)
   Per Dev Size : 3907024648 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261814
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : failed
    Dirty State : clean

  Disk00 Serial : Z1E1AKPH:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk01 Serial : Z24091Q5:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk03 Serial : Z1E19E4K:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)


When I try to scan for arrays I get this:
# mdadm --examine --scan
HBAs of devices does not match (null) != (null)
ARRAY metadata=imsm UUID=b67ea029:aaea7da2:2540c1eb:ebe98af1
ARRAY /dev/md/Volume0 container=b67ea029:aaea7da2:2540c1eb:ebe98af1
member=0 UUID=51a415ba:dc9c8cd7:5b3ea8de:465b4630
ARRAY metadata=imsm UUID=b67ea029:aaea7da2:2540c1eb:ebe98af1
ARRAY /dev/md/Volume0 container=b67ea029:aaea7da2:2540c1eb:ebe98af1
member=0 UUID=51a415ba:dc9c8cd7:5b3ea8de:465b4630

My first concern is the warning that the HBA is missing; the whole
reason I am going at it this way is that I don't have the HBA. My
second concern is the duplicate detection of the same array.

If I try to run # mdadm -As:
mdadm: No arrays found in config file or automatically

I also tried adding the output from --examine --scan to
/etc/mdadm/mdadm.conf, but after doing that I get blank output:

# mdadm --assemble /dev/md/Volume0
#
# mdadm --assemble --scan
#

A full examine of all disks involved:

/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 226cc5df
         Family : 226cc5df
     Generation : 000019dc
     Attributes : All supported
           UUID : b67ea029:aaea7da2:2540c1eb:ebe98af1
       Checksum : 651263bf correct
    MPB Sectors : 2
          Disks : 4
   RAID Devices : 1

  Disk02 Serial : Z1E1RPA9
          State : active
             Id : 00030000
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 51a415ba:dc9c8cd7:5b3ea8de:465b4630
     RAID Level : 5
        Members : 4
          Slots : [__U_]
    Failed disk : 0
      This Slot : 2
     Array Size : 11721072640 (5589.04 GiB 6001.19 GB)
   Per Dev Size : 3907024648 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261814
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : failed
    Dirty State : clean

  Disk00 Serial : Z1E1AKPH:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk01 Serial : Z24091Q5:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk03 Serial : Z1E19E4K:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :   3907027057 sectors at           63 (type 42)
/dev/sdd:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 226cc5df
         Family : 226cc5df
     Generation : 000019d9
     Attributes : All supported
           UUID : b67ea029:aaea7da2:2540c1eb:ebe98af1
       Checksum : 641438ba correct
    MPB Sectors : 2
          Disks : 4
   RAID Devices : 1

  Disk03 Serial : Z1E19E4K
          State : active
             Id : 00020000
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 51a415ba:dc9c8cd7:5b3ea8de:465b4630
     RAID Level : 5
        Members : 4
          Slots : [__UU]
    Failed disk : 0
      This Slot : 3
     Array Size : 11721072640 (5589.04 GiB 6001.19 GB)
   Per Dev Size : 3907024648 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261814
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : failed
    Dirty State : clean

  Disk00 Serial : Z1E1AKPH:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk01 Serial : Z24091Q5:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk02 Serial : Z1E1RPA9
          State : active
             Id : 00030000
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)
/dev/sde:
   MBR Magic : aa55
Partition[0] :   4294967295 sectors at            1 (type ee)

# dpkg -l | grep mdadm
ii  mdadm                                                       3.2.5-1+b1

thanks
chris


* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-10 16:23 Recovery/Access of imsm raid via mdadm? chris
@ 2013-01-10 17:09 ` Dave Jiang
  2013-01-10 20:19   ` chris
  0 siblings, 1 reply; 18+ messages in thread
From: Dave Jiang @ 2013-01-10 17:09 UTC (permalink / raw)
  To: chris; +Cc: Linux-RAID

On 01/10/2013 09:23 AM, chris wrote:
> Hello,
>
> I have a machine which was running a imsm raid volume, where the
> motherboard failed and I do not have access to another system with
> imsm. I remember noticing some time ago that mdadm could recognize
> these arrays, so I decided to try recovery in a spare machine with the
> disks from the array.
>
> I guess my questions are:
> Is this the right forum for help with this?
> Am I even going down a feasible path here or is this array dependent
> on the HBA in some way?
> If it is possible any ideas of anything else I can do to debug this further?

Typically mdadm probes the OROM and looks for platform details. But
you can try overriding that with:
export IMSM_NO_PLATFORM=1

See if that works for you.
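
In practice that would look roughly like this (the export and the mdadm
commands have to run in the same shell so the variable is inherited):

# export IMSM_NO_PLATFORM=1
# mdadm --assemble --scan
# cat /proc/mdstat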

> [remainder of original message snipped]



* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-10 17:09 ` Dave Jiang
@ 2013-01-10 20:19   ` chris
  2013-01-11  1:42     ` Dan Williams
  0 siblings, 1 reply; 18+ messages in thread
From: chris @ 2013-01-10 20:19 UTC (permalink / raw)
  To: Dave Jiang; +Cc: Linux-RAID

Hi Dave,

Thanks for the tip; this has gotten me further:

# mdadm --assemble --scan
# cat /proc/mdstat
Personalities :
md127 : inactive sdd[1](S) sdb[0](S)
      4520 blocks super external:imsm

So at least now it has created an array of sorts, but it only has 2 of
the 4 disks it should, and all disks appear to be marked as spares.

Would I follow the same logic as a typical mdadm recovery, where I
need to create the array again with the right layout and the disks in
the right order, and gain access to the data without wiping it?

Any advice what direction to go in from here?

The info I was able to get using mdadm seems accurate as far as how
the array was set up, so I just need to know how to recreate or
assemble the array and get it online so I can access it:

          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.3.00
    Orig Family : 226cc5df
         Family : 226cc5df
     Generation : 000019dc
     Attributes : All supported
           UUID : b67ea029:aaea7da2:2540c1eb:ebe98af1
       Checksum : 651263bf correct
    MPB Sectors : 2
          Disks : 4
   RAID Devices : 1

  Disk02 Serial : Z1E1RPA9
          State : active
             Id : 00030000
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 51a415ba:dc9c8cd7:5b3ea8de:465b4630
     RAID Level : 5
        Members : 4
          Slots : [__U_]
    Failed disk : 0
      This Slot : 2
     Array Size : 11721072640 (5589.04 GiB 6001.19 GB)
   Per Dev Size : 3907024648 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261814
     Chunk Size : 128 KiB
       Reserved : 0
  Migrate State : idle
      Map State : failed
    Dirty State : clean

  Disk00 Serial : Z1E1AKPH:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk01 Serial : Z24091Q5:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)

  Disk03 Serial : Z1E19E4K:0
          State : active
             Id : ffffffff
    Usable Size : 3907024648 (1863.01 GiB 2000.40 GB)


thanks
chris

On Thu, Jan 10, 2013 at 12:09 PM, Dave Jiang <dave.jiang@intel.com> wrote:
> export IMSM_NO_PLATFORM=1


* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-10 20:19   ` chris
@ 2013-01-11  1:42     ` Dan Williams
  2013-01-11 17:53       ` chris
  0 siblings, 1 reply; 18+ messages in thread
From: Dan Williams @ 2013-01-11  1:42 UTC (permalink / raw)
  To: chris; +Cc: Dave Jiang, Linux-RAID

On Thu, Jan 10, 2013 at 12:19 PM, chris <tknchris@gmail.com> wrote:
> Hi Dave,
>
> Thanks for the tip, this has gotten me further
>
> # mdadm --assemble --scan
> # cat /proc/mdstat
> Personalities :
> md127 : inactive sdd[1](S) sdb[0](S)
>       4520 blocks super external:imsm
>
> So atleast now it has created an array of sorts but it only has 2 out
> of the 4 disks it should and also all disks appear to be marked as
> spares.
>
> Would I follow the same logic as typical mdadm recovery where I need
> to create the array again with the right layout and disks in right
> order and gain access to data without wiping it ?
>
> Any advice what direction to go in from here?
>

Back up the disks you have if possible.
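Something along these lines would do (just a sketch; the destination
paths below are placeholders and you need roughly 2TB of free space per
disk). GNU ddrescue keeps a map file so a copy can resume after read
errors; plain dd works too if ddrescue isn't available:

ddrescue /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
dd if=/dev/sdb of=/mnt/backup/sdb.img bs=1M conv=noerror,sync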

Don't recreate the array with all 4 disks; be sure to specify a missing
slot.  With mdadm-3.2.5 you do have support for creating imsm arrays
with 'missing' slots [1].  Then you can test some re-creations to see
if you can find your data.

At a minimum you know that sdb and sdd are in slots 2 and 3, so the
question is finding a good candidate for slot 0 or 1.  It's a bit
concerning that the superblocks for those have been lost; or were those
disks not moved over?  You can look for serial numbers in
/dev/disk/by-id.
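
For example, something like this (a rough sketch; the second form
assumes smartmontools is installed):

ls -l /dev/disk/by-id/ | grep -E 'Z1E1AKPH|Z24091Q5|Z1E1RPA9|Z1E19E4K'
for d in /dev/sd[b-e]; do printf '%s: ' "$d"; smartctl -i "$d" | grep -i serial; done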

--
Dan

[1]: http://marc.info/?l=linux-raid&m=131432484118038&w=2


* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-11  1:42     ` Dan Williams
@ 2013-01-11 17:53       ` chris
  2013-01-13 19:00         ` chris
  0 siblings, 1 reply; 18+ messages in thread
From: chris @ 2013-01-11 17:53 UTC (permalink / raw)
  To: Dan Williams; +Cc: Dave Jiang, Linux-RAID

Ok so I looked through the examine data and it appears the order of
the disks was:

sde 0 - Z1E1AKPH
sdc 1 - Z24091Q5
sdb 2 - Z1E1RPA9
sdd 3 - Z1E19E4K

So I was going to try the first possible combination and assemble with
sde/sdc/sdb/missing; this is what I tried:

# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sde /dev/sdc /dev/sdd
missing --raid-devices 4 --metadata=imsm
mdadm: /dev/sde appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: /dev/sdc appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: This level does not support missing devices

The example commands I found for creating an imsm array were like this:
mdadm --create --verbose /dev/md/imsm /dev/sde /dev/sdc /dev/sdd
missing --raid-devices 4 --metadata=imsm
mdadm --create --verbose /dev/md/Volume0 /dev/md/imsm --raid-devices 4 --level 5

Should I be defining level=5 when creating /dev/md/imsm as well? Or
should I only do it for the imsm container and omit it when creating
/dev/md/Volume0?

Thanks for everything so far; I feel like I'm pretty close. Once I can
get the arrays up with a missing member and don't have to worry about
overwriting anything, I'm sure I can work out the right combination of
disks, and hopefully the disks with no imsm metadata aren't so damaged
that I can't recover.

chris

On Thu, Jan 10, 2013 at 8:42 PM, Dan Williams <djbw@fb.com> wrote:
>
>>


* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-11 17:53       ` chris
@ 2013-01-13 19:00         ` chris
  2013-01-13 21:05           ` Dan Williams
  0 siblings, 1 reply; 18+ messages in thread
From: chris @ 2013-01-13 19:00 UTC (permalink / raw)
  To: Neil Brown; +Cc: Dan Williams, Dave Jiang, Linux-RAID

Neil/Dave,

Is it not possible to create an imsm container with a missing disk?
If not, is there any way to recreate the array with all disks but
prevent any kind of sync which might overwrite the array data?

Any other ideas where to go from here?

thanks
chris

On Fri, Jan 11, 2013 at 12:53 PM, chris <tknchris@gmail.com> wrote:
> [previous message quoted in full; snipped]


* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-13 19:00         ` chris
@ 2013-01-13 21:05           ` Dan Williams
  2013-01-14  0:56             ` chris
  0 siblings, 1 reply; 18+ messages in thread
From: Dan Williams @ 2013-01-13 21:05 UTC (permalink / raw)
  To: chris, Neil Brown; +Cc: Dave Jiang, Linux-RAID



On 1/13/13 11:00 AM, "chris" <tknchris@gmail.com> wrote:

>Neil/Dave,
>
>Is it not possible to create imsm container with missing disk?
>If not, Is there any way to recreate the array with all disks but
>prevent any kind of sync which may overwrite array data?

The example was in that link I sent:

mdadm --create /dev/md/imsm /dev/sd[bde] -e imsm
mdadm --create /dev/md/vol0 /dev/sde missing /dev/sdb /dev/sdd -n 4 -l 5

The first command marks all devices as spares.  The second creates the
degraded array.
 
You probably want at least sdb and sdd in there since they have a copy of
the metadata.
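
If the create succeeds, a quick sanity check before writing anything
would be something like (just a sketch; blkid only tells you something
useful if a filesystem sits directly on the volume):

cat /proc/mdstat
mdadm --detail /dev/md/vol0
blkid /dev/md/vol0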

--
Dan



* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-13 21:05           ` Dan Williams
@ 2013-01-14  0:56             ` chris
  2013-01-14 12:36               ` Dorau, Lukasz
                                 ` (2 more replies)
  0 siblings, 3 replies; 18+ messages in thread
From: chris @ 2013-01-14  0:56 UTC (permalink / raw)
  To: Dan Williams; +Cc: Neil Brown, Dave Jiang, Linux-RAID

Hi Dan,

OK so the container comes up just fine now:

# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=raid0 devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
root@Microknoppix:/mnt/external# cat /proc/mdstat
Personalities :
md126 : inactive sde[3](S) sdd[2](S) sdc[1](S) sdb[0](S)
      4420 blocks super external:imsm

Upon review it all looks good
# mdadm --detail /dev/md/imsm
/dev/md/imsm:
        Version : imsm
     Raid Level : container
  Total Devices : 4

Working Devices : 4

  Member Arrays :

    Number   Major   Minor   RaidDevice

       0       8       16        -        /dev/sdb
       1       8       32        -        /dev/sdc
       2       8       48        -        /dev/sdd
       3       8       64        -        /dev/sde


Now when I try all the permutations of disks, they all fail with
"mdadm: failed to activate array."

ATTEMPT #1
# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
# mdadm --create --verbose /dev/md/Volume0 /dev/sde missing /dev/sdb
/dev/sdd --raid-devices 4 --level=5
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sde: Device or resource busy
mdadm: chunk size defaults to 128K
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 1953511424K
Continue creating array? y
mdadm: Creating array inside imsm container /dev/md/imsm
mdadm: failed to activate array.
# mdadm --stop /dev/md/imsm
mdadm: stopped /dev/md/imsm
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>

ATTEMPT #2
# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
# mdadm --create --verbose /dev/md/Volume0 missing /dev/sdc /dev/sdb
/dev/sdd --raid-devices 4 --level=5
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sdc: Device or resource busy
mdadm: chunk size defaults to 128K
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 1953511424K
Continue creating array? y
mdadm: Creating array inside imsm container /dev/md/imsm
mdadm: failed to activate array.
# mdadm --stop /dev/md/imsm
mdadm: stopped /dev/md/imsm
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>

ATTEMPT #3
# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
# mdadm --create --verbose /dev/md/Volume0 /dev/sde /dev/sdc missing
/dev/sdd --raid-devices 4 --level=5
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sde: Device or resource busy
mdadm: chunk size defaults to 128K
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 1953511424K
Continue creating array? y
mdadm: Creating array inside imsm container /dev/md/imsm
mdadm: failed to activate array.
# mdadm --stop /dev/md/imsm
mdadm: stopped /dev/md/imsm
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>

ATTEMPT #4
# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
# mdadm --create --verbose /dev/md/Volume0 /dev/sde /dev/sdc /dev/sdb
missing --raid-devices 4 --level=5
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sde: Device or resource busy
mdadm: chunk size defaults to 128K
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: size set to 1953511424K
Continue creating array? y
mdadm: Creating array inside imsm container /dev/md/imsm
mdadm: failed to activate array.

I also noticed this in dmesg; all the attempts generate the same
output, but here's the dmesg output related to attempt #4:

[292271.903919] md: md127 stopped.
[292271.903938] md: unbind<sde>
[292271.914179] md: export_rdev(sde)
[292271.914424] md: unbind<sdd>
[292271.927499] md: export_rdev(sdd)
[292271.927728] md: unbind<sdc>
[292271.940823] md: export_rdev(sdc)
[292271.940979] md: unbind<sdb>
[292271.954153] md: export_rdev(sdb)
[292289.371545] md: bind<sdb>
[292289.371731] md: bind<sdc>
[292289.371877] md: bind<sdd>
[292289.372024] md: bind<sde>
[292295.910881] md: bind<sde>
[292295.911130] md: bind<sdc>
[292295.911314] md: bind<sdb>
[292295.923942] bio: create slab <bio-1> at 1
[292295.923965] md/raid:md126: not clean -- starting background reconstruction
[292295.924000] md/raid:md126: device sdb operational as raid disk 2
[292295.924005] md/raid:md126: device sdc operational as raid disk 1
[292295.924009] md/raid:md126: device sde operational as raid disk 0
[292295.925149] md/raid:md126: allocated 4250kB
[292295.927268] md/raid:md126: cannot start dirty degraded array.
[292295.929666] RAID conf printout:
[292295.929677]  --- level:5 rd:4 wd:3
[292295.929683]  disk 0, o:1, dev:sde
[292295.929688]  disk 1, o:1, dev:sdc
[292295.929693]  disk 2, o:1, dev:sdb
[292295.930898] md/raid:md126: failed to run raid set.
[292295.930902] md: pers->run() failed ...
[292295.931079] md: md126 stopped.
[292295.931096] md: unbind<sdb>
[292295.944228] md: export_rdev(sdb)
[292295.944267] md: unbind<sdc>
[292295.958126] md: export_rdev(sdc)
[292295.958167] md: unbind<sde>
[292295.970902] md: export_rdev(sde)
[292296.219837] device-mapper: table: 252:1: raid45: unknown target type
[292296.219845] device-mapper: ioctl: error adding target to table
[292296.291542] device-mapper: table: 252:1: raid45: unknown target type
[292296.291548] device-mapper: ioctl: error adding target to table
[292296.310926] quiet_error: 1116 callbacks suppressed
[292296.310934] Buffer I/O error on device dm-0, logical block 3907022720
[292296.310940] Buffer I/O error on device dm-0, logical block 3907022721
[292296.310944] Buffer I/O error on device dm-0, logical block 3907022722
[292296.310949] Buffer I/O error on device dm-0, logical block 3907022723
[292296.310953] Buffer I/O error on device dm-0, logical block 3907022724
[292296.310958] Buffer I/O error on device dm-0, logical block 3907022725
[292296.310962] Buffer I/O error on device dm-0, logical block 3907022726
[292296.310966] Buffer I/O error on device dm-0, logical block 3907022727
[292296.310973] Buffer I/O error on device dm-0, logical block 3907022720
[292296.310977] Buffer I/O error on device dm-0, logical block 3907022721
[292296.319968] device-mapper: table: 252:1: raid45: unknown target type
[292296.319975] device-mapper: ioctl: error adding target to table

Any ideas from here? Am I up the creek without a paddle? :(

thanks to everyone for all your help so far
chris

On Sun, Jan 13, 2013 at 4:05 PM, Dan Williams <djbw@fb.com> wrote:
> [quoted message snipped]


* RE: Recovery/Access of imsm raid via mdadm?
  2013-01-14  0:56             ` chris
@ 2013-01-14 12:36               ` Dorau, Lukasz
  2013-01-14 14:10               ` Dorau, Lukasz
  2013-01-14 14:24               ` Dorau, Lukasz
  2 siblings, 0 replies; 18+ messages in thread
From: Dorau, Lukasz @ 2013-01-14 12:36 UTC (permalink / raw)
  To: chris; +Cc: Dan Williams, Neil Brown, Jiang, Dave, Linux-RAID

On Monday, January 14, 2013 1:56 AM chris <tknchris@gmail.com> wrote:
> 
> Now when I try all the permutations of disks, they all fail with
> "mdadm: failed to activate array."
> 

There are 2 more possibilities you should try:
# mdadm --create --verbose /dev/md/Volume0 /dev/sdc missing /dev/sdb /dev/sdd --raid-devices 4 --level=5
and
# mdadm --create --verbose /dev/md/Volume0 missing /dev/sde /dev/sdb /dev/sdd --raid-devices 4 --level=5

Lukasz


> [remainder of quoted message snipped]


* RE: Recovery/Access of imsm raid via mdadm?
  2013-01-14  0:56             ` chris
  2013-01-14 12:36               ` Dorau, Lukasz
@ 2013-01-14 14:10               ` Dorau, Lukasz
  2013-01-14 14:24               ` Dorau, Lukasz
  2 siblings, 0 replies; 18+ messages in thread
From: Dorau, Lukasz @ 2013-01-14 14:10 UTC (permalink / raw)
  To: chris; +Cc: Neil Brown, Jiang, Dave, Linux-RAID, Dan Williams

On Monday, January 14, 2013 1:56 AM chris <tknchris@gmail.com> wrote:
> [292295.923942] bio: create slab <bio-1> at 1
> [292295.923965] md/raid:md126: not clean -- starting background
> reconstruction
> [292295.924000] md/raid:md126: device sdb operational as raid disk 2
> [292295.924005] md/raid:md126: device sdc operational as raid disk 1
> [292295.924009] md/raid:md126: device sde operational as raid disk 0
> [292295.925149] md/raid:md126: allocated 4250kB
> [292295.927268] md/raid:md126: cannot start dirty degraded array.

Hi 

*Remember to back up the disks you have before trying the following!*

You can try starting the dirty degraded array using:
#  mdadm --assemble --force ....

See also the "Boot time assembly of degraded/dirty arrays" chapter in:
http://www.kernel.org/doc/Documentation/md.txt
(you can boot with option md-mod.start_dirty_degraded=1)
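
If rebooting is inconvenient, the same module parameter should also be
writable at runtime via sysfs (untested sketch):

# echo 1 > /sys/module/md_mod/parameters/start_dirty_degraded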

Lukasz


> [remainder of quoted message snipped]


* RE: Recovery/Access of imsm raid via mdadm?
  2013-01-14  0:56             ` chris
  2013-01-14 12:36               ` Dorau, Lukasz
  2013-01-14 14:10               ` Dorau, Lukasz
@ 2013-01-14 14:24               ` Dorau, Lukasz
  2013-01-14 15:25                 ` chris
  2 siblings, 1 reply; 18+ messages in thread
From: Dorau, Lukasz @ 2013-01-14 14:24 UTC (permalink / raw)
  To: chris; +Cc: Neil Brown, Jiang, Dave, Linux-RAID, Dan Williams

On Monday, January 14, 2013 3:11 PM Dorau, Lukasz <lukasz.dorau@intel.com> wrote:
> On Monday, January 14, 2013 1:56 AM chris <tknchris@gmail.com> wrote:
> > [292295.923942] bio: create slab <bio-1> at 1
> > [292295.923965] md/raid:md126: not clean -- starting background
> > reconstruction
> > [292295.924000] md/raid:md126: device sdb operational as raid disk 2
> > [292295.924005] md/raid:md126: device sdc operational as raid disk 1
> > [292295.924009] md/raid:md126: device sde operational as raid disk 0
> > [292295.925149] md/raid:md126: allocated 4250kB
> > [292295.927268] md/raid:md126: cannot start dirty degraded array.
> 
> Hi
> 
> *Remember to backup the disks you have before trying the following! *
> 
> You can try starting dirty degraded array using:
> #  mdadm --assemble --force ....
> 

I meant adding the --force option to:
# mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing /dev/sdb /dev/sdd --raid-devices 4 --level=5

Be very careful using the "--force" option, because it can cause data corruption!
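
For context, a rough sketch of the full recreate sequence being discussed here, combining the container step from Dan's earlier example with this forced volume create. It assumes the device names used elsewhere in the thread (/dev/sdb through /dev/sde, with the failed member left out as "missing") and a recovery machine without the Intel HBA, hence IMSM_NO_PLATFORM:

# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd /dev/sde --raid-devices 4 --metadata=imsm
# mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing /dev/sdb /dev/sdd --raid-devices 4 --level=5

The first command recreates the imsm container and marks all members as spares; the second recreates the degraded RAID5 volume inside it. With one member listed as missing there is no rebuild target, which is why only the metadata should be rewritten.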

Lukasz


> See also the "Boot time assembly of degraded/dirty arrays" chapter in:
> http://www.kernel.org/doc/Documentation/md.txt
> (you can boot with option md-mod.start_dirty_degraded=1)
> 
> Lukasz
> 
> 
> > [292295.929666] RAID conf printout:
> > [292295.929677]  --- level:5 rd:4 wd:3
> > [292295.929683]  disk 0, o:1, dev:sde
> > [292295.929688]  disk 1, o:1, dev:sdc
> > [292295.929693]  disk 2, o:1, dev:sdb
> > [292295.930898] md/raid:md126: failed to run raid set.
> > [292295.930902] md: pers->run() failed ...
> > [292295.931079] md: md126 stopped.
> > [292295.931096] md: unbind<sdb>
> > [292295.944228] md: export_rdev(sdb)
> > [292295.944267] md: unbind<sdc>
> > [292295.958126] md: export_rdev(sdc)
> > [292295.958167] md: unbind<sde>
> > [292295.970902] md: export_rdev(sde)
> > [292296.219837] device-mapper: table: 252:1: raid45: unknown target type
> > [292296.219845] device-mapper: ioctl: error adding target to table
> > [292296.291542] device-mapper: table: 252:1: raid45: unknown target type
> > [292296.291548] device-mapper: ioctl: error adding target to table
> > [292296.310926] quiet_error: 1116 callbacks suppressed
> > [292296.310934] Buffer I/O error on device dm-0, logical block 3907022720
> > [292296.310940] Buffer I/O error on device dm-0, logical block 3907022721
> > [292296.310944] Buffer I/O error on device dm-0, logical block 3907022722
> > [292296.310949] Buffer I/O error on device dm-0, logical block 3907022723
> > [292296.310953] Buffer I/O error on device dm-0, logical block 3907022724
> > [292296.310958] Buffer I/O error on device dm-0, logical block 3907022725
> > [292296.310962] Buffer I/O error on device dm-0, logical block 3907022726
> > [292296.310966] Buffer I/O error on device dm-0, logical block 3907022727
> > [292296.310973] Buffer I/O error on device dm-0, logical block 3907022720
> > [292296.310977] Buffer I/O error on device dm-0, logical block 3907022721
> > [292296.319968] device-mapper: table: 252:1: raid45: unknown target type
> > [292296.319975] device-mapper: ioctl: error adding target to table
> >
> > Any ideas from here? Am I up the creek without a paddle? :(
> >
> > thanks to everyone for all your help so far
> > chris
> >
> > On Sun, Jan 13, 2013 at 4:05 PM, Dan Williams <djbw@fb.com> wrote:
> > >
> > >
> > > On 1/13/13 11:00 AM, "chris" <tknchris@gmail.com> wrote:
> > >
> > >>Neil/Dave,
> > >>
> > >>Is it not possible to create imsm container with missing disk?
> > >>If not, Is there any way to recreate the array with all disks but
> > >>prevent any kind of sync which may overwrite array data?
> > >
> > > The example was in that link I sent:
> > >
> > > mdadm --create /dev/md/imsm /dev/sd[bde] -e imsm
> > > mdadm --create /dev/md/vol0 /dev/sde missing /dev/sdb /dev/sdd -n 4 -l 5
> > >
> > > The first command marks all devices as spares.  The second creates the
> > > degraded array.
> > >
> > > You probably want at least sdb and sdd in there since they have a copy of
> > > the metadata.
> > >
> > > --
> > > Dan
> > >
> > --
> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-14 14:24               ` Dorau, Lukasz
@ 2013-01-14 15:25                 ` chris
  2013-01-15 10:25                   ` Dorau, Lukasz
  0 siblings, 1 reply; 18+ messages in thread
From: chris @ 2013-01-14 15:25 UTC (permalink / raw)
  To: Dorau, Lukasz; +Cc: Neil Brown, Jiang, Dave, Linux-RAID, Dan Williams

Ok thanks for the tips, I am imaging the disks now and will try after
that is done. Just out of curiosity, what could become corrupted by
forcing the assemble? I was under the impression that as long as I
have one member missing, the only thing that would be touched is
metadata, is that right?
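
(The thread does not say which tool is used for the imaging, so the commands below are only an illustrative sketch; /mnt/backup is a hypothetical destination with enough free space for four raw 2TB images.)

# dd if=/dev/sdb of=/mnt/backup/sdb.img bs=4M conv=noerror,sync
# dd if=/dev/sdc of=/mnt/backup/sdc.img bs=4M conv=noerror,sync
# dd if=/dev/sdd of=/mnt/backup/sdd.img bs=4M conv=noerror,sync
# dd if=/dev/sde of=/mnt/backup/sde.img bs=4M conv=noerror,sync

GNU ddrescue (ddrescue /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map) is a more forgiving alternative if any of the disks has unreadable sectors.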

thanks
chris

On Mon, Jan 14, 2013 at 9:24 AM, Dorau, Lukasz <lukasz.dorau@intel.com> wrote:
> On Monday, January 14, 2013 3:11 PM Dorau, Lukasz <lukasz.dorau@intel.com> wrote:
>> On Monday, January 14, 2013 1:56 AM chris <tknchris@gmail.com> wrote:
>> > [292295.923942] bio: create slab <bio-1> at 1
>> > [292295.923965] md/raid:md126: not clean -- starting background
>> > reconstruction
>> > [292295.924000] md/raid:md126: device sdb operational as raid disk 2
>> > [292295.924005] md/raid:md126: device sdc operational as raid disk 1
>> > [292295.924009] md/raid:md126: device sde operational as raid disk 0
>> > [292295.925149] md/raid:md126: allocated 4250kB
>> > [292295.927268] md/raid:md126: cannot start dirty degraded array.
>>
>> Hi
>>
>> *Remember to backup the disks you have before trying the following! *
>>
>> You can try starting dirty degraded array using:
>> #  mdadm --assemble --force ....
>>
>
> I meant adding --force option to:
> # mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing /dev/sdb /dev/sdd --raid-devices 4 --level=5
>
> Be very careful using "--force" option, because it can cause data corruption!
>
> Lukasz
>
>
>> See also the "Boot time assembly of degraded/dirty arrays" chapter in:
>> http://www.kernel.org/doc/Documentation/md.txt
>> (you can boot with option md-mod.start_dirty_degraded=1)
>>
>> Lukasz
>>
>>
>> > [292295.929666] RAID conf printout:
>> > [292295.929677]  --- level:5 rd:4 wd:3
>> > [292295.929683]  disk 0, o:1, dev:sde
>> > [292295.929688]  disk 1, o:1, dev:sdc
>> > [292295.929693]  disk 2, o:1, dev:sdb
>> > [292295.930898] md/raid:md126: failed to run raid set.
>> > [292295.930902] md: pers->run() failed ...
>> > [292295.931079] md: md126 stopped.
>> > [292295.931096] md: unbind<sdb>
>> > [292295.944228] md: export_rdev(sdb)
>> > [292295.944267] md: unbind<sdc>
>> > [292295.958126] md: export_rdev(sdc)
>> > [292295.958167] md: unbind<sde>
>> > [292295.970902] md: export_rdev(sde)
>> > [292296.219837] device-mapper: table: 252:1: raid45: unknown target type
>> > [292296.219845] device-mapper: ioctl: error adding target to table
>> > [292296.291542] device-mapper: table: 252:1: raid45: unknown target type
>> > [292296.291548] device-mapper: ioctl: error adding target to table
>> > [292296.310926] quiet_error: 1116 callbacks suppressed
>> > [292296.310934] Buffer I/O error on device dm-0, logical block 3907022720
>> > [292296.310940] Buffer I/O error on device dm-0, logical block 3907022721
>> > [292296.310944] Buffer I/O error on device dm-0, logical block 3907022722
>> > [292296.310949] Buffer I/O error on device dm-0, logical block 3907022723
>> > [292296.310953] Buffer I/O error on device dm-0, logical block 3907022724
>> > [292296.310958] Buffer I/O error on device dm-0, logical block 3907022725
>> > [292296.310962] Buffer I/O error on device dm-0, logical block 3907022726
>> > [292296.310966] Buffer I/O error on device dm-0, logical block 3907022727
>> > [292296.310973] Buffer I/O error on device dm-0, logical block 3907022720
>> > [292296.310977] Buffer I/O error on device dm-0, logical block 3907022721
>> > [292296.319968] device-mapper: table: 252:1: raid45: unknown target type
>> > [292296.319975] device-mapper: ioctl: error adding target to table
>> >
>> > Any ideas from here? Am I up the creek without a paddle? :(
>> >
>> > thanks to everyone for all your help so far
>> > chris
>> >
>> > On Sun, Jan 13, 2013 at 4:05 PM, Dan Williams <djbw@fb.com> wrote:
>> > >
>> > >
>> > > On 1/13/13 11:00 AM, "chris" <tknchris@gmail.com> wrote:
>> > >
>> > >>Neil/Dave,
>> > >>
>> > >>Is it not possible to create imsm container with missing disk?
>> > >>If not, Is there any way to recreate the array with all disks but
>> > >>prevent any kind of sync which may overwrite array data?
>> > >
>> > > The example was in that link I sent:
>> > >
>> > > mdadm --create /dev/md/imsm /dev/sd[bde] -e imsm
>> > > mdadm --create /dev/md/vol0 /dev/sde missing /dev/sdb /dev/sdd -n 4 -l 5
>> > >
>> > > The first command marks all devices as spares.  The second creates the
>> > > degraded array.
>> > >
>> > > You probably want at least sdb and sdd in there since they have a copy of
>> > > the metadata.
>> > >
>> > > --
>> > > Dan
>> > >
>> > --
>> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 18+ messages in thread

* RE: Recovery/Access of imsm raid via mdadm?
  2013-01-14 15:25                 ` chris
@ 2013-01-15 10:25                   ` Dorau, Lukasz
  2013-01-16 16:49                     ` chris
  0 siblings, 1 reply; 18+ messages in thread
From: Dorau, Lukasz @ 2013-01-15 10:25 UTC (permalink / raw)
  To: chris; +Cc: Neil Brown, Jiang, Dave, Linux-RAID, Dan Williams

On Monday, January 14, 2013 4:25 PM chris <tknchris@gmail.com> wrote:
> Ok thanks for the tips, I am imaging the disks now and will try after
> that is done. Just out of curiosity what could become corrupted by
> forcing the assemble? I was under the impression that as long as I
> have one member missing that the only thing that would be touched is
> metadata, is that right?
> 
Yes, that is right. I meant that using the --force option it may be possible to assemble the array in the wrong way and the data can be incorrect, so it is better to be careful.
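
(A common precaution, not something spelled out in this thread: keep the freshly created array read-only and inspect it before trusting or writing anything, so a wrong member order can be detected and another permutation tried. The md126 name is the one the kernel assigned in the logs quoted earlier; ext4 and the /mnt/test mount point are only assumptions for the sake of the example.)

# mdadm --readonly /dev/md126
# fsck.ext4 -n /dev/md126
# mount -o ro,noload /dev/md126 /mnt/test

The -n flag makes fsck report problems without fixing them, and ro,noload mounts without replaying the journal, so nothing on the member disks is modified. If the guessed order is wrong, the check will typically report massive corruption.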

Lukasz

> On Mon, Jan 14, 2013 at 9:24 AM, Dorau, Lukasz <lukasz.dorau@intel.com>
> wrote:
> > On Monday, January 14, 2013 3:11 PM Dorau, Lukasz
> <lukasz.dorau@intel.com> wrote:
> >> On Monday, January 14, 2013 1:56 AM chris <tknchris@gmail.com> wrote:
> >> > [292295.923942] bio: create slab <bio-1> at 1
> >> > [292295.923965] md/raid:md126: not clean -- starting background
> >> > reconstruction
> >> > [292295.924000] md/raid:md126: device sdb operational as raid disk 2
> >> > [292295.924005] md/raid:md126: device sdc operational as raid disk 1
> >> > [292295.924009] md/raid:md126: device sde operational as raid disk 0
> >> > [292295.925149] md/raid:md126: allocated 4250kB
> >> > [292295.927268] md/raid:md126: cannot start dirty degraded array.
> >>
> >> Hi
> >>
> >> *Remember to backup the disks you have before trying the following! *
> >>
> >> You can try starting dirty degraded array using:
> >> #  mdadm --assemble --force ....
> >>
> >
> > I meant adding --force option to:
> > # mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing
> /dev/sdb /dev/sdd --raid-devices 4 --level=5
> >
> > Be very careful using "--force" option, because it can cause data corruption!
> >
> > Lukasz
> >
> >
> >> See also the "Boot time assembly of degraded/dirty arrays" chapter in:
> >> http://www.kernel.org/doc/Documentation/md.txt
> >> (you can boot with option md-mod.start_dirty_degraded=1)
> >>
> >> Lukasz
> >>
> >>
> >> > [292295.929666] RAID conf printout:
> >> > [292295.929677]  --- level:5 rd:4 wd:3
> >> > [292295.929683]  disk 0, o:1, dev:sde
> >> > [292295.929688]  disk 1, o:1, dev:sdc
> >> > [292295.929693]  disk 2, o:1, dev:sdb
> >> > [292295.930898] md/raid:md126: failed to run raid set.
> >> > [292295.930902] md: pers->run() failed ...
> >> > [292295.931079] md: md126 stopped.
> >> > [292295.931096] md: unbind<sdb>
> >> > [292295.944228] md: export_rdev(sdb)
> >> > [292295.944267] md: unbind<sdc>
> >> > [292295.958126] md: export_rdev(sdc)
> >> > [292295.958167] md: unbind<sde>
> >> > [292295.970902] md: export_rdev(sde)
> >> > [292296.219837] device-mapper: table: 252:1: raid45: unknown target type
> >> > [292296.219845] device-mapper: ioctl: error adding target to table
> >> > [292296.291542] device-mapper: table: 252:1: raid45: unknown target type
> >> > [292296.291548] device-mapper: ioctl: error adding target to table
> >> > [292296.310926] quiet_error: 1116 callbacks suppressed
> >> > [292296.310934] Buffer I/O error on device dm-0, logical block
> 3907022720
> >> > [292296.310940] Buffer I/O error on device dm-0, logical block
> 3907022721
> >> > [292296.310944] Buffer I/O error on device dm-0, logical block
> 3907022722
> >> > [292296.310949] Buffer I/O error on device dm-0, logical block
> 3907022723
> >> > [292296.310953] Buffer I/O error on device dm-0, logical block
> 3907022724
> >> > [292296.310958] Buffer I/O error on device dm-0, logical block
> 3907022725
> >> > [292296.310962] Buffer I/O error on device dm-0, logical block
> 3907022726
> >> > [292296.310966] Buffer I/O error on device dm-0, logical block
> 3907022727
> >> > [292296.310973] Buffer I/O error on device dm-0, logical block
> 3907022720
> >> > [292296.310977] Buffer I/O error on device dm-0, logical block
> 3907022721
> >> > [292296.319968] device-mapper: table: 252:1: raid45: unknown target type
> >> > [292296.319975] device-mapper: ioctl: error adding target to table
> >> >
> >> > Any ideas from here? Am I up the creek without a paddle? :(
> >> >
> >> > thanks to everyone for all your help so far
> >> > chris
> >> >
> >> > On Sun, Jan 13, 2013 at 4:05 PM, Dan Williams <djbw@fb.com> wrote:
> >> > >
> >> > >
> >> > > On 1/13/13 11:00 AM, "chris" <tknchris@gmail.com> wrote:
> >> > >
> >> > >>Neil/Dave,
> >> > >>
> >> > >>Is it not possible to create imsm container with missing disk?
> >> > >>If not, Is there any way to recreate the array with all disks but
> >> > >>prevent any kind of sync which may overwrite array data?
> >> > >
> >> > > The example was in that link I sent:
> >> > >
> >> > > mdadm --create /dev/md/imsm /dev/sd[bde] -e imsm
> >> > > mdadm --create /dev/md/vol0 /dev/sde missing /dev/sdb /dev/sdd -n 4 -l
> 5
> >> > >
> >> > > The first command marks all devices as spares.  The second creates the
> >> > > degraded array.
> >> > >
> >> > > You probably want at least sdb and sdd in there since they have a copy of
> >> > > the metadata.
> >> > >
> >> > > --
> >> > > Dan
> >> > >
> >> > --
> >> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> >> > the body of a message to majordomo@vger.kernel.org
> >> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-15 10:25                   ` Dorau, Lukasz
@ 2013-01-16 16:49                     ` chris
  2013-01-16 16:53                       ` chris
  2013-01-16 22:47                       ` Dan Williams
  0 siblings, 2 replies; 18+ messages in thread
From: chris @ 2013-01-16 16:49 UTC (permalink / raw)
  To: Dorau, Lukasz; +Cc: Neil Brown, Jiang, Dave, Linux-RAID, Dan Williams

Hi, so after painfully imaging the 4x2TB disks I have tried the other two
suggested permutations, as well as adding --force, with no change.

Other 2 permutations:
# mdadm --stop /dev/md/imsm
# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
# mdadm --create --verbose /dev/md/Volume0 /dev/sdc missing /dev/sdb
/dev/sdd --raid-devices 4 --level=5
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sdc: Device or resource busy
mdadm: chunk size defaults to 128K
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 1953511424K
Continue creating array? y
mdadm: Creating array inside imsm container /dev/md/imsm
mdadm: failed to activate array.

# mdadm --stop /dev/md/imsm
# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
# mdadm --create --verbose /dev/md/Volume0 missing /dev/sde /dev/sdb
/dev/sdd --raid-devices 4 --level=5
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sde: Device or resource busy
mdadm: chunk size defaults to 128K
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sde but will be lost or
       meaningless after creating array
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 1953511424K
Continue creating array? y
mdadm: Creating array inside imsm container /dev/md/imsm
mdadm: failed to activate array.

Tried again with --force but same thing:

# mdadm --stop /dev/md/imsm
mdadm: stopped /dev/md/imsm
# export IMSM_NO_PLATFORM=1
# mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
/dev/sde --raid-devices 4 --metadata=imsm
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdb.
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdc.
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sdd.
mdadm: /dev/sde appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: metadata will over-write last partition on /dev/sde.
mdadm: size set to 1953511431K
Continue creating array? y
mdadm: container /dev/md/imsm prepared.
# mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing
/dev/sdb /dev/sdd --raid-devices 4 --level=5
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: super1.x cannot open /dev/sdc: Device or resource busy
mdadm: chunk size defaults to 128K
mdadm: /dev/sdc appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdc but will be lost or
       meaningless after creating array
mdadm: /dev/sdb appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdb but will be lost or
       meaningless after creating array
mdadm: /dev/sdd appears to be part of a raid array:
    level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
mdadm: partition table exists on /dev/sdd but will be lost or
       meaningless after creating array
mdadm: size set to 1953511424K
Continue creating array? y
mdadm: Creating array inside imsm container /dev/md/imsm
mdadm: failed to activate array.

dmesg shows:
[522342.240901] md: md127 stopped.
[522342.240925] md: unbind<sde>
[522342.251221] md: export_rdev(sde)
[522342.251457] md: unbind<sdd>
[522342.267808] md: export_rdev(sdd)
[522342.267999] md: unbind<sdc>
[522342.281160] md: export_rdev(sdc)
[522342.281309] md: unbind<sdb>
[522342.291136] md: export_rdev(sdb)
[522351.758217] md: bind<sdb>
[522351.758409] md: bind<sdc>
[522351.758552] md: bind<sdd>
[522351.758690] md: bind<sde>
[522368.121090] md: bind<sdc>
[522368.122401] md: bind<sdb>
[522368.122577] md: bind<sdd>
[522368.147454] bio: create slab <bio-1> at 1
[522368.147477] md/raid:md126: not clean -- starting background reconstruction
[522368.147515] md/raid:md126: device sdd operational as raid disk 3
[522368.147520] md/raid:md126: device sdb operational as raid disk 2
[522368.147525] md/raid:md126: device sdc operational as raid disk 0
[522368.148651] md/raid:md126: allocated 4250kB
[522368.152966] md/raid:md126: cannot start dirty degraded array.
[522368.155245] RAID conf printout:
[522368.155259]  --- level:5 rd:4 wd:3
[522368.155269]  disk 0, o:1, dev:sdc
[522368.155275]  disk 2, o:1, dev:sdb
[522368.155281]  disk 3, o:1, dev:sdd
[522368.157095] md/raid:md126: failed to run raid set.
[522368.157102] md: pers->run() failed ...
[522368.157418] md: md126 stopped.
[522368.157435] md: unbind<sdd>
[522368.167883] md: export_rdev(sdd)
[522368.167922] md: unbind<sdb>
[522368.181259] md: export_rdev(sdb)
[522368.181302] md: unbind<sdc>
[522368.194576] md: export_rdev(sdc)
[522368.701814] device-mapper: table: 252:1: raid45: unknown target type
[522368.701820] device-mapper: ioctl: error adding target to table
[522368.775341] device-mapper: table: 252:1: raid45: unknown target type
[522368.775347] device-mapper: ioctl: error adding target to table
[522368.876314] quiet_error: 1116 callbacks suppressed
[522368.876324] Buffer I/O error on device dm-0, logical block 3907022720
[522368.876331] Buffer I/O error on device dm-0, logical block 3907022721
[522368.876335] Buffer I/O error on device dm-0, logical block 3907022722
[522368.876340] Buffer I/O error on device dm-0, logical block 3907022723
[522368.876344] Buffer I/O error on device dm-0, logical block 3907022724
[522368.876348] Buffer I/O error on device dm-0, logical block 3907022725
[522368.876352] Buffer I/O error on device dm-0, logical block 3907022726
[522368.876356] Buffer I/O error on device dm-0, logical block 3907022727
[522368.876362] Buffer I/O error on device dm-0, logical block 3907022720
[522368.876366] Buffer I/O error on device dm-0, logical block 3907022721
[522368.883428] device-mapper: table: 252:1: raid45: unknown target type
[522368.883434] device-mapper: ioctl: error adding target to table
[522371.066343] device-mapper: table: 252:1: raid45: unknown target type
[522371.066350] device-mapper: ioctl: error adding target to table

any idea why it won't assemble? I thought even if data was corrupt I
should be able to force it to assemble and look at it to determine if
it is corrupt or intact
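
(One possibility worth ruling out, following Lukasz's earlier pointer: the kernel refuses specifically with "cannot start dirty degraded array", which is the case the md-mod.start_dirty_degraded option is meant to override. If the kernel exposes it as a writable module parameter, which is an assumption here, it can be flipped at runtime instead of via the boot command line and the create retried:)

# echo 1 > /sys/module/md_mod/parameters/start_dirty_degraded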

thanks
chris

On Tue, Jan 15, 2013 at 5:25 AM, Dorau, Lukasz <lukasz.dorau@intel.com> wrote:
> On Monday, January 14, 2013 4:25 PM chris <tknchris@gmail.com> wrote:
>> Ok thanks for the tips, I am imaging the disks now and will try after
>> that is done. Just out of curiosity what could become corrupted by
>> forcing the assemble? I was under the impression that as long as I
>> have one member missing that the only thing that would be touched is
>> metadata, is that right?
>>
> Yes, that is right. I meant that using --force option it may be possible to assemble an array in the wrong way and data can be incorrect, so it is better to be careful.
>
> Lukasz
>
>> On Mon, Jan 14, 2013 at 9:24 AM, Dorau, Lukasz <lukasz.dorau@intel.com>
>> wrote:
>> > On Monday, January 14, 2013 3:11 PM Dorau, Lukasz
>> <lukasz.dorau@intel.com> wrote:
>> >> On Monday, January 14, 2013 1:56 AM chris <tknchris@gmail.com> wrote:
>> >> > [292295.923942] bio: create slab <bio-1> at 1
>> >> > [292295.923965] md/raid:md126: not clean -- starting background
>> >> > reconstruction
>> >> > [292295.924000] md/raid:md126: device sdb operational as raid disk 2
>> >> > [292295.924005] md/raid:md126: device sdc operational as raid disk 1
>> >> > [292295.924009] md/raid:md126: device sde operational as raid disk 0
>> >> > [292295.925149] md/raid:md126: allocated 4250kB
>> >> > [292295.927268] md/raid:md126: cannot start dirty degraded array.
>> >>
>> >> Hi
>> >>
>> >> *Remember to backup the disks you have before trying the following! *
>> >>
>> >> You can try starting dirty degraded array using:
>> >> #  mdadm --assemble --force ....
>> >>
>> >
>> > I meant adding --force option to:
>> > # mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing
>> /dev/sdb /dev/sdd --raid-devices 4 --level=5
>> >
>> > Be very careful using "--force" option, because it can cause data corruption!
>> >
>> > Lukasz
>> >
>> >
>> >> See also the "Boot time assembly of degraded/dirty arrays" chapter in:
>> >> http://www.kernel.org/doc/Documentation/md.txt
>> >> (you can boot with option md-mod.start_dirty_degraded=1)
>> >>
>> >> Lukasz
>> >>
>> >>
>> >> > [292295.929666] RAID conf printout:
>> >> > [292295.929677]  --- level:5 rd:4 wd:3
>> >> > [292295.929683]  disk 0, o:1, dev:sde
>> >> > [292295.929688]  disk 1, o:1, dev:sdc
>> >> > [292295.929693]  disk 2, o:1, dev:sdb
>> >> > [292295.930898] md/raid:md126: failed to run raid set.
>> >> > [292295.930902] md: pers->run() failed ...
>> >> > [292295.931079] md: md126 stopped.
>> >> > [292295.931096] md: unbind<sdb>
>> >> > [292295.944228] md: export_rdev(sdb)
>> >> > [292295.944267] md: unbind<sdc>
>> >> > [292295.958126] md: export_rdev(sdc)
>> >> > [292295.958167] md: unbind<sde>
>> >> > [292295.970902] md: export_rdev(sde)
>> >> > [292296.219837] device-mapper: table: 252:1: raid45: unknown target type
>> >> > [292296.219845] device-mapper: ioctl: error adding target to table
>> >> > [292296.291542] device-mapper: table: 252:1: raid45: unknown target type
>> >> > [292296.291548] device-mapper: ioctl: error adding target to table
>> >> > [292296.310926] quiet_error: 1116 callbacks suppressed
>> >> > [292296.310934] Buffer I/O error on device dm-0, logical block
>> 3907022720
>> >> > [292296.310940] Buffer I/O error on device dm-0, logical block
>> 3907022721
>> >> > [292296.310944] Buffer I/O error on device dm-0, logical block
>> 3907022722
>> >> > [292296.310949] Buffer I/O error on device dm-0, logical block
>> 3907022723
>> >> > [292296.310953] Buffer I/O error on device dm-0, logical block
>> 3907022724
>> >> > [292296.310958] Buffer I/O error on device dm-0, logical block
>> 3907022725
>> >> > [292296.310962] Buffer I/O error on device dm-0, logical block
>> 3907022726
>> >> > [292296.310966] Buffer I/O error on device dm-0, logical block
>> 3907022727
>> >> > [292296.310973] Buffer I/O error on device dm-0, logical block
>> 3907022720
>> >> > [292296.310977] Buffer I/O error on device dm-0, logical block
>> 3907022721
>> >> > [292296.319968] device-mapper: table: 252:1: raid45: unknown target type
>> >> > [292296.319975] device-mapper: ioctl: error adding target to table
>> >> >
>> >> > Any ideas from here? Am I up the creek without a paddle? :(
>> >> >
>> >> > thanks to everyone for all your help so far
>> >> > chris
>> >> >
>> >> > On Sun, Jan 13, 2013 at 4:05 PM, Dan Williams <djbw@fb.com> wrote:
>> >> > >
>> >> > >
>> >> > > On 1/13/13 11:00 AM, "chris" <tknchris@gmail.com> wrote:
>> >> > >
>> >> > >>Neil/Dave,
>> >> > >>
>> >> > >>Is it not possible to create imsm container with missing disk?
>> >> > >>If not, Is there any way to recreate the array with all disks but
>> >> > >>prevent any kind of sync which may overwrite array data?
>> >> > >
>> >> > > The example was in that link I sent:
>> >> > >
>> >> > > mdadm --create /dev/md/imsm /dev/sd[bde] -e imsm
>> >> > > mdadm --create /dev/md/vol0 /dev/sde missing /dev/sdb /dev/sdd -n 4 -l
>> 5
>> >> > >
>> >> > > The first command marks all devices as spares.  The second creates the
>> >> > > degraded array.
>> >> > >
>> >> > > You probably want at least sdb and sdd in there since they have a copy of
>> >> > > the metadata.
>> >> > >
>> >> > > --
>> >> > > Dan
>> >> > >
>> >> > --
>> >> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>> >> > the body of a message to majordomo@vger.kernel.org
>> >> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-16 16:49                     ` chris
@ 2013-01-16 16:53                       ` chris
  2013-01-16 22:47                       ` Dan Williams
  1 sibling, 0 replies; 18+ messages in thread
From: chris @ 2013-01-16 16:53 UTC (permalink / raw)
  To: Dorau, Lukasz; +Cc: Neil Brown, Jiang, Dave, Linux-RAID, Dan Williams

Just an idea... if I plugged the disks into a machine with an IMSM OROM,
could I "create" the array again inside the OROM and try to bring it
online without wiping any data? I'm not too sure whether the OROM would let
me create a degraded array, and I'm just wondering if that would do
anything different from what I'm trying with mdadm.

chris

On Wed, Jan 16, 2013 at 11:49 AM, chris <tknchris@gmail.com> wrote:
> Hi, so after painfully imaging the 4x2TB disks I have tried the other two
> suggested permutations, as well as adding --force, with no change.
>
> Other 2 permutations:
> # mdadm --stop /dev/md/imsm
> # export IMSM_NO_PLATFORM=1
> # mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
> /dev/sde --raid-devices 4 --metadata=imsm
> mdadm: /dev/sdb appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdb.
> mdadm: /dev/sdc appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdc.
> mdadm: /dev/sdd appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdd.
> mdadm: /dev/sde appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sde.
> mdadm: size set to 1953511431K
> Continue creating array? y
> mdadm: container /dev/md/imsm prepared.
> # mdadm --create --verbose /dev/md/Volume0 /dev/sdc missing /dev/sdb
> /dev/sdd --raid-devices 4 --level=5
> mdadm: layout defaults to left-symmetric
> mdadm: layout defaults to left-symmetric
> mdadm: super1.x cannot open /dev/sdc: Device or resource busy
> mdadm: chunk size defaults to 128K
> mdadm: /dev/sdc appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdc but will be lost or
>        meaningless after creating array
> mdadm: /dev/sdb appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdb but will be lost or
>        meaningless after creating array
> mdadm: /dev/sdd appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdd but will be lost or
>        meaningless after creating array
> mdadm: size set to 1953511424K
> Continue creating array? y
> mdadm: Creating array inside imsm container /dev/md/imsm
> mdadm: failed to activate array.
>
> # mdadm --stop /dev/md/imsm
> # export IMSM_NO_PLATFORM=1
> # mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
> /dev/sde --raid-devices 4 --metadata=imsm
> mdadm: /dev/sdb appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdb.
> mdadm: /dev/sdc appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdc.
> mdadm: /dev/sdd appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdd.
> mdadm: /dev/sde appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sde.
> mdadm: size set to 1953511431K
> Continue creating array? y
> mdadm: container /dev/md/imsm prepared.
> # mdadm --create --verbose /dev/md/Volume0 missing /dev/sde /dev/sdb
> /dev/sdd --raid-devices 4 --level=5
> mdadm: layout defaults to left-symmetric
> mdadm: layout defaults to left-symmetric
> mdadm: super1.x cannot open /dev/sde: Device or resource busy
> mdadm: chunk size defaults to 128K
> mdadm: /dev/sde appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sde but will be lost or
>        meaningless after creating array
> mdadm: /dev/sdb appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdb but will be lost or
>        meaningless after creating array
> mdadm: /dev/sdd appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdd but will be lost or
>        meaningless after creating array
> mdadm: size set to 1953511424K
> Continue creating array? y
> mdadm: Creating array inside imsm container /dev/md/imsm
> mdadm: failed to activate array.
>
> Tried again with --force but same thing:
>
> # mdadm --stop /dev/md/imsm
> mdadm: stopped /dev/md/imsm
> # export IMSM_NO_PLATFORM=1
> # mdadm --create --verbose /dev/md/imsm /dev/sdb /dev/sdc /dev/sdd
> /dev/sde --raid-devices 4 --metadata=imsm
> mdadm: /dev/sdb appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdb.
> mdadm: /dev/sdc appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdc.
> mdadm: /dev/sdd appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sdd.
> mdadm: /dev/sde appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: metadata will over-write last partition on /dev/sde.
> mdadm: size set to 1953511431K
> Continue creating array? y
> mdadm: container /dev/md/imsm prepared.
> # mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing
> /dev/sdb /dev/sdd --raid-devices 4 --level=5
> mdadm: layout defaults to left-symmetric
> mdadm: layout defaults to left-symmetric
> mdadm: super1.x cannot open /dev/sdc: Device or resource busy
> mdadm: chunk size defaults to 128K
> mdadm: /dev/sdc appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdc but will be lost or
>        meaningless after creating array
> mdadm: /dev/sdb appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdb but will be lost or
>        meaningless after creating array
> mdadm: /dev/sdd appears to be part of a raid array:
>     level=container devices=0 ctime=Thu Jan  1 00:00:00 1970
> mdadm: partition table exists on /dev/sdd but will be lost or
>        meaningless after creating array
> mdadm: size set to 1953511424K
> Continue creating array? y
> mdadm: Creating array inside imsm container /dev/md/imsm
> mdadm: failed to activate array.
>
> dmesg shows:
> [522342.240901] md: md127 stopped.
> [522342.240925] md: unbind<sde>
> [522342.251221] md: export_rdev(sde)
> [522342.251457] md: unbind<sdd>
> [522342.267808] md: export_rdev(sdd)
> [522342.267999] md: unbind<sdc>
> [522342.281160] md: export_rdev(sdc)
> [522342.281309] md: unbind<sdb>
> [522342.291136] md: export_rdev(sdb)
> [522351.758217] md: bind<sdb>
> [522351.758409] md: bind<sdc>
> [522351.758552] md: bind<sdd>
> [522351.758690] md: bind<sde>
> [522368.121090] md: bind<sdc>
> [522368.122401] md: bind<sdb>
> [522368.122577] md: bind<sdd>
> [522368.147454] bio: create slab <bio-1> at 1
> [522368.147477] md/raid:md126: not clean -- starting background reconstruction
> [522368.147515] md/raid:md126: device sdd operational as raid disk 3
> [522368.147520] md/raid:md126: device sdb operational as raid disk 2
> [522368.147525] md/raid:md126: device sdc operational as raid disk 0
> [522368.148651] md/raid:md126: allocated 4250kB
> [522368.152966] md/raid:md126: cannot start dirty degraded array.
> [522368.155245] RAID conf printout:
> [522368.155259]  --- level:5 rd:4 wd:3
> [522368.155269]  disk 0, o:1, dev:sdc
> [522368.155275]  disk 2, o:1, dev:sdb
> [522368.155281]  disk 3, o:1, dev:sdd
> [522368.157095] md/raid:md126: failed to run raid set.
> [522368.157102] md: pers->run() failed ...
> [522368.157418] md: md126 stopped.
> [522368.157435] md: unbind<sdd>
> [522368.167883] md: export_rdev(sdd)
> [522368.167922] md: unbind<sdb>
> [522368.181259] md: export_rdev(sdb)
> [522368.181302] md: unbind<sdc>
> [522368.194576] md: export_rdev(sdc)
> [522368.701814] device-mapper: table: 252:1: raid45: unknown target type
> [522368.701820] device-mapper: ioctl: error adding target to table
> [522368.775341] device-mapper: table: 252:1: raid45: unknown target type
> [522368.775347] device-mapper: ioctl: error adding target to table
> [522368.876314] quiet_error: 1116 callbacks suppressed
> [522368.876324] Buffer I/O error on device dm-0, logical block 3907022720
> [522368.876331] Buffer I/O error on device dm-0, logical block 3907022721
> [522368.876335] Buffer I/O error on device dm-0, logical block 3907022722
> [522368.876340] Buffer I/O error on device dm-0, logical block 3907022723
> [522368.876344] Buffer I/O error on device dm-0, logical block 3907022724
> [522368.876348] Buffer I/O error on device dm-0, logical block 3907022725
> [522368.876352] Buffer I/O error on device dm-0, logical block 3907022726
> [522368.876356] Buffer I/O error on device dm-0, logical block 3907022727
> [522368.876362] Buffer I/O error on device dm-0, logical block 3907022720
> [522368.876366] Buffer I/O error on device dm-0, logical block 3907022721
> [522368.883428] device-mapper: table: 252:1: raid45: unknown target type
> [522368.883434] device-mapper: ioctl: error adding target to table
> [522371.066343] device-mapper: table: 252:1: raid45: unknown target type
> [522371.066350] device-mapper: ioctl: error adding target to table
>
> any idea why it won't assemble? I thought even if data was corrupt I
> should be able to force it to assemble and look at it to determine if
> it is corrupt or intact
>
> thanks
> chris
>
> On Tue, Jan 15, 2013 at 5:25 AM, Dorau, Lukasz <lukasz.dorau@intel.com> wrote:
>> On Monday, January 14, 2013 4:25 PM chris <tknchris@gmail.com> wrote:
>>> Ok thanks for the tips, I am imaging the disks now and will try after
>>> that is done. Just out of curiosity what could become corrupted by
>>> forcing the assemble? I was under the impression that as long as I
>>> have one member missing that the only thing that would be touched is
>>> metadata, is that right?
>>>
>> Yes, that is right. I meant that using --force option it may be possible to assemble an array in the wrong way and data can be incorrect, so it is better to be careful.
>>
>> Lukasz
>>
>>> On Mon, Jan 14, 2013 at 9:24 AM, Dorau, Lukasz <lukasz.dorau@intel.com>
>>> wrote:
>>> > On Monday, January 14, 2013 3:11 PM Dorau, Lukasz
>>> <lukasz.dorau@intel.com> wrote:
>>> >> On Monday, January 14, 2013 1:56 AM chris <tknchris@gmail.com> wrote:
>>> >> > [292295.923942] bio: create slab <bio-1> at 1
>>> >> > [292295.923965] md/raid:md126: not clean -- starting background
>>> >> > reconstruction
>>> >> > [292295.924000] md/raid:md126: device sdb operational as raid disk 2
>>> >> > [292295.924005] md/raid:md126: device sdc operational as raid disk 1
>>> >> > [292295.924009] md/raid:md126: device sde operational as raid disk 0
>>> >> > [292295.925149] md/raid:md126: allocated 4250kB
>>> >> > [292295.927268] md/raid:md126: cannot start dirty degraded array.
>>> >>
>>> >> Hi
>>> >>
>>> >> *Remember to backup the disks you have before trying the following! *
>>> >>
>>> >> You can try starting dirty degraded array using:
>>> >> #  mdadm --assemble --force ....
>>> >>
>>> >
>>> > I meant adding --force option to:
>>> > # mdadm --create --verbose --force /dev/md/Volume0 /dev/sdc missing
>>> /dev/sdb /dev/sdd --raid-devices 4 --level=5
>>> >
>>> > Be very careful using "--force" option, because it can cause data corruption!
>>> >
>>> > Lukasz
>>> >
>>> >
>>> >> See also the "Boot time assembly of degraded/dirty arrays" chapter in:
>>> >> http://www.kernel.org/doc/Documentation/md.txt
>>> >> (you can boot with option md-mod.start_dirty_degraded=1)
>>> >>
>>> >> Lukasz
>>> >>
>>> >>
>>> >> > [292295.929666] RAID conf printout:
>>> >> > [292295.929677]  --- level:5 rd:4 wd:3
>>> >> > [292295.929683]  disk 0, o:1, dev:sde
>>> >> > [292295.929688]  disk 1, o:1, dev:sdc
>>> >> > [292295.929693]  disk 2, o:1, dev:sdb
>>> >> > [292295.930898] md/raid:md126: failed to run raid set.
>>> >> > [292295.930902] md: pers->run() failed ...
>>> >> > [292295.931079] md: md126 stopped.
>>> >> > [292295.931096] md: unbind<sdb>
>>> >> > [292295.944228] md: export_rdev(sdb)
>>> >> > [292295.944267] md: unbind<sdc>
>>> >> > [292295.958126] md: export_rdev(sdc)
>>> >> > [292295.958167] md: unbind<sde>
>>> >> > [292295.970902] md: export_rdev(sde)
>>> >> > [292296.219837] device-mapper: table: 252:1: raid45: unknown target type
>>> >> > [292296.219845] device-mapper: ioctl: error adding target to table
>>> >> > [292296.291542] device-mapper: table: 252:1: raid45: unknown target type
>>> >> > [292296.291548] device-mapper: ioctl: error adding target to table
>>> >> > [292296.310926] quiet_error: 1116 callbacks suppressed
>>> >> > [292296.310934] Buffer I/O error on device dm-0, logical block
>>> 3907022720
>>> >> > [292296.310940] Buffer I/O error on device dm-0, logical block
>>> 3907022721
>>> >> > [292296.310944] Buffer I/O error on device dm-0, logical block
>>> 3907022722
>>> >> > [292296.310949] Buffer I/O error on device dm-0, logical block
>>> 3907022723
>>> >> > [292296.310953] Buffer I/O error on device dm-0, logical block
>>> 3907022724
>>> >> > [292296.310958] Buffer I/O error on device dm-0, logical block
>>> 3907022725
>>> >> > [292296.310962] Buffer I/O error on device dm-0, logical block
>>> 3907022726
>>> >> > [292296.310966] Buffer I/O error on device dm-0, logical block
>>> 3907022727
>>> >> > [292296.310973] Buffer I/O error on device dm-0, logical block
>>> 3907022720
>>> >> > [292296.310977] Buffer I/O error on device dm-0, logical block
>>> 3907022721
>>> >> > [292296.319968] device-mapper: table: 252:1: raid45: unknown target type
>>> >> > [292296.319975] device-mapper: ioctl: error adding target to table
>>> >> >
>>> >> > Any ideas from here? Am I up the creek without a paddle? :(
>>> >> >
>>> >> > thanks to everyone for all your help so far
>>> >> > chris
>>> >> >
>>> >> > On Sun, Jan 13, 2013 at 4:05 PM, Dan Williams <djbw@fb.com> wrote:
>>> >> > >
>>> >> > >
>>> >> > > On 1/13/13 11:00 AM, "chris" <tknchris@gmail.com> wrote:
>>> >> > >
>>> >> > >>Neil/Dave,
>>> >> > >>
>>> >> > >>Is it not possible to create imsm container with missing disk?
>>> >> > >>If not, Is there any way to recreate the array with all disks but
>>> >> > >>prevent any kind of sync which may overwrite array data?
>>> >> > >
>>> >> > > The example was in that link I sent:
>>> >> > >
>>> >> > > mdadm --create /dev/md/imsm /dev/sd[bde] -e imsm
>>> >> > > mdadm --create /dev/md/vol0 /dev/sde missing /dev/sdb /dev/sdd -n 4 -l
>>> 5
>>> >> > >
>>> >> > > The first command marks all devices as spares.  The second creates the
>>> >> > > degraded array.
>>> >> > >
>>> >> > > You probably want at least sdb and sdd in there since they have a copy of
>>> >> > > the metadata.
>>> >> > >
>>> >> > > --
>>> >> > > Dan
>>> >> > >
>>> >> > --
>>> >> > To unsubscribe from this list: send the line "unsubscribe linux-raid" in
>>> >> > the body of a message to majordomo@vger.kernel.org
>>> >> > More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-16 16:49                     ` chris
  2013-01-16 16:53                       ` chris
@ 2013-01-16 22:47                       ` Dan Williams
  2013-01-17 15:12                         ` Charles Polisher
  2013-01-17 16:07                         ` chris
  1 sibling, 2 replies; 18+ messages in thread
From: Dan Williams @ 2013-01-16 22:47 UTC (permalink / raw)
  To: chris; +Cc: Dorau, Lukasz, Neil Brown, Jiang, Dave, Linux-RAID

On Wed, Jan 16, 2013 at 8:49 AM, chris <tknchris@gmail.com> wrote:
> any idea why it won't assemble? I thought even if data was corrupt I
> should be able to force it to assemble and look at it to determine if
> it is corrupt or intact

A bug in mdadm was introduced shortly after the "missing" support was
added.  Looks like you need 3.2.6.
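
(For anyone who lands here with the same symptom, a quick way to check the installed version and try a newer mdadm without replacing the distribution package. The tarball URL follows the usual kernel.org layout and is an assumption, not something taken from this thread:)

# mdadm --version
# wget https://www.kernel.org/pub/linux/utils/raid/mdadm/mdadm-3.2.6.tar.gz
# tar xf mdadm-3.2.6.tar.gz && cd mdadm-3.2.6 && make
# ./mdadm --version

The container and volume creation can then be repeated with the freshly built ./mdadm binary.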

--
Dan

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-16 22:47                       ` Dan Williams
@ 2013-01-17 15:12                         ` Charles Polisher
  2013-01-17 16:07                         ` chris
  1 sibling, 0 replies; 18+ messages in thread
From: Charles Polisher @ 2013-01-17 15:12 UTC (permalink / raw)
  To: Dan Williams; +Cc: chris, Dorau, Lukasz, Neil Brown, Jiang, Dave, Linux-RAID

Dan Williams wrote:
> chris <tknchris@gmail.com> wrote:
> > any idea why it won't assemble? I thought even if data was corrupt I
> > should be able to force it to assemble and look at it to determine if
> > it is corrupt or intact
> 
> A bug in mdadm was introduced shortly after the "missing" support was
> added.  Looks like you need 3.2.6.

Probably unrelated, but in case you were unaware, ICHxR chipsets
have snared me in the past with bugs that were later corrected
by firmware updates from Intel. If I remember correctly from
upthread, you were at one point attempting to run, or running, this
array with such hardware. If this is still an option, you might
check your firmware version and the release notes for any more
recent release. This won't fix software RAID but it could
conceivably coax the ICH10R into assembling the array.

-- 
Charles


^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: Recovery/Access of imsm raid via mdadm?
  2013-01-16 22:47                       ` Dan Williams
  2013-01-17 15:12                         ` Charles Polisher
@ 2013-01-17 16:07                         ` chris
  1 sibling, 0 replies; 18+ messages in thread
From: chris @ 2013-01-17 16:07 UTC (permalink / raw)
  To: Dan Williams; +Cc: Dorau, Lukasz, Neil Brown, Jiang, Dave, Linux-RAID

Dan,

You hit the nail on the head: mdadm 3.2.6 is able to assemble and get
the arrays online. Now I am just going through the permutations to try to
recover the data, but at this point the recovery is routine and no longer
Intel-specific.

Thanks to everyone who helped!

chris

On Wed, Jan 16, 2013 at 5:47 PM, Dan Williams <djbw@fb.com> wrote:
> On Wed, Jan 16, 2013 at 8:49 AM, chris <tknchris@gmail.com> wrote:
>> any idea why it won't assemble? I thought even if data was corrupt I
>> should be able to force it to assemble and look at it to determine if
>> it is corrupt or intact
>
> A bug in mdadm was introduced shortly after the "missing" support was
> added.  Looks like you need 3.2.6.
>
> --
> Dan

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2013-01-17 16:07 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-01-10 16:23 Recovery/Access of imsm raid via mdadm? chris
2013-01-10 17:09 ` Dave Jiang
2013-01-10 20:19   ` chris
2013-01-11  1:42     ` Dan Williams
2013-01-11 17:53       ` chris
2013-01-13 19:00         ` chris
2013-01-13 21:05           ` Dan Williams
2013-01-14  0:56             ` chris
2013-01-14 12:36               ` Dorau, Lukasz
2013-01-14 14:10               ` Dorau, Lukasz
2013-01-14 14:24               ` Dorau, Lukasz
2013-01-14 15:25                 ` chris
2013-01-15 10:25                   ` Dorau, Lukasz
2013-01-16 16:49                     ` chris
2013-01-16 16:53                       ` chris
2013-01-16 22:47                       ` Dan Williams
2013-01-17 15:12                         ` Charles Polisher
2013-01-17 16:07                         ` chris

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).