linux-raid.vger.kernel.org archive mirror
* Broken array, trying to assemble enough to copy data off
@ 2013-10-09 23:25 Digimer
  2013-10-09 23:41 ` Digimer
  0 siblings, 1 reply; 7+ messages in thread
From: Digimer @ 2013-10-09 23:25 UTC (permalink / raw)
  To: linux-raid

Hi all,

  I've got a CentOS 6.4 box with a 4-drive RAID level 5 array that died
while I was away (so I didn't see the error(s) on screen). I took a
fresh drive, did a new minimal install. I then plugged in the four
drives from the dead box and tried to re-assemble the array. It didn't
work, so here I am. :) Note that I can't get to the machine's dmesg or
syslogs as they're on the failed array.

  I was following https://raid.wiki.kernel.org/index.php/RAID_Recovery
and stopped when I hit "Restore array by recreating". I tried some steps
suggested by folks in #centos on freenode, but had no more luck.

  Below is the output of 'mdadm --examine ...'. I'm trying to get just a
few files off. Ironically, it was a backup server, but there were a
couple files on there that I don't have elsewhere anymore. It's not the
end of the world if I don't get it back, but it would certainly save me
a lot of hassle to recover some or all of it.

  Some details;

  When I try;

====
[root@an-to-nas01 ~]# mdadm --assemble --run /dev/md1 /dev/sd[bcde]2
mdadm: ignoring /dev/sde2 as it reports /dev/sdb2 as failed
mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
mdadm: Not enough devices to start the array.

[root@an-to-nas01 ~]# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : inactive sdc2[0] sdd2[4](S) sdb2[2]
      4393872384 blocks super 1.1

unused devices: <none>
====

  Syslog shows;

====
Oct 10 03:19:01 an-to-nas01 kernel: md: md1 stopped.
Oct 10 03:19:01 an-to-nas01 kernel: md: bind<sdb2>
Oct 10 03:19:01 an-to-nas01 kernel: md: bind<sdd2>
Oct 10 03:19:01 an-to-nas01 kernel: md: bind<sdc2>
Oct 10 03:19:01 an-to-nas01 kernel: bio: create slab <bio-1> at 1
Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: device sdc2 operational
as raid disk 0
Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: device sdb2 operational
as raid disk 2
Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: allocated 4314kB
Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: not enough operational
devices (2/4 failed)
Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: failed to run raid set.
Oct 10 03:19:01 an-to-nas01 kernel: md: pers->run() failed ...
====

  As you can see, for some odd reason mdadm says that sde2 thinks sdb2
has failed, and so it drops sde2 from the assembly.

====
[root@an-to-nas01 ~]# mdadm --examine /dev/sd[b-e]2
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
           Name : ikebukuro.alteeve.ca:1
  Creation Time : Sat Jun 16 14:01:41 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
     Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
  Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 127735bd:0ba713c2:57900a47:3ffe04e3

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Sep 13 04:00:39 2013
       Checksum : 2c41412c - correct
         Events : 2376224

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 2
   Array State : AAAA ('A' == active, '.' == missing)
/dev/sdc2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
           Name : ikebukuro.alteeve.ca:1
  Creation Time : Sat Jun 16 14:01:41 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
     Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
  Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : 83e37849:5d985457:acf0e3b7:b7207a73

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Sep 13 04:01:13 2013
       Checksum : 4f1521d7 - correct
         Events : 2376224

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 0
   Array State : AAA. ('A' == active, '.' == missing)
/dev/sdd2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
           Name : ikebukuro.alteeve.ca:1
  Creation Time : Sat Jun 16 14:01:41 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
     Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
  Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : a2dac6b5:b1dc31aa:84ebd704:53bf55d9

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Sep 13 04:01:13 2013
       Checksum : c110f6be - correct
         Events : 2376224

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : spare
   Array State : AA.. ('A' == active, '.' == missing)
/dev/sde2:
          Magic : a92b4efc
        Version : 1.1
    Feature Map : 0x1
     Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
           Name : ikebukuro.alteeve.ca:1
  Creation Time : Sat Jun 16 14:01:41 2012
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
     Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
  Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
    Data Offset : 2048 sectors
   Super Offset : 0 sectors
          State : clean
    Device UUID : faa31bd7:f9c11afb:650fc564:f50bb8f7

Internal Bitmap : 8 sectors from superblock
    Update Time : Fri Sep 13 04:01:13 2013
       Checksum : b19e15df - correct
         Events : 2376224

         Layout : left-symmetric
     Chunk Size : 512K

   Device Role : Active device 1
   Array State : AA.. ('A' == active, '.' == missing)
====

  Any help is appreciated!

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


* Re: Broken array, trying to assemble enough to copy data off
  2013-10-09 23:25 Broken array, trying to assemble enough to copy data off Digimer
@ 2013-10-09 23:41 ` Digimer
  2013-10-10 12:44   ` Phil Turmel
  0 siblings, 1 reply; 7+ messages in thread
From: Digimer @ 2013-10-09 23:41 UTC (permalink / raw)
  To: linux-raid

I forgot to add the smartctl output;

/dev/sdb: http://fpaste.org/45627/13813614/
/dev/sdc: http://fpaste.org/45628/38136150/
/dev/sdd: http://fpaste.org/45630/36151813/
/dev/sde: http://fpaste.org/45632/36154613/

(sorry for the top post, worried this would have gotten lost below)

digimer

On 09/10/13 19:25, Digimer wrote:
> Hi all,
> 
>   I've got a CentOS 6.4 box with a 4-drive RAID level 5 array that died
> while I was away (so I didn't see the error(s) on screen). I took a
> fresh drive, did a new minimal install. I then plugged in the four
> drives from the dead box and tried to re-assemble the array. It didn't
> work, so here I am. :) Note that I can't get to the machine's dmesg or
> syslogs as they're on the failed array.
> 
>   I was following https://raid.wiki.kernel.org/index.php/RAID_Recovery
> and stopped when I hit "Restore array by recreating". I tried some steps
> suggested by folks in #centos on freenode, but had no more luck.
> 
>   Below is the output of 'mdadm --examine ...'. I'm trying to get just a
> few files off. Ironically, it was a backup server, but there were a
> couple files on there that I don't have elsewhere anymore. It's not the
> end of the world if I don't get it back, but it would certainly save me
> a lot of hassle to recover some or all of it.
> 
>   Some details;
> 
>   When I try;
> 
> ====
> [root@an-to-nas01 ~]# mdadm --assemble --run /dev/md1 /dev/sd[bcde]2
> mdadm: ignoring /dev/sde2 as it reports /dev/sdb2 as failed
> mdadm: failed to RUN_ARRAY /dev/md1: Input/output error
> mdadm: Not enough devices to start the array.
> 
> [root@an-to-nas01 ~]# cat /proc/mdstat
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md1 : inactive sdc2[0] sdd2[4](S) sdb2[2]
>       4393872384 blocks super 1.1
> 
> unused devices: <none>
> ====
> 
>   Syslog shows;
> 
> ====
> Oct 10 03:19:01 an-to-nas01 kernel: md: md1 stopped.
> Oct 10 03:19:01 an-to-nas01 kernel: md: bind<sdb2>
> Oct 10 03:19:01 an-to-nas01 kernel: md: bind<sdd2>
> Oct 10 03:19:01 an-to-nas01 kernel: md: bind<sdc2>
> Oct 10 03:19:01 an-to-nas01 kernel: bio: create slab <bio-1> at 1
> Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: device sdc2 operational
> as raid disk 0
> Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: device sdb2 operational
> as raid disk 2
> Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: allocated 4314kB
> Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: not enough operational
> devices (2/4 failed)
> Oct 10 03:19:01 an-to-nas01 kernel: md/raid:md1: failed to run raid set.
> Oct 10 03:19:01 an-to-nas01 kernel: md: pers->run() failed ...
> ====
> 
>   As you can see, for some odd reason mdadm says that sde2 thinks sdb2
> has failed, and so it drops sde2 from the assembly.
> 
> ====
> [root@an-to-nas01 ~]# mdadm --examine /dev/sd[b-e]2
> /dev/sdb2:
>           Magic : a92b4efc
>         Version : 1.1
>     Feature Map : 0x1
>      Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
>            Name : ikebukuro.alteeve.ca:1
>   Creation Time : Sat Jun 16 14:01:41 2012
>      Raid Level : raid5
>    Raid Devices : 4
> 
>  Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
>      Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
>   Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 0 sectors
>           State : clean
>     Device UUID : 127735bd:0ba713c2:57900a47:3ffe04e3
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Sep 13 04:00:39 2013
>        Checksum : 2c41412c - correct
>          Events : 2376224
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 2
>    Array State : AAAA ('A' == active, '.' == missing)
> /dev/sdc2:
>           Magic : a92b4efc
>         Version : 1.1
>     Feature Map : 0x1
>      Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
>            Name : ikebukuro.alteeve.ca:1
>   Creation Time : Sat Jun 16 14:01:41 2012
>      Raid Level : raid5
>    Raid Devices : 4
> 
>  Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
>      Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
>   Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 0 sectors
>           State : clean
>     Device UUID : 83e37849:5d985457:acf0e3b7:b7207a73
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Sep 13 04:01:13 2013
>        Checksum : 4f1521d7 - correct
>          Events : 2376224
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 0
>    Array State : AAA. ('A' == active, '.' == missing)
> /dev/sdd2:
>           Magic : a92b4efc
>         Version : 1.1
>     Feature Map : 0x1
>      Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
>            Name : ikebukuro.alteeve.ca:1
>   Creation Time : Sat Jun 16 14:01:41 2012
>      Raid Level : raid5
>    Raid Devices : 4
> 
>  Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
>      Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
>   Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 0 sectors
>           State : clean
>     Device UUID : a2dac6b5:b1dc31aa:84ebd704:53bf55d9
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Sep 13 04:01:13 2013
>        Checksum : c110f6be - correct
>          Events : 2376224
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : spare
>    Array State : AA.. ('A' == active, '.' == missing)
> /dev/sde2:
>           Magic : a92b4efc
>         Version : 1.1
>     Feature Map : 0x1
>      Array UUID : 8be7648d:09648bf3:f406b7fc:5ebd6b44
>            Name : ikebukuro.alteeve.ca:1
>   Creation Time : Sat Jun 16 14:01:41 2012
>      Raid Level : raid5
>    Raid Devices : 4
> 
>  Avail Dev Size : 2929248256 (1396.77 GiB 1499.78 GB)
>      Array Size : 4393870848 (4190.32 GiB 4499.32 GB)
>   Used Dev Size : 2929247232 (1396.77 GiB 1499.77 GB)
>     Data Offset : 2048 sectors
>    Super Offset : 0 sectors
>           State : clean
>     Device UUID : faa31bd7:f9c11afb:650fc564:f50bb8f7
> 
> Internal Bitmap : 8 sectors from superblock
>     Update Time : Fri Sep 13 04:01:13 2013
>        Checksum : b19e15df - correct
>          Events : 2376224
> 
>          Layout : left-symmetric
>      Chunk Size : 512K
> 
>    Device Role : Active device 1
>    Array State : AA.. ('A' == active, '.' == missing)
> ====
> 
>   Any help is appreciated!
> 


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


* Re: Broken array, trying to assemble enough to copy data off
  2013-10-09 23:41 ` Digimer
@ 2013-10-10 12:44   ` Phil Turmel
  2013-10-10 16:20     ` Digimer
  0 siblings, 1 reply; 7+ messages in thread
From: Phil Turmel @ 2013-10-10 12:44 UTC (permalink / raw)
  To: Digimer, linux-raid

Good morning,

On 10/09/2013 07:41 PM, Digimer wrote:
> I forgot to add the smartctl output;
> 
> /dev/sdb: http://fpaste.org/45627/13813614/
> /dev/sdc: http://fpaste.org/45628/38136150/
> /dev/sdd: http://fpaste.org/45630/36151813/
> /dev/sde: http://fpaste.org/45632/36154613/
> 
> (sorry for the top post, worried this would have gotten lost below)

[You could/should have just trimmed the material below.]

Anyways, excellent report.

According to the mdadm -E data, you should only need to perform a forced
assembly, like so:

mdadm --stop /dev/md1
mdadm --assemble --force /dev/md1 /dev/sd[bcde]2

Your drives all have the same event counts, suggesting that they all
died within milliseconds of each other.  One of them lived long enough
to record another's failure, but not long enough to bump the event
count.  The dead machine almost certainly suffered a catastrophic
hardware failure.
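
If you want to double-check that before forcing anything, a quick
one-liner like this (plain mdadm plus grep, nothing exotic) pulls the
relevant fields out of all four members at once:

mdadm --examine /dev/sd[b-e]2 | \
	grep -E '^/dev/|Events|Update Time|Device Role|Array State'

A member whose event count lagged well behind the others would be the
one to leave out of a forced assembly; yours all match.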

Presuming the forced assembly works, you should plan on tossing these
drives after you get your data... they have dangerously high
reallocation counts and cannot be trusted.  (Fairly typical for
consumer drives approaching 30k hours.)
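
If you want to eyeball that yourself, something like this (assuming
smartmontools is installed; run it against each whole-disk device in
turn) should do it:

smartctl -A /dev/sdb | grep -E 'Reallocated|Pending|Power_On_Hours'

Non-zero reallocated or pending sector counts on drives this old are a
good reason to retire them.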

HTH,

Phil


* Re: Broken array, trying to assemble enough to copy data off
  2013-10-10 12:44   ` Phil Turmel
@ 2013-10-10 16:20     ` Digimer
  2013-10-10 16:36       ` Phil Turmel
  0 siblings, 1 reply; 7+ messages in thread
From: Digimer @ 2013-10-10 16:20 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 10/10/13 08:44, Phil Turmel wrote:
> Good morning,
> 
> On 10/09/2013 07:41 PM, Digimer wrote:
>> I forgot to add the smartctl output;
>>
>> /dev/sdb: http://fpaste.org/45627/13813614/
>> /dev/sdc: http://fpaste.org/45628/38136150/
>> /dev/sdd: http://fpaste.org/45630/36151813/
>> /dev/sde: http://fpaste.org/45632/36154613/
>>
>> (sorry for the top post, worried this would have gotten lost below)
> 
> [You could/should have just trimmed the material below.]
> 
> Anyways, excellent report.
> 
> According to the mdadm -E data, you should only need to perform a forced
> assembly, like so:
> 
> mdadm --stop /dev/md1
> mdadm --assemble --force /dev/md1 /dev/sd[bcde]2
> 
> Your drives all have the same event counts, suggesting that they all
> died within milliseconds of each other.  One of them lived long enough
> to record another's failure, but not long enough to bump the event
> count.  The dead machine almost certainly suffered a catastrophic
> hardware failure.
> 
> Presuming the forced assembly works, you should plan on tossing these
> drives after you get your data... they have dangerously high
> reallocation counts and cannot be trusted.  (Fairly typical for
> consumer drives approaching 30k hours.)
> 
> HTH,
> 
> Phil

Ya, I have no plan at all to use these drives or the server they came
from anymore. In fact, they've already been replaced. :)

I tried the --assemble --force (and --assemble --force --run) without
success. It fails saying that sde2 thinks sdb2 has failed, leaving two
dead members. If I try to start with just sd[bcd], it says that it has
two drives and one spare, so still refuses to start.

Any other options/ideas? I'm not in any rush, so I am happy to test things.

Cheers!

digimer

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


* Re: Broken array, trying to assemble enough to copy data off
  2013-10-10 16:20     ` Digimer
@ 2013-10-10 16:36       ` Phil Turmel
  2013-10-10 16:54         ` Digimer
  0 siblings, 1 reply; 7+ messages in thread
From: Phil Turmel @ 2013-10-10 16:36 UTC (permalink / raw)
  To: Digimer; +Cc: linux-raid

On 10/10/2013 12:20 PM, Digimer wrote:
>> Phil
> 
> Ya, I have no plan at all to use these drives or the server they came
> from anymore. In fact, they've already been replaced. :)

That's good.

> I tried the --assemble --force (and --assemble --force --run) without
> success. It fails saying that sde2 thinks sdb2 has failed, leaving two
> dead members. If I try to start with just sd[bcd], it says that it has
> two drives and one spare, so still refuses to start.

Ok.

> Any other options/ideas? I'm not in any rush, so I am happy to test things.

Well, you have rock-solid knowledge of the device order and array
parameters.  So a --create operation is the next step.  Given that sdd2
is marked as spare, and therefore of unknown value, I'd leave it out.

mdadm --stop /dev/md1
mdadm --create --level=5 -n 4 --chunk=512 /dev/md1 \
	/dev/sd{c,e,b}2 missing

(--assume-clean isn't needed when creating a degraded raid5)

The brace syntax is needed, not brackets, as the order matters.

After creation, use mdadm -E to verify the Data Offset is 2048.  If not,
get a new version of mdadm that lets you specify it.

Only after that should you use "fsck -n" to verify your filesystem and
mount it.
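
Put together, a sketch of the whole sequence might look like this (the
--metadata=1.1 just matches your existing superblocks; the read-only
mount and the /mnt/recovery mount point are only suggestions, and I'm
assuming the filesystem sits directly on md1 rather than under LVM):

mdadm --stop /dev/md1
mdadm --create /dev/md1 --metadata=1.1 --level=5 -n 4 --chunk=512 \
	/dev/sd{c,e,b}2 missing
mdadm -E /dev/sdc2 | grep 'Data Offset'    # expect 2048 sectors
fsck -n /dev/md1
mount -o ro /dev/md1 /mnt/recovery

If the data offset comes out wrong, stop the array again before
touching anything else.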

HTH,

Phil


* Re: Broken array, trying to assemble enough to copy data off
  2013-10-10 16:36       ` Phil Turmel
@ 2013-10-10 16:54         ` Digimer
  2013-10-10 17:58           ` Phil Turmel
  0 siblings, 1 reply; 7+ messages in thread
From: Digimer @ 2013-10-10 16:54 UTC (permalink / raw)
  To: Phil Turmel; +Cc: linux-raid

On 10/10/13 12:36, Phil Turmel wrote:
> On 10/10/2013 12:20 PM, Digimer wrote:
>>> Phil
>>
>> Ya, I have no plan at all to use these drives or the server they came
>> from anymore. In fact, they've already been replaced. :)
> 
> That's good.
> 
>> I tried the --assemble --force (and --assemble --force --run) without
>> success. It fails saying that sde2 thinks sdb2 has failed, leaving two
>> dead members. If I try to start with just sd[bcd], it says that it has
>> two drives and one spare, so still refuses to start.
> 
> Ok.
> 
>> Any other options/ideas? I'm not in any rush, so I am happy to test things.
> 
> Well, you have rock-solid knowledge of the device order and array
> parameters.  So a --create operation is the next step.  Given that sdd2
> is marked as spare, and therefore of unknown value, I'd leave it out.
> 
> mdadm --stop /dev/md1
> mdadm --create --level=5 -n 4 --chunk=512 /dev/md1 \
> 	/dev/sd{c,e,b}2 missing
> 
> (--assume-clean isn't needed when creating a degraded raid5)
> 
> The brace syntax is needed, not brackets, as the order matters.
> 
> After creation, use mdadm -E to verify the Data Offset is 2048.  If not,
> get a new version of mdadm that lets you specify it.
> 
> Only after that should you use "fsck -n" to verify your filesystem and
> mount it.
> 
> HTH,
> 
> Phil

What if I don't have rock-solid knowledge? Is there a way to connect the
four drives and query them to determine which is which? I was using the
new system (minus the new drives) plus a spare new 500GB drive with the
fresh CentOS install to do the testing yesterday. I am pretty sure I can
redo the cabling to match, but I always prefer safe over sorry. :)

Thanks very much for your help so far!

digimer

-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?


* Re: Broken array, trying to assemble enough to copy data off
  2013-10-10 16:54         ` Digimer
@ 2013-10-10 17:58           ` Phil Turmel
  0 siblings, 0 replies; 7+ messages in thread
From: Phil Turmel @ 2013-10-10 17:58 UTC (permalink / raw)
  To: Digimer; +Cc: linux-raid

On 10/10/2013 12:54 PM, Digimer wrote:
>> Phil
> 
> What if I don't have rock-solid knowledge? Is there a way to connect the
> four drives and query them to determine which is which? I was using the
> new system (minus the new drives) plus a spare new 500GB drive with the
> fresh CentOS install to do the testing yesterday. I am pretty sure I can
> redo the cabling to match, but I always prefer safe over sorry. :)

Your mdadm -E reports clearly identified the order for the device names
at that moment.  You can replug drives and query them again if you like.
 Look for "Device Role :" in the output of "mdadm -E".  The numbering
starts with zero.
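
If it helps, a small loop (assuming smartmontools is installed, so you
can tie each role to a physical serial number) would be something like:

for d in /dev/sd[b-e]2; do
	echo "== $d"
	mdadm -E "$d" | grep -E 'Device Role|Device UUID'
	smartctl -i "${d%2}" | grep -i serial
done

Match the serial numbers against the labels on the drives and you know
exactly which disk holds which role, no matter how they end up cabled.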

> Thanks very much for your help so far!

You are welcome.

Phil

