linux-bcache.vger.kernel.org archive mirror
* Recover filenames from failed RAID0
@ 2016-07-10 23:03 Michel Dubois
  2016-07-17 19:10 ` Michel Dubois
  0 siblings, 1 reply; 4+ messages in thread
From: Michel Dubois @ 2016-07-10 23:03 UTC
  To: linux-raid
  Cc: Jens Axboe, Keith Busch, dm-devel, Martin K. Petersen,
	Ingo Molnar, Peter Zijlstra, Jiri Kosina, Ming Lei, NeilBrown,
	linux-kernel, linux-block, Takashi Iwai, linux-bcache, Zheng Liu,
	Mike Snitzer, Alasdair Kergon, Lars Ellenberg, Shaohua Li,
	Kent Overstreet, Kirill A. Shutemov, Roland Kammerer


Dear linux-raid mailing list,

I have a RAID0 array of four 3TB disks that failed on the "third" disk.

I am aware that RAID0 offers no redundancy, but I would like to recover the
filenames from that array. Recovering some of the data as well would be a bonus.

Below you'll find the output of the following commands:
 mdadm --examine /dev/sd[abcd]1
 fdisk -l

where sda1, sdb1, sdc1 and sdd1 should be the four RAID member devices.

What could be my next step?
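
Would imaging the drives read-only first, along these lines, be a sensible
starting point? (A sketch only; I have not run these commands, and
/mnt/backup is just a placeholder for a scratch disk with enough space.)

 ddrescue -n /dev/sda /mnt/backup/sda.img /mnt/backup/sda.map
 ddrescue -n /dev/sdb /mnt/backup/sdb.img /mnt/backup/sdb.map
 ddrescue -n /dev/sdc /mnt/backup/sdc.img /mnt/backup/sdc.map
 ddrescue -n /dev/sdd /mnt/backup/sdd.img /mnt/backup/sdd.map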

Thank you for your time.

Michel Dubois

======================
mdadm --examine /dev/sd[abcd]1
/dev/sda1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 7d247a6e:7b5d46c8:f52d9c89:db304b21
  Creation Time : Mon Apr 23 19:55:36 2012
     Raid Level : raid1
  Used Dev Size : 20980800 (20.01 GiB 21.48 GB)
     Array Size : 20980800 (20.01 GiB 21.48 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Mon Jun 27 21:12:23 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 1a57db60 - correct
         Events : 164275


      Number   Major   Minor   RaidDevice State
this     0       8        1        0      active sync   /dev/sda1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       0        0        2      faulty removed
   3     3       8       33        3      active sync   /dev/sdc1
/dev/sdb1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 7d247a6e:7b5d46c8:f52d9c89:db304b21
  Creation Time : Mon Apr 23 19:55:36 2012
     Raid Level : raid1
  Used Dev Size : 20980800 (20.01 GiB 21.48 GB)
     Array Size : 20980800 (20.01 GiB 21.48 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Mon Jun 27 21:12:23 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 1a57db72 - correct
         Events : 164275


      Number   Major   Minor   RaidDevice State
this     1       8       17        1      active sync   /dev/sdb1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       0        0        2      faulty removed
   3     3       8       33        3      active sync   /dev/sdc1
/dev/sdc1:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 7d247a6e:7b5d46c8:f52d9c89:db304b21
  Creation Time : Mon Apr 23 19:55:36 2012
     Raid Level : raid1
  Used Dev Size : 20980800 (20.01 GiB 21.48 GB)
     Array Size : 20980800 (20.01 GiB 21.48 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 0

    Update Time : Mon Jun 27 21:12:23 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 0
       Checksum : 1a57db86 - correct
         Events : 164275


      Number   Major   Minor   RaidDevice State
this     3       8       33        3      active sync   /dev/sdc1

   0     0       8        1        0      active sync   /dev/sda1
   1     1       8       17        1      active sync   /dev/sdb1
   2     2       0        0        2      faulty removed
   3     3       8       33        3      active sync   /dev/sdc1

======================
fdisk -l

WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk
doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x03afffbe

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1      267350  2147483647+  ee  EFI GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk
doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x142a889c

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      267350  2147483647+  ee  EFI GPT

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk
doesn't support GPT. Use GNU Parted.


Disk /dev/sdc: 3000.5 GB, 3000592982016 bytes
255 heads, 63 sectors/track, 364801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x3daebd50

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1      267350  2147483647+  ee  EFI GPT

Disk /dev/md0: 21.4 GB, 21484339200 bytes
2 heads, 4 sectors/track, 5245200 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table


* Re: Recover filenames from failed RAID0
  2016-07-10 23:03 Recover filenames from failed RAID0 Michel Dubois
@ 2016-07-17 19:10 ` Michel Dubois
  2016-07-17 22:56   ` Stewart Ives
  2016-07-18  9:37   ` keld
  0 siblings, 2 replies; 4+ messages in thread
From: Michel Dubois @ 2016-07-17 19:10 UTC
  To: linux-raid
  Cc: Lars Ellenberg, Jens Axboe, linux-block, Keith Busch,
	Martin K. Petersen, Peter Zijlstra, Jiri Kosina, Ming Lei,
	Kirill A. Shutemov, NeilBrown, linux-kernel, Takashi Iwai,
	linux-bcache, Zheng Liu, Kent Overstreet, dm-devel, Shaohua Li,
	Ingo Molnar, Alasdair Kergon, Roland Kammerer, Mike Snitzer

Dear linux-raid mailing list,

I have a RAID0 array of four 3TB disks that failed on the "third" disk.

I am aware that RAID0 offers no redundancy, but I would like to recover the
filenames from that array. Recovering some of the data as well would be a bonus.

[The rest of the message, including the mdadm --examine and fdisk -l
output, is identical to the original post above and is trimmed here.]

What could be my next step?

Thank you for your time.

Michel Dubois

* Re: Recover filenames from failed RAID0
  2016-07-17 19:10 ` Michel Dubois
@ 2016-07-17 22:56   ` Stewart Ives
  2016-07-18  9:37   ` keld
  1 sibling, 0 replies; 4+ messages in thread
From: Stewart Ives @ 2016-07-17 22:56 UTC
  To: Michel Dubois
  Cc: Mike Snitzer, Peter Zijlstra, Ming Lei, NeilBrown, Keith Busch,
	dm-devel, Alasdair Kergon, Roland Kammerer, Zheng Liu,
	Takashi Iwai, Ingo Molnar, Shaohua Li, Kent Overstreet,
	linux-block, linux-bcache, Jens Axboe, linux-raid,
	Martin K. Petersen, Jiri Kosina, linux-kernel, Lars Ellenberg,
	Kirill A. Shutemov


Michel,

I'll preface my reply by saying that I am far from an expert at this, but I
can read and understand the descriptions of the different RAID levels, and
it seems to me that with RAID0 you are SOL if you lose a device in the
array. By its very nature, a RAID0 configuration has absolutely NO
redundancy. The only reason anyone would configure such a system is for
SPEED, and the only data that should be permitted on a RAID0 array is
temporary or working data that is recoverable by other means in the event
of a failure. I know many videographers who use an SSD RAID0 array for
working on their current project, but they also copy that array out about
every hour as a backup.

I pose only one question to you: did you have a backup?

Good luck.

-stew


Stewart M. Ives
SofTEC USA
1717 Bridge St
New Cumberland, PA 17070 USA

Tel: 717-910-4600
Fax: 888-371-6022
Skype: softecusa-ivessm
EMail: ivessm@softecusa.com
WebSite: www.softecusa.com

On Sun, Jul 17, 2016 at 3:10 PM, Michel Dubois <michel.dubois.mtl@gmail.com>
wrote:

> Dear linux-raid mailing list,
>
> I have a RAID0 array of four 3TB disks that failed on the "third" disk.
>
> [full quote of the original message trimmed]


* Re: Recover filenames from failed RAID0
  2016-07-17 19:10 ` Michel Dubois
  2016-07-17 22:56   ` Stewart Ives
@ 2016-07-18  9:37   ` keld
  1 sibling, 0 replies; 4+ messages in thread
From: keld @ 2016-07-18  9:37 UTC
  To: Michel Dubois
  Cc: linux-raid, Lars Ellenberg, Jens Axboe, linux-block, Keith Busch,
	Martin K. Petersen, Peter Zijlstra, Jiri Kosina, Ming Lei,
	Kirill A. Shutemov, NeilBrown, linux-kernel, Takashi Iwai,
	linux-bcache, Zheng Liu, Kent Overstreet, dm-devel, Shaohua Li,
	Ingo Molnar, Alasdair Kergon, Roland Kammerer, Mike Snitzer

Hi

Which file system did you use?
I once wrote some code to get files out of an ext3 filesystem:
http://www.open-std.org/keld/readme-salvage.html

You may need to make some corrections for it to work. As I remember the
code, when it hits a directory it will salvage the file names and the
files of that directory.
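
To use anything like that here, you would first have to put the stripe back
together from copies of the surviving members, since your filesystem was
striped over four disks. Something like the sketch below might work. It is
only a sketch: the image names, chunk size and superblock format are
guesses that have to match your original array, and you should only ever
try this on images, never on the original disks.

 # ddrescue images of the three surviving members, plus a blank file
 # standing in for the dead member, the same size as the others
 truncate -s $(stat -c%s first.img) blank.img
 losetup /dev/loop0 first.img
 losetup /dev/loop1 second.img
 losetup /dev/loop2 blank.img
 losetup /dev/loop3 fourth.img
 # recreate the stripe in the original device order; --chunk and
 # --metadata are guesses and must match the original array,
 # otherwise the stripe geometry will be wrong
 mdadm --create /dev/md1 --metadata=0.90 --level=0 --raid-devices=4 \
       --chunk=64 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3
 # then see which file names debugfs can still dig out, ignoring the
 # damaged parts (-c is debugfs' "catastrophic" mode)
 debugfs -c -R 'ls -l /' /dev/md1

With one member blank, three quarters of each stripe is still real data,
so directory blocks that happen to land on a surviving member keep their
file names.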

Best regards
keld

On Sun, Jul 17, 2016 at 03:10:03PM -0400, Michel Dubois wrote:
> Dear linux-raid mailing list,
>
> I have a RAID0 array of four 3TB disks that failed on the "third" disk.
>
> [full quote of the original message trimmed]


Thread overview: 4+ messages
2016-07-10 23:03 Recover filenames from failed RAID0 Michel Dubois
2016-07-17 19:10 ` Michel Dubois
2016-07-17 22:56   ` Stewart Ives
2016-07-18  9:37   ` keld
