From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Matt <langelino@gmx.net>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Data recovery from a linear multi-disk btrfs file system
Date: Fri, 15 Jul 2016 14:52:33 -0400	[thread overview]
Message-ID: <6acece40-1457-e4f3-646b-083780d8a251@gmail.com> (raw)
In-Reply-To: <1378A988-195F-4E68-B6DE-30CEDFAC8474@gmx.net>

On 2016-07-15 14:45, Matt wrote:
>
>> On 15 Jul 2016, at 14:10, Austin S. Hemmelgarn <ahferroin7@gmail.com> wrote:
>>
>> On 2016-07-15 05:51, Matt wrote:
>>> Hello
>>>
>>> I glued together 6 disks in linear LVM fashion (no RAID) to obtain one large file system (see below).  One of the 6 disks failed. What is the best way to recover from this?
>>>
>> The tool you want is `btrfs restore`.  You'll need somewhere to put the files from this too of course.  That said, given that you had data in raid0 mode, you're not likely to get much other than very small files back out of this, and given other factors, you're not likely to get what you would consider reasonable performance out of this either.
>
> Thanks so much for pointing me towards btrfs-restore. I surely will give it a try.  Note that the FS is not RAID0 but a linear (“JBOD”) configuration, which is why it somehow did not occur to me to try btrfs-restore.  The good news is that in this configuration the files are *not* distributed across disks; we can read most of the files just fine.  The failed disk was actually smaller than the other five, so we should be able to recover more than 5/6 of the data, shouldn’t we?  My trouble is that the IO errors due to the missing disk cripple the transfer speed of both rsync and dd_rescue.
Your own 'btrfs fi df' output clearly says that more than 99% of your 
data chunks are in a RAID0 profile, hence my statement.  Functionally, 
this is similar to concatenating all the disks, but it gets better 
performance and is a bit harder to recover data from.  I hadn't 
noticed, however, that the disks were different sizes, so you should 
be able to recover a significant amount of data from it.
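
In case it helps, this is roughly how I would start; it's only a 
sketch (untested here), and /mnt/yourfs, /dev/sdb and /mnt/recovery 
below are placeholders for your mount point, one of the surviving 
member devices, and wherever you have room for the recovered files:

  # Confirm how the chunks are allocated (this is where the RAID0
  # data profile shows up); run against the mounted filesystem.
  btrfs filesystem df /mnt/yourfs

  # With the filesystem unmounted, list the tree roots that restore
  # can work from, using one of the remaining devices.
  btrfs restore -l /dev/sdb

  # Dry run first: -D only lists what would be restored, -v is
  # verbose, -i keeps going past errors from the missing disk.
  btrfs restore -D -v -i /dev/sdb /mnt/recovery

Once the dry run looks sane, drop -D, and add -m and -x if you care 
about ownership/timestamps and extended attributes.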
>
>> Your best bet to get a working filesystem again would be to just recreate it from scratch; there's not much else that can be done when you've got a raid0 profile and have lost a disk.
>
> This is what I plan to do if btrfs-restore turns out to be too slow and nobody on this list has a better idea.  It will, however, require transferring >15TB across the Atlantic (this is where the “backup” resides).  This can be tedious, which is why I would love to avoid it.
Entirely understandable.
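
For what it's worth, if you do end up recreating it, something along 
these lines (just a sketch, device names are placeholders) would 
behave much more like the JBOD you were expecting: -d single keeps 
data chunks unstriped (each chunk lives on exactly one disk), so 
losing a disk no longer touches nearly every large file, and -m raid1 
keeps a second copy of the metadata on another device:

  # Hypothetical six-disk layout: unstriped data, mirrored metadata.
  mkfs.btrfs -d single -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

The multi-device mkfs default is most likely what gave you the RAID0 
data profile in the first place.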


