public inbox for linux-btrfs@vger.kernel.org
 help / color / mirror / Atom feed
* Raid0 rescue
@ 2017-07-27 14:49 Alan Brand
  2017-07-27 15:10 ` Hugo Mills
  2017-08-01 18:24 ` Chris Murphy
  0 siblings, 2 replies; 11+ messages in thread
From: Alan Brand @ 2017-07-27 14:49 UTC (permalink / raw)
  To: linux-btrfs

I know I am screwed, but I hope someone here can point me toward a possible solution.

I had a pair of btrfs drives in a raid0 configuration.  One of the
drives was pulled by mistake, put in a Windows box, and a quick NTFS
format was done.  Then much screaming occurred.

I know the data is still there.  Is there any way to rebuild the raid,
bringing in the bad disk?  I know some info is still good; for example,
metadata0 is corrupt but 1 and 2 are good.
The trees look bad which is probably the killer.
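[Editor's note: a minimal sketch of how to check the state of each superblock copy. btrfs keeps up to three superblock copies at fixed offsets on every member device, which is likely what "metadata0 ... 1 and 2" refers to. `/dev/sdX1` is a placeholder for the NTFS-formatted member; the inspect commands are read-only.]

```shell
# btrfs superblock copies live at fixed byte offsets on each device:
SB0=$((64 * 1024))                 # primary copy at 64 KiB
SB1=$((64 * 1024 * 1024))          # first mirror at 64 MiB
SB2=$((256 * 1024 * 1024 * 1024))  # second mirror at 256 GiB
echo "$SB0 $SB1 $SB2"

# btrfs-progs can dump each copy individually (read-only), e.g.:
#   btrfs inspect-internal dump-super -s 0 /dev/sdX1
#   btrfs inspect-internal dump-super -s 1 /dev/sdX1
#   btrfs inspect-internal dump-super -s 2 /dev/sdX1
# A quick NTFS format writes near the start of the partition, so the
# primary copy is the one most likely to be damaged.
```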

I can't run a normal recovery as only half of each file is there.

$100 reward if you come up with a workable solution.

^ permalink raw reply	[flat|nested] 11+ messages in thread
* Re: Raid0 rescue
@ 2017-07-27 20:07 Alan Brand
  0 siblings, 0 replies; 11+ messages in thread
From: Alan Brand @ 2017-07-27 20:07 UTC (permalink / raw)
  To: hugo, linux-btrfs

> > Correct, I should have said 'superblock'.
> > It is/was raid0.  Funny thing is that this all happened when I was
> > prepping to convert to raid1.
>    If your metadata was also RAID-0, then your filesystem is almost
> certainly toast. If any part of the btrfs metadata was overwritten by
> some of the NTFS metadata, then the FS will be broken (somewhere) and
> probably not in a fixable way.


It should have been raid-1, as I believe that is the default for
metadata when creating a btrfs volume.
How do I put the good copy back on the corrupt volume?
I can't even look at the metadata on the good disk, as it complains
about one of the disks being missing.

> Running btrfs-find-root shows this (which gives me hope):
> Well block 4871870791680(gen: 73257 level: 1) seems good, but
> generation/level doesn't match, want gen: 73258 level: 1
> Well block 4639933562880(gen: 73256 level: 1) seems good, but
> generation/level doesn't match, want gen: 73258 level: 1
> Well block 4639935168512(gen: 73255 level: 1) seems good, but
> generation/level doesn't match, want gen: 73258 level: 1
> Well block 4639926239232(gen: 73242 level: 0) seems good, but
> generation/level doesn't match, want gen: 73258 level: 1
>
> but when I run btrfs inspect-internal dump-tree -r /dev/sdc1
>
> checksum verify failed on 874856448 found 5A85B5D9 wanted 17E3CB7D
> checksum verify failed on 874856448 found 5A85B5D9 wanted 17E3CB7D
> checksum verify failed on 874856448 found 2204C752 wanted C6ADDF7E
> checksum verify failed on 874856448 found 2204C752 wanted C6ADDF7E
> bytenr mismatch, want=874856448, have=8568478783891655077

   This would suggest that some fairly important part of the metadata
was damaged. You'll probably spend far less effort recovering the data
by restoring your backups than trying to fix this.

   Hugo.
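[Editor's note: a sketch of the usual last resort for the situation above. When the current root tree is unreadable, `btrfs restore` can be pointed at an older tree root reported by btrfs-find-root. The bytenr below is the newest candidate from the find-root output quoted in this thread; `/dev/sdc1` follows the thread, and `/mnt/recover` is an assumed destination on a separate, healthy filesystem.]

```shell
# Older root tree candidate from btrfs-find-root ("Well block
# 4871870791680 (gen: 73257 level: 1)"):
BYTENR=4871870791680
echo "$BYTENR"

# List what would be recovered without writing anything
# (-D = dry run, -v = verbose, -t = tree root bytenr):
#   btrfs restore -D -v -t "$BYTENR" /dev/sdc1 /mnt/recover

# If the listing looks sane, run it for real:
#   btrfs restore -v -t "$BYTENR" /dev/sdc1 /mnt/recover

# If this bytenr fails, try the older generations from the
# find-root output (73256, 73255, 73242) in turn.
```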

> root tree: 4871875543040 level 1
> chunk tree: 20971520 level 1
> extent tree key (EXTENT_TREE ROOT_ITEM 0) 4871875559424 level 2
> device tree key (DEV_TREE ROOT_ITEM 0) 4635801976832 level 1
> fs tree key (FS_TREE ROOT_ITEM 0) 4871870414848 level 3
> checksum tree key (CSUM_TREE ROOT_ITEM 0) 4871876034560 level 3
> uuid tree key (UUID_TREE ROOT_ITEM 0) 29376512 level 0
> checksum verify failed on 728891392 found 75E2752C wanted D6CA4FB4
> checksum verify failed on 728891392 found 75E2752C wanted D6CA4FB4
> checksum verify failed on 728891392 found F4F3A4AD wanted E6D063C7
> checksum verify failed on 728891392 found 75E2752C wanted D6CA4FB4
> bytenr mismatch, want=728891392, have=269659807399918462
> total bytes 5000989728768
> bytes used 3400345264128
>
>
>
> On Thu, Jul 27, 2017 at 11:10 AM, Hugo Mills <hugo@carfax.org.uk> wrote:
> > On Thu, Jul 27, 2017 at 10:49:37AM -0400, Alan Brand wrote:
> >> I know I am screwed but hope someone here can point at a possible solution.
> >>
> >> I had a pair of btrfs drives in a raid0 configuration.  One of the
> >> drives was pulled by mistake, put in a windows box, and a quick NTFS
> >> format was done.  Then much screaming occurred.
> >>
> >> I know the data is still there.
> >
> >    Well, except for all the parts overwritten by a blank NTFS metadata
> > structure.
> >
> >>   Is there anyway to rebuild the raid
> >> bringing in the bad disk?  I know some info is still good, for example
> >> metadata0 is corrupt but 1 and 2 are good.
> >
> >    I assume you mean superblock there.
> >
> >> The trees look bad which is probably the killer.
> >
> >    We really should improve the error messages at some point. Whatever
> > you're inferring from the kernel logs is probably not quite right. :)
> >
> >    What's the metadata configuration on this FS? Also RAID-0? or RAID-1?
> >
> >> I can't run a normal recovery as only half of each file is there.
> >
> >    Welcome to RAID-0...
> >
> >    Hugo.
> >

^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2017-08-17  5:13 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-07-27 14:49 Raid0 rescue Alan Brand
2017-07-27 15:10 ` Hugo Mills
2017-07-27 19:43   ` Alan Brand
2017-07-27 19:53     ` Hugo Mills
2017-07-27 20:25   ` Duncan
2017-07-27 23:38     ` Adam Borowski
2017-08-17  1:48     ` Chris Murphy
2017-08-17  5:13       ` Chris Murphy
2017-08-01 18:24 ` Chris Murphy
     [not found]   ` <CAFcRpx5JkNnTOtrVbjTe6e7tde=Sw3_78TAJThEd+cYtx62h4w@mail.gmail.com>
2017-08-01 18:48     ` Chris Murphy
  -- strict thread matches above, loose matches on Subject: below --
2017-07-27 20:07 Alan Brand

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox