From: Alexandre Poux <pums974@gmail.com>
To: Chris Murphy <lists@colorremedies.com>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: multi-device btrfs with single data mode and disk failure
Date: Sat, 1 Oct 2016 01:46:49 +0200	[thread overview]
Message-ID: <0a5e6528-f929-729b-a608-a46dc6f8a6a3@gmail.com> (raw)
In-Reply-To: <e3955660-bf6b-096c-5249-bbe67be255bd@gmail.com>

Hello again,

Just a quick question.

I did a full scrub and got no errors at all.
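(For reference, I believe the read-only scrub was started with something along
the lines of
#> btrfs scrub start -B -d -r /mnt/array
where /mnt/array stands in for my actual mountpoint; -r keeps the scrub
read-only, and -B -d make it run in the foreground and print per-device
statistics.)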

And a full check, which gave me this:

#> btrfs check --check-data-csum -p /dev/sde6

Checking filesystem on /dev/sde6
UUID: 62db560b-a040-4c64-b613-6e7db033dc4d
checking extents [o]
checking free space cache [o]
checking fs roots [.]
checking csums
checking root refs
checking quota groups
Counts for qgroup id: 0/5 are different
our:        referenced 7239132803072 referenced compressed 7239132803072
disk:        referenced 7238982733824 referenced compressed 7238982733824
diff:        referenced 150069248 referenced compressed 150069248
our:        exclusive 7239132803072 exclusive compressed 7239132803072
disk:        exclusive 7238982733824 exclusive compressed 7238982733824
diff:        exclusive 150069248 exclusive compressed 150069248
found 7323422314496 bytes used err is 0
total csum bytes: 7020314688
total tree bytes: 11797741568
total fs tree bytes: 2904932352
total extent tree bytes: 656654336
btree space waste bytes: 1560529439
file data blocks allocated: 297363385454592
 referenced 6628544720896

I'm guessing it's not important, but I found nothing about this,
so I don't really know what it's about.
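(If it does matter, would rebuilding the qgroup accounting be the right move?
From what I read in the man page, something like
#> btrfs quota rescan -w /mnt/array
with /mnt/array standing in for my mountpoint, and -w just waiting for the
rescan to finish, is what I would try, unless you advise otherwise.)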

Can you just confirm that everything seems OK?

Can you think of another test I should do before starting to use my
array again?
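(For instance, would checking the per-device error counters be worthwhile?
Something like
#> btrfs device stats /mnt/array
again with /mnt/array standing in for my real mountpoint, should list any
read, write or corruption errors recorded so far, if I understand it correctly.)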

On 29/09/2016 at 14:55, Alexandre Poux wrote:
> Hi,
>
> I finally did it: I patched the kernel and removed the device.
> As expected, it did not complain, since there was nothing at all on the device.
> Now I'm checking that everything is fine:
> scrub (in read only)
> check (in read only)
> but I think that everything will be OK
> If not, I will rebuild the array from scratch (I did manage to save my
> data)
>
> Thank you both for your guidance.
> I think that a warning should be put in the wiki so that other users
> do not make the same mistake I did:
> never ever use single data mode
>
> I will try to do it soon
>
> Again thank you
>
> On 20/09/2016 at 23:15, Chris Murphy wrote:
>> On Tue, Sep 20, 2016 at 2:18 PM, Alexandre Poux <pums974@gmail.com> wrote:
>>> On 20/09/2016 at 21:46, Chris Murphy wrote:
>>>> On Tue, Sep 20, 2016 at 1:31 PM, Alexandre Poux <pums974@gmail.com> wrote:
>>>>> On 20/09/2016 at 21:11, Chris Murphy wrote:
>>>>>> And no backup? Umm, I'd resolve that sooner than anything else.
>>>>> Yeah, you are absolutely right, this was a temporary solution which
>>>>> turned out to be not that temporary.
>>>>> And I regret it already...
>>>> Well on the bright side, if this were LVM or mdadm linear/concat
>>>> array, the whole thing would be toast because any other file system
>>>> would have lost too much fs metadata on the missing device.
>>>>
>>>>>>  It
>>>>>> should be true that it'll tolerate a read only mount indefinitely, but
>>>>>> read write? Not sure. This sort of edge case isn't well tested at all
>>>>>> seeing as it required changing the kernel to reduce safeguards. So
>>>>>> all bets are off: the whole thing could become unmountable, not even
>>>>>> read only, and then it's a scraping job.
>>>>> I'm not that crazy, I tried the patch inside a virtual machine on
>>>>> virtual drives...
>>>>> And since it's only virtual, it may not work on the real partition...
>>>> Are you sure the virtual setup lacked a CHUNK_ITEM on the missing
>>>> device? That might be what pinned it in that case.
>>> In fact, in my virtual setup there were more chunks missing (1 metadata,
>>> 1 system and 1 data).
>>> I will try to do a setup closer to my real one.
>> Probably the reason why that missing device has no used chunks is
>> because it's so small. Btrfs allocates block groups to devices with
>> the most unallocated space first. Only once the unallocated space is
>> even (approximately) on all devices would it allocate a block group to
>> the small device.
>>
>>
>
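PS: regarding the allocation explanation quoted above, if it is of any use,
I can check how the space is spread over the devices with something like
#> btrfs filesystem usage /mnt/array
(/mnt/array again standing in for my real mountpoint), which should show the
allocated and unallocated space per device.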


