From: Chris Murphy <lists@colorremedies.com>
To: "Holger Hoffstätte" <holger@applied-asynchrony.com>
Cc: Larkin Lowrey <llowrey@nuclearwinter.com>,
Qu Wenruo <quwenruo.btrfs@gmx.com>,
Chris Murphy <lists@colorremedies.com>,
Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Scrub aborts due to corrupt leaf
Date: Wed, 10 Oct 2018 11:44:27 -0600
Message-ID: <CAJCQCtTV3a9rH6ystLCbLUVoh4b1LL5yXWiOBPp6Fa28efZf_w@mail.gmail.com>
In-Reply-To: <4652a690-26ed-fb90-9386-3020ee9e9841@applied-asynchrony.com>
On Wed, Oct 10, 2018 at 10:04 AM, Holger Hoffstätte
<holger@applied-asynchrony.com> wrote:
> On 10/10/18 17:44, Larkin Lowrey wrote:
> (..)
>>
>> About once a week or so, I'm running into the above situation where
>> the FS seems to deadlock. All IO to the FS blocks; there is no IO
>> activity at all. I have to hard-reboot the system to recover. There
>> are no error indications except for the following, which occurs well
>> before the FS freezes up:
>>
>> BTRFS warning (device dm-3): block group 78691883286528 has wrong amount
>> of free space
>> BTRFS warning (device dm-3): failed to load free space cache for block
>> group 78691883286528, rebuilding it now
>>
>> Do I have any options other than nuking the FS and starting over?
>
>
> Unmount cleanly & mount again with -o space_cache=v2.
I'm pretty sure you have to umount, clear the v1 space cache with
'btrfs check --clear-space-cache=v1', and then do a one-time mount
with -o space_cache=v2.
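Roughly, as a sketch (here /dev/dm-3 and /mnt/data are only
placeholders; substitute your actual device and mount point):

  umount /mnt/data
  btrfs check --clear-space-cache=v1 /dev/dm-3   # clear the old v1 cache
  mount -o space_cache=v2 /dev/dm-3 /mnt/data    # one-time opt-in to v2

Once that mount creates the v2 cache (the free space tree), it
persists, so subsequent mounts don't need the option.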
But anyway, to me that seems premature because we don't even know
what's causing the problem.
a. Freezing means there's a kernel bug. Hands down.
b. Is it freezing on the rebuild? Or something else?
c. I think the devs would like to see the output of 'btrfs check
--mode=lowmem' from btrfs-progs v4.17.1, to see if it finds anything,
in particular something not related to the free space cache.
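For example (a sketch; /dev/dm-3 is again a placeholder, and the file
system must be unmounted first):

  btrfs --version                        # should report v4.17.1
  btrfs check --mode=lowmem /dev/dm-3    # check is read-only by default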
Rebuilding either version of the space cache requires successfully
reading (and parsing) the extent tree.
--
Chris Murphy