From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: "Stéphane Lesimple" <stephane_btrfs2@lesimple.fr>,
"Qu Wenruo" <wqu@suse.com>,
linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] btrfs: relocation: output warning message for leftover v1 space cache before aborting current data balance
Date: Tue, 29 Dec 2020 21:06:26 +0800
Message-ID: <02f6b3d7-c502-fe29-ec74-cce99922296c@gmx.com>
In-Reply-To: <9c981dde8dafe773e2a99417e4935f6b@lesimple.fr>
On 2020/12/29 8:51 PM, Stéphane Lesimple wrote:
> December 29, 2020 1:43 PM, "Qu Wenruo" <wqu@suse.com> wrote:
>
>> On 2020/12/29 8:30 PM, Stéphane Lesimple wrote:
>>
>>> December 29, 2020 12:32 PM, "Qu Wenruo" <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>> On 2020/12/29 7:08 PM, Stéphane Lesimple wrote:
>>>
>>> December 29, 2020 11:31 AM, "Qu Wenruo" <quwenruo.btrfs@gmx.com> wrote:
>>>
>>> # btrfs ins dump-tree -t root /dev/mapper/luks-tank-mdata | grep EXTENT_DA
>>> item 27 key (51933 EXTENT_DATA 0) itemoff 9854 itemsize 53
>>> item 12 key (72271 EXTENT_DATA 0) itemoff 14310 itemsize 53
>>> item 25 key (74907 EXTENT_DATA 0) itemoff 12230 itemsize 53
>>> Mind dumping all those related inodes?
>>>
>>> E.g:
>>>
>>> $ btrfs ins dump-tree -t root <dev> | grep 51933 -C 10
>>>
>>> Sure. I added -w to avoid matching larger numbers that merely contain the digits 51933:
>>>
>>> # btrfs ins dump-tree -t root /dev/mapper/luks-tank-mdata | grep 51933 -C 10 -w
>>> generation 2614632 root_dirid 256 bytenr 42705449811968 level 2 refs 1
>>> lastsnap 2614456 byte_limit 0 bytes_used 101154816 flags 0x1(RDONLY)
>>> uuid 1100ff6c-45fa-824d-ad93-869c94a87c7b
>>> parent_uuid 8bb8a884-ea4f-d743-8b0c-b6fdecbc397c
>>> ctransid 1337630 otransid 1249372 stransid 0 rtransid 0
>>> ctime 1554266422.693480985 (2019-04-03 06:40:22)
>>> otime 1547877605.465117667 (2019-01-19 07:00:05)
>>> drop key (0 UNKNOWN.0 0) level 0
>>> item 25 key (51098 ROOT_BACKREF 5) itemoff 10067 itemsize 42
>>> root backref key dirid 534 sequence 22219 name 20190119_070006_hourly.7
>>> item 26 key (51933 INODE_ITEM 0) itemoff 9907 itemsize 160
>>> generation 0 transid 1517381 size 262144 nbytes 30553407488
>>> block group 0 mode 100600 links 1 uid 0 gid 0 rdev 0
>>> sequence 116552 flags 0x1b(NODATASUM|NODATACOW|NOCOMPRESS|PREALLOC)
>>> atime 0.0 (1970-01-01 01:00:00)
>>> ctime 1567904822.739884119 (2019-09-08 03:07:02)
>>> mtime 0.0 (1970-01-01 01:00:00)
>>> otime 0.0 (1970-01-01 01:00:00)
>>> item 27 key (51933 EXTENT_DATA 0) itemoff 9854 itemsize 53
>>> generation 1517381 type 2 (prealloc)
>>> prealloc data disk byte 34626327621632 nr 262144
>>
>> Got the point now.
>>
>> The type is preallocated, which means we haven't yet written space cache
>> into it.
>>
>> But the code only checks the regular file extent (written, not
>> preallocated).
>>
>> So the proper fix would look like this:
>>
>> diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
>> index 19b7db8b2117..1d73d7c2fbd7 100644
>> --- a/fs/btrfs/relocation.c
>> +++ b/fs/btrfs/relocation.c
>> @@ -2975,11 +2975,14 @@ static int delete_v1_space_cache(struct extent_buffer *leaf,
>>  		return 0;
>>  
>>  	for (i = 0; i < btrfs_header_nritems(leaf); i++) {
>> +		u8 type;
>>  		btrfs_item_key_to_cpu(leaf, &key, i);
>>  		if (key.type != BTRFS_EXTENT_DATA_KEY)
>>  			continue;
>>  		ei = btrfs_item_ptr(leaf, i, struct btrfs_file_extent_item);
>> -		if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_REG &&
>> +		type = btrfs_file_extent_type(leaf, ei);
>> +		if ((type == BTRFS_FILE_EXTENT_REG ||
>> +		     type == BTRFS_FILE_EXTENT_PREALLOC) &&
>>  		    btrfs_file_extent_disk_bytenr(leaf, ei) == data_bytenr) {
>>  			found = true;
>>  			space_cache_ino = key.objectid;
>>
>> With this, the relocation should finish without problem.
>
> Aaah, it makes sense indeed.
>
> Do you want me to try this fix right now, or do you want to have a look
> at the btrfs-progs crash first? I don't know if it's related, but if it is,
> then maybe I won't be able to reproduce it again after finishing the balance.
The problem is that I'm not very familiar with v2 space cache
debugging, so I can't help much there.

But at least the problem you're reporting doesn't need a btrfs check
repair; a kernel fix alone is enough.
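To make the bug easier to see outside the kernel, here is a small userspace sketch of the selection logic the patch changes. The struct, the constants, and the helper `find_space_cache_ino()` are simplified stand-ins I made up for illustration, not the real kernel definitions; only the REG-vs-PREALLOC decision mirrors the diff above.

```c
/* Standalone model of the extent-type check in delete_v1_space_cache().
 * All names here are illustrative stand-ins, not kernel API. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define FILE_EXTENT_INLINE   0  /* stand-in for BTRFS_FILE_EXTENT_INLINE */
#define FILE_EXTENT_REG      1  /* stand-in for BTRFS_FILE_EXTENT_REG */
#define FILE_EXTENT_PREALLOC 2  /* stand-in for BTRFS_FILE_EXTENT_PREALLOC */

struct fake_extent_item {
	uint64_t objectid;    /* inode number owning the extent */
	uint8_t  type;        /* one of the FILE_EXTENT_* values */
	uint64_t disk_bytenr; /* on-disk start of the extent */
};

/* Return the inode owning the extent at @data_bytenr, or 0 if none
 * matched.  With @accept_prealloc == false this reproduces the old,
 * buggy behaviour: a preallocated (never yet written) v1 space cache
 * extent is silently skipped, so the cache is never deleted and the
 * balance aborts. */
static uint64_t find_space_cache_ino(const struct fake_extent_item *items,
				     size_t nritems, uint64_t data_bytenr,
				     bool accept_prealloc)
{
	for (size_t i = 0; i < nritems; i++) {
		uint8_t type = items[i].type;
		bool type_ok = type == FILE_EXTENT_REG ||
			       (accept_prealloc &&
				type == FILE_EXTENT_PREALLOC);

		if (type_ok && items[i].disk_bytenr == data_bytenr)
			return items[i].objectid;
	}
	return 0;
}
```

Feeding it an extent like the one from your dump (inode 51933, type prealloc) shows the old check returning "not found" while the patched check resolves the inode.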
>
> I don't mind leaving the FS in this state for a few more days/weeks if needed.
That's fine if you want someone to look into the btrfs-progs bug.
I'll submit a proper bug fix soon.
Thanks,
Qu
>
>> Thanks for all your effort, from reporting to most of the debug, this
>> really helps a lot!
>
> No problem, glad to help. Thanks for looking into it so fast!
>
2020-12-29 0:38 [PATCH] btrfs: relocation: output warning message for leftover v1 space cache before aborting current data balance Qu Wenruo
2020-12-29 9:27 ` Stéphane Lesimple
2020-12-29 10:29 ` Qu Wenruo
2020-12-29 11:08 ` Stéphane Lesimple
2020-12-29 11:30 ` Qu Wenruo
2020-12-29 12:30 ` Stéphane Lesimple
2020-12-29 12:41 ` Qu Wenruo
2020-12-29 12:51 ` Stéphane Lesimple
2020-12-29 13:06 ` Qu Wenruo [this message]
2020-12-29 13:17 ` Stéphane Lesimple
2020-12-30 5:49 ` Qu Wenruo
2020-12-30 8:39 ` Stéphane Lesimple
2020-12-30 0:56 ` Qu Wenruo