Linux Btrfs filesystem development
From: "Stéphane Lesimple" <stephane_btrfs2@lesimple.fr>
To: "Qu Wenruo" <wqu@suse.com>, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] btrfs: relocation: output warning message for leftover v1 space cache before aborting current data balance
Date: Tue, 29 Dec 2020 09:27:05 +0000	[thread overview]
Message-ID: <6b4afae37ba5979f25bddabd876a7dc5@lesimple.fr> (raw)
In-Reply-To: <20201229003837.16074-1-wqu@suse.com>

December 29, 2020 1:38 AM, "Qu Wenruo" <wqu@suse.com> wrote:

> In delete_v1_space_cache(), if we find a leaf whose owner is tree root,
> and we can't grab the free space cache inode, then we return -ENOENT.
> 
> However this would make the caller, add_data_references(), to consider
> this as a critical error, and abort current data balance.
> 
> This happens for fs using free space cache v2, while still has v1 data
> left.
> 
> For v2 free space cache, we no longer load v1 data, making btrfs_igrab()
> no longer work for root tree to grab v1 free space cache inodes.
> 
> The proper fix for the problem is to delete v1 space cache completely
> during v2 convert.
> 
> We can't just ignore the -ENOENT error, as for root tree we don't use
> reloc tree to replace its data references, but rely on COW.
> This means, we have no way to relocate the leftover v1 data, and block
> the relocation.
> 
> This patch will just workaround it by outputting a warning message,
> showing the user how to manually solve it.
> 
> Reported-by: Stéphane Lesimple <stephane_btrfs2@lesimple.fr>
> Signed-off-by: Qu Wenruo <wqu@suse.com>

Your analysis seems correct: this FS is quite old (several years)
and has seen quite a number of kernel versions! I converted it to
space_cache v2 roughly 6-12 months ago, I think. It does have v1 leftovers:

# btrfs ins dump-tree -t root /dev/mapper/luks-tank-mdata | grep EXTENT_DA
item 27 key (51933 EXTENT_DATA 0) itemoff 9854 itemsize 53
item 12 key (72271 EXTENT_DATA 0) itemoff 14310 itemsize 53
item 25 key (74907 EXTENT_DATA 0) itemoff 12230 itemsize 53

What's also interesting is that an FS I created only a few weeks ago,
under kernel 5.6.17, also has v1 leftovers, as shown by the same command.
So the issue might be more common than we think (not just years-old
filesystems).
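
For anyone wanting to run the same check on their own filesystems, here
is a minimal sketch. A sample of the dump-tree output above stands in
for a real device; the device path and item values are just the ones
from this thread:

```shell
# Count leftover v1 space cache extents (EXTENT_DATA items in the root
# tree). On a real FS you would pipe from:
#   btrfs inspect-internal dump-tree -t root /dev/mapper/luks-tank-mdata
# Here a sample of that output from above stands in for the real dump.
dump='item 27 key (51933 EXTENT_DATA 0) itemoff 9854 itemsize 53
item 12 key (72271 EXTENT_DATA 0) itemoff 14310 itemsize 53
item 25 key (74907 EXTENT_DATA 0) itemoff 12230 itemsize 53'
count=$(printf '%s\n' "$dump" | grep -c 'EXTENT_DATA')
echo "leftover v1 space cache extents: $count"
```

A count of zero means no leftovers (note that `grep -c` exits non-zero
when nothing matches, so guard accordingly in scripts using `set -e`).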

Before fixing the FS I can't balance, I wanted to test your patch,
even if it's pretty straightforward, just to be sure:

# btrfs bal start -dvrange=34625344765952..34625344765953 /tank
ERROR: error during balancing '/tank': No such file or directory
There may be more info in syslog - try dmesg | tail

[   76.114187] BTRFS info (device dm-10): balance: start -dvrange=34625344765952..34625344765953
[   76.122792] BTRFS info (device dm-10): relocating block group 34625344765952 flags data|raid1
[   87.065468] BTRFS info (device dm-10): found 167 extents, stage: move data extents
[   87.685571] BTRFS warning (device dm-10): leftover v1 space cache found, please use btrfs-check --clear-space-cache v1 to clean it up
[  100.018692] BTRFS info (device dm-10): balance: ended with status: -2

So, it works. You can add my Tested-by: Stéphane Lesimple <stephane_btrfs2@lesimple.fr>
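
For the record, the manual fix suggested by the warning (the tool is
spelled `btrfs check` in btrfs-progs, not `btrfs-check`) boils down to
something like the following. This is a sketch, not a tested script:
the device and mount point are the ones from this thread, and the FS
must be unmounted for `btrfs check` to operate on it, so adjust to your
setup:

```shell
# Clear the leftover v1 space cache, then retry the failing balance.
# NOTE: example paths from this thread; run against an UNMOUNTED fs.
umount /tank
btrfs check --clear-space-cache v1 /dev/mapper/luks-tank-mdata
mount /dev/mapper/luks-tank-mdata /tank
btrfs balance start -dvrange=34625344765952..34625344765953 /tank
```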

Regards,

Stéphane.


Thread overview: 13+ messages
2020-12-29  0:38 [PATCH] btrfs: relocation: output warning message for leftover v1 space cache before aborting current data balance Qu Wenruo
2020-12-29  9:27 ` Stéphane Lesimple [this message]
2020-12-29 10:29   ` Qu Wenruo
2020-12-29 11:08     ` Stéphane Lesimple
2020-12-29 11:30       ` Qu Wenruo
2020-12-29 12:30         ` Stéphane Lesimple
2020-12-29 12:41           ` Qu Wenruo
2020-12-29 12:51             ` Stéphane Lesimple
2020-12-29 13:06               ` Qu Wenruo
2020-12-29 13:17                 ` Stéphane Lesimple
2020-12-30  5:49                   ` Qu Wenruo
2020-12-30  8:39                     ` Stéphane Lesimple
2020-12-30  0:56 ` Qu Wenruo
