From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: j4nn <j4nn.xda@gmail.com>, linux-btrfs@vger.kernel.org
Subject: Re: clear space cache v1 fails with Unable to find block group for 0
Date: Mon, 9 Dec 2024 06:56:48 +1030 [thread overview]
Message-ID: <c5ec9e5e-9113-490d-84c3-82ded6baa793@gmx.com> (raw)
In-Reply-To: <CADhu7iD1LOT=93o1DhFYBeDHTKW9SuYdSmz8VXvsE4vf285tDg@mail.gmail.com>
On 2024/12/9 02:32, j4nn wrote:
> Hi,
>
> I am trying to switch 8TB raid1 btrfs from space cache v1 to v2, but
> the clear space cache v1 fails as following:
>
> gentoo ~ # btrfs filesystem df /mnt/data
> Data, RAID1: total=7.36TiB, used=7.00TiB
> System, RAID1: total=64.00MiB, used=1.11MiB
> Metadata, RAID1: total=63.00GiB, used=57.37GiB
> Metadata, DUP: total=5.00GiB, used=1.18GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
> WARNING: Multiple block group profiles detected, see 'man btrfs(5)'
> WARNING: Metadata: raid1, dup
> gentoo ~ # umount /mnt/data
>
> gentoo ~ # time btrfs rescue clear-space-cache v1 /dev/mapper/wdrb-bdata
> Unable to find block group for 0
> Unable to find block group for 0
> Unable to find block group for 0
This is a common indicator of -ENOSPC.
But according to the fi df output, there should be quite a lot of
metadata space left.
The only concern is the DUP metadata, which may cause the space
reservation code in progs to misbehave.
Have you tried converting the DUP metadata first?
And please also share the `btrfs fi usage` output.
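For reference, converting the leftover DUP metadata back to RAID1 is normally done with a filtered balance; a minimal sketch, assuming the mount point /mnt/data from the report above (these commands need the filesystem mounted and may take a while on a filesystem this size):

```shell
# Convert metadata chunks to RAID1; the 'soft' modifier skips chunks
# that already have the target profile, so only the DUP chunks are
# rewritten.
btrfs balance start -mconvert=raid1,soft /mnt/data

# Afterwards the "Metadata, DUP" line should be gone; verify with:
btrfs filesystem df /mnt/data
btrfs filesystem usage /mnt/data
```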
> ERROR: failed to clear free space cache
> extent buffer leak: start 9587384418304 len 16384
>
> real 7m8.174s
> user 0m6.883s
> sys 0m9.322s
>
>
> Here some info:
>
> gentoo ~ # uname -a
> Linux gentoo 6.12.3-gentoo-x86_64 #1 SMP PREEMPT_DYNAMIC Sun Dec 8 00:12:56 CET 2024 x86_64 AMD Ryzen 9 5950X 16-Core Processor AuthenticAMD GNU/Linux
> gentoo ~ # btrfs --version
> btrfs-progs v6.12
> -EXPERIMENTAL -INJECT -STATIC +LZO +ZSTD +UDEV +FSVERITY +ZONED CRYPTO=builtin
> gentoo ~ # btrfs filesystem show /mnt/data
> Label: 'rdata' uuid: 1dfac20a-3f84-4149-aba0-f160ab633373
> Total devices 2 FS bytes used 7.06TiB
> devid 1 size 8.00TiB used 7.26TiB path /dev/mapper/wdrb-bdata
> devid 2 size 8.00TiB used 7.25TiB path /dev/mapper/wdrc-cdata
> gentoo ~ # dmesg | tail -n 6
> [31008.980706] BTRFS info (device dm-0): first mount of filesystem 1dfac20a-3f84-4149-aba0-f160ab633373
> [31008.980726] BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
> [31008.980731] BTRFS info (device dm-0): disk space caching is enabled
> [31008.980734] BTRFS warning (device dm-0): space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2
> [31009.994687] BTRFS info (device dm-0): bdev /dev/mapper/wdrb-bdata errs: wr 8, rd 0, flush 0, corrupt 0, gen 0
> [31009.994696] BTRFS info (device dm-0): bdev /dev/mapper/wdrc-cdata errs: wr 7, rd 0, flush 0, corrupt 0, gen 0
>
> Completed scrub (which corrected 4 errors), btrfs check completed
> without errors:
>
> gentoo ~ # btrfs scrub status /mnt/data
> UUID: 1dfac20a-3f84-4149-aba0-f160ab633373
> Scrub started: Fri Dec 6 13:12:36 2024
> Status: finished
> Duration: 16:11:22
> Total to scrub: 14.92TiB
> Rate: 268.35MiB/s
> Error summary: verify=4
> Corrected: 4
> Uncorrectable: 0
> Unverified: 0
> gentoo ~ # umount /mnt/data
>
> gentoo ~ # time btrfs check -p /dev/mapper/wdrb-bdata
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/wdrb-bdata
> UUID: 1dfac20a-3f84-4149-aba0-f160ab633373
> [1/7] checking root items (0:06:57 elapsed, 34253945 items checked)
> [2/7] checking extents (0:23:08 elapsed, 3999596 items checked)
> [3/7] checking free space cache (0:04:25 elapsed, 7868 items checked)
> [4/7] checking fs roots (1:03:46 elapsed, 3215533 items checked)
> [5/7] checking csums (without verifying data) (0:11:58 elapsed, 15418322 items checked)
> [6/7] checking root refs (0:00:00 elapsed, 52 items checked)
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 8199989936128 bytes used, no error found
> total csum bytes: 7940889876
> total tree bytes: 65528446976
> total fs tree bytes: 52856799232
> total extent tree bytes: 3578331136
> btree space waste bytes: 10797983857
> file data blocks allocated: 21632555483136
> referenced 9547690319872
>
> real 111m10.370s
> user 10m28.442s
> sys 6m44.888s
>
> Tried a balance, following an example I found posted; not really sure whether it should help:
>
> gentoo ~ # btrfs balance start -dusage=10 /mnt/data
> Done, had to relocate 32 out of 7467 chunks
The balance doesn't do much; the overall chunk layout is still the same.
>
> gentoo ~ # btrfs filesystem df /mnt/data
> Data, RAID1: total=7.19TiB, used=7.00TiB
> System, RAID1: total=64.00MiB, used=1.08MiB
> Metadata, RAID1: total=63.00GiB, used=57.36GiB
> Metadata, DUP: total=5.00GiB, used=1.18GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
> WARNING: Multiple block group profiles detected, see 'man btrfs(5)'
> WARNING: Metadata: raid1, dup
>
> But it did not help:
>
> gentoo ~ # time btrfs rescue clear-space-cache v1 /dev/mapper/wdrb-bdata
> Unable to find block group for 0
> Unable to find block group for 0
> Unable to find block group for 0
> ERROR: failed to clear free space cache
> extent buffer leak: start 7995086045184 len 16384
>
> real 6m58.515s
> user 0m6.270s
> sys 0m9.586s
Migrating to the v2 cache doesn't require manually clearing the v1
cache. Just mounting with the "space_cache=v2" option will
automatically purge the v1 cache, as explained in the man page:

  If v2 is enabled, the v1 space cache will be cleared (at the first
  mount)

If you want to dig deeper, the implementation is in
btrfs_set_free_space_cache_v1_active(), which calls
cleanup_free_space_cache_v1() if @active is false.
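So the whole migration can be done with a single mount; a sketch, assuming the device and mount point from the report above (the free space tree is recorded in the superblock after that first mount, so later plain mounts keep using v2):

```shell
# One-time migration mount: builds the free space tree (v2) and
# clears the v1 space cache.
mount -o space_cache=v2 /dev/mapper/wdrb-bdata /mnt/data

# Verify: the compat_ro flags in the superblock should now include
# FREE_SPACE_TREE and FREE_SPACE_TREE_VALID.
btrfs inspect-internal dump-super /dev/mapper/wdrb-bdata | grep compat_ro
```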
Thanks,
Qu
>
> Any idea how to fix this?
> Thanks.
>