From: Qu Wenruo <wqu@suse.com>
To: j4nn <j4nn.xda@gmail.com>, Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: clear space cache v1 fails with Unable to find block group for 0
Date: Tue, 17 Dec 2024 16:20:12 +1030 [thread overview]
Message-ID: <2e3d5a0b-4cc5-48e7-80a6-7cc5454d54e3@suse.com> (raw)
In-Reply-To: <CADhu7iD+kKsvxZtnX9q94tuJzS6z=zs3B7Xc9Bb4G+mnQ6_UhQ@mail.gmail.com>
On 2024/12/17 16:01, j4nn wrote:
> On Mon, 16 Dec 2024 at 19:52, j4nn <j4nn.xda@gmail.com> wrote:
>>
>> This is unrelated, but as you have been interested in the hung task
>> backtrace, I got two more when using "btrfs send ... | btrfs receive
>> ..." to copy 7TB of data from one btrfs disk to another one (still in
>> progress, both rotational hard drives):
>> [81837.347137] INFO: task btrfs-transacti:29385 blocked for more than
>> 122 seconds.
>> [81837.347144] Tainted: G W O 6.12.3-gentoo-x86_64 #1
>> [81837.347147] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
>> disables this message.
>> [81837.347149] task:btrfs-transacti state:D stack:0 pid:29385
>> tgid:29385 ppid:2 flags:0x00004000
>> [81837.347154] Call Trace:
>> [81837.347156] <TASK>
>> [81837.347161] __schedule+0x3f0/0xbd0
>> [81837.347170] schedule+0x27/0xf0
>> [81837.347174] io_schedule+0x46/0x70
>> [81837.347177] folio_wait_bit_common+0x123/0x340
>> [81837.347184] ? __pfx_wake_page_function+0x10/0x10
>> [81837.347189] folio_wait_writeback+0x2b/0x80
>> [81837.347193] __filemap_fdatawait_range+0x7d/0xd0
>> [81837.347201] filemap_fdatawait_range+0x12/0x20
>> [81837.347206] __btrfs_wait_marked_extents.isra.0+0xb8/0xf0 [btrfs]
This is from the metadata writeback.
My guess is that, since you're transferring a lot of data, the metadata
also grows very large very fast.
The default commit interval is 30s, and considering your hardware, your
system memory may also be pretty large (at least 32G, I believe?).
That means we can have several gigabytes of metadata waiting to be
written back.
What makes things worse may be the fact that metadata writeback
nowadays is always done in nodesize units, with no merging at all.
Furthermore, since your storage device is an HDD, the low IOPS
performance makes it even worse.
Combining all of those things, we're writing several gigabytes of
metadata, all as 16K nodesize writes with no merging, resulting in very
bad write performance on rotating disks...
IIRC there are some ways to limit how many bytes the page cache can
hold dirty (btrfs metadata also goes through the page cache), which may
improve the situation by not writing too much metadata in one go.
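For reference, the generic knobs I have in mind are the VM dirty-page
limits, plus the btrfs commit interval. A sketch only; the byte values
and the /mnt/destination mountpoint are illustrative examples, not
recommendations:

```shell
# Cap the amount of dirty page cache before writeback kicks in.
# The *_bytes variants override the corresponding *_ratio sysctls.
sysctl -w vm.dirty_background_bytes=$((256 * 1024 * 1024))  # background writeback starts at 256 MiB dirty
sysctl -w vm.dirty_bytes=$((1024 * 1024 * 1024))            # writers are throttled at 1 GiB dirty

# Alternatively, commit more often than the default 30s so each
# transaction has less metadata to flush (hypothetical mountpoint):
mount -o remount,commit=10 /mnt/destination
```

Smaller dirty limits mean each flush has less to write, at the cost of
more frequent (but shorter) writeback bursts.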
[...]
>
> The destination btrfs filesystem has been freshly created (single
> device, no raid) and thus empty before starting the transfer.
> This is output of destination 'btrfs filesystem df' after the transfer:
> Data, single: total=7.10TiB, used=7.06TiB
> System, DUP: total=8.00MiB, used=768.00KiB
> Metadata, DUP: total=8.00GiB, used=7.53GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
>
> The btrfs is on a luks device (cryptsetup -h sha512 -c aes-xts-plain64
> -s 256), but cpu has been idle the whole time (ryzen 5950x).
Considering the transfer finished and you can unmount the fs, it should
really be a false alert. Would you mind sharing how large your RAM is?
32G or even 64G?
Thanks,
Qu
>
> Hope that helps.
> Thank you.
>
Thread overview: 13+ messages
2024-12-08 16:02 clear space cache v1 fails with Unable to find block group for 0 j4nn
2024-12-08 20:26 ` Qu Wenruo
2024-12-08 21:25 ` j4nn
2024-12-08 21:36 ` Qu Wenruo
2024-12-08 22:19 ` j4nn
2024-12-08 22:32 ` Qu Wenruo
2024-12-08 22:50 ` j4nn
2024-12-16 18:52 ` j4nn
2024-12-17 5:31 ` j4nn
2024-12-17 5:50 ` Qu Wenruo [this message]
2024-12-17 6:10 ` j4nn
2024-12-17 6:34 ` j4nn
2024-12-17 6:55 ` Qu Wenruo