public inbox for linux-btrfs@vger.kernel.org
From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: j4nn <j4nn.xda@gmail.com>, Qu Wenruo <wqu@suse.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: clear space cache v1 fails with Unable to find block group for 0
Date: Tue, 17 Dec 2024 17:25:47 +1030	[thread overview]
Message-ID: <26ab3ad4-cd0e-4913-acbd-deeaba50f51c@gmx.com> (raw)
In-Reply-To: <CADhu7iD2U1R9ssAruFVu6s+xvkcAtAr7dYj26cRJ5f5pLNmiyw@mail.gmail.com>



On 2024/12/17 17:04, j4nn wrote:
> On Tue, 17 Dec 2024 at 07:10, j4nn <j4nn.xda@gmail.com> wrote:
>>
>> On Tue, 17 Dec 2024 at 06:50, Qu Wenruo <wqu@suse.com> wrote:
>>> On 2024/12/17 16:01, j4nn wrote:
>>>> On Mon, 16 Dec 2024 at 19:52, j4nn <j4nn.xda@gmail.com> wrote:
>>>>>
>>>
>>> This is from the metadata writeback.
>>>
>>> My guess is that, since you're transferring a lot of data, the
>>> metadata also grows very large very fast.
>>>
>>> The default commit interval is 30s, and considering your hardware,
>>> your system memory may also be pretty large (at least 32G, I believe?).
>>>
>>> That means we can have several gigabytes of metadata waiting to be
>>> written back.
>>>
>>> What makes things worse is the fact that metadata writeback
>>> nowadays is always done in nodesize units, with no merging at all.
>>>
>>> Furthermore, since your storage device is an HDD, its low IOPS
>>> performance makes things even worse.
>>>
>>>
>>> Combining all of those, we're writing several gigabytes of
>>> metadata, all in unmerged 16K nodesize writes, resulting in very
>>> bad write performance on rotating disks...
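As a rough quantification of the writeback cost described above (all numbers are assumed for illustration, not measured from this system):

```shell
# Back-of-envelope: 3 GiB of dirty metadata, written as individual
# 16 KiB nodes with no merging, on an HDD doing ~200 random IOPS.
# All three inputs are assumptions chosen only to show the order of
# magnitude, not values reported in this thread.
metadata_bytes=$((3 * 1024 * 1024 * 1024))
node_size=$((16 * 1024))
iops=200

ios=$((metadata_bytes / node_size))   # number of 16 KiB writes
seconds=$((ios / iops))               # time at the assumed IOPS rate
echo "$ios writes, ~$seconds seconds"  # 196608 writes, ~983 seconds
```

Roughly a quarter of an hour of pure random writes, which is consistent with hitting the kernel's hung-task timeout during a flush.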
>>>
>>> IIRC there are ways to limit how many bytes the page cache may
>>> hold (btrfs metadata also goes through the page cache), which may
>>> improve the situation by not writing too much metadata in one go.
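For example, the `vm.dirty_background_bytes` and `vm.dirty_bytes` sysctls cap dirty page-cache bytes system-wide. A sketch, assuming the 96 GiB of RAM mentioned later in the thread; the divisors are illustrative starting points, not tested tuning:

```shell
# Assumed machine size: 96 GiB RAM (see j4nn's reply below).
ram_bytes=$((96 * 1024 * 1024 * 1024))

# ~1.5 GiB dirty: kick off background writeback early.
dirty_bg=$((ram_bytes / 64))
# ~3 GiB dirty: hard limit where writers get throttled.
dirty_hard=$((ram_bytes / 32))

echo "vm.dirty_background_bytes=$dirty_bg"
echo "vm.dirty_bytes=$dirty_hard"

# To actually apply (needs root):
#   sysctl -w vm.dirty_background_bytes=$dirty_bg
#   sysctl -w vm.dirty_bytes=$dirty_hard
```

Note that setting the `*_bytes` variants automatically zeroes the corresponding `*_ratio` sysctls, and vice versa.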
>>>
>>> [...]
>>> Considering the transfer finished and you can unmount the fs, it
>>> should really be a false alarm. Would you mind sharing how large
>>> your RAM is? 32G or even 64G?
>>
>> Yes, you are right, using both :-)
>> That is 96GB of RAM...
>>
>> Thank you for the explanations, particularly about the "16K
>> nodesize, no merging" metadata writes.
>> The reported 'time' of the transfer included a 'sync' command after
>> the btrfs receive.
>
> I guess the hung-task backtrace that appeared while creating the
> free space tree cache had the same cause as this simple btrfs send
> and receive?

I can only be more or less certain about the receive end (receive
mostly just writes data into the fs, as most of the work is done in
user space with buffered writes).

The v2 cache rebuilding process is indeed problematic, but for a
different reason.

Rebuilding the v2 cache can cause a huge hang, that's for sure.
We are using a single transaction to build the new v2 cache for each
block group; no wonder it hangs.

Anyway, I'll change the rebuilding process to at least do
multi-transactional updates, to avoid holding one transaction for too
long.

The rebuilding itself can still be very time-consuming, though.

Thanks,
Qu


Thread overview: 13+ messages
2024-12-08 16:02 clear space cache v1 fails with Unable to find block group for 0 j4nn
2024-12-08 20:26 ` Qu Wenruo
2024-12-08 21:25   ` j4nn
2024-12-08 21:36     ` Qu Wenruo
2024-12-08 22:19       ` j4nn
2024-12-08 22:32         ` Qu Wenruo
2024-12-08 22:50           ` j4nn
2024-12-16 18:52             ` j4nn
2024-12-17  5:31               ` j4nn
2024-12-17  5:50                 ` Qu Wenruo
2024-12-17  6:10                   ` j4nn
2024-12-17  6:34                     ` j4nn
2024-12-17  6:55                       ` Qu Wenruo [this message]
