From: Boris Burkov <boris@bur.io>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH v4 4/4] btrfs: cap shrink_delalloc iterations to 128M
Date: Fri, 24 Apr 2026 15:23:01 -0700
Message-ID: <20260424222301.GA2978226@zen.localdomain>
In-Reply-To: <8bf661bc-2536-4310-a5cc-638b5cd9d25c@gmx.com>
On Sat, Apr 25, 2026 at 07:51:01AM +0930, Qu Wenruo wrote:
>
>
> 在 2026/4/25 07:40, Boris Burkov 写道:
> > On Sat, Apr 25, 2026 at 07:36:49AM +0930, Qu Wenruo wrote:
> [...]
> > > At least we got something that both of us can reproduce.
> > >
> > > Another thing is, for g/027 on arm64 I'm also actively monitoring the CPU
> > > usage through top.
> > >
> > > Have you experienced very high (~100%) CPU usage on a kworker during g/027?
> > >
> >
> > No :(
> > As far as I can tell the system is stuck waiting on a commit. I'll keep
> > trying to repro your symptom.
> >
> > I'm curious if it goes away for you with Sun's proposed fix, something
> > like setting nr_pages to at least 1 after those two min() operations.
>
> I go with the following diff:
>
> diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
> index e931deb3d013..2c5214b24239 100644
> --- a/fs/btrfs/space-info.c
> +++ b/fs/btrfs/space-info.c
> @@ -770,6 +770,9 @@ static void shrink_delalloc(struct btrfs_space_info *space_info,
>  		u64 items = calc_reclaim_items_nr(fs_info, iter_reclaim) * 2;
>  		int async_pages;
>
> +		if (nr_pages == 0)
> +			nr_pages = 1;
> +
>  		btrfs_start_delalloc_roots(fs_info, nr_pages, true);
>
> /*
>
>
> It solves the dead-looping kworker on arm64; now it's several different
> kworkers each taking around 5~15% CPU, along with 027 itself taking CPU time.
>
> But unfortunately the test case itself still doesn't look like it will
> finish any time soon.
>
> I believe the old 3 loops limit is really what makes the difference.
> It's ugly, but at least it seems to work.
>
> Thanks,
> Qu
Thanks for re-testing. Glad to hear that at least fixes the dead loop.
Honestly, I should have known better than to include an unbounded loop.
I even suspected it was a bad idea while writing it, but convinced myself
it "must make progress one extent at a time". Obviously I overlooked the
min() bug too.
Thread overview: 18+ messages
2026-04-09 17:48 [PATCH v4 0/4] btrfs: improve stalls under sudden writeback Boris Burkov
2026-04-09 17:48 ` [PATCH v4 1/4] btrfs: reserve space for delayed_refs in delalloc Boris Burkov
2026-04-10 16:07 ` Filipe Manana
2026-04-09 17:48 ` [PATCH v4 2/4] btrfs: account for compression in delalloc extent reservation Boris Burkov
2026-04-09 17:48 ` [PATCH v4 3/4] btrfs: make inode->outstanding_extents a u64 Boris Burkov
2026-04-13 18:43 ` David Sterba
2026-04-09 17:48 ` [PATCH v4 4/4] btrfs: cap shrink_delalloc iterations to 128M Boris Burkov
2026-04-24 6:38 ` Qu Wenruo
2026-04-24 9:48 ` Sun YangKai
2026-04-24 10:07 ` Qu Wenruo
2026-04-24 15:26 ` Boris Burkov
2026-04-24 20:11 ` Boris Burkov
2026-04-24 22:06 ` Qu Wenruo
2026-04-24 22:10 ` Boris Burkov
2026-04-24 22:21 ` Qu Wenruo
2026-04-24 22:23 ` Boris Burkov [this message]
2026-04-24 22:59 ` Qu Wenruo
2026-04-13 18:41 ` [PATCH v4 0/4] btrfs: improve stalls under sudden writeback David Sterba