Linux Btrfs filesystem development
From: brainchild@mailbox.org
To: linux-btrfs@vger.kernel.org
Subject: Re: Strange behavior with scrub, quotas, and snapshots
Date: Mon, 27 Apr 2026 16:32:06 -0400	[thread overview]
Message-ID: <ID66ET.91J1PP272KJY1@mailbox.org> (raw)
In-Reply-To: <b53641af-f231-45b9-9d4f-21e1ee8132c7@gmx.com>

[-- Attachment #1: Type: text/plain, Size: 2009 bytes --]

I have run the check command, which reported a variety of errors. 
The output is attached.

Are there any recommendations for attempting to restore the volume?

Thanks.
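For anyone following along, the invocation sketched below is how the check was run. The device path is the one from the attached log; `btrfs check` must run against an unmounted filesystem, so the script refuses if the device appears mounted and otherwise prints the command rather than executing it (it needs root and the real device):

```shell
# Sketch: guard a read-only btrfs check. The device path is taken from
# the attached log; adjust it for your system.
DEV=/dev/nvme0n1p5

# btrfs check must not run on a mounted filesystem, so check /proc/mounts.
if grep -q "^$DEV " /proc/mounts; then
    echo "refusing: $DEV is mounted; unmount it (or boot a live image) first"
else
    # --readonly is the default mode and never writes to the device.
    echo "btrfs check --readonly $DEV"
fi
```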

On Mon, Apr 27 2026 at 11:35:28 AM +09:30:00, Qu Wenruo 
<quwenruo.btrfs@gmx.com> wrote:
> 
> 
> 在 2026/4/27 09:22, brainchild@mailbox.org 写道:
>> Hello.
>> 
>> I am struggling with a poorly behaved BTRFS volume.
>> 
> [...]
>> ---
>> 
>> No errors are reported in the kernel log, only warnings about 
>> skipping the swap file during scrub.
> 
> If you assume the fs has some corruption, none of the above is really 
> useful.
> A full "btrfs check" is strongly recommended.
> 
>> 
>> Second, within the logs generated for Timeshift, a concerning 
>> pattern recurs, as in the attached example. Further, during the 
>> periods in which logs such as the one attached are generated, the 
>> entire system lags considerably. It is clear that the volume is not 
>> healthy.
> 
> The lag is mostly caused by qgroup.
> You have a lot of snapshots (shown by the super large snapshot ids); 
> every time a large snapshot/subvolume is deleted, btrfs will try to 
> disable qgroup to avoid such lag, but if whatever script/tool decides 
> to rescan qgroup while the snapshot/subvolume deletion is still 
> ongoing, the lag will be re-introduced.
> 
>> 
>> I was using a recent 6.x kernel, I believe one of 6.18.x, when the 
>> problem emerged. I upgraded to 7.0, finding no improvement in the 
>> operation of the volume.
>> 
>> Also, I tried initiating the scrub through the most recent static 
>> build of the user-space utility (i.e. btrfs-progs), with no 
>> improvement.
>> 
>> I would like some suggestions for restoring the volume to health, to 
>> avoid the need to provision a new volume from scratch.
> 
> "btrfs check" first; if no error, disable qgroup if you have frequent 
> snapshot creation/deletion.
> 
> Thanks,
> Qu
> 
>> 
>> Thank you.
>> 
> 


[-- Attachment #2: brainchild_btrfs-check_log.txt --]
[-- Type: text/plain, Size: 1511 bytes --]

[1/8] checking log skipped (none written)
[2/8] checking root items
[3/8] checking extents
super bytes used 743892983808 mismatches actual used 740558856192
ERROR: errors found in extent allocation tree or chunk allocation
[4/8] checking free space tree
[5/8] checking fs roots
root 6855 inode 16523863 errors 400, nbytes wrong
root 58570 inode 16523863 errors 400, nbytes wrong
root 59486 inode 16523863 errors 400, nbytes wrong
root 60333 inode 16523863 errors 400, nbytes wrong
root 60367 inode 16523863 errors 400, nbytes wrong
root 60377 inode 16523863 errors 400, nbytes wrong
root 60383 inode 16523863 errors 400, nbytes wrong
root 60475 inode 16523863 errors 400, nbytes wrong
root 60713 inode 16523863 errors 400, nbytes wrong
root 61351 inode 16523863 errors 400, nbytes wrong
root 61421 inode 16523863 errors 400, nbytes wrong
root 61423 inode 16523863 errors 400, nbytes wrong
root 61425 inode 16523863 errors 400, nbytes wrong
root 61427 inode 16523863 errors 400, nbytes wrong
root 61429 inode 16523863 errors 400, nbytes wrong
root 61431 inode 16523863 errors 400, nbytes wrong
ERROR: errors found in fs roots
Opening filesystem to check...
Checking filesystem on /dev/nvme0n1p5
UUID: bbac86e5-eaba-45bf-bbaa-c2494e11831a
found 740558647296 bytes used, error(s) found
total csum bytes: 473596236
total tree bytes: 6834847744
total fs tree bytes: 5660327936
total extent tree bytes: 529825792
btree space waste bytes: 1514685920
file data blocks allocated: 11027378925568
 referenced 886704037888
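Relating the log back to Qu's advice: once the check situation is resolved, qgroup accounting can be inspected and, under heavy snapshot churn, disabled. A sketch follows; `/mnt` is an assumed mount point (the thread never states one), and the script only prints the commands, since they require root and a mounted btrfs:

```shell
# Sketch: inspect and disable qgroup accounting on a mounted btrfs.
# MNT is an assumed mount point; substitute your own.
MNT=/mnt

# Count subvolumes/snapshots: a very large count (or very large
# subvolume ids, as Qu notes above) suggests heavy snapshot churn.
echo "btrfs subvolume list $MNT | wc -l"

# Show current qgroup usage, then disable accounting entirely.
# Disabling loses per-subvolume usage numbers, but also removes the
# rescan cost behind the lag described in the thread.
echo "btrfs qgroup show $MNT"
echo "btrfs quota disable $MNT"
```

Quotas can later be re-enabled with `btrfs quota enable`, which triggers a full rescan.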


Thread overview: 25+ messages
2026-04-26 23:52 Strange behavior with scrub, quotas, and snapshots brainchild
2026-04-27  2:05 ` Qu Wenruo
2026-04-27 20:32   ` brainchild [this message]
2026-04-27 22:10     ` Qu Wenruo
     [not found]       ` <SNC6ET.5NSSU3PO7MKD2@mailbox.org>
2026-04-27 22:58         ` Qu Wenruo
2026-04-28  0:22           ` brainchild
2026-04-28  1:16             ` Qu Wenruo
2026-04-28  1:21               ` brainchild
2026-04-28  2:33                 ` brainchild
2026-04-28  3:13                   ` Qu Wenruo
2026-04-28  4:03                     ` brainchild
2026-04-28  5:13                       ` Qu Wenruo
2026-04-28  5:29                         ` brainchild
2026-04-28  6:41                           ` Qu Wenruo
2026-04-28 19:30                             ` brainchild
2026-04-28 22:19                               ` brainchild
2026-04-28 22:26                                 ` Qu Wenruo
2026-04-28 22:50                                   ` Qu Wenruo
2026-04-28 22:23                               ` Qu Wenruo
2026-04-28 22:34                                 ` Qu Wenruo
2026-04-29  0:57                                 ` brainchild
2026-04-29  1:11                                   ` Qu Wenruo
2026-04-29  1:16                                     ` brainchild
2026-04-29  1:27                                       ` Qu Wenruo
2026-04-29  2:11                                         ` brainchild
