Linux Btrfs filesystem development
From: Qu Wenruo <wqu@suse.com>
To: brainchild@mailbox.org, linux-btrfs@vger.kernel.org
Subject: Re: Strange behavior with scrub, quotas, and snapshots
Date: Tue, 28 Apr 2026 07:40:31 +0930	[thread overview]
Message-ID: <3f28ce8f-25a2-4972-97de-b995799cea40@suse.com> (raw)
In-Reply-To: <ID66ET.91J1PP272KJY1@mailbox.org>



On 2026/4/28 06:02, brainchild@mailbox.org wrote:
> I have run the check command, which reported a variety of errors.
> The output is attached.
> 
> Are any recommendations available to attempt restoring the volume?

The super block bytes mismatch is a minor one, which shouldn't affect 
normal operations.

But you can still use "btrfs rescue fix-device-size" to fix the problem.
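For reference, that could look like the following sketch ("/dev/sdX" and "/mnt" are placeholders for the actual device and mount point; fix-device-size must run on an unmounted filesystem):

```shell
# /dev/sdX is a placeholder -- substitute the real btrfs device.
# The filesystem must be unmounted before running fix-device-size.
umount /mnt
btrfs rescue fix-device-size /dev/sdX

# Re-check afterwards (read-only mode, makes no changes):
btrfs check --readonly /dev/sdX
```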


The wrong nbytes can be minor too, but it affects all snapshots 
containing inode 16523863.

You can fix it by copying the file to another location (which can be 
inside the same btrfs), removing the original file, and moving the new 
copy back. This will need to be done for every snapshot.
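The copy/remove/move-back sequence could be sketched like this. The paths are hypothetical; "--reflink=never" (GNU cp) forces a full data copy so new extents are written, and a read-only snapshot would first need "btrfs property set <snap> ro false":

```shell
# Demo of the sequence in a temporary directory; in practice F would be
# the path of inode 16523863 inside each snapshot (hypothetical path).
d=$(mktemp -d)
printf 'file data' > "$d/victim"
F="$d/victim"

cp --reflink=never "$F" "$F.new"   # full copy, no extent sharing
rm "$F"                            # drop the inode with the bad nbytes
mv "$F.new" "$F"                   # put the clean copy back in place
```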

Or you can try "btrfs check --repair", which will do an in-place fix 
but will still break the block sharing with every snapshot.

Overall, I'd strongly recommend removing all unused snapshots before 
doing either fix.
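Removing the unused snapshots might look like the following (the snapshot path is hypothetical; note that subvolume deletion is asynchronous, so it helps to wait for the cleaner to finish before repairing):

```shell
# List subvolumes to identify snapshots you no longer need:
btrfs subvolume list /mnt

# /mnt/snapshots/old-1 is a hypothetical path -- substitute your own:
btrfs subvolume delete /mnt/snapshots/old-1

# Deletion happens in the background; block until cleanup completes:
btrfs subvolume sync /mnt
```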

> 
> Thanks.
> 
> On Mon, Apr 27 2026 at 11:35:28 AM +09:30:00, Qu Wenruo 
> <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2026/4/27 09:22, brainchild@mailbox.org wrote:
>>> Hello.
>>>
>>> I am struggling with a poorly behaved BTRFS volume.
>>>
>> [...]
>>> ---
>>>
>>> No errors are reported in the kernel log, only warnings about 
>>> skipping the swap file during scrub.
>>
>> If you assume the fs has some corruption, none of the above is really 
>> useful.
>> A full "btrfs check" is strongly recommended.
>>
>>>
>>> Second, within the logs generated for Timeshift, a concerning 
>>> pattern recurs, as in the attached example. Further, during the 
>>> periods in which such logs are generated, the entire system lags 
>>> considerably. It is clear that the volume is not healthy.
>>
>> The lag is mostly caused by qgroup.
>> You have a lot of snapshots (shown by the super large snapshot id).
>> Every time a large snapshot/subvolume is deleted, btrfs will try to 
>> disable qgroup to avoid such lag, but if some script/tool decides to 
>> rescan qgroup while the snapshot/subvolume deletion is still 
>> ongoing, the lag will be re-introduced.
>>
>>>
>>> I was using a recent 6.x kernel, I believe one of 6.18.x, when the 
>>> problem emerged. I upgraded to 7.0, finding no improvement in the 
>>> operation of the volume.
>>>
>>> Also, I tried initiating the scrub through the most recent static 
>>> build of the user-space utility (i.e. btrfs-progs), with no 
>>> improvement.
>>>
>>> I would like some suggestions for restoring the volume to health, 
>>> to avoid the need to provision a new volume from scratch.
>>
>> Run "btrfs check" first; if it reports no error, disable qgroup if 
>> you have frequent snapshot creation/deletion.

So your fsck result is mostly fine; the lag is most likely caused by 
qgroup.

If you do not need it, just disable it for good.
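Disabling it would look roughly like this ("/mnt" is a placeholder for the filesystem's mount point):

```shell
# /mnt is a placeholder -- substitute the actual mount point.
# Show the current qgroup state (fails if quotas are not enabled):
btrfs qgroup show /mnt

# Disable quotas for good; this also removes the qgroup tree:
btrfs quota disable /mnt
```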

Thanks,
Qu

>>
>> Thanks,
>> Qu
>>
>>>
>>> Thank you.
>>>
>>
> 


