From: Qu Wenruo <wqu@suse.com>
To: brainchild@mailbox.org, linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Strange behavior with scrub, quotas, and snapshots
Date: Tue, 28 Apr 2026 08:28:28 +0930 [thread overview]
Message-ID: <0dfb1998-e951-4db7-be8a-987da3a8f471@suse.com> (raw)
In-Reply-To: <SNC6ET.5NSSU3PO7MKD2@mailbox.org>
On 2026/4/28 08:17, brainchild@mailbox.org wrote:
> Currently, scrub operations are not completing properly, so I certainly
> think it is important to try to repair the volume.
>
> Which error do you expect is related to the particular problem,
> concerning scrub?
Can you provide the full "btrfs scrub start -BR" output?
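For reference, a minimal invocation looks like the following; the mount
point is a placeholder, adjust it to your system:

```shell
# Run scrub in the foreground (-B) and print raw per-device
# statistics (-R) when it finishes.
# /mnt is a placeholder for your actual btrfs mount point.
btrfs scrub start -BR /mnt
```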
>
> Is there any evidence that data may have been lost,
Not yet.
> or concern that
> 'fix-device-size' could cause further loss?
That should be mostly safe, but if you're concerned about it, please
update to a newer version of btrfs-progs.
Ubuntu is pretty bad at backporting fixes for btrfs-progs.
> Should the operation be done
> while the device is mounted, or only while not mounted?
Both btrfs check and btrfs rescue must be run with the filesystem
unmounted.
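As a sketch, assuming the filesystem lives on /dev/sdX (the device name
and mount point are placeholders):

```shell
# Make sure the filesystem is not mounted first.
umount /mnt

# btrfs check is read-only by default; it only reports problems.
btrfs check /dev/sdX

# btrfs rescue fix-device-size repairs the device size mismatch
# reported against the super block.
btrfs rescue fix-device-size /dev/sdX
```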
Thanks,
Qu
>
> Thanks.
>
> On Tue, Apr 28 2026 at 07:40:31 AM +09:30:00, Qu Wenruo <wqu@suse.com>
> wrote:
>>
>>
>> On 2026/4/28 06:02, brainchild@mailbox.org wrote:
>>> I have run the check command, which reported a variety of errors.
>>> The output is attached.
>>>
>>> Are any recommendations available to attempt restoring the volume?
>>
>> The super block bytes mismatch is a minor one, which shouldn't affect
>> normal operations.
>>
>> But still you can use "btrfs rescue fix-device-size" to fix the problem.
>>
>>
>> The wrong nbytes can also be minor, but it affects all snapshots
>> containing the inode 16523863.
>>
>> You can either fix it by copying the file to another location (which
>> can be inside the same btrfs), removing the original file, then moving
>> the new copy back. This will need to be done for every snapshot.
>>
>> Or you can try "btrfs check --repair", which will do an in-place fix,
>> but will still break the shared blocks of every snapshot.
>>
>> Overall, I'd strongly recommend removing all unused snapshots before
>> attempting either fix.
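The copy-based fix can be sketched roughly as follows for one snapshot.
The path is a placeholder, and the snapshot must be writable (a
read-only snapshot would first need "btrfs property set" to flip its ro
flag):

```shell
# Rewriting the file allocates a fresh inode with a correct nbytes,
# replacing the shared, corrupted one in this snapshot.
# FILE is a placeholder path inside one affected snapshot.
FILE=/mnt/snapshots/snap1/path/to/file

cp --reflink=never "$FILE" "$FILE.new"  # full copy, new inode
rm "$FILE"
mv "$FILE.new" "$FILE"
```

The same steps would need to be repeated in every snapshot containing
the inode.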
>>
>>>
>>> Thanks.
>>>
>>> On Mon, Apr 27 2026 at 11:35:28 AM +09:30:00, Qu Wenruo
>>> <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2026/4/27 09:22, brainchild@mailbox.org wrote:
>>>>> Hello.
>>>>>
>>>>> I am struggling with a poorly behaved BTRFS volume.
>>>>>
>>>> [...]
>>>>> ---
>>>>>
>>>>> No errors are reported in the kernel log, only warnings about
>>>>> skipping the swap file during scrub.
>>>>
>>>> If you assume the fs has some corruption, none of the above is
>>>> really useful.
>>>> A full "btrfs check" is strongly recommended.
>>>>
>>>>>
>>>>> Second, within the logs generated for Timeshift, a concerning
>>>>> pattern recurs, as in the attached example. Further, during the
>>>>> periods in which such logs are generated, the entire system lags
>>>>> considerably. It is clear that the volume is not healthy.
>>>>
>>>> The lag is mostly caused by qgroup.
>>>> You have a lot of snapshots (shown by the very large snapshot id).
>>>> Every time a large snapshot/subvolume is deleted, btrfs will try to
>>>> disable qgroup to avoid such lag, but if some script/tool decides to
>>>> rescan qgroup while the snapshot/subvolume deletion is still
>>>> ongoing, the lag will be re-introduced.
>>>>
>>>>>
>>>>> I was using a recent 6.x kernel, I believe one of 6.18.x, when the
>>>>> problem emerged. I upgraded to 7.0, finding no improvement in the
>>>>> operation of the volume.
>>>>>
>>>>> Also, I tried initiating the scrub through the most recent static
>>>>> build of the user-space utility (i.e. btrfs-progs), with no
>>>>> improvement.
>>>>>
>>>>> I would like some suggestions for restoring the volume to health,
>>>>> to \x7f\x7f\x7f\x7favoid the need to provision a new volume from scratch.
>>>>
>>>> Run "btrfs check" first; if it reports no error, disable qgroup if
>>>> you have frequent snapshot creation/deletion.
>>
>> So your fsck is mostly fine; the lag is most likely caused by
>> qgroup.
>>
>> If you do not need it, just disable it for good.
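Disabling it is a single command on the mounted filesystem (the mount
point is a placeholder):

```shell
# Disabling qgroups removes the accounting overhead for good.
# /mnt is a placeholder for the mounted btrfs filesystem.
btrfs quota disable /mnt
```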
>>
>> Thanks,
>> Qu
>>
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>>
>>>>> Thank you.
>>>>>
>>>>
>>>
>>
>
>