Linux Btrfs filesystem development
From: Qu Wenruo <wqu@suse.com>
To: brainchild <brainchild@mailbox.org>,
	linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Strange behavior with scrub, quotas, and snapshots
Date: Tue, 28 Apr 2026 10:46:07 +0930
Message-ID: <b86d2b52-5536-447d-9a63-be7f67effffb@suse.com>
In-Reply-To: <20260428002217.Horde.5h6SZuQXFy8tGMhc9x7qKTM@nextcloud.brainspace.site>



On 2026/4/28 09:52, brainchild wrote:
> Just as I was scanning for the name of the particular inode, the volume again began insisting that it has no free space.
>   
> I was able to extract the log for a new scrub, as requested (begun after the volume again became unable to create new files).

Only 10GiB of data was scrubbed, while your data should be 600GiB+.

You mentioned that there is a swap file. How large is that file, and
have you tried disabling the swap file before scrubbing?
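
For reference, temporarily disabling swap and re-running a full foreground
scrub might look roughly like this (run as root; the swap file path and
mount point below are examples, not your actual paths):

```shell
# Show active swap areas and their sizes
swapon --show
# Disable the swap file so scrub no longer skips its extents
swapoff /swapfile
# Run a foreground scrub with raw per-device statistics
btrfs scrub start -BR /mnt
# Re-enable swap once the scrub finishes
swapon /swapfile
```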

Thanks,
Qu

>   
> Thanks.
>   
> "Qu Wenruo" <wqu@suse.com> – April 27, 2026 6:58 PM
>> On 2026/4/28 08:17, brainchild@mailbox.org wrote:
>>> Currently, scrub operations are not completing properly, so I certainly
>>> think it is important to try to repair the volume.
>>>   
>>> Which error do you expect is related to the particular problem,
>>> concerning scrub?
>>   
>> Can you provide the full "btrfs scrub start -BR" output?
>>   
>>>   
>>> Is there any evidence that data may have been lost,
>>   
>> Not yet.
>>   
>>> or concern that
>>> 'fix-device-size' could cause further loss?
>>   
>> That should be mostly safe, but if you're concerned about it, please
>> update to a newer version of btrfs-progs.
>>   
>> Ubuntu is pretty bad at backporting fixes for btrfs-progs.
>>   
>>> Should the operation be done
>>> while the device is mounted, or only while not mounted?
>>   
>> Must be unmounted for btrfs check and btrfs rescue.
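
A minimal offline check-and-fix sequence might look like the following
sketch; the device node and mount point are placeholders for your actual
volume:

```shell
umount /mnt
# Read-only consistency check; safe to run on the unmounted device
btrfs check /dev/sdb1
# Fix the super block device size mismatch reported by check
btrfs rescue fix-device-size /dev/sdb1
mount /dev/sdb1 /mnt
```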
>>   
>> Thanks,
>> Qu
>>   
>>>   
>>> Thanks.
>>>   
>>> On Tue, Apr 28 2026 at 07:40:31 AM +09:30:00, Qu Wenruo <wqu@suse.com>
>>> wrote:
>>>>
>>>>
>>>> On 2026/4/28 06:02, brainchild@mailbox.org wrote:
>>>>> I have run the check command, which reported a variety of errors.
>>>>> The output is attached.
>>>>>
>>>>> Are any recommendations available to attempt restoring the volume?
>>>>
>>>> The super block bytes mismatch is a minor one, which shouldn't affect
>>>> normal operations.
>>>>
>>>> Still, you can use "btrfs rescue fix-device-size" to fix the problem.
>>>>
>>>>
>>>> The incorrect nbytes can be minor too, but it affects all snapshots
>>>> containing the inode 16523863.
>>>>
>>>> You can either fix it by copying the file to another location (which can
>>>> be inside the same btrfs), removing the original file, and moving the new
>>>> copy back. This will need to be done for every snapshot.
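
The per-snapshot copy/remove/move-back fix could be sketched like this; the
snapshot and file paths are hypothetical, and `cp --reflink=never` forces a
full data copy so the replacement shares no extents with the bad inode:

```shell
# Hypothetical path to the affected file inside one snapshot
FILE=/mnt/.snapshots/2026-04-20/path/to/file
cp --reflink=never "$FILE" "$FILE.new"  # full copy, no shared blocks
rm "$FILE"                              # drop the inode with the wrong nbytes
mv "$FILE.new" "$FILE"                  # put the clean copy back
```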
>>>>
>>>> Or you can try "btrfs check --repair", which will do an in-place fix,
>>>> but will still break the shared blocks of every snapshot.
>>>>
>>>> Overall, I'd strongly recommend removing all unused snapshots first
>>>> before doing either fix.
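
Finding and removing unused snapshots could look like the following; the
mount point and snapshot path are examples only:

```shell
# List snapshot subvolumes only (-s), with their ids and paths
btrfs subvolume list -s /mnt
# Delete a snapshot that is no longer needed
btrfs subvolume delete /mnt/.snapshots/2026-03-01
```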
>>>>
>>>>>
>>>>> Thanks.
>>>>>
>>>>> On Mon, Apr 27 2026 at 11:35:28 AM +09:30:00, Qu Wenruo
>>>>> <quwenruo.btrfs@gmx.com> wrote:
>>>>>>
>>>>>>
>>>>>> On 2026/4/27 09:22, brainchild@mailbox.org wrote:
>>>>>>> Hello.
>>>>>>>
>>>>>>> I am struggling with a poorly behaved BTRFS volume.
>>>>>>>
>>>>>> [...]
>>>>>>> ---
>>>>>>>
>>>>>>> No errors are reported in the kernel log, only warnings about
>>>>>>> skipping the swap file during scrub.
>>>>>>
>>>>>> If you assume the fs has some corruption, none of the above is
>>>>>> really useful.
>>>>>> A full "btrfs check" is strongly recommended.
>>>>>>
>>>>>>>
>>>>>>> Second, within the logs generated for Timeshift, a concerning
>>>>>>> pattern recurs, as in the attached example. Further, during the
>>>>>>> periods in which such logs are generated, the entire system lags
>>>>>>> considerably. It is clear that the volume is not healthy.
>>>>>>
>>>>>> The lag is mostly caused by qgroup.
>>>>>> You have a lot of snapshots (shown by the very large snapshot id).
>>>>>> Every time a large snapshot/subvolume is deleted, btrfs will try
>>>>>> to disable qgroup to avoid such lag, but if some script or tool
>>>>>> decides to rescan qgroup while the snapshot/subvolume deletion is
>>>>>> still ongoing, the lag will be re-introduced.
>>>>>>
>>>>>>>
>>>>>>> I was using a recent 6.x kernel, I believe one of 6.18.x, when the
>>>>>>> problem emerged. I upgraded to 7.0, finding no improvement
>>>>>>> in the operation of the volume.
>>>>>>>
>>>>>>> Also, I tried initiating the scrub through the most recent static
>>>>>>> build of the user-space utility (i.e. btrfs-progs), with no
>>>>>>> improvement.
>>>>>>>
>>>>>>> I would like some suggestions for restoring the volume to health,
>>>>>>> to avoid the need to provision a new volume from scratch.
>>>>>>
>>>>>> Run "btrfs check" first; if it reports no errors, disable qgroup if
>>>>>> you have frequent snapshot creation/deletion.
>>>>
>>>> So your fsck is mostly fine; the lag is most likely caused
>>>> by qgroup.
>>>>
>>>> If you do not need it, just disable it for good.
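
Disabling qgroups is a single command per filesystem; the mount point here
is an example:

```shell
# Show current qgroup state (errors out if quotas are already disabled)
btrfs qgroup show /mnt
# Turn quota accounting off for good
btrfs quota disable /mnt
```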
>>>>
>>>> Thanks,
>>>> Qu
>>>>
>>>>>>
>>>>>> Thanks,
>>>>>> Qu
>>>>>>
>>>>>>>
>>>>>>> Thank you.
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>   
>>>
>>   
>>



Thread overview: 25+ messages
2026-04-26 23:52 Strange behavior with scrub, quotas, and snapshots brainchild
2026-04-27  2:05 ` Qu Wenruo
2026-04-27 20:32   ` brainchild
2026-04-27 22:10     ` Qu Wenruo
     [not found]       ` <SNC6ET.5NSSU3PO7MKD2@mailbox.org>
2026-04-27 22:58         ` Qu Wenruo
2026-04-28  0:22           ` brainchild
2026-04-28  1:16             ` Qu Wenruo [this message]
2026-04-28  1:21               ` brainchild
2026-04-28  2:33                 ` brainchild
2026-04-28  3:13                   ` Qu Wenruo
2026-04-28  4:03                     ` brainchild
2026-04-28  5:13                       ` Qu Wenruo
2026-04-28  5:29                         ` brainchild
2026-04-28  6:41                           ` Qu Wenruo
2026-04-28 19:30                             ` brainchild
2026-04-28 22:19                               ` brainchild
2026-04-28 22:26                                 ` Qu Wenruo
2026-04-28 22:50                                   ` Qu Wenruo
2026-04-28 22:23                               ` Qu Wenruo
2026-04-28 22:34                                 ` Qu Wenruo
2026-04-29  0:57                                 ` brainchild
2026-04-29  1:11                                   ` Qu Wenruo
2026-04-29  1:16                                     ` brainchild
2026-04-29  1:27                                       ` Qu Wenruo
2026-04-29  2:11                                         ` brainchild
