Linux Btrfs filesystem development
From: Qu Wenruo <wqu@suse.com>
To: brainchild@mailbox.org, linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Strange behavior with scrub, quotas, and snapshots
Date: Wed, 29 Apr 2026 08:20:38 +0930	[thread overview]
Message-ID: <762ea596-a2b0-4e07-91ef-225a9298c3a0@suse.com> (raw)
In-Reply-To: <d3e1af7d-fa08-407b-894b-46d6ba3626cd@suse.com>



On 2026/4/29 07:56, Qu Wenruo wrote:
> 
> 
> On 2026/4/29 07:49, brainchild@mailbox.org wrote:
>> Further to the comments in the previous message, I have also found 
>> some messages in the kernel log from the balance operations, which 
>> may be relevant.
>>
>> The console output of the command is "ERROR: error during balancing 
>> '/': No space left on device"; the corresponding kernel messages are 
>> shown below.
>>
>> ---
>>
>> balance: start -musage=50 -susage=50
>> BTRFS info (device nvme0n1p5): relocating block group 4126692868096 
>> flags metadata|dup
>> BTRFS info (device nvme0n1p5): found 9279 extents, stage: move data 
>> extents
>> BTRFS info (device nvme0n1p5): relocating block group 4126155997184 
>> flags metadata|dup
>> BTRFS info (device nvme0n1p5): found 6365 extents, stage: move data 
>> extents
>> BTRFS info (device nvme0n1p5): relocating block group 4125149364224 
>> flags metadata|dup
>> BTRFS info (device nvme0n1p5): found 10145 extents, stage: move data 
>> extents
>> BTRFS info (device nvme0n1p5): relocating block group 4124612493312 
>> flags metadata|dup
>> BTRFS info (device nvme0n1p5): found 11487 extents, stage: move data 
>> extents
>> BTRFS info (device nvme0n1p5): 1 enospc errors during balance
>> BTRFS info (device nvme0n1p5): balance: ended with status: -28
> 
> From the dmesg, you're relocating only metadata block groups.
> 
> Meanwhile, all the free space is inside data block groups; you need to 
> balance *only* data block groups to free up space for metadata.
> 
> Not the opposite.

And I forgot to mention: balance is also affected by swapfiles.

Just like scrub, a block group (1GiB) will be completely skipped if 
it contains any extent of an active swapfile.

So your previous balance runs may also have been screwed up by your 
swapfile.
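
For example, one way to run a data-only balance without the swapfile 
pinning block groups is roughly the following (the path /swap/swapfile 
is only an example; use whatever "swapon --show" reports on your 
system):

    # temporarily deactivate the swapfile so its block groups can be relocated
    swapoff /swap/swapfile

    # relocate only data block groups, returning their space to the unallocated pool
    btrfs balance start -dusage=50 /

    # re-enable swap afterwards
    swapon /swap/swapfile

This is only a sketch, not something I have verified on your setup.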

> 
>>
>>
>> On Tue, Apr 28 2026 at 03:30:00 PM -04:00:00, brainchild@mailbox.org 
>> wrote:
>>>
>>> On Tue, Apr 28 2026 at 04:11:05 PM +09:30:00, Qu Wenruo 
>>> <wqu@suse.com> wrote:
>>>>
>>>> I strongly recommend against using swap files on btrfs, as you have 
>>>> already experienced the limit on scrub, and I believe a lot of end 
>>>> users are not aware of all the limits when using a swap file on 
>>>> btrfs; please check the long list of limitations in "SWAPFILE 
>>>> SUPPORT" of btrfs(5).
>>>
>>> Is it expected that the scrub operation cannot function properly if 
>>> the volume has a swap file? I had never before observed such a 
>>> problem, nor found any mention of it in the documentation.
>>>
>>> The specific documented restrictions for the swap file seem 
>>> completely compatible with my use: a single partition with no data 
>>> duplication. I have no need for spanning devices or duplicating data 
>>> on this particular system.
>>>
>>>> Any dmesg of those RO flips? That indicates the fs flipped read-only, 
>>>> which is a huge problem by itself.
>>>
>>> No. There are no kernel messages reporting errors for the file 
>>> system or switches to read-only.
>>>
>>>> Especially with your initial info, there should be enough data 
>>>> space; metadata space is less ideal, but should be enough.
>>>
>>> I have read that the space allocated for metadata is expanded as 
>>> needed. Why would problems follow from too little space being allocated?
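
(Side note: metadata chunks are indeed allocated on demand, but only 
while the filesystem still has unallocated space; once everything has 
been handed out to data chunks, metadata cannot grow any further. A 
quick way to check this, assuming the filesystem from the log above is 
mounted at /:

    # look at the "Device unallocated" line and the Metadata section
    btrfs filesystem usage /

    # per-profile breakdown of data vs metadata allocation and usage
    btrfs filesystem df /
)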
>>>
>>>> Considering how many snapshots you have (triggering qgroup lag), I 
>>>> strongly recommend removing unused snapshots to free up space.
>>>>
>>>> After freeing up enough space, try to balance data block groups 
>>>> to make room for future metadata usage.
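
(A minimal sketch of that sequence; the snapshot path below is only a 
placeholder, list your own snapshots first:

    # list snapshots on the filesystem
    btrfs subvolume list -s /

    # delete snapshots that are no longer needed (example path only)
    btrfs subvolume delete /snapshots/old-snapshot

    # then compact data block groups so freed space returns to the unallocated pool
    btrfs balance start -dusage=50 /
)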
>>>
>>> The situation with balance is quite confusing.
>>>
>>> The problem with the reported lack of free space first occurred 
>>> several weeks ago. At that time, I deleted snapshots and ran balance 
>>> operations with incrementally higher usage values for data and 
>>> metadata. By the end, I had run the operation, without any reported 
>>> failure, with values as high as 95%. Normally such an operation 
>>> would take a long time, but in my case it finished in less than a 
>>> minute. Also, by the end, only about ten block groups in total had 
>>> actually been reported as moved.
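
(For reference, that kind of incremental run is often scripted roughly 
as follows; the thresholds here are only examples:

    # retry balance with progressively higher usage filters
    for u in 10 25 50 75 95; do
        btrfs balance start -dusage=$u -musage=$u /
    done
)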
>>>
>>> Perhaps my installation of btrfsd has been successfully maintaining 
>>> the balance for the volume. The system logs are not extensive enough 
>>> for me to know when it last performed any operations.
>>>
>>> Regardless, it seems that the general problems are not being 
>>> resolved by invocations of balance.
>>>
>>
>>
>>
> 
> 



Thread overview: 25+ messages
2026-04-26 23:52 Strange behavior with scrub, quotas, and snapshots brainchild
2026-04-27  2:05 ` Qu Wenruo
2026-04-27 20:32   ` brainchild
2026-04-27 22:10     ` Qu Wenruo
     [not found]       ` <SNC6ET.5NSSU3PO7MKD2@mailbox.org>
2026-04-27 22:58         ` Qu Wenruo
2026-04-28  0:22           ` brainchild
2026-04-28  1:16             ` Qu Wenruo
2026-04-28  1:21               ` brainchild
2026-04-28  2:33                 ` brainchild
2026-04-28  3:13                   ` Qu Wenruo
2026-04-28  4:03                     ` brainchild
2026-04-28  5:13                       ` Qu Wenruo
2026-04-28  5:29                         ` brainchild
2026-04-28  6:41                           ` Qu Wenruo
2026-04-28 19:30                             ` brainchild
2026-04-28 22:19                               ` brainchild
2026-04-28 22:26                                 ` Qu Wenruo
2026-04-28 22:50                                   ` Qu Wenruo [this message]
2026-04-28 22:23                               ` Qu Wenruo
2026-04-28 22:34                                 ` Qu Wenruo
2026-04-29  0:57                                 ` brainchild
2026-04-29  1:11                                   ` Qu Wenruo
2026-04-29  1:16                                     ` brainchild
2026-04-29  1:27                                       ` Qu Wenruo
2026-04-29  2:11                                         ` brainchild
