From: Skirnir Torvaldsson <skirnir.torvaldsson@gmail.com>
To: Andrei Borzenkov <arvidjaar@gmail.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: support request: btrfs df reports drive is out of space, cannot find what occupies it
Date: Fri, 19 Apr 2024 20:27:20 +0500
Message-ID: <da77921d-37eb-4116-91d3-80aa592e76da@gmail.com>
In-Reply-To: <CAA91j0WFKw_TZMLN=3NhdtnjNx5g6rcbM+gGVF+BGOKhG6-SxQ@mail.gmail.com>
> On Fri, Apr 19, 2024 at 4:18 PM Skirnir Torvaldsson
> <skirnir.torvaldsson@gmail.com> wrote:
>> With all due respect, 38G is the grand total, so it is 38G out of
>> the 78G of reported data.
>> There is one thing I've noticed that troubles me:
>>
>> root@next:/home/support/btrfs-list-2.3# ./btrfs-list
>> NAME                 TYPE     REFER     EXCL      MOUNTPOINT
>> NEXT_ROOTFS          fs       -         78.37G    (single/dup, 15.03G/95.46G free, 15.74%)
>> [main]               mainvol  16.00k    16.00k    /
>> @System              subvol   16.00k    16.00k    /.snapshots
>> @Logs                subvol   16.00k    16.00k    /next/logs/.snapshots
>> @Logs/current        subvol   4.70G     4.70G     /next/logs
>> @AppData             subvol   16.00k    16.00k    /next/appdata/.snapshots
>> @AppData/current     subvol   370.14M   370.14M   /next/appdata
>> @AppData/var         subvol   16.00k    16.00k
>> @Databases           subvol   16.00k    16.00k    /next/databases/.snapshots
>> @Databases/current   subvol   754.95M   754.95M   /next/databases
>> @MessageBus          subvol   16.00k    16.00k    /next/mbus/.snapshots
>> @MessageBus/current  subvol   67.70G    67.70G    /next/mbus
>> @Updates             subvol   16.00k    16.00k    /next/updates/.snapshots
>> @Updates/current     subvol   1.81G     1.81G     /next/updates
>> @SystemData          subvol   16.00k    16.00k    /next/systemdata/.snapshots
>> @SystemData/current  subvol   1.21G     1.21G     /next/systemdata
>> @System/prev         subvol   1.48G     1.48G
>> @System/current      subvol   443.27M   443.27M   /
>> root@next:/home/support/btrfs-list-2.3# du -hd1 /next/mbus
>> 0 /next/mbus/.snapshots
>> 1.4G /next/mbus/redpanda
>> 1.4G /next/mbus
>>
>> So the MessageBus subvolume occupies 67GiB (?), yet I fail to
>> understand why this space is not accounted for by du, how I can
>> reclaim it, and how I can limit it in the future.
>>
> Could be a variation of
>
> https://lore.kernel.org/linux-btrfs/0f4a5a08fe9c4a6fe1bfcb0785691a7532abb958.camel@scientia.org/
Thank you so much; this does seem to be my case:
root@next:/next/mbus/redpanda/data# compsize .
Processed 3490 files, 3489 regular extents (3489 refs), 1 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       99%          67G          67G         209M
none       100%          67G          67G         177M
zstd         8%         4.0K          48K          48K
prealloc   100%         32M          32M          31M
And these 67GiB are what btdu reports as "unreachable": per compsize,
the old extents still occupy 67GiB on disk while only ~209MiB is
referenced by live files. However, manual defragmentation has no
effect. Is there anything else I could try, short of deleting these
files completely?
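
For the record, here is what I plan to try next, sketched from my
reading of the linked thread (untested, and the 256M target size is
just a guess; I am assuming a full rewrite allocates fresh, compact
extents and lets the old pinned ones be freed once nothing, including
any snapshot under /next/mbus/.snapshots, still references them):

# Retry defrag with an explicit large target extent size first
# (my earlier attempt used the default, which may skip large extents):
btrfs filesystem defragment -r -t 256M /next/mbus/redpanda/data

# If that still has no effect: stop the writer (redpanda), then
# rewrite each file without reflinks so new extents are allocated:
cd /next/mbus/redpanda/data
find . -type f -exec sh -c '
  for f; do
    cp --reflink=never --preserve=all "$f" "$f.rewrite" &&
    mv "$f.rewrite" "$f"
  done' sh {} +

# Verify that disk usage now matches the referenced size:
compsize .

Does that look sane, or is there a less invasive route?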
>
>
>>> On Fri, Apr 19, 2024 at 11:40 AM Skirnir Torvaldsson
>>> <skirnir.torvaldsson@gmail.com> wrote:
>>>> Dear btrfs experts,
>>>>
>>>> Could you please help me sort out the following situation:
>>>>
>>>> btrfs df reports my 100GB device is almost out of space (which agrees with the results produced by the standard "df"):
>>>>
>>>> root@next:/home/support# btrfs fi df /
>>>> Data, single: total=82.00GiB, used=78.23GiB
>>>> System, DUP: total=32.00MiB, used=16.00KiB
>>>> Metadata, DUP: total=1.00GiB, used=153.70MiB
>>>> GlobalReserve, single: total=68.45MiB, used=0.00B
>>>> root@next:/home/support# df -h /
>>>> Filesystem Size Used Avail Use% Mounted on
>>>> /dev/sda3 96G 79G 16G 84% /
>>>>
>>>> However when I try to locate files to delete with du that's what I get:
>>>>
>>>> root@next:/home/support# du -hd1 /
>>>> 70M /boot
>>>> 0 /dev
>>>> 2.2G /.snapshots
>>>> 14M /bin
>>>> 4.5M /etc
>>>> 2.5M /home
>>>> 348M /lib
>>>> 4.0K /lib64
>>>> 0 /media
>>>> 0 /mnt
>>>> 0 /opt
>>>> 0 /proc
>>>> 40K /root
>>>> 2.7M /run
>>>> 12M /sbin
>>>> 0 /srv
>>>> 0 /sys
>>>> 0 /tmp
>>>> 566M /usr
>>>> 5.0G /var
>>>> 29G /next
>>>> 38G /
>>>>
>>>> I.e., almost 40GiB seems to have just gone somewhere.
>>> Huh?
>>>
>>> 2.2G + 5.0G + 29G + 38G == 74.2G out of 78G reported for DATA. What
>>> 40G are you talking about?
>>>
>>> If you have some other mount points, you could start with explaining
>>> your storage layout first.
>>>
>>>> Am I doing something wrong? Is there a problem, or a piece of theory I'm missing? Kindly advise.
>>>>
>>>> +++++++++++++++++++++++++++++++++++++
>>>> root@next:~# uname -a
>>>> Linux next 5.10.0-28-amd64 #1 SMP Debian 5.10.209-2 (2024-01-31) x86_64 GNU/Linux
>>>> root@next:~# btrfs --version
>>>> btrfs-progs v5.10.1
>>>> root@next:~# btrfs fi show
>>>> Label: 'NEXT_ROOTFS' uuid: abc71bdb-c570-461d-a28a-54294a646089
>>>> Total devices 1 FS bytes used 78.37GiB
>>>> devid 1 size 95.46GiB used 84.06GiB path /dev/sda3
>>>>
>>>> root@next:~# btrfs fi df /
>>>> Data, single: total=82.00GiB, used=78.22GiB
>>>> System, DUP: total=32.00MiB, used=16.00KiB
>>>> Metadata, DUP: total=1.00GiB, used=153.64MiB
>>>> GlobalReserve, single: total=68.45MiB, used=0.00B
>>>>
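
P.S. On limiting this in the future: if I read the documentation
correctly, qgroups can cap a subvolume's referenced size. A sketch
(the 10G figure is only an example, and I understand quotas add some
overhead):

# Enable quotas; a qgroup 0/<subvolid> is created per subvolume:
btrfs quota enable /
# Cap the referenced size of the subvolume mounted at /next/mbus:
btrfs qgroup limit 10G /next/mbus
# Inspect per-qgroup usage:
btrfs qgroup show -p /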