From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: Christoph Anton Mitterer <calestyo@scientia.org>,
linux-btrfs@vger.kernel.org
Subject: Re: btrfs thinks fs is full, though 11GB should be still free
Date: Tue, 12 Dec 2023 07:27:23 +1030
Message-ID: <253c6b4e-2b33-4892-8d6f-c0f783732cb6@gmx.com>
In-Reply-To: <0f4a5a08fe9c4a6fe1bfcb0785691a7532abb958.camel@scientia.org>
On 2023/12/12 06:56, Christoph Anton Mitterer wrote:
> Hey.
>
> I think the following might have already happened the 2nd time. I have
> a Debian stable with kernel 6.1.55 running Prometheus.
>
> There's one separate btrfs, just for Prometheus time series database.
>
>
> # btrfs check /dev/vdb
> Opening filesystem to check...
> Checking filesystem on /dev/vdb
> UUID: decdc81d-7cc4-431c-ab84-e03771f6de5d
> [1/7] checking root items
> [2/7] checking extents
> [3/7] checking free space tree
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 42427637760 bytes used, no error found
> total csum bytes: 27362284
> total tree bytes: 32686080
> total fs tree bytes: 1982464
> total extent tree bytes: 360448
> btree space waste bytes: 2839648
> file data blocks allocated: 54877196288
> referenced 28014796800
That's pretty good.
> # mount /data/main/
>
>
> # df | grep main
> /dev/vdb btrfs 43G 43G 25k 100% /data/main
>
> => df thinks it's full
>
>
> # btrfs filesystem usage /data/main/
> Overall:
> Device size: 40.00GiB
> Device allocated: 40.00GiB
> Device unallocated: 1.00MiB
The device is already fully allocated at the chunk level, so no new
chunk can be created.
> Device missing: 0.00B
> Device slack: 0.00B
> Used: 39.54GiB
> Free (estimated): 24.00KiB (min: 24.00KiB)
> Free (statfs, df): 24.00KiB
> Data ratio: 1.00
> Metadata ratio: 2.00
> Global reserve: 29.22MiB (used: 0.00B)
> Multiple profiles: no
>
> Data,single: Size:39.48GiB, Used:39.48GiB (100.00%)
Data chunks are already exhausted.
> /dev/vdb 39.48GiB
>
> Metadata,DUP: Size:256.00MiB, Used:31.16MiB (12.17%)
A single metadata chunk, which is not full.
> /dev/vdb 512.00MiB
>
> System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
> /dev/vdb 16.00MiB
>
> Unallocated:
> /dev/vdb 1.00MiB
>
> => btrfs does so, too
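For reference, the figures in the usage report above add up to the full device: data in the single profile consumes its size once on disk, while DUP metadata and system chunks consume twice their size. A quick sketch using only the numbers shown in this thread:

```shell
# Sum the raw on-disk allocation from the btrfs filesystem usage output.
# single data counts once; DUP metadata/system chunks are stored twice.
awk 'BEGIN {
  data_gib = 39.48            # Data,single size
  meta_gib = 256 / 1024 * 2   # Metadata,DUP: 256MiB chunk, stored twice
  sys_gib  = 8 / 1024 * 2     # System,DUP: 8MiB chunk, stored twice
  printf "allocated: %.2f GiB of a 40 GiB device\n", data_gib + meta_gib + sys_gib
}'
```

which accounts for essentially the entire 40GiB, matching the 1.00MiB unallocated shown above.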
>
> # btrfs subvolume list -pagu /data/main/
> ID 257 gen 2347947 parent 5 top level 5 uuid ae3fa7ff-f5a4-cf44-8555-ad579195036c path <FS_TREE>/data
Is your currently mounted subvolume the fs tree, or already the data
subvolume?
In the latter case, there are files you cannot access from your
current mount point.
Thus it's recommended to enable qgroups to get a correct, full view of
the space used by each subvolume.
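A sketch of that qgroup inspection, using the standard btrfs-progs quota commands (run as root against the mount point from this thread; qgroup accounting adds some overhead, so it can be disabled again afterwards):

```shell
# Enable quota accounting, wait for the initial scan, then show
# referenced vs. exclusive bytes per subvolume.
btrfs quota enable /data/main
btrfs quota rescan -w /data/main   # -w blocks until the rescan finishes
btrfs qgroup show /data/main
```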
Thanks,
Qu
>
> => no snapshots involved
>
> # du --apparent-size --total -s --si /data/main/
> 29G /data/main/
> 29G total
>
> => but when actually counting the file sizes, there should be 11G left.
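One plausible explanation for that gap, consistent with an overwrite-heavy workload like Prometheus, is that btrfs keeps whole extents allocated even when files reference only part of them. The btrfs check summary above already hints at this: comparing the data actually used (from btrfs filesystem usage) against the bytes files still reference (from btrfs check):

```shell
# Space held by data extents but no longer referenced by any file,
# computed from the figures quoted earlier in this thread.
awk 'BEGIN {
  data_used_gib  = 39.48                 # Data,single "Used" from btrfs filesystem usage
  referenced_gib = 28014796800 / 1024^3  # "referenced" bytes from btrfs check
  printf "held but not referenced: ~%.1f GiB\n", data_used_gib - referenced_gib
}'
```

which comes to roughly 13.4GiB, on the same order as the ~11GB gap observed with du.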
>
>
> :/data/main/prometheus# dd if=/dev/zero of=foo bs=1M count=1
> dd: error writing 'foo': No space left on device
> 1+0 records in
> 0+0 records out
> 0 bytes copied, 0,0876783 s, 0,0 kB/s
>
>
> And it really is full.
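When ENOSPC appears because every chunk is allocated but some data chunks are only partially used, the usual first aid on this list is a usage-filtered balance, which repacks sparsely used chunks and returns their space to the unallocated pool. A hedged sketch (in this particular report the data chunks show 100% usage, so these filters may find nothing to relocate):

```shell
# Reclaim completely empty data chunks first; this needs no free space.
btrfs balance start -dusage=0 /data/main
# Then retry with a small usage filter to compact nearly empty chunks.
btrfs balance start -dusage=5 /data/main
```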
>
>
> Any ideas how this can happen?
>
>
> Thanks,
> Chris.
>