* Large BTRFS array suddenly says 53TiB Free, usage inconsistent
@ 2021-11-14 5:48 Joshua
From: Joshua @ 2021-11-14 5:48 UTC (permalink / raw)
To: Btrfs BTRFS
I have a large multi-device BTRFS array. (13 devices / 96TiB total usable space)
As of yesterday, it had a little over 5TiB reported as estimated free by 'btrfs fi usage'.
At exactly 7am this morning, my reporting tool showed that the "Free (estimated)" line of 'btrfs
fi usage' had jumped to 53TiB.
Now I do use snapshots, managed by btrbk. I currently have 80 snapshots, and it is possible old
snapshots were deleted at midnight, freeing up data. Perhaps the deletions didn't finish committing until 7am?
However, the current state of the array is concerning to me:
#> btrfs fi usage /mnt
Overall:
Device size: 96.42TiB
Device allocated: 43.16TiB
Device unallocated: 53.26TiB
Device missing: 0.00B
Used: 43.15TiB
Free (estimated): 53.27TiB (min: 53.27TiB)
Free (statfs, df): 4.71TiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,RAID1: Size:43.10TiB, Used:43.09TiB (99.98%)
{snip}
Metadata,RAID1C3: Size:66.00GiB, Used:62.51GiB (94.71%)
{snip}
System,RAID1C3: Size:32.00MiB, Used:7.12MiB (22.27%)
{snip}
Unallocated:
{snip}
As you can see, it shows all my data is RAID1 as it should be, and all my metadata is RAID1C3
as it should be.
BUT it's showing a data ratio of 1.00 and a metadata ratio of 1.00.
Also, the allocated space shows 43TiB, which I know to be around the actual amount of data
used by files. Since RAID1 is in use, allocated space should be around 86TiB....
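For reference, the expected figure is easy to check by hand; a quick sketch using the numbers from the output above:

```shell
# With RAID1 data, raw device allocation should be roughly twice the
# logical data size reported on the Data,RAID1 line (43.10TiB here).
logical_tib=43.10
expected=$(awk -v l="$logical_tib" 'BEGIN { printf "%.2f", l * 2 }')
echo "expected raw allocation: ${expected} TiB"   # ~86.20 TiB, not the 43.16 TiB shown
```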
Any ideas as to what happened, why it's showing this erroneous data, or if I should be worried
about my data in any way?
As of right now, everything appears intact....
--Joshua Villwock
* Re: Large BTRFS array suddenly says 53TiB Free, usage inconsistent
From: Max Spliethöver @ 2021-11-14 18:45 UTC (permalink / raw)
To: Joshua, Btrfs BTRFS
Hello everyone.
I observed the exact same behavior on my 2x4TB RAID1. After an update of the server that runs a btrfs RAID1 as data storage (the root fs is on different, non-btrfs disks), I ran `sudo btrfs filesystem usage /tank` and realized that the "Data ratio" and "Metadata ratio" had dropped from 2.00 (before the upgrade) to 1.00, and that the unallocated space on both drives had jumped from ~550GB to 2.10TB. I spot-checked some files and everything still seems to be there.
I would appreciate any help in explaining what happened and how to fix this issue. I have provided some information below; if further output is required, please let me know.
```
$ sudo btrfs filesystem usage /tank
Overall:
Device size: 7.28TiB
Device allocated: 3.07TiB
Device unallocated: 4.21TiB
Device missing: 0.00B
Used: 3.07TiB
Free (estimated): 4.21TiB (min: 4.21TiB)
Free (statfs, df): 582.54GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,RAID1: Size:3.07TiB, Used:3.06TiB (99.95%)
/dev/sdd1 1.53TiB
/dev/sde1 1.53TiB
Metadata,RAID1: Size:5.00GiB, Used:4.13GiB (82.59%)
/dev/sdd1 2.50GiB
/dev/sde1 2.50GiB
System,RAID1: Size:32.00MiB, Used:464.00KiB (1.42%)
/dev/sdd1 16.00MiB
/dev/sde1 16.00MiB
Unallocated:
/dev/sdd1 2.10TiB
/dev/sde1 2.10TiB
```
Also, `dmesg` and `btrfs check` do not show any errors.
```
$ sudo dmesg | grep BTRFS
[ 4.161867] BTRFS: device label tank devid 2 transid 204379 /dev/sde1 scanned by systemd-udevd (252)
[ 4.163715] BTRFS: device label tank devid 1 transid 204379 /dev/sdd1 scanned by systemd-udevd (234)
[ 300.416174] BTRFS info (device sdd1): flagging fs with big metadata feature
[ 300.416179] BTRFS info (device sdd1): disk space caching is enabled
[ 300.416181] BTRFS info (device sdd1): has skinny extents
$ sudo btrfs check -p /dev/sdd1
Opening filesystem to check...
Checking filesystem on /dev/sdd1
UUID: 37ce3698-b9d4-4475-8569-fc440c54ad82
[1/7] checking root items (0:00:11 elapsed, 698424 items checked)
[2/7] checking extents (0:00:42 elapsed, 270676 items checked)
[3/7] checking free space cache (0:00:02 elapsed, 3147 items checked)
[4/7] checking fs roots (0:00:11 elapsed, 22115 items checked)
[5/7] checking csums (without verifying data) (0:00:02 elapsed, 1439154 items checked)
[6/7] checking root refs (0:00:00 elapsed, 13 items checked)
[7/7] checking quota groups skipped (not enabled on this FS)
found 3374319136768 bytes used, no error found
total csum bytes: 3290097884
total tree bytes: 4434460672
total fs tree bytes: 363446272
total extent tree bytes: 62717952
btree space waste bytes: 721586744
file data blocks allocated: 4579386322944
referenced 4576433889280
$ sudo btrfs check -p /dev/sde1
Opening filesystem to check...
Checking filesystem on /dev/sde1
UUID: 37ce3698-b9d4-4475-8569-fc440c54ad82
[1/7] checking root items (0:00:11 elapsed, 698424 items checked)
[2/7] checking extents (0:00:43 elapsed, 270676 items checked)
[3/7] checking free space cache (0:00:02 elapsed, 3147 items checked)
[4/7] checking fs roots (0:00:11 elapsed, 22115 items checked)
[5/7] checking csums (without verifying data) (0:00:02 elapsed, 1439154 items checked)
[6/7] checking root refs (0:00:00 elapsed, 13 items checked)
[7/7] checking quota groups skipped (not enabled on this FS)
found 3374319136768 bytes used, no error found
total csum bytes: 3290097884
total tree bytes: 4434460672
total fs tree bytes: 363446272
total extent tree bytes: 62717952
btree space waste bytes: 721586744
file data blocks allocated: 4579386322944
referenced 4576433889280
```
Below, you can find some more useful outputs.
```
$ sudo btrfs fi show
Label: 'tank' uuid: 37ce3698-b9d4-4475-8569-fc440c54ad82
Total devices 2 FS bytes used 3.07TiB
devid 1 size 3.64TiB used 3.07TiB path /dev/sdd1
devid 2 size 3.64TiB used 3.07TiB path /dev/sde1
$ btrfs --version
btrfs-progs v5.15
$ uname -r
5.15.2-arch1-1
```
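Note that the two outputs above don't even agree with each other: `btrfs fi show` reports 3.07TiB used per device, while the per-device lines in `fi usage` only add up to about half that. A quick sketch of the arithmetic, using the figures above:

```shell
# Sum the per-device allocations 'fi usage' reports for /dev/sdd1:
# 1.53TiB data + 2.50GiB metadata + 16MiB system, all converted to TiB.
awk 'BEGIN {
  per_dev = 1.53 + 2.50/1024 + 16.0/(1024*1024)
  printf "fi usage per-device total: %.2f TiB (fi show says 3.07 TiB)\n", per_dev
}'
```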
-Max
On 11/14/21 06:48, Joshua wrote:
> I have a large multi-device BTRFS array. (13 devices / 96TiB total usable space)
>
> As of yesterday, it had a little over 5 TiB reported as estimated free by 'btrfs fi usage'
>
> At exactly 7am this morning, my reporting tool reports that the "Free (estimated)" line of 'btrfs
> fi usage' jumped to 53TiB.
>
> Now I do use snapshots, managed by btrbk. I currently have 80 snapshots, and it is possible old
> snapshots were deleted at midnight, freeing up data. Perhaps the deletions didn't finish committing until 7am?
>
> However, the current state of the array is concerning to me:
>
> #> btrfs fi usage /mnt
> Overall:
> Device size: 96.42TiB
> Device allocated: 43.16TiB
> Device unallocated: 53.26TiB
> Device missing: 0.00B
> Used: 43.15TiB
> Free (estimated): 53.27TiB (min: 53.27TiB)
> Free (statfs, df): 4.71TiB
> Data ratio: 1.00
> Metadata ratio: 1.00
> Global reserve: 512.00MiB (used: 0.00B)
> Multiple profiles: no
>
> Data,RAID1: Size:43.10TiB, Used:43.09TiB (99.98%)
> {snip}
> Metadata,RAID1C3: Size:66.00GiB, Used:62.51GiB (94.71%)
> {snip}
> System,RAID1C3: Size:32.00MiB, Used:7.12MiB (22.27%)
> {snip}
> Unallocated:
> {snip}
>
> As you can see, it's showing all my data is Raid1 as it should be, and all my metadata is raid1c3
> as it should be.
> BUT it's showing data ratio: 1 and metadata ratio: 1
> Also, the allocated space is showing 43 TiB, which I know to be around the actual amount of used
> data by files. Since Raid1 is in use, Allocated data should be around 86....
>
> Any ideas as to what happened, why it's showing this erroneous data, or if I should be worried
> about my data in any way?
> As of right now, everything appears intact....
>
> --Joshua Villwock
>
* Re: Large BTRFS array suddenly says 53TiB Free, usage inconsistent
From: Holger Hoffstätte @ 2021-11-14 19:06 UTC (permalink / raw)
To: Max Spliethöver, Joshua, Btrfs BTRFS
On 2021-11-14 19:45, Max Spliethöver wrote:
> Hello everyone. I observed the exact same behavior on my 2x4TB RAID1.
> After an update of my server that runs a btrfs RAID1 as data storage
> (root fs runs on different, non-btrfs disks) and running `sudo btrfs
> filesystem usage /tank`, I realized that the "Data ratio" and
> "Metadata ratio" had dropped from 2.00 (before upgrade) to 1.00 and
> that the Unallocated space on both drives jumped from ~550GB to
> 2.10TB. I sporadically checked the files and everything seems to be
> still there.
>
> I would appreciate any help with explaining what happened and how to
> possibly fix this issue. Below I provided some information. If
> further outputs are required, please let me know.
>
> ```
> $ btrfs --version
> btrfs-progs v5.15
---------------^^
https://github.com/kdave/btrfs-progs/issues/422
Try to revert progs to 5.14.x.
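Until then, a small guard like this can flag the affected progs build before trusting the ratio output (a sketch; it only matches the exact version string, assuming the `btrfs --version` format shown above):

```shell
# Normally: ver=$(btrfs --version); hardcoded here for illustration.
ver="btrfs-progs v5.15"
case "$ver" in
  *v5.15) echo "affected release: ratios/unallocated may be misreported" ;;
  *)      echo "not the release discussed in progs issue 422" ;;
esac
```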
cheers
Holger
* Re: Large BTRFS array suddenly says 53TiB Free, usage inconsistent
From: Max Spliethöver @ 2021-11-14 19:19 UTC (permalink / raw)
To: Holger Hoffstätte, Joshua, Btrfs BTRFS
Hey Holger.
Thank you for the quick reply! The older btrfs-progs version does indeed report the numbers correctly. And also thanks for the pointer to the GitHub issue!
-Max
On 11/14/21 20:06, Holger Hoffstätte wrote:
> On 2021-11-14 19:45, Max Spliethöver wrote:
>> Hello everyone. I observed the exact same behavior on my 2x4TB RAID1.
>> After an update of my server that runs a btrfs RAID1 as data storage
>> (root fs runs on different, non-btrfs disks) and running `sudo btrfs
>> filesystem usage /tank`, I realized that the "Data ratio" and
>> "Metadata ratio" had dropped from 2.00 (before upgrade) to 1.00 and
>> that the Unallocated space on both drives jumped from ~550GB to
>> 2.10TB. I sporadically checked the files and everything seems to be
>> still there.
>>
>> I would appreciate any help with explaining what happened and how to
>> possibly fix this issue. Below I provided some information. If
>> further outputs are required, please let me know.
>>
>> ```
>> $ btrfs --version
>> btrfs-progs v5.15
> ---------------^^
>
> https://github.com/kdave/btrfs-progs/issues/422
>
> Try to revert progs to 5.14.x.
>
> cheers
> Holger
>
* Re: Large BTRFS array suddenly says 53TiB Free, usage inconsistent
From: Joshua Villwock @ 2021-11-14 19:34 UTC (permalink / raw)
To: Holger Hoffstätte; +Cc: Max Spliethöver, Btrfs BTRFS
> On Nov 14, 2021, at 11:16 AM, Holger Hoffstätte <holger@applied-asynchrony.com> wrote:
>
> On 2021-11-14 19:45, Max Spliethöver wrote:
>> Hello everyone. I observed the exact same behavior on my 2x4TB RAID1.
>> After an update of my server that runs a btrfs RAID1 as data storage
>> (root fs runs on different, non-btrfs disks) and running `sudo btrfs
>> filesystem usage /tank`, I realized that the "Data ratio" and
>> "Metadata ratio" had dropped from 2.00 (before upgrade) to 1.00 and
>> that the Unallocated space on both drives jumped from ~550GB to
>> 2.10TB. I sporadically checked the files and everything seems to be
>> still there.
>> I would appreciate any help with explaining what happened and how to
>> possibly fix this issue. Below I provided some information. If
>> further outputs are required, please let me know.
>> ```
>> $ btrfs --version
>> btrfs-progs v5.15
> ---------------^^
>
> https://github.com/kdave/btrfs-progs/issues/422
>
> Try to revert progs to 5.14.x.
>
> cheers
> Holger
I can report that I am on btrfs-progs v5.15 as well, and I can confirm that btrfs-progs was updated precisely when the issue began.
Thanks, I will test reverting later, and maybe provide more details on the GitHub issue if that makes sense.
—Joshua Villwock