From: Brandon Heisner <brandonh@wolfram.com>
To: Forza <forza@tnonline.net>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: btrfs metadata has reserved 1T of extra space and balances don't reclaim it
Date: Wed, 29 Sep 2021 09:34:26 -0500 (CDT) [thread overview]
Message-ID: <2117366261.1733598.1632926066120.JavaMail.zimbra@wolfram.com> (raw)
In-Reply-To: <ce9f317.4dc05cb0.17c3070258f@tnonline.net>
No, I do not use that option. Also, because btrfs does not honor per-subvolume mount options, I have compression and nodatacow set via filesystem attributes on the directories that are btrfs subvolumes.
UUID=ece150db-5817-4704-9e84-80f7d8a3b1da /opt/zimbra btrfs subvol=zimbra,defaults,discard,compress=lzo 0 0
UUID=ece150db-5817-4704-9e84-80f7d8a3b1da /var/log btrfs subvol=root-var-log,defaults,discard,compress=lzo 0 0
UUID=ece150db-5817-4704-9e84-80f7d8a3b1da /opt/zimbra/db btrfs subvol=db,defaults,discard,nodatacow 0 0
UUID=ece150db-5817-4704-9e84-80f7d8a3b1da /opt/zimbra/index btrfs subvol=index,defaults,discard,compress=lzo 0 0
UUID=ece150db-5817-4704-9e84-80f7d8a3b1da /opt/zimbra/store btrfs subvol=store,defaults,discard,compress=lzo 0 0
UUID=ece150db-5817-4704-9e84-80f7d8a3b1da /opt/zimbra/log btrfs subvol=log,defaults,discard,compress=lzo 0 0
UUID=ece150db-5817-4704-9e84-80f7d8a3b1da /opt/zimbra/snapshots btrfs subvol=snapshots,defaults,discard,compress=lzo 0 0
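For reference, the per-directory attribute approach mentioned above can be sketched like this (paths taken from the fstab entries; run as root on the live filesystem):

```shell
# nodatacow: chattr +C must be applied while the directory is still
# empty; only files created afterwards inherit the no-CoW flag.
chattr +C /opt/zimbra/db

# compression: set the btrfs 'compression' property on a subvolume
# directory; new writes under it are compressed with lzo.
btrfs property set /opt/zimbra/store compression lzo
```

These commands only affect data written after they are run; existing files keep their old layout unless rewritten (e.g. with btrfs filesystem defragment).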
----- On Sep 29, 2021, at 2:23 AM, Forza forza@tnonline.net wrote:
> ---- From: Brandon Heisner <brandonh@wolfram.com> -- Sent: 2021-09-29 - 04:23
> ----
>
>> I have a server running CentOS 7 on 4.9.5-1.el7.elrepo.x86_64 #1 SMP Fri Jan 20
>> 11:34:13 EST 2017 x86_64 x86_64 x86_64 GNU/Linux. It is version locked to that
>> kernel. The metadata has reserved a full 1T of disk space, while only using
>> ~38G. I've tried to balance the metadata to reclaim that so it can be used for
>> data, but it doesn't work and gives no errors. It just says it balanced the
>> chunks but the size doesn't change. The metadata total is still growing as
>> well, as it used to be 1.04 and now it is 1.08 with only about 10G more of
>> metadata used. I've tried doing balances up to 70 or 80 musage I think, and
>> the total metadata does not decrease. I've done so many attempts at balancing,
>> I've probably tried to move 300 chunks or more. None have resulted in any
>> change to the metadata total like they do on other servers running btrfs. I
>> first started with very low musage, like 10 and then increased it by 10 to try
>> to see if that would balance any chunks out, but with no success.
>>
>> # /sbin/btrfs balance start -musage=60 -mlimit=30 /opt/zimbra
>> Done, had to relocate 30 out of 2127 chunks
>>
>> I can do that command over and over again, or increase the mlimit, and it
>> doesn't change the metadata total ever.
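(The incremental procedure described above — starting at a low musage and raising it in steps of 10 — amounts to a loop like the following sketch; the mount point is the one from this thread:)

```shell
# Rebalance metadata chunks at increasing usage thresholds.
# Each pass relocates only metadata chunks whose usage is below $u %.
for u in 10 20 30 40 50 60 70; do
    btrfs balance start -musage=$u /opt/zimbra
done
```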
>>
>>
>> # btrfs fi show /opt/zimbra/
>> Label: 'Data' uuid: ece150db-5817-4704-9e84-80f7d8a3b1da
>> Total devices 4 FS bytes used 1.48TiB
>> devid 1 size 1.46TiB used 1.38TiB path /dev/sde
>> devid 2 size 1.46TiB used 1.38TiB path /dev/sdf
>> devid 3 size 1.46TiB used 1.38TiB path /dev/sdg
>> devid 4 size 1.46TiB used 1.38TiB path /dev/sdh
>>
>> # btrfs fi df /opt/zimbra/
>> Data, RAID10: total=1.69TiB, used=1.45TiB
>> System, RAID10: total=64.00MiB, used=640.00KiB
>> Metadata, RAID10: total=1.08TiB, used=37.69GiB
>> GlobalReserve, single: total=512.00MiB, used=0.00B
>>
>>
>> # btrfs fi us /opt/zimbra/ -T
>> Overall:
>> Device size: 5.82TiB
>> Device allocated: 5.54TiB
>> Device unallocated: 291.54GiB
>> Device missing: 0.00B
>> Used: 2.96TiB
>> Free (estimated): 396.36GiB (min: 396.36GiB)
>> Data ratio: 2.00
>> Metadata ratio: 2.00
>> Global reserve: 512.00MiB (used: 0.00B)
>>
>> Data Metadata System
>> Id Path RAID10 RAID10 RAID10 Unallocated
>> -- -------- --------- --------- --------- -----------
>> 1 /dev/sde 432.75GiB 276.00GiB 16.00MiB 781.65GiB
>> 2 /dev/sdf 432.75GiB 276.00GiB 16.00MiB 781.65GiB
>> 3 /dev/sdg 432.75GiB 276.00GiB 16.00MiB 781.65GiB
>> 4 /dev/sdh 432.75GiB 276.00GiB 16.00MiB 781.65GiB
>> -- -------- --------- --------- --------- -----------
>> Total 1.69TiB 1.08TiB 64.00MiB 3.05TiB
>> Used 1.45TiB 37.69GiB 640.00KiB
>>
>>
>>
>>
>>
>>
>> --
>> Brandon Heisner
>> System Administrator
>> Wolfram Research
>
>
> What are your mount options? Do you by any chance use the metadata_ratio
> mount option?
>
> https://btrfs.wiki.kernel.org/index.php/Manpage/btrfs(5)#MOUNT_OPTIONS
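The options the filesystem is actually mounted with (including any metadata_ratio setting) can be checked directly:

```shell
# Show the effective mount options for the btrfs mount point.
findmnt -t btrfs -o TARGET,OPTIONS /opt/zimbra

# Or read them from the kernel's mount table:
grep /opt/zimbra /proc/mounts
```

Note that with older kernels the effective options can differ from what fstab requests, since per-subvolume mount options are not applied independently.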
Thread overview: 11+ messages
2021-09-29 2:23 btrfs metadata has reserved 1T of extra space and balances don't reclaim it Brandon Heisner
2021-09-29 7:23 ` Forza
2021-09-29 14:34 ` Brandon Heisner [this message]
2021-10-03 11:26 ` Forza
2021-10-03 18:21 ` Zygo Blaxell
2021-09-29 8:22 ` Qu Wenruo
2021-09-29 15:18 ` Andrea Gelmini
2021-09-29 16:39 ` Forza
2021-09-29 18:55 ` Andrea Gelmini
2021-09-29 17:31 ` Zygo Blaxell
2021-10-01 7:49 ` Brandon Heisner