From: Eric Wheeler <btrfs@lists.ewheeler.net>
To: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Cc: linux-btrfs@vger.kernel.org, Qu Wenruo <quwenruo.btrfs@gmx.com>
Subject: Re: Global reserve ran out of space at 512MB, fails to rebalance
Date: Thu, 10 Dec 2020 19:50:06 +0000 (UTC)
Message-ID: <alpine.LRH.2.21.2012101935440.15698@pop.dreamhost.com>
In-Reply-To: <20201210031251.GJ31381@hungrycats.org>
On Wed, 9 Dec 2020, Zygo Blaxell wrote:
> On Thu, Dec 10, 2020 at 01:52:19AM +0000, Eric Wheeler wrote:
> > Hello all,
> >
> > We have a 30TB volume with lots of snapshots that is low on space and we
> > are trying to rebalance. Even if we don't rebalance, the space cleaner
> > still fills up the Global reserve:
> >
> > >>> Global reserve: 512.00MiB (used: 512.00MiB) <<<<<<<
>
> It would be nice to have the rest of the btrfs fi usage output. We are
> having to guess how your drives are populated with data and metadata
> and what profiles are in use.
Here is the whole output:
]# df -h /mnt/btrbackup ; echo; btrfs fi df /mnt/btrbackup|column -t; echo; btrfs fi usage /mnt/btrbackup
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/btrbackup-luks   30T   30T  541G  99% /mnt/btrbackup

Data,          single:  total=29.80TiB,  used=29.28TiB
System,        DUP:     total=8.00MiB,   used=3.42MiB
Metadata,      DUP:     total=99.00GiB,  used=87.03GiB
GlobalReserve, single:  total=4.00GiB,   used=1.73MiB

Overall:
    Device size:                  30.00TiB
    Device allocated:             30.00TiB
    Device unallocated:            1.00GiB
    Device missing:                  0.00B
    Used:                         29.45TiB
    Free (estimated):            540.74GiB   (min: 540.24GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:                4.00GiB   (used: 1.73MiB)   <<<< with 4GB hack

Data,single: Size:29.80TiB, Used:29.28TiB (98.23%)
   /dev/mapper/btrbackup-luks   29.80TiB

Metadata,DUP: Size:99.00GiB, Used:87.03GiB (87.91%)
   /dev/mapper/btrbackup-luks  198.00GiB

System,DUP: Size:8.00MiB, Used:3.42MiB (42.77%)
   /dev/mapper/btrbackup-luks   16.00MiB

Unallocated:
   /dev/mapper/btrbackup-luks    1.00GiB
> You probably need to be running some data balances (btrfs balance start
> -dlimit=9 about once a day) to ensure there is always at least 1GB of
> unallocated space on every drive.
Thanks for the daily rebalance tip. Is there a reason you chose
-dlimit=9? I know it means 9 chunks, but why 9? And how big is a
"chunk"?

In this case we have 1GB unallocated.
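If I understand chunk sizing right, the arithmetic would work out to
something like this (my assumption, not confirmed: data chunks are ~1 GiB
on a filesystem this size, metadata chunks ~256 MiB):

```shell
# Assumed sizes, not confirmed: ~1 GiB per data chunk on large filesystems.
# With -dlimit=9, a single balance run relocates at most dlimit * chunk_size
# of data, which bounds how long the daily job can run:
chunk_gib=1
dlimit=9
echo "max data relocated per run: $((chunk_gib * dlimit)) GiB"
```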
> Never balance metadata, especially not from a scheduled job. Metadata
> balances lead directly to this situation.
So when /would/ you balance metadata?
> > This was on a Linux 5.6 kernel. I'm trying a Linux 5.9.13 kernel with a
> > hacked in SZ_4G in place of the SZ_512MB and will report back when I learn
> > more.
>
> I've had similar problems with snapshot deletes hitting ENOSPC with
> small amounts of free metadata space. In this case, the upgrade from
> 5.6 to 5.9 will include a fix for that (it's in 5.8, also 5.4 and earlier
> LTS kernels).
OK, now on 5.9.13.
> Increasing the global reserve may seem to help, but so will just rebooting
> over and over, so a positive result from an experimental kernel does not
> necessarily mean anything.
At least this reduces the number of times I need to reboot ;)
Question:
What do people think of making this a module option or ioctl for those who
need to hack it into place to minimize reboots?
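For context, the current value is at least visible read-only through sysfs
(the path below is my understanding of the allocation directory layout;
treat it as an assumption). A writable knob there, or a module parameter,
would avoid carrying a patched kernel:

```shell
# Sketch: print the global reserve size for each mounted btrfs filesystem
# via sysfs (path assumed: /sys/fs/btrfs/<fsid>/allocation/global_rsv_size).
# A module option or ioctl would make this tunable rather than read-only.
for fs in /sys/fs/btrfs/*/allocation; do
    if [ -e "$fs/global_rsv_size" ]; then
        printf '%s: ' "$fs"
        cat "$fs/global_rsv_size"
    fi
done
echo "scan complete"
```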
-Eric
--
Eric Wheeler