Linux Btrfs filesystem development
From: Eric Wheeler <btrfs@lists.ewheeler.net>
To: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Cc: linux-btrfs@vger.kernel.org, Qu Wenruo <quwenruo.btrfs@gmx.com>
Subject: Re: Global reserve ran out of space at 512MB, fails to rebalance
Date: Thu, 10 Dec 2020 19:02:16 +0000 (UTC)
Message-ID: <alpine.LRH.2.21.2012101901060.15698@pop.dreamhost.com>
In-Reply-To: <20201210031251.GJ31381@hungrycats.org>

On Wed, 9 Dec 2020, Zygo Blaxell wrote:

> On Thu, Dec 10, 2020 at 01:52:19AM +0000, Eric Wheeler wrote:
> > Hello all,
> > 
> > We have a 30TB volume with lots of snapshots that is low on space and we 
> > are trying to rebalance.  Even if we don't rebalance, the space cleaner 
> > still fills up the Global reserve:
> > 
> >     Device size:                  30.00TiB
> >     Device allocated:             30.00TiB
> >     Device unallocated:            1.00GiB
> >     Device missing:                  0.00B
> >     Used:                         29.27TiB
> >     Free (estimated):            705.21GiB	(min: 704.71GiB)
> >     Data ratio:                       1.00
> >     Metadata ratio:                   2.00
> > >>> Global reserve:              512.00MiB	(used: 512.00MiB) <<<<<<<
> 
> It would be nice to have the rest of the btrfs fi usage output.  We are
> having to guess how your drives are populated with data and metadata
> and what profiles are in use.
> 
> You probably need to be running some data balances (btrfs balance start
> -dlimit=9 about once a day) to ensure there is always at least 1GB of
> unallocated space on every drive.
> 
> Never balance metadata, especially not from a scheduled job.  Metadata
> balances lead directly to this situation.
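Zygo's once-a-day data balance could be scheduled along these lines (the cron path, schedule, and mount point are illustrative, not from the thread):

```shell
# /etc/cron.d/btrfs-balance -- illustrative path and mount point.
# A small data-only balance: -dlimit=9 relocates at most 9 data block
# groups, returning fully-reclaimed chunks to unallocated space.
# Metadata is deliberately left alone, per the advice above.
0 3 * * * root /usr/sbin/btrfs balance start -dlimit=9 /mnt/volume
```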
> 
> > This was on a Linux 5.6 kernel.  I'm trying a Linux 5.9.13 kernel with a 
> > hacked-in SZ_4G in place of the SZ_512M cap and will report back when I 
> > learn more.
> > 
> > In the meantime, do you have any suggestions to work through the issue?
> 
> I've had similar problems with snapshot deletes hitting ENOSPC with
> small amounts of free metadata space.  In this case, the upgrade from
> 5.6 to 5.9 will include a fix for that (it's in 5.8, also 5.4 and earlier
> LTS kernels).

Good to know, glad there's a patch for that!

Zygo and Qu, thank you both for your feedback!

-Eric

> 
> Increasing the global reserve may seem to help, but so will just rebooting
> over and over, so a positive result from an experimental kernel does not
> necessarily mean anything.  Pending snapshot deletes will be making small
> amounts of progress just before hitting ENOSPC, so it will eventually
> succeed if you repeat the mount enough times even with an old stock
> kernel.
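The repeat-until-it-succeeds behaviour Zygo describes can be sketched as a plain retry loop; in practice the wrapped command would be something like `mount /dev/sdX /mnt/volume` (device and mount point are placeholders), shown here with a stand-in command so the loop itself is clear:

```shell
#!/bin/sh
# Illustrative retry helper: run a command until it succeeds or we give
# up.  Each failed attempt still lets pending snapshot deletes make a
# little progress before hitting ENOSPC, which is why repetition
# eventually succeeds even on an old stock kernel.
retry() {
    max=$1; shift
    n=1
    while ! "$@"; do
        [ "$n" -ge "$max" ] && return 1
        n=$((n + 1))
    done
    return 0
}

# Stand-in for the mount: fails twice, then succeeds on the third try.
attempts=0
flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}
retry 5 flaky && echo "succeeded after $attempts attempts"
# prints: succeeded after 3 attempts
```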
> 
> > Thank you for your help!
> > 
> > 
> > --
> > Eric Wheeler
> 


Thread overview: 8+ messages
2020-12-10  1:52 Global reserve ran out of space at 512MB, fails to rebalance Eric Wheeler
2020-12-10  2:38 ` Qu Wenruo
2020-12-10  3:12 ` Zygo Blaxell
2020-12-10 19:02   ` Eric Wheeler [this message]
2020-12-10 19:50   ` Eric Wheeler
2020-12-11  3:49     ` Zygo Blaxell
2020-12-11 19:08       ` Eric Wheeler
2020-12-11 21:05         ` Zygo Blaxell
