From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: Chris Murphy <lists@colorremedies.com>
Cc: David Sterba <dsterba@suse.cz>,
Martin Steigerwald <martin@lichtvoll.de>,
Martin Raiber <martin@urbackup.org>,
Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: With Linux 5.5: Filesystem full while still 90 GiB free
Date: Thu, 30 Jan 2020 22:00:53 -0500 [thread overview]
Message-ID: <20200131030053.GR13306@hungrycats.org> (raw)
In-Reply-To: <CAJCQCtSwJHR2+jEXY=eK41xR7Z0=+Jf5xhsD03Qvoh92bAHO6g@mail.gmail.com>
On Thu, Jan 30, 2020 at 12:31:40PM -0700, Chris Murphy wrote:
> On Thu, Jan 30, 2020 at 10:20 AM David Sterba <dsterba@suse.cz> wrote:
> >
> > On Wed, Jan 29, 2020 at 03:55:06PM -0700, Chris Murphy wrote:
> > > On Wed, Jan 29, 2020 at 2:20 PM Martin Steigerwald <martin@lichtvoll.de> wrote:
> > > >
> > > > So if its just a cosmetic issue then I can wait for the patch to land in
> > > > linux-stable. Or does it still need testing?
> > >
> > > I'm not seeing it in linux-next. A reasonable short term work around
> > > is mount option 'metadata_ratio=1' and that's what needs more testing,
> > > because it seems decently likely mortal users will need an easy work
> > > around until a fix gets backported to stable. And that's gonna be a
> > > while, me thinks.
> >
> > We're looking into some fix that could be backported, as it affects a
> > long-term kernel (5.4).
> >
> > The fix
> > https://lore.kernel.org/linux-btrfs/20200115034128.32889-1-wqu@suse.com/
> > IMHO works by accident and is not good even as a workaround, only papers
> > over the problem in some cases. The size of metadata over-reservation
> > (caused by a change in the logic that estimates the 'over-' part) adds
> > up to the global block reserve (that's permanent and as last resort
> > reserve for deletion).
> >
> > In other words "we're making this larger by number A, so let's subtract
> > some number B". The fix is to use A.
> >
> > > Is that mount option sufficient? Or does it take a filtered balance?
> > > What's the most minimal balance needed? I'm hoping -dlimit=1
> > >
> > > I can't figure out a way to trigger this though, otherwise I'd be
> > > doing more testing.
> >
> > I haven't checked but I think the suggested workarounds affect statfs as
> > a side effect. Also as the reservations are temporary, the numbers
> > change again after a sync.
>
> Yeah I'm being careful to qualify to mortal users that any workarounds
> are temporary and uncertain. I'm not even certain what the pattern is,
> people with new file systems have hit it. A full balance seems to fix
> it, and then soon after the problem happens again. I don't do any
> balancing these days, for over a year now, so I wonder if that's why
> I'm not seeing it.

I had to intentionally balance metadata to trigger the bug on pre-existing
test filesystems. With new filesystems it's easy, I hit it every time
the last metadata block group is half full (assuming default BG size of
1GB and max global reserved size of 512MB). It goes away when a new
metadata BG is allocated, then comes back again later. Sometimes it
appears and disappears rapidly while doing a large file tree copy.

An older filesystem will have some GB of allocated but partially empty
metadata BGs, and won't hit this condition unless you run metadata balance
(which shrinks metadata), or do something that causes explosive metadata
growth. If you normally keep a healthy amount of allocated but unused
metadata space, you probably will never hit the bug.
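
To put rough numbers on that (the figures below are made up for
illustration, based on the 1GB block group / 512MB global reserve sizes
mentioned above; on a real filesystem you'd take the total= and used=
metadata values from `btrfs filesystem df`):

```shell
# Hypothetical metadata figures, in bytes. On a real filesystem these
# would come from the Metadata total=/used= line of `btrfs filesystem df`.
meta_total=$(( 1024 * 1024 * 1024 ))      # 1 GiB of allocated metadata BGs
meta_used=$(( 600 * 1024 * 1024 ))        # 600 MiB of it actually in use
global_reserve=$(( 512 * 1024 * 1024 ))   # default maximum global reserve

# Headroom = free space inside already-allocated metadata block groups.
headroom=$(( meta_total - meta_used ))
if [ "$headroom" -lt "$global_reserve" ]; then
        echo "at risk: only $(( headroom / 1024 / 1024 )) MiB of metadata headroom"
fi
```

With these made-up numbers the headroom (424 MiB) is below the 512 MiB
reserve, which is the condition where the bogus "filesystem full" numbers
show up.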

> But yeah a small number of people are hitting it, but it also stops
> any program that does a free space check (presumably using statfs).
>
> A more reliable/universal work around in the meantime is still useful;
> in particular if it doesn't require changing mount options, or only
> requires it temporarily (e.g. not added to /etc/fstab, where it can
> be forgotten for the life of that system).

You can create a GB of allocated but unused metadata space with something
like:

	btrfs sub create sub_tmp
	mkdir sub_tmp/single
	head -c 2047 /dev/urandom > sub_tmp/single/inline_file
	for x in $(seq 1 18); do
		cp -a sub_tmp/single sub_tmp/double
		mv sub_tmp/double sub_tmp/single/$x
	done
	sync
	btrfs sub del sub_tmp

This requires the max_inline mount option to be left at its default
(2048), so each 2047-byte file is stored as an inline extent in metadata.
The random data is incompressible, so the trick still works when
compression is enabled.
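
For a sense of scale (back-of-the-envelope arithmetic, not measured
numbers, and ignoring the directory items the loop also creates): each
iteration doubles the file count, so 18 iterations turn the single
2047-byte file into 2^18 copies, all living in metadata as inline extents:

```shell
# Rough sizing of the tree the loop above builds.
files=$(( 1 << 18 ))        # each loop iteration doubles the tree
bytes=$(( files * 2047 ))   # inline extents are stored entirely in metadata
echo "$files files, roughly $(( bytes / 1024 / 1024 )) MiB of inline data"
```

That works out to about half a GiB of inline data, which together with
the tree overhead is why the script ends up allocating about a GB of
metadata block groups.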

Do not balance metadata until the bug is fixed. Balancing metadata will
release the allocated but unused metadata space, possibly retriggering
the bug.

(Hmmm...the above script is also a surprisingly effective commit latency
test case...)
>
> --
> Chris Murphy