From: "Holger Hoffstätte" <holger.hoffstaette@googlemail.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: 3.16 Managed to ENOSPC with <80% used
Date: Wed, 24 Sep 2014 22:23:33 +0000 (UTC)	[thread overview]
Message-ID: <pan.2014.09.24.22.23.33@googlemail.com> (raw)
In-Reply-To: <CAPL5yKfTfzkqBT9dQBpJAjuBtq2-=QqdCn_Pzfnnd1URKA_q4A@mail.gmail.com>

On Wed, 24 Sep 2014 16:43:43 -0400, Dan Merillat wrote:

> Any idea how to recover?  I can't cut-paste but it's
> Total devices 1 FS bytes used 176.22GiB
> size 233.59GiB used 233.59GiB

The notorious -EBLOAT. But don't despair just yet.

> Basically it's been data allocation happy, since I haven't deleted
> 53GB at any point.  Unfortunately, none of the chunks are at 0% usage
> so a balance -dusage=0 finds nothing to drop.

Also try -musage=0..10, just for fun.
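
For example, assuming the filesystem is mounted at /mnt/data (just a
placeholder, substitute your own mountpoint):

  # drop completely empty data chunks, then nearly-empty metadata chunks
  btrfs balance start -dusage=0 /mnt/data
  btrfs balance start -musage=10 /mnt/data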

> Is this recoverable, or do I need to copy to another disk and back?

- if you have plenty RAM, make a big tmpfs (couple of GB), otherwise
  find some other storage to attach; any external drive is fine.

- then, on the tmpfs or spare fs:
  - fallocate --length <n>GiB <image>,
    where n is whatever space you can spare and image is an arbitrary filename
  - losetup /dev/loop0 </path/to/image>

- btrfs device add /dev/loop0 <your hosed mountpoint>
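
Put together, the sequence looks roughly like this (the 8GiB size, image
path and mountpoint are placeholders; use whatever you actually have):

  # sketch only - adjust size, image path and mountpoint to your setup
  fallocate --length 8GiB /mnt/spare/overflow.img
  losetup /dev/loop0 /mnt/spare/overflow.img
  btrfs device add /dev/loop0 /mnt/data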

You now have more unused space to fill up. \o/

Delete snapshots, run make clean in (or tar.gz) some of those build trees,
and run balance with -dusage/-musage again.
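
A sketch of that cleanup pass (the snapshot path is made up; the usage
cutoff is only a starting point, raise it if nothing moves):

  # delete whatever snapshots you can part with
  btrfs subvolume delete /mnt/data/snapshots/2014-09-01
  # then rebalance with a moderate cutoff
  btrfs balance start -dusage=50 -musage=50 /mnt/data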

Another neat trick that will free up space is to convert to single
metadata: -mconvert=single -f (to force). A subsequent balance
with -musage=0..10 will likely free up quite some space.
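
Roughly, with the same placeholder mountpoint (the -f is needed because
btrfs treats dropping to single metadata as a reduction in redundancy):

  btrfs balance start -f -mconvert=single /mnt/data
  # then another filtered pass to reclaim the now-empty chunks
  btrfs balance start -musage=10 /mnt/data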

When you're back in calmer waters you can btrfs device remove whatever
you added. That should fill your main drive right back up. :)
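
Cleanup would look something like this (same placeholder paths as above;
older btrfs-progs spell the subcommand "device delete" rather than
"device remove"):

  # migrates extents back onto the main device, then detaches the loop
  btrfs device delete /dev/loop0 /mnt/data
  losetup -d /dev/loop0
  rm /mnt/spare/overflow.img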

> This is a really unfortunate failure mode for BTRFS.  Usually I catch
> it before I get exactly 100% used and can use a balance to get it back
> into shape.

For now all we can do is run balance -d/-m regularly.
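
Something like this, run by hand or from cron every week or two, keeps
allocation in check (the cutoffs are a guess; tune them for your workload):

  btrfs balance start -dusage=20 /mnt/data
  btrfs balance start -musage=20 /mnt/data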

> What causes it to keep allocating datablocks when it's got so much
> free space?  The workload is pretty standard (for devs, at least): git
> and kernel builds, and git and android builds.

That particular workload seems to cause the block allocator to go
on a spending spree; you're not the first to see this.

Good luck!

-h

