public inbox for linux-btrfs@vger.kernel.org
From: Marcus Sundman <sundman@iki.fi>
To: linux-btrfs@vger.kernel.org
Cc: Hugo Mills <hugo@carfax.org.uk>, Jim Salter <jim@jrs-s.net>
Subject: Re: No space left on device (again)
Date: Tue, 25 Feb 2014 22:27:58 +0200	[thread overview]
Message-ID: <530CFCCE.5060707@iki.fi> (raw)
In-Reply-To: <20140225201921.GB13899@carfax.org.uk>

On 25.02.2014 22:19, Hugo Mills wrote:
> On Tue, Feb 25, 2014 at 01:05:51PM -0500, Jim Salter wrote:
>> 370GB of 410GB used isn't really "fine", it's over 90% usage.
>>
>> That said, I'd be interested to know why btrfs fi show /dev/sda3
>> shows 412.54G used, but btrfs fi df /home shows 379G used...
>     This is an FAQ...
>
>     btrfs fi show tells you how much is allocated out of the available
> pool on each disk. btrfs fi df then shows how much of that allocated
> space (in each category) is used.

What is the difference between the "used 371.11GB" and the "used 
412.54GB" displayed by "btrfs fi show"?

>     The problem here is also in the FAQ: the metadata is close to full
> -- typically something like 500-750 MiB of headroom is needed in
> metadata. The FS can't allocate more metadata because it's allocated
> everything already (total=used in btrfs fi show), so the solution is
> to do a filtered balance:
>
> btrfs balance start -dusage=5 /mountpoint

Of course that was the first thing I tried, and it didn't help *at* *all*:

> # btrfs filesystem balance start -dusage=5 /home
> Done, had to relocate 0 out of 415 chunks
> #

... and it really didn't free anything.
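As I understand the usage filter, relocating 0 chunks means every data chunk is already more than 5% full, so nothing qualifies at that cutoff. One thing I could still try (the thresholds here are guesses, and I'm assuming -musage works on metadata chunks the way -dusage works on data chunks) is to raise the cutoff step by step:

```shell
# Escalate the usage filter gradually rather than rewriting the whole
# filesystem in one go. The commands are echoed first so they can be
# reviewed; drop the "echo" to actually run them.
for u in 10 25 50; do
    echo btrfs balance start -dusage=$u -musage=$u /home
done
```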

>> On 02/25/2014 11:49 AM, Marcus Sundman wrote:
>>> Hi
>>>
>>> I get "No space left on device" and it is unclear why:
>>>
>>>> # df -h|grep sda3
>>>> /dev/sda3       413G  368G   45G  90% /home
>>>> # btrfs filesystem show /dev/sda3
>>>> Label: 'home'  uuid: 46279061-51f4-40c2-afd0-61d6faab7f60
>>>>     Total devices 1 FS bytes used 371.11GB
>>>>     devid    1 size 412.54GB used 412.54GB path /dev/sda3
>>>>
>>>> Btrfs v0.20-rc1
>>>> # btrfs filesystem df /home
>>>> Data: total=410.52GB, used=369.61GB
>>>> System: total=4.00MB, used=64.00KB
>>>> Metadata: total=2.01GB, used=1.50GB
>>>> #
>>> So, 'data' and 'metadata' seem to be fine(?), but 'system' is a
>>> bit low. Is that it? If so, can I do something about it? Or should
>>> I look somewhere else?
>>>
>>> I really wish I could get a warning before running out of disk
>>> space, instead of everything breaking suddenly when there seems to
>>> be lots and lots of space left.
>>>
>>> - Marcus
>>>
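In the meantime I suppose one could script a crude early warning oneself. A sketch, keyed to the v0.20-era "devid" line format quoted above (field positions may well differ on newer btrfs versions, so the awk would need adjusting):

```shell
# Parse the "devid ... size X used Y" line and warn when allocation
# (not data usage!) approaches the device size -- the point where
# btrfs starts returning ENOSPC even though df still shows free space.
# Sample line copied from the output above; in real use, pipe in
# "btrfs filesystem show /dev/sda3" instead.
line='    devid    1 size 412.54GB used 412.54GB path /dev/sda3'
echo "$line" | awk '/devid/ {
    size = $4; used = $6;
    sub(/GB/, "", size); sub(/GB/, "", used);
    pct = 100 * used / size;
    if (pct > 95)
        printf "WARNING: %.0f%% of the pool is allocated\n", pct;
}'
```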



Thread overview: 12+ messages
2014-02-25 16:49 No space left on device (again) Marcus Sundman
2014-02-25 18:05 ` Jim Salter
2014-02-25 19:59   ` Marcus Sundman
2014-02-25 20:19   ` Hugo Mills
2014-02-25 20:27     ` Marcus Sundman [this message]
2014-02-25 20:30       ` Josef Bacik
2014-02-26 10:38         ` Sander
2014-02-27  0:16         ` Marcus Sundman
2014-02-27  1:17           ` Josef Bacik
2014-02-27  7:48           ` Hugo Mills
2014-02-25 20:30       ` cwillu
2014-02-25 20:40       ` Hugo Mills
