From: Marc Joliet <marcec@gmx.de>
To: linux-btrfs@vger.kernel.org
Cc: Chris Murphy <lists@colorremedies.com>
Subject: Re: ENOSPC errors during balance
Date: Sun, 20 Jul 2014 11:48:53 +0200
Message-ID: <20140720114853.432ac71a@marcec>
In-Reply-To: <CE7F2F5A-8807-44B1-BD4E-F6BF02D88F9C@colorremedies.com>


[-- Attachment #1.1: Type: text/plain, Size: 1421 bytes --]

On Sat, 19 Jul 2014 19:11:00 -0600,
Chris Murphy <lists@colorremedies.com> wrote:

> I'm seeing this also in the 2nd dmesg:
> 
> [  249.893310] BTRFS error (device sdg2): free space inode generation (0) did not match free space cache generation (26286)
> 
> 
> So you could try umounting the volume, then doing a one-time mount with the
> clear_cache mount option. Give it some time to rebuild the space cache.
> 
> After that, you could umount again, mount with enospc_debug, and try to
> reproduce the ENOSPC with another balance to see whether dmesg contains more
> information this time.
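
For reference, that boils down to roughly the following sequence; the mount
point and device are the ones from this thread (sdg2, mounted at /mnt), and my
exact invocation may have differed slightly:

# umount /mnt
# mount -o clear_cache /dev/sdg2 /mnt    # one-time mount; let it rebuild the space cache
# umount /mnt
# mount -o enospc_debug /dev/sdg2 /mnt
# btrfs balance start /mnt               # then try to reproduce the ENOSPC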

OK, I did that (roughly the sequence sketched above), and the new dmesg is
attached. Here are some outputs again, first "filesystem df" (that surge in
"total" at the end sure is consistent):

# btrfs filesystem df /mnt           
Data, single: total=237.00GiB, used=229.67GiB
System, DUP: total=32.00MiB, used=36.00KiB
Metadata, DUP: total=4.50GiB, used=3.49GiB
unknown, single: total=512.00MiB, used=0.00

And here is what I described in my initial post: the output of "balance status"
immediately after the error (it turns out my memory was correct):

# btrfs filesystem balance status /mnt
Balance on '/mnt' is running
0 out of about 0 chunks balanced (0 considered), -nan% left
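
Side note: if a balance really were still running at that point, it could be
stopped with:

# btrfs balance cancel /mnt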

(Also, this is with Gentoo kernel 3.15.6 now.)
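
In case anyone wants to skim the attached (xz-compressed) log for the relevant
bits, something like this should do:

# xzgrep -iE 'btrfs|enospc' dmesg4.log.xz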

-- 
Marc Joliet
--
"People who think they know everything really annoy those of us who know we
don't" - Bjarne Stroustrup

[-- Attachment #1.2: dmesg4.log.xz --]
[-- Type: application/x-xz, Size: 26508 bytes --]

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 819 bytes --]

Thread overview: 22+ messages
2014-07-19 20:10 Fw: ENOSPC errors during balance Marc Joliet
2014-07-19 20:58 ` Marc Joliet
2014-07-20  0:53   ` Chris Murphy
2014-07-20  9:50     ` Marc Joliet
2014-07-20  1:11   ` Chris Murphy
2014-07-20  9:48     ` Marc Joliet [this message]
2014-07-20 19:46       ` Marc Joliet
  -- strict thread matches above, loose matches on Subject: below --
2014-07-19 15:26 Marc Joliet
2014-07-19 17:38 ` Chris Murphy
2014-07-19 21:06   ` Piotr Szymaniak
2014-07-20  2:39   ` Duncan
2014-07-20 10:22     ` Marc Joliet
2014-07-20 11:40       ` Marc Joliet
2014-07-20 19:44         ` Marc Joliet
2014-07-21  2:41           ` Duncan
2014-07-21 13:22           ` Marc Joliet
2014-07-21 22:30             ` Marc Joliet
2014-07-21 23:30               ` Marc Joliet
2014-07-22  3:26                 ` Duncan
2014-07-22  7:37                   ` Marc Joliet
2014-07-20 12:59       ` Duncan
2014-07-21 11:01         ` Brendan Hide
