From: "Holger Hoffstätte" <holger@applied-asynchrony.com>
To: Stefan Priebe - Profihost AG <s.priebe@profihost.ag>,
	Wang Xiaoguang <wangxg.fnst@cn.fujitsu.com>,
	"linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>
Subject: Re: BTRFS: space_info 4 has 18446742286429913088 free, is not full
Date: Wed, 28 Sep 2016 15:44:48 +0200
Message-ID: <57EBC950.6040205@applied-asynchrony.com>
In-Reply-To: <38866575-422f-0d2d-b76c-a1d4b1018cfd@profihost.ag>

On 09/28/16 15:06, Stefan Priebe - Profihost AG wrote:
> 
> Yes, this is 4.4.22, and no, I don't have qgroups enabled, so that
> can't help.
> 
> # btrfs qgroup show /path/
> ERROR: can't perform the search - No such file or directory
> ERROR: can't list qgroups: No such file or directory
> 
> This is the same output on all backup machines.

OK, that is really good to know (your other mails arrived just after I
sent mine). The fact that you see this problem with all kernels - even with
4.8rc - *and* on all machines is good (in a way) because it means I haven't
messed up anything, and we're not chasing ghosts caused by broken backport
patches.

>> would be unfortunate, but you could try to disable compression for a
>> while and see what happens, assuming the space requirements allow this
>> experiment.
> Good idea, but the space requirements don't allow it. I hope I can
> reproduce this with my already existing test script, which I've now
> bumped to use a 37TB partition and big files rather than a 15GB
> partition and small files. If I can reproduce it, I can also check
> whether disabling compression fixes this.

Great. Remember to undo the compression on existing files, or create
them from scratch.
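
Something along these lines should do it (an untested sketch -
/mnt/backup is a placeholder path, and older btrfs-progs may want ""
instead of "none" as the property value):

  # remount without the compress= mount option (adjust fstab first):
  mount -o remount /mnt/backup

  # clear any per-file compression property, then rewrite the extents;
  # defragment without -c rewrites data using the current settings:
  find /mnt/backup -type f -exec btrfs property set {} compression none \;
  btrfs filesystem defragment -r /mnt/backup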

> No, that's not the case - neither rsync nor inplace is involved. I'm
> dumping differences directly from Ceph and putting them on top of a
> base image, but only for 7 days, so it's not endlessly fragmenting
> the file. After 7 days a clean whole image is dumped.

That sounds sane, but it's also not at all how you described things to
me previously ;) But OK. How do you "dump differences directly from
Ceph"? I'd assume the VM images are RBDs, but it sounds like you're
somehow using overlayfs.
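
If it's the usual RBD incremental workflow, I'd expect something
roughly like this (pool, image and snapshot names are made up by me):

  # full dump of a base snapshot once per cycle
  rbd export rbd/vm-disk@base /backup/vm-disk.base

  # daily diff against that base snapshot
  rbd export-diff --from-snap base rbd/vm-disk@daily /backup/vm-disk.diff

  # optionally collapse several diffs into one
  rbd merge-diff /backup/vm-disk.diff1 /backup/vm-disk.diff2 /backup/vm-disk.merged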

> Yes and no - this is not ideal, and even very slow if your customers
> need backups on a daily basis. So you must be able to mount a
> specific backup very fast. And stacking on demand is mostly too slow
> - but this is far away from the topic of this thread.

I understand the desire to mount & immediately access backups - it's what
I do here at home too (every machine can access its own last #n backups
via NFS) and it's very useful.

Anyway... something is off, and you can reliably trigger it while
other people apparently cannot. Do you still use those nonstandard
mount options with extremely long transaction flush times?
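
For the record, I mean something like a raised commit interval, e.g.
(placeholder value; the btrfs default is 30 seconds):

  mount -o remount,commit=300 /mnt/backup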

-h


