From: Marc MERLIN <marc@merlins.org>
To: Duncan <1i5t5.duncan@cox.net>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Fixing Btrfs Filesystem Full Problems typo?
Date: Sun, 23 Nov 2014 13:16:50 -0800	[thread overview]
Message-ID: <20141123211650.GU8916@merlins.org> (raw)
In-Reply-To: <pan$5df6$ab571fd5$d428faeb$f4fcc034@cox.net>

On Sun, Nov 23, 2014 at 07:52:29AM +0000, Duncan wrote:
> > Right. So, why would you rebalance empty chunks or near empty chunks?
> > Don't you want to rebalance almost full chunks first, and work you way
> > to less and less full as needed?
> 
> No, the closer to empty a chunk is, the more effect you can get in 
> rebalancing it along with others of the same fullness.
 
Ok, now I see what I was thinking the wrong way around:
rebalancing is not rebalancing data within a chunk, or optimizing some
tree data structure.

Rebalancing is taking a nearly empty chunk and merging its data with
other chunks' data to free up that chunk's space.

So, -dusage=10 only picks chunks that are 10% used or less, and tries
to free them up by putting their data elsewhere.

Did I get it right this time? :)
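For reference, that filter looks like this on the command line (the
mountpoint is just an example):

```shell
# Rewrite only data chunks that are at most 10% full; their live data
# gets packed into other chunks, and the freed chunks go back to
# unallocated space.
btrfs balance start -dusage=10 /mnt/btrfs_pool
```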

> IOW, rewriting 20 95% usage chunks to 19, freeing just one, is going to 
> take you nearly 20 times as long as rewriting 20 5% usage chunks, freeing 
> 19 of them, since in the latter case you're actually only rewriting one 
> full chunk's worth of data or metadata.

Right, that makes sense.
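Spelling out the arithmetic in that example (this is just a sketch of
the math, not anything btrfs actually runs):

```python
# Toy model of the 20-chunk example above: compacting n_chunks chunks
# that are `usage` fraction full rewrites n_chunks * usage chunks'
# worth of data, and frees whatever chunks are left over after that
# data has been repacked.
import math

def balance_cost(n_chunks, usage):
    data_moved = n_chunks * usage        # chunk-equivalents rewritten
    chunks_kept = math.ceil(data_moved)  # chunks still needed afterwards
    return data_moved, n_chunks - chunks_kept

print(balance_cost(20, 0.95))  # (19.0, 1): rewrite ~19 chunks of data, free 1
print(balance_cost(20, 0.05))  # (1.0, 19): rewrite 1 chunk of data, free 19
```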
 
> OK, so what /is/ the effect of a fuller filesystem?  Simply this.  As the 
> filesystem fills up, there's less and less fully free unallocated space 
> available even after a full balance, meaning that free space can be used 
> up with fewer and fewer chunk allocations, so you have to rebalance more 
> and more often to keep what's left from getting out of balance and 
> running into ENOSPC conditions.

Yes, been there, done that :)
 
> But, beware!  Just because your filesystem is say 55% full (number from 
> your example earlier), does **NOT** mean usage=55 is the best number to 
> use.  That may well be the case, or it may not.  There's simply no 
> necessarily direct correlation in that regard, and a recommended N for 
> usage=N cannot be determined without a LOT more use-case information than 
> simply knowing the filesystem is at 55% capacity.

Yeah, I remember that. I'm ok with using the same number but I
understand it's not a given that it's the perfect number.
 
> So the 55% filesystem capacity would probably inform my choice of jumps, 
> say 20% at a time, but I'd still start much lower and jump at that 20% or 
> so at a time.

That makes sense. I'll try to synthesize all this and rewrite my blog
post and the wiki to make this clearer.
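The "start low, then jump" approach could be scripted roughly like
this (mountpoint, step size, and stopping point are all hypothetical;
in practice you'd check free space between passes and stop early):

```shell
#!/bin/sh
# Incremental balance sketch: start with a low usage filter and raise
# it in jumps rather than balancing everything at once.
MNT=/mnt/btrfs_pool   # example mountpoint
for pct in 5 25 45 65; do
    btrfs balance start -dusage="$pct" "$MNT"
    # "used" here means allocated chunks; stop raising pct once it
    # drops comfortably below the device size.
    btrfs filesystem show "$MNT"
done
```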
 
Thanks,
Marc
-- 
"A mouse is a device used to point at the xterm you want to type in" - A.S.R.
Microsoft is to operating systems ....
                                      .... what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/                         | PGP 1024R/763BE901

