linux-btrfs.vger.kernel.org archive mirror
From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: Marat Khalili <mkh@rqc.ru>, Martin Raiber <martin@urbackup.org>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Recommendations for balancing as part of regular maintenance?
Date: Tue, 9 Jan 2018 07:46:48 -0500	[thread overview]
Message-ID: <13b5063c-a7bd-5c95-1f6e-16124d385569@gmail.com> (raw)
In-Reply-To: <3eae37f6-3776-15c9-84ae-568e56abfa7e@rqc.ru>

On 2018-01-09 03:33, Marat Khalili wrote:
> On 08/01/18 19:34, Austin S. Hemmelgarn wrote:
>> A: While not strictly necessary, running regular filtered balances 
>> (for example `btrfs balance start -dusage=50 -dlimit=2 -musage=50 
>> -mlimit=4`, see `man btrfs-balance` for more info on what the options 
>> mean) can help keep a volume healthy by mitigating the things that 
>> typically cause ENOSPC errors. 
> 
> The choice of words is not very fortunate IMO. In my view, a volume
> ceasing to be "healthy" during normal operation presumes some bugs (or
> at least shortcomings) in the filesystem code. In that case I'd prefer
> to have a detailed understanding of the situation before copy-pasting
> commands from wiki pages. Remember, most users don't run cutting-edge
> kernels and tools, preferring LTS distribution releases instead, so one
> size might not fit all.
I will not dispute that the tendency of BTRFS to end up in bad 
situations is a shortcoming of the filesystem code.  However, that isn't 
likely to change any time soon (fixing it is going to be a lot of work 
that will likely reduce performance for quite a few people), so there is 
absolutely no reason that people should not be trying to mitigate the 
problem.
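
As a concrete illustration of the kind of mitigation I mean, here is a
hedged sketch (my own, not from any official tooling) of how one might
decide whether a balance is even worth scheduling, by comparing the
"Device allocated" and "Device size" lines of `btrfs filesystem usage -b`
output.  The field positions and the 90% threshold are assumptions;
adjust them for your setup.

```shell
# Sketch only: decide whether a filtered balance looks worthwhile.
# $1 = captured output of: btrfs filesystem usage -b <mountpoint>
needs_balance() {
    # Pull the raw byte counts out of the usage report.  The field
    # positions are an assumption about the report's layout.
    size=$(printf '%s\n' "$1" | awk '/Device size:/ {print $3; exit}')
    alloc=$(printf '%s\n' "$1" | awk '/Device allocated:/ {print $3; exit}')
    # Balance mostly matters once nearly all raw space is allocated to
    # chunks; 90% here is an arbitrary cutoff, not a recommendation.
    [ $((alloc * 100 / size)) -ge 90 ]
}
```

On a live system this would be invoked along the lines of
`needs_balance "$(btrfs filesystem usage -b /mnt)" && echo "consider balancing"`.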

As far as the exact command, the one I quoted has worked across at least
two years' worth of btrfs-progs and kernels, and I think far longer than
that (the usage and limit filters were implemented pretty early on).  I
agree that detailed knowledge would be better, but that doesn't exactly
fit the concept of a FAQ in most cases, and most people really don't
care about the details as long as it works.
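
For what it's worth, a minimal maintenance wrapper around that command
might look like the sketch below.  The mount point and the idea of
running it periodically (e.g. from cron) are assumptions on my part; the
filter values are exactly the ones quoted above, and `build_balance_cmd`
is a hypothetical helper, not part of btrfs-progs.

```shell
#!/bin/sh
# Sketch of a periodic maintenance script; not a definitive recipe.

MNT="${1:-/mnt/data}"   # hypothetical default mount point; pass your own

# Keep the filters in one place so they are easy to tune.  usage=50
# restricts the balance to chunks at most 50% full; limit= caps how
# many chunks are rewritten per run (see `man btrfs-balance`).
build_balance_cmd() {
    printf 'btrfs balance start -dusage=50 -dlimit=2 -musage=50 -mlimit=4 %s' "$1"
}

# Print the command; on a real system you would run it as root instead:
#   sh -c "$(build_balance_cmd "$MNT")"
build_balance_cmd "$MNT"
```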
> 
> On 08/01/18 23:29, Martin Raiber wrote:
>> There have been reports of (rare) corruption caused by balance (which
>> won't be detected by a scrub) here on the mailing list. So I would
>> stay away from btrfs balance unless it is absolutely needed (ENOSPC),
>> and while it is running I would try not to do anything else involving
>> writes simultaneously.
> 
> This is my opinion too as a normal user, based upon reading this list 
> and own attempts to recover from ENOSPC. I'd rather re-create filesystem 
> from scratch, or at least make full verified backup before attempting to 
> fix problems with balance.
While I'm generally of the same opinion (and I have a feeling most other 
people who have been server admins are too), recommending that is not a 
very user-friendly position.  Keep in mind that many (probably most) 
users don't keep proper backups, and targeting only 'sensible' users as 
your primary audience is a bad idea.  Balance also needs to work at 
least at a basic level regardless, simply because you can't always just 
nuke the volume and rebuild it from scratch.

Personally though, I don't think I've ever seen issues with balance 
corrupting data, and I don't recall seeing complaints about it either 
(though I would love to see some links that prove me wrong).


Thread overview: 34+ messages
2018-01-08 15:55 Recommendations for balancing as part of regular maintenance? Austin S. Hemmelgarn
2018-01-08 16:20 ` ein
2018-01-08 16:34   ` Austin S. Hemmelgarn
2018-01-08 18:17     ` Graham Cobb
2018-01-08 18:34       ` Austin S. Hemmelgarn
2018-01-08 20:29         ` Martin Raiber
2018-01-09  8:33           ` Marat Khalili
2018-01-09 12:46             ` Austin S. Hemmelgarn [this message]
2018-01-10  3:49               ` Duncan
2018-01-10 16:30                 ` Tom Worster
2018-01-10 17:01                   ` Austin S. Hemmelgarn
2018-01-10 18:33                     ` Tom Worster
2018-01-10 20:44                       ` Timofey Titovets
2018-01-11 13:00                         ` Austin S. Hemmelgarn
2018-01-11  8:51                     ` Duncan
2018-01-10  4:38       ` Duncan
2018-01-10 12:41         ` Austin S. Hemmelgarn
2018-01-11 20:12         ` Hans van Kranenburg
2018-01-10 21:37 ` waxhead
2018-01-11 12:50   ` Austin S. Hemmelgarn
2018-01-11 19:56   ` Hans van Kranenburg
2018-01-12 18:24 ` Austin S. Hemmelgarn
2018-01-12 19:26   ` Tom Worster
2018-01-12 19:43     ` Austin S. Hemmelgarn
2018-01-13 22:09   ` Chris Murphy
2018-01-15 13:43     ` Austin S. Hemmelgarn
2018-01-15 18:23     ` Tom Worster
2018-01-16  6:45       ` Chris Murphy
2018-01-16 11:02         ` Andrei Borzenkov
2018-01-16 12:57         ` Austin S. Hemmelgarn
  -- strict thread matches above, loose matches on Subject: below --
2018-01-08 21:43 Tom Worster
2018-01-08 22:18 ` Hugo Mills
2018-01-09 12:23 ` Austin S. Hemmelgarn
2018-01-09 14:16   ` Tom Worster
