From: Duncan <1i5t5.duncan@cox.net>
To: linux-btrfs@vger.kernel.org
Subject: Re: btrfs balance did not progress after 12H
Date: Wed, 20 Jun 2018 08:55:23 +0000 (UTC)
Message-ID: <pan$2d3f1$510af9e4$2a3856f6$1a419b18@cox.net>
In-Reply-To: <ab17f789-d475-9812-9c76-83702fc2bd65@gmail.com>
Austin S. Hemmelgarn posted on Tue, 19 Jun 2018 12:58:44 -0400 as
excerpted:
> That said, I would question the value of repacking chunks that are
> already more than half full. Anything above a 50% usage filter
> generally takes a long time and has limited value in most cases (higher
> values are less likely to reduce the total number of allocated chunks).
> With `-dusage=50` or less, you're guaranteed to reduce the number of
> chunks if at least two match, and it isn't very time consuming for the
> allocator, because you can pack at least two matching chunks into one
> 'new' chunk (new in quotes because it may re-pack them into existing
> slack space on the FS). Additionally, `-dusage=50` is usually sufficient
> to mitigate the typical ENOSPC issues that regular balancing is supposed
> to help with.
While I used to agree (50% for best efficiency, perhaps 66 or 70% if
you're really pressed for space), now that the allocator can repack into
existing chunks more efficiently than it used to (at least in ssd mode,
which all my storage is now), I've seen higher values result in practical,
noticeable recovery of space to unallocated as well.
In fact, I routinely use usage=70 these days, and sometimes go higher,
to 99 or even 100%[1]. But of course I'm on ssd, so it's far faster, and
I partition things up with the biggest partitions under 100 GiB, so even
full unfiltered balances normally finish in under 10 minutes and normal
filtered balances in under a minute, to the point that I usually issue
the balance command and actually wait for completion. That's a far
different ball game from issuing a balance on a multi-TB hard drive and
expecting it to take hours or even days. In that case, yeah, a 50% cap
arguably makes sense, tho he was using 60, which still shouldn't (sans
bugs like we seem to have here) be /too/ bad.
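
For anyone following along, the commands being discussed look roughly
like this (purely illustrative, with /mnt standing in for wherever the
filesystem is actually mounted):

  # Austin's suggestion: repack only data chunks at or below 50% usage
  btrfs balance start -dusage=50 /mnt

  # what I typically run on my ssd-backed partitions
  btrfs balance start -dusage=70 /mnt

The only thing changing between the two is how full a chunk may be and
still be considered for repacking.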
---
[1] usage=100: A -musage=1..100 filter is the only way I've found to
balance metadata without rebalancing system as well. The unfortunate
penalty for rebalancing system on small filesystems is an increase of
the system chunk size from the 8 MiB original mkfs.btrfs size to 32
MiB... with only a few KiB actually used! =:^(
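
Concretely, that footnote amounts to something like this (again with
/mnt as a placeholder for the real mountpoint):

  # balances only metadata chunks matching the usage filter; in my
  # experience the nearly-empty system chunk is left untouched
  btrfs balance start -musage=100 /mnt

whereas letting the balance touch the system chunk is what bumps it from
8 MiB to 32 MiB on small filesystems.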
--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman