public inbox for linux-btrfs@vger.kernel.org
From: Duncan <1i5t5.duncan@cox.net>
To: Andrej Friesen <andre.friesen@gmail.com>, linux-btrfs@vger.kernel.org
Subject: Re: Questions about BTRFS balance and scrub on non-RAID setup
Date: Tue, 31 Aug 2021 21:54:01 -0700
Message-ID: <20210831215401.6928aeef@ws>

Andrej Friesen posted on Tue, 31 Aug 2021 10:17:07 +0200 as excerpted:

>> You probably want to use autodefrag or a custom defragmentation
>> solution too. We weren't satisfied with autodefrag in some situations
>> (where clearly fragmentation crept in and IO performance suffered
>> until a manual defrag) and developed our own scheduler for triggering
>> defragmentation based on file writes and slow full filesystem scans,
> 
> The ceph cluster only uses SSDs, therefore I guess we do not suffer
> from fragmentation problems as with HDDs. As far as I understand SSDs.

Since I saw mention of btrfs snapshots as well...

It's worth mentioning that defrag (of course) triggers a write-out of
the new defragmented data, which, because btrfs snapshots are cow-based
(copy-on-write), duplicates blocks still locked into place by existing
snapshots.  With rewrite-in-place write patterns (typical for database
or VM-image usage), combining defrag with repeated snapshots can eat up
space rather fast.

(They tried snapshot-aware defrag at one point but due to the exploded 
complexity of dealing with all the COW-references the performance just 
wasn't within the realm of practical as the defrag ended up making
little forward progress, so that was dropped in favor of a defrag that
would break the cow-references and thus use extra space, but at least
/worked/ for its labeled purpose.)

So I'd suggest choosing one or the other, either snapshotting or
defrag, rather than both in combination.  Or at least limit their
combined usage and keep an eye on space usage, deleting snapshots
and/or reducing defrag frequency to some fraction of the snapshot
frequency as necessary.
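For keeping that eye on space usage, something like the following does
the job (the mountpoint and snapshot paths here are of course
hypothetical; adjust to your own layout):

```shell
# Chunk-level allocation: watch the data "used" figure creep toward
# "total" if defrag keeps duplicating snapshot-locked extents.
btrfs filesystem df /mnt/pool

# Per-snapshot sharing breakdown: a snapshot whose exclusive usage
# grows after a defrag is holding on to the old, pre-defrag copies.
btrfs filesystem du -s /mnt/pool/snapshots/*

# Drop the oldest snapshot when exclusive usage gets out of hand
# (snapshot name is a placeholder).
btrfs subvolume delete /mnt/pool/snapshots/2021-08-01
```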

For ssds, autodefrag without manual defrag may be a reasonable
compromise (it's one I like personally, but my use-case isn't
commercial).  It is said that autodefrag may be a performance
bottleneck for some database (and I suspect VM-image as well)
use-cases, but I suspect autodefrag on ssds should both mitigate that
performance issue and likely eliminate the need for more intensive
manual/scheduled defrag runs.
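Worth noting that autodefrag is a mount option, not a daemon, so trying
it out is cheap; a sketch (mountpoint hypothetical):

```shell
# Toggle autodefrag on a live filesystem without unmounting:
mount -o remount,autodefrag /mnt/pool

# Or persistently via /etc/fstab (the UUID is a placeholder):
# UUID=<fs-uuid>  /mnt/pool  btrfs  autodefrag,noatime  0 0

# Verify the option took effect:
findmnt -no OPTIONS /mnt/pool
```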

The other thing to consider with below-btrfs-level snapshotting, and
I'm out of my league for ceph/rbd but know it's definitely a problem
with lvm, is that btrfs, due to its multi-device functionality, cannot
be allowed to see other snapshots of the filesystem carrying the same
btrfs UUID.  (btrfs device scan is what makes btrfs aware of them, but
udev typically triggers that scan when it detects new devices, and
with lvm at least, udev device detection can trigger somewhat
unexpectedly.)  When btrfs sees these other devices with the same btrfs
UUID, it considers them additional devices of a multi-device btrfs and
can attempt to write to them instead of the original target device,
potentially creating all sorts of mayhem!

Like I said, I'm out of my league with ceph, etc, and have no idea if
this even applies there, but when I saw rbd snapshots mentioned I
thought of the lvm snapshots problem and figured it was worth a
heads-up, in case further investigation is necessary.
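A quick way to check whether such a duplicate-UUID device is even
visible to the kernel (again, a sketch; requires blkid and btrfs-progs):

```shell
# List every block device that scans as btrfs, with UUIDs.  The same
# UUID on two devices that are NOT deliberately one multi-device
# filesystem is the danger sign.
blkid -t TYPE=btrfs

# btrfs' own view: a filesystem reporting more devices than you
# expect means a snapshot/clone has been picked up.
btrfs filesystem show
```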

Likewise I saw the mention of quotas and balance.  Balance with quotas
enabled similarly explodes, due to constant recalculation of the quota
accounting as the balance does its thing, increasing balance time
dramatically and often beyond the realm of the practical.  So if quotas
are needed, minimize the use of balance, and if a balance is necessary,
temporarily turning quotas off may be the only way to make reasonable
forward progress on the balance.
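The quotas-off-around-balance dance looks something like this
(mountpoint hypothetical, and the usage filter value is illustrative;
filtering to mostly-empty data chunks keeps the balance short):

```shell
# Disable quota accounting so balance doesn't fight the qgroup rescans.
btrfs quota disable /mnt/pool

# Balance only data chunks that are at most 50% used.
btrfs balance start -dusage=50 /mnt/pool

# Re-enable quotas afterward.  Note this triggers a full qgroup
# rescan, so budget time for that too.
btrfs quota enable /mnt/pool
```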

But it sounds like btrfs quotas may not be necessary, thus avoiding
that problem entirely. =:^)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman


Thread overview: 5+ messages
2021-09-01  4:54 Duncan [this message]
2021-08-30 13:20 Questions about BTRFS balance and scrub on non-RAID setup Andrej Friesen
2021-08-30 14:18 ` Lionel Bouton
2021-08-31  8:17   ` Andrej Friesen
2021-08-31 13:06     ` Lionel Bouton
