From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
Cc: Timofey Titovets <nefelim4ag@gmail.com>,
darkbasic@linuxsystems.it, David Sterba <dsterba@suse.cz>,
linux-btrfs <linux-btrfs@vger.kernel.org>
Subject: Re: Any chance to get snapshot-aware defragmentation?
Date: Thu, 31 May 2018 23:19:59 -0400
Message-ID: <20180601031959.GC22356@hungrycats.org>
In-Reply-To: <7ba873b7-48f3-ba2f-3e92-3a472e2d59f5@gmail.com>
On Mon, May 21, 2018 at 11:38:28AM -0400, Austin S. Hemmelgarn wrote:
> On 2018-05-21 09:42, Timofey Titovets wrote:
> > пн, 21 мая 2018 г. в 16:16, Austin S. Hemmelgarn <ahferroin7@gmail.com>:
> > > On 2018-05-19 04:54, Niccolò Belli wrote:
> > > > On venerdì 18 maggio 2018 20:33:53 CEST, Austin S. Hemmelgarn wrote:
> > > > > With a bit of work, it's possible to handle things sanely. You can
> > > > > deduplicate data from snapshots, even if they are read-only (you need
> > > > > to pass the `-A` option to duperemove and run it as root), so it's
> > > > > perfectly reasonable to only defrag the main subvolume, and then
> > > > > deduplicate the snapshots against that (so that they end up all being
> > > > > reflinks to the main subvolume). Of course, this won't work if you're
> > > > > short on space, but if you're dealing with snapshots, you should have
> > > > > enough space that this will work (because even without defrag, it's
> > > > > fully possible for something to cause the snapshots to suddenly take
> > > > > up a lot more space).
> > > >
> > > > Been there, tried that. Unfortunately, even if I skip the defrag, a simple
> > > >
> > > > duperemove -drhA --dedupe-options=noblock --hashfile=rootfs.hash rootfs
> > > >
> > > > ends up eating more space than was previously available (probably
> > > > due to autodefrag?).
> > > It's not autodefrag (that doesn't trigger on use of the EXTENT_SAME
> > > ioctl). There are two things involved here:
> >
> > > * BTRFS has somewhat odd and inefficient handling of partial extents.
> > > When part of an extent becomes unused (because of a CLONE ioctl, or an
> > > EXTENT_SAME ioctl, or something similar), that part stays allocated
> > > until the whole extent would be unused.
> > > * You're using the default deduplication block size (128k), which is
> > > larger than your filesystem block size (which is at most 64k, most
> > > likely 16k, but might be 4k if it's an old filesystem), so deduplicating
> > > can split extents.
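
(Aside, for anyone following along: this is roughly what an EXTENT_SAME
request looks like from userspace today. An untested sketch using the
generic FIDEDUPERANGE ioctl that replaced the btrfs-private EXTENT_SAME
ioctl; error handling trimmed for brevity:)

#include <linux/fs.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>

/* Ask the kernel to share len bytes at src_off in src_fd with
 * dst_off in dst_fd, if and only if the contents are identical. */
int dedupe_range(int src_fd, __u64 src_off, __u64 len,
                 int dst_fd, __u64 dst_off)
{
        struct file_dedupe_range *r;
        int ret;

        r = calloc(1, sizeof(*r) + sizeof(struct file_dedupe_range_info));
        if (!r)
                return -1;
        r->src_offset = src_off;
        r->src_length = len;
        r->dest_count = 1;
        r->info[0].dest_fd = dst_fd;
        r->info[0].dest_offset = dst_off;

        ret = ioctl(src_fd, FIDEDUPERANGE, r);
        if (ret == 0 && r->info[0].status == FILE_DEDUPE_RANGE_SAME)
                printf("deduped %llu bytes\n",
                       (unsigned long long)r->info[0].bytes_deduped);
        free(r);
        return ret;
}

The kernel verifies the two ranges are byte-identical before sharing
them; under the hood btrfs shares whole filesystem blocks, which is
part of where the alignment issues discussed below come from.
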
> >
> > That's the metadata node (leaf) size, not the fs block size.
> > The btrfs fs block size currently equals the machine page size.
> You're right, I keep forgetting about that (probably because BTRFS is pretty
> much the only modern filesystem that doesn't let you change the block size).
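
(Easy enough to verify: a trivial sketch that prints the block size the
kernel reports for a path, which on btrfs is the sectorsize. On x86 it
should print 4096.)

#include <stdio.h>
#include <sys/statvfs.h>

int main(int argc, char **argv)
{
        struct statvfs st;

        if (argc < 2 || statvfs(argv[1], &st) != 0) {
                perror("statvfs");
                return 1;
        }
        /* on btrfs, f_bsize is the sectorsize (currently page size) */
        printf("block size: %lu\n", st.f_bsize);
        return 0;
}
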
> >
> > > Because of this, if a duplicate region happens to overlap the front of
> > > an already shared extent, and the end of said shared extent isn't
> > > aligned with the deduplication block size, the EXTENT_SAME call will
> > > deduplicate the first part, creating a new shared extent, but not the
> > > tail end of the existing shared region, and all of that original shared
> > > region will stick around, taking up extra space that it wasn't using before.
> >
> > > Additionally, if only part of an extent is duplicated, then that area of
> > > the extent will stay allocated, because the rest of the extent is still
> > > referenced (so you won't necessarily see any actual space savings).
> >
> > > You can mitigate this by telling duperemove to use the same block size
> > > as your filesystem using the `-b` option. Note that using a smaller
> > > block size will also slow down the deduplication process and greatly
> > > increase the size of the hash file.
> >
> > duperemove's -b only controls how the data is hashed, nothing more or
> > less, and it only supports block sizes from 4KiB to 1MiB.
> And you can only deduplicate the data at the granularity you hashed it at.
> In particular:
>
> * The total size of a region being deduplicated has to be an exact multiple
> of the hash block size (what you pass to `-b`). So for the default 128k
> size, you can only deduplicate regions that are multiples of 128k long
> (128k, 256k, 384k, 512k, etc). This is a simple limit derived from how
> blocks are matched for deduplication.
> * Because duperemove uses fixed hash blocks (as opposed to using a rolling
> hash window like many file synchronization tools do), the regions being
> deduplicated also have to be exactly aligned to the hash block size. So,
> with the default 128k size, you can only deduplicate regions starting at 0k,
> 128k, 256k, 384k, 512k, etc, but not ones starting at, for example, 64k into
> the file.
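
(To make the constraint concrete: it is just modular arithmetic. This
toy predicate is only an illustration, nothing like duperemove's real
code:)

#include <stdbool.h>

/* A byte range is a dedup candidate under fixed-block hashing only
 * if its offset and length are both multiples of the hash block
 * size. */
static bool dedup_candidate(unsigned long off, unsigned long len,
                            unsigned long hash_block)
{
        return len > 0 && off % hash_block == 0 && len % hash_block == 0;
}

With the default hash_block of 128 * 1024, a duplicate run starting
64KiB into a file fails the offset test even though the data is
identical.
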
> >
> > The dedup block size also changes how effective deduplication is,
> > while the number of hash-block pairs determines the hash file size
> > and the time complexity.
> >
> > Let's say 'A' stands for 1KiB of data, so 'AAAA' is 4KiB of a repeated pattern.
> >
> > So, for example, you have two files of two 4KiB blocks each:
> > 1: 'AAAABBBB'
> > 2: 'BBBBAAAA'
> >
> > With -b 8KiB, the hash of the first file is not the same as the second's.
> > But with -b 4KiB, duperemove will see both 'AAAA' and 'BBBB',
> > and those blocks will be deduped.
> This supports what I'm saying though. Your deduplication granularity is
> bounded by your hash granularity. If in addition to the above you have a
> file that looks like:
>
> AABBBBAA
>
> It would not get deduplicated against the first two at either `-b 4k` or `-b
> 8k` despite the middle 4k of the file being an exact duplicate of the final
> 4k of the first file and first 4k of the second one.
>
> If instead you have:
>
> AABBBBBB
>
> And the final 6k is a single on-disk extent, that extent will get split when
> you go to deduplicate against the first two files with a 4k block size
> because only the final 4k can be deduplicated, and the entire 6k original
> extent will stay completely allocated.
It's the extent *ref* (in the subvol) that gets split. The original
extent *data* (in the extent tree) is never modified, only deleted when
the last ref to any part of the extent data item is removed. It looks
like there was intent in early btrfs to support splitting the extent data
too, but any code that might actually do that seems to have been removed
(among other things, there are gotchas with compression--you can't simply
truncate a compressed extent without modifying its data).
bees uses 4K block matching to find a common block in both extents, then
searches blocks adjacent to the matching blocks for more duplicates
until a complete extent is found. This enables bees to ignore the
dedup-block-size/extent-size alignment problem. This is similar to
rsync's rolling hash window, but relies on slightly different
assumptions about how duplicate data is distributed through a typical
filesystem. To get a bigger block size, bees discards block hashes
(e.g. 32K block size = 7 out of 8 4K block hashes discarded) because
bees can find a 32K contiguous duplicate extent with just one 4K hash.
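
Schematically it looks something like the following. This is a toy
sketch rather than bees' actual code, and blocks_equal() is a
hypothetical helper that reads one 4K block from each file and
compares them:

#include <stdbool.h>

extern bool blocks_equal(int fd_a, long blk_a, int fd_b, long blk_b);

/* Given one matching 4K block pair (a, b), grow the match in both
 * directions until adjacent blocks stop matching. Returns the run
 * length in blocks; *start_a and *start_b get the run's first
 * block index in each file. */
long grow_match(int fd_a, long a, int fd_b, long b,
                long nblocks_a, long nblocks_b,
                long *start_a, long *start_b)
{
        long lo = 0, hi = 1;

        while (a - lo > 0 && b - lo > 0 &&
               blocks_equal(fd_a, a - lo - 1, fd_b, b - lo - 1))
                lo++;
        while (a + hi < nblocks_a && b + hi < nblocks_b &&
               blocks_equal(fd_a, a + hi, fd_b, b + hi))
                hi++;
        *start_a = a - lo;
        *start_b = b - lo;
        return lo + hi;
}
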
bees replaces the entire extent ref containing a duplicate block with
reflinks to duplicate blocks in other extents. If some blocks within
an extent are unique, bees creates a duplicate extent containing the
unique data, then dedups the new duplicate blocks over the old ones.
So if you have AABBBBB and AABBBCC, bees will make a copy of CC in a
new extent, then replace AABBBCC with reflinks to AABBB (from AABBBBB)
and the new CC. This eliminates the entire original AABBBCC extent
from the filesystem. At the moment bees isn't very smart about that,
which results in increased fragmentation when deduping data with lots
of non-extent-aligned duplication, like VM images and ELF binaries.
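
In terms of the dedupe_range() sketch earlier in this mail, and
pretending each letter above is one 4KiB block so the ranges line up,
the sequence is roughly this toy outline (tmp_fd is assumed to already
hold a fresh copy of the CC bytes):

#define BLK 4096ULL

/* Toy outline, not bees code: remove every ref to the AABBBCC
 * extent. aabbbbb_fd and aabbbcc_fd are open on files whose
 * contents are the single extents from the example. */
int eliminate_aabbbcc(int aabbbbb_fd, int aabbbcc_fd, int tmp_fd)
{
        /* replace AABBB with a reflink into the AABBBBB extent */
        if (dedupe_range(aabbbbb_fd, 0, 5 * BLK, aabbbcc_fd, 0) != 0)
                return -1;
        /* replace CC with a reflink to the fresh copy; the last ref
         * to the original AABBBCC extent goes away and btrfs frees
         * the whole extent */
        return dedupe_range(tmp_fd, 0, 2 * BLK, aabbbcc_fd, 5 * BLK);
}
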
In the future bees could combine short extents (not necessarily
duplicates) into larger ones as it goes, making it an integrated
dedup and defrag tool. This would not really be snapshot-aware per
se--bees would be optimizing the layout of extent data items first,
then rewriting all of the extent ref (subvol/snapshot) trees to point
to the updated extent data items without having to care about whether
the original extent references come from snapshots, clones, or dedup.
I guess you could call that snapshot-agnostic, since a tool could do
this without an understanding of the snapshot concept at all.
Teaching bees that trick is a project I am working on *extremely*
slowly--it's practically a rewrite of bees, and $DAYJOB and home life
keep me stretched pretty thin these days.
> > Also, duperemove has two modes of deduping:
> > 1. By extents
> > 2. By blocks
> Yes, you can force it to not collapse runs of duplicate blocks into single
> extents, but that doesn't matter for this at all; you are still limited
> by your hash granularity.