public inbox for linux-btrfs@vger.kernel.org
From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: Phillip Susi <phill@thesusis.net>
Cc: Qu Wenruo <quwenruo.btrfs@gmx.com>,
	Jan Ziak <0xe2.0x9a.0x9b@gmail.com>,
	linux-btrfs@vger.kernel.org
Subject: Re: Btrfs autodefrag wrote 5TB in one day to a 0.5TB SSD without a measurable benefit
Date: Mon, 14 Mar 2022 18:59:51 -0400	[thread overview]
Message-ID: <Yi/I54pemZzSrNGg@hungrycats.org> (raw)
In-Reply-To: <87a6dscn20.fsf@vps.thesusis.net>


On Mon, Mar 14, 2022 at 04:09:08PM -0400, Phillip Susi wrote:
> 
> Qu Wenruo <quwenruo.btrfs@gmx.com> writes:
> 
> > That's more or less expected.
> >
> > Autodefrag has two limitations:
> >
> > 1. Only defrag newer writes
> >    It doesn't defrag older fragments.
> >    This is the existing behavior from the beginning of autodefrag.
> >    Thus it's not that effective against small random writes.
> 
> I don't understand this bit.  The whole point of defrag is to reduce the
> fragmentation of previous writes.  New writes should always attempt to
> follow the previous one if possible.  

New writes are allocated to the first available free-space hole large
enough to hold them, starting from the point of the last write (plus
some other details like clustering and alignment).  The goal is that
data writes from memory are as sequential as possible, even if
many different files were written in the same transaction.

btrfs extents are immutable, so the filesystem can't extend an existing
extent with new data.  Instead, a new extent must be created that contains
both the old and new data to replace the old extent.  At least one new
fragment must be created whenever the filesystem is modified.  (In
zoned mode, this is strictly enforced by the underlying hardware.)
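A minimal Python model (not btrfs code; all names here are invented) of the immutability rule above: an append can never grow the last extent in place, so every write adds at least one fragment, and a defrag-style rewrite replaces old extents with one brand-new extent holding the same bytes.

```python
class Extent:
    """An immutable (start, data) pair, loosely like a btrfs file extent."""
    def __init__(self, start, data):
        self.start = start
        self.data = bytes(data)   # frozen once written

def append(extents, data):
    """Each write lands in a new extent: at least one new fragment per change."""
    start = extents[-1].start + len(extents[-1].data) if extents else 0
    extents.append(Extent(start, data))

def defrag(extents):
    """Replace adjacent extents with one new extent containing the same data;
    the old extents become unreferenced rather than being extended."""
    merged = b"".join(e.data for e in extents)
    return [Extent(extents[0].start, merged)]

extents = []
append(extents, b"AAAA")
append(extents, b"BBBB")      # two fragments after two writes
extents = defrag(extents)     # one new extent; the data was rewritten
```

Note that the defrag step rereads and rewrites all of the data: that rewrite cost is the core trade-off discussed in the rest of this message.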

> If auto defrag only changes the
> behavior of new writes, then how does it change it and why is that not
> the way new writes are always done?

Autodefrag doesn't change write behavior directly.  It is a
post-processing thread that rereads and rewrites recently written data,
_after_ it was originally written to disk.

In theory, running defrag after the writes means the writes themselves
can be fast and low-latency--they are a physically sequential stream of
blocks sent to the disk as fast as it can write them, because btrfs does
not have to be concerned with achieving physical contiguity of logically
discontiguous data.  Later on, when latency is no longer an issue and
some IO bandwidth is available, a background process can reread the
fragments and collect them into larger, logically and physically
contiguous extents.

In practice, autodefrag does only part of that task, badly.

Say we have a program that writes 4K to the end of a file, every 5
seconds, for 5 minutes.

Every 30 seconds (default commit interval), kernel writeback submits all
the dirty pages for writing to btrfs, and in 30 seconds there will be 6
x 4K = 24K of those.  An extent in btrfs is created to hold the pages,
filled with the data blocks, connected to the various filesystem trees,
and flushed out to disk.

Over 5 minutes this will happen 10 times, so the file contains 10
fragments, each about 24K (commits are asynchronous, so it might be
20K in one fragment and 28K in the next).
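A back-of-envelope check of those numbers (all figures come from the hypothetical workload above, not from measuring a real filesystem):

```python
write_size   = 4 * 1024   # 4K appended per write
write_period = 5          # seconds between writes
commit       = 30         # default btrfs commit interval, seconds
duration     = 5 * 60     # 5 minutes of writing

writes_per_commit = commit // write_period          # 6 writes per commit
bytes_per_commit  = writes_per_commit * write_size  # 24K extent per commit
commits           = duration // commit              # 10 fragments total

print(writes_per_commit, bytes_per_commit // 1024, commits)  # 6 24 10
```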

After each commit, inodes with new extents are appended to a list in
memory.  Each list entry contains an inode, the transid of the commit
where the first write occurred, and the last defrag offset.  That list
is processed by a kernel thread some time after the commits are written
to disk.  The thread searches each inode for extents created after the
last defrag transid, invokes defrag_range on each of them, and advances
the offset.  When the search offset reaches the end of the file, it is
reset to the beginning and another loop is done; if the next search
loop over the file doesn't find new extents, the inode is removed
from the defrag list.
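That list-processing loop can be sketched roughly as follows (in Python rather than the kernel's C; the function and field names here are invented for illustration, and the real logic has many more details):

```python
from dataclasses import dataclass

@dataclass
class DefragEntry:
    inode: object
    transid: int    # first transaction containing new writes
    offset: int = 0 # where the previous defrag pass stopped

def run_defrag_inodes(defrag_list, find_new_extents, defrag_range):
    """One pass over the in-memory autodefrag list.

    find_new_extents(inode, transid, offset) returns (extents, next_offset):
    extents created after `transid` at or past `offset`, and the offset to
    resume from (None once the end of the file is reached).
    """
    still_pending = []
    for entry in defrag_list:
        extents, next_offset = find_new_extents(
            entry.inode, entry.transid, entry.offset)
        for ext in extents:
            defrag_range(entry.inode, ext)   # reread + rewrite the range
        if next_offset is None:              # search hit end of file
            if not extents and entry.offset == 0:
                continue                     # a full loop found nothing: drop
            entry.offset = 0                 # reset and loop over file again
        else:
            entry.offset = next_offset       # resume here next pass
        still_pending.append(entry)
    return still_pending
```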

If there's a 5 minute delay between the original writes and autodefrag
finally catching up, then autodefrag will detect 10 new extents and
run defrag_range over them.  This is a read-then-write operation, since
the extent blocks may no longer be present in memory after writeback,
so autodefrag can easily fall behind writes if there are a lot of them.
Also the 64K size limit kicks in, so it might write 5 extents (2 x 24K =
48K, but 3 x 24K = 72K, and autodefrag cuts off at 64K).

If there's a 1 minute delay between the original writes and autodefrag,
then autodefrag will detect 2 new extents per pass and run defrag
over them, for a total of 5 new extents of about 48K each.  If there's
no delay at all, then there will be 10 extents of 24K each--if
autodefrag runs immediately after each commit, it sees only one extent
in each loop, and issues no defrag_range calls.
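A toy rerun of that example: 10 commits of 24K each, with autodefrag greedily merging runs of logically adjacent new extents while refusing to grow a merged extent past the 64K cutoff.  The batch size models how many new extents each defrag pass sees (everything here is a toy model, not kernel behavior):

```python
LIMIT = 64 * 1024
FRAG  = 24 * 1024

def merge(run, limit=LIMIT):
    """Greedily merge a run of adjacent extent sizes, capped at `limit`."""
    out = []
    for f in run:
        if out and out[-1] + f <= limit:
            out[-1] += f
        else:
            out.append(f)
    return out

fragments = [FRAG] * 10   # 10 commits, 24K of appended data each

# 5-minute delay: one pass sees all 10 extents (2x24K=48K fits, 3x24K=72K doesn't)
five_min = merge(fragments)
# 1-minute delay: each pass sees 2 new extents and merges them to 48K
one_min = [e for i in range(0, 10, 2) for e in merge(fragments[i:i + 2])]
# no delay: each pass sees a single extent, so nothing can be merged
no_delay = list(fragments)

print(len(five_min), len(one_min), len(no_delay))  # 5 5 10
```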

Seen from the point of view of the disk, there are always at least
10 x 24K writes.  In the no-autodefrag case it ends there.  In the
autodefrag cases, some of the data is read and rewritten later to make
larger extents.

In non-appending cases, the kernel autodefrag doesn't do very much useful
at all--random writes aren't logically contiguous, so autodefrag never
sees two adjacent extents in a search result, and therefore never sees
an opportunity to defrag anything.
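A quick sketch of why random writes defeat this: only logically adjacent extents can be merged, and randomly placed 4K writes almost never land next to each other (the offsets below are arbitrary, chosen for illustration):

```python
def adjacent_pairs(extents):
    """extents: list of (offset, length).  Count mergeable neighbours,
    i.e. pairs where one extent ends exactly where the next begins."""
    s = sorted(extents)
    return sum(1 for a, b in zip(s, s[1:]) if a[0] + a[1] == b[0])

# six sequential 4K appends vs six 4K writes at scattered offsets
appends = [(i * 4096, 4096) for i in range(6)]
randoms = [(o * 4096, 4096) for o in (40, 3, 97, 12, 60, 25)]

print(adjacent_pairs(appends), adjacent_pairs(randoms))  # 5 0
```

With zero adjacent pairs, each autodefrag search over the randomly-written file finds nothing it can combine, so no defrag_range calls are issued at all.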

At the time autodefrag was added to the kernel (May 2011), it had already
been possible to do a better job in userspace for over a year (since
Feb 2010).  Between 2012 and 2021 there were only a handful of bug fixes,
mostly of the form "stop autodefrag from ruining things for the rest of
the kernel."

