Linux Btrfs filesystem development
From: David Sterba <dsterba@suse.cz>
To: Qu Wenruo <wqu@suse.com>
Cc: linux-btrfs@vger.kernel.org, Josef Bacik <josef@toxicpanda.com>
Subject: Re: [PATCH v3] btrfs: don't update the block group item if used bytes are the same
Date: Tue, 13 Sep 2022 14:38:28 +0200	[thread overview]
Message-ID: <20220913123828.GK32411@twin.jikos.cz> (raw)
In-Reply-To: <2a76b8005eb7208eda97e62a944ae456cbe8386f.1662705863.git.wqu@suse.com>

On Fri, Sep 09, 2022 at 02:45:22PM +0800, Qu Wenruo wrote:
> [BACKGROUND]
> 
> When committing a transaction, we will update block group items for all
> dirty block groups.
> 
> But in fact, dirty block groups don't always need their block group
> items updated.
> It's pretty common to have a metadata block group which experienced
> several CoW operations but still has the same amount of used bytes.
> 
> In that case, we may unnecessarily CoW a tree block only to write
> back an unchanged value.
> 
> [ENHANCEMENT]
> 
> This patch will introduce btrfs_block_group::commit_used member to
> remember the last used bytes, and use that new member to skip
> unnecessary block group item update.
> 
> This is more common on a large fs, where a metadata block group can
> be as large as 1GiB, containing at most 64K metadata items.
> 
> In that case, if CoW added and then deleted one metadata item near
> the end of the block group, it's completely possible we don't need
> to touch the block group item at all.
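
For illustration, the idea can be sketched in plain C. This is a
simplified model, not the actual btrfs code: the struct layout and the
helper name are hypothetical, only the `commit_used` concept mirrors
the patch. The block group caches the used-bytes value from the last
committed block group item and skips the update when nothing changed:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified model of the patch's idea: cache the
 * used-bytes value that was last written to the block group item
 * (commit_used in the real patch). */
struct block_group {
	uint64_t used;		/* current used bytes */
	uint64_t commit_used;	/* used bytes at last item update */
};

/* Returns true if the block group item actually needs a CoW and
 * update, i.e. the used bytes changed since the last commit. */
static bool bg_item_needs_update(struct block_group *bg)
{
	if (bg->used == bg->commit_used)
		return false;		/* unchanged, skip the tree CoW */
	bg->commit_used = bg->used;	/* remember what we will write */
	return true;
}
```

With this check, a block group that was dirtied by CoW but whose net
used bytes are unchanged never reaches the extent tree at all.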
> 
> [BENCHMARK]
> 
> The patchset itself has quite a high chance (20~80%) to skip block
> group item updates in many workloads.
> 
> As a result, less time is spent in
> btrfs_write_dirty_block_groups(), reducing the overall execution
> time of the critical section of btrfs_commit_transaction().
> 
> Here is a fio command which does random writes with a 4K block size,
> causing very heavy metadata updates:
> 
> fio --filename=$mnt/file --size=512M --rw=randwrite --direct=1 --bs=4k \
>     --ioengine=libaio --iodepth=64 --runtime=300 --numjobs=4 \
>     --name=random_write --fallocate=none --time_based --fsync_on_close=1
> 
> The file size (512M) and number of threads (4) mean 2GiB of file
> data in total, but during the full 300s run time, my dedicated SATA
> SSD is able to write around 20~25GiB, which is over 10 times the
> file size.
> 
> Thus after the initial 2GiB is filled, we should not cause many
> block group item updates.
> 
> Please note, the fio numbers by themselves don't change much, but if
> we look deeper, there is some reduction in execution time,
> especially in the critical section of btrfs_commit_transaction().
> 
> I added extra trace_printk() calls to measure the following
> per-transaction execution times:
> 
> - Critical section of btrfs_commit_transaction()
>   By re-using the existing update_commit_stats() function, which
>   has already calculated the interval correctly.
> 
> - The while() loop for btrfs_write_dirty_block_groups()
>   Although this includes the execution time of btrfs_run_delayed_refs(),
>   it should still be representative overall.
> 
> Both results cover transid 7~30, the same number of committed
> transactions.
> 
> The result looks like this:
> 
>                       |      Before       |     After      |  Diff
> ----------------------+-------------------+----------------+--------
> Transaction interval  | 229247198.5       | 215016933.6    | -6.2%
> Block group interval  | 23133.33333       | 18970.83333    | -18.0%
> 
> The change in the block group interval is more pronounced, as
> skipped bg item updates also mean fewer delayed refs.
> 
> And the overall execution time for that bg update loop is pretty
> small, so we can assume the extent tree is already mostly cached.
> Skipping an uncached tree block would make the change more obvious.
> 
> Unfortunately the overall reduction in the commit transaction
> critical section is much smaller, as the block group item update
> loop is not the major part, at least for the above fio script.
> 
> But we still have an observable reduction in the critical section.
> 
> Reviewed-by: Josef Bacik <josef@toxicpanda.com>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> [Josef pinned down the race and provided a fix]
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>

Thanks for the numbers, it seems worthwhile. Now that we have the
fixed version I'll add it to the 6.1 queue, as we have the other perf
improvements there.

Thread overview:
2022-09-09  6:45 [PATCH] btrfs: don't update the block group item if used bytes are the same Qu Wenruo
2022-09-13 12:38 ` David Sterba [this message]
2022-09-13 16:55   ` [PATCH v3] " Andrea Gelmini
