From: Nikolay Borisov <nborisov@suse.com>
To: Josef Bacik <josef@toxicpanda.com>,
linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH v2 00/11] Improve preemptive ENOSPC flushing
Date: Fri, 9 Oct 2020 13:39:50 +0300
Message-ID: <09dd58a3-970e-59a9-d9fc-a4c4d1858450@suse.com>
In-Reply-To: <cover.1602189832.git.josef@toxicpanda.com>
On 8.10.20 at 23:48, Josef Bacik wrote:
> There are a lot of individual changes, but most of them revolve around fixing the
> O_DIRECT regression that Nikolay noted. With this set of patches we get
> slightly better performance in the buffered case than before, and the O_DIRECT
> case is slightly improved from baseline as well.
>
> v1->v2:
> - Added a FORCE_COMMIT_TRANS flush operation so we can keep the flush_space
> stuff consistent and get all the normal tracepoints.
> - Renamed fs_info->dio_bytes to ->ordered_bytes and changed it to count all
> ordered extents that were pending, not just DIO ordered extents that were
> pending.
> - Reworked the clamping to not apply if we're not doing a lot of delalloc
> reservations.
> - Reworked the preempt flushing loop to be more straightforward.
> - Fixed the need_preemptive_flushing() helper to take into account DIO heavy
> workloads.
>
<snip>
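(Side note for anyone skimming the changelog above: below is a rough,
self-contained C sketch of the kind of heuristic the ordered-bytes and
clamping items describe. It is not the actual patch code; every name and
threshold is made up, purely to illustrate why a DIO-heavy workload, where
most pending bytes are already ordered extents, should not trigger
preemptive flushing.)

/*
 * Illustrative sketch only -- NOT the btrfs patches. Names and
 * thresholds are invented for the example.
 */
#include <stdbool.h>
#include <stdio.h>

struct space_state {
	unsigned long long delalloc_bytes;	/* reserved, not yet submitted for writeback */
	unsigned long long ordered_bytes;	/* submitted, waiting on ordered extent completion */
	unsigned long long free_bytes;		/* unreserved metadata space left */
};

/* Hypothetical stand-in for a need_preemptive_flushing()-style check. */
static bool should_flush_preemptively(const struct space_state *s)
{
	/* Plenty of room left: nothing to do. */
	if (s->free_bytes > s->delalloc_bytes + s->ordered_bytes)
		return false;

	/*
	 * DIO-heavy workload: most pending bytes are already ordered
	 * extents, so writeback is in flight and flushing early would
	 * only add latency without freeing reservations.
	 */
	if (s->ordered_bytes > s->delalloc_bytes)
		return false;

	/* Delalloc-heavy and space is getting tight: flush early. */
	return true;
}

int main(void)
{
	struct space_state dio_heavy = { .delalloc_bytes = 64ULL << 20,
					 .ordered_bytes = 512ULL << 20,
					 .free_bytes = 256ULL << 20 };
	struct space_state buffered  = { .delalloc_bytes = 512ULL << 20,
					 .ordered_bytes = 64ULL << 20,
					 .free_bytes = 256ULL << 20 };

	printf("dio-heavy: flush=%d\n", should_flush_preemptively(&dio_heavy));
	printf("buffered:  flush=%d\n", should_flush_preemptively(&buffered));
	return 0;
}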
So indeed, v2 gives slightly better results in my testing:
dio-josef-v2:
WRITE: bw=48.6MiB/s (50.0MB/s), 48.6MiB/s-48.6MiB/s (50.0MB/s-50.0MB/s), io=8192MiB (8590MB), run=168534-168534msec
WRITE: bw=50.4MiB/s (52.8MB/s), 50.4MiB/s-50.4MiB/s (52.8MB/s-52.8MB/s), io=8192MiB (8590MB), run=162601-162601msec
WRITE: bw=50.9MiB/s (53.4MB/s), 50.9MiB/s-50.9MiB/s (53.4MB/s-53.4MB/s), io=8192MiB (8590MB), run=160964-160964msec
WRITE: bw=50.6MiB/s (53.0MB/s), 50.6MiB/s-50.6MiB/s (53.0MB/s-53.0MB/s), io=8192MiB (8590MB), run=161938-161938msec
WRITE: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=8192MiB (8590MB), run=164577-164577msec
buffered-josef-v2:
WRITE: bw=32.0MiB/s (33.6MB/s), 32.0MiB/s-32.0MiB/s (33.6MB/s-33.6MB/s), io=8192MiB (8590MB), run=255670-255670msec
WRITE: bw=29.5MiB/s (30.9MB/s), 29.5MiB/s-29.5MiB/s (30.9MB/s-30.9MB/s), io=8192MiB (8590MB), run=277829-277829msec
WRITE: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=8192MiB (8590MB), run=257554-257554msec
WRITE: bw=29.8MiB/s (31.3MB/s), 29.8MiB/s-29.8MiB/s (31.3MB/s-31.3MB/s), io=8192MiB (8590MB), run=274516-274516msec
WRITE: bw=29.8MiB/s (31.2MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=8192MiB (8590MB), run=274975-274975msec
In comparison with the V1 posting:
buffered-josef-v1:
WRITE: bw=29.1MiB/s (30.5MB/s), 29.1MiB/s-29.1MiB/s (30.5MB/s-30.5MB/s), io=8192MiB (8590MB), run=281678-281678msec
WRITE: bw=30.0MiB/s (32.5MB/s), 30.0MiB/s-30.0MiB/s (32.5MB/s-32.5MB/s), io=8192MiB (8590MB), run=264337-264337msec
WRITE: bw=29.6MiB/s (31.1MB/s), 29.6MiB/s-29.6MiB/s (31.1MB/s-31.1MB/s), io=8192MiB (8590MB), run=276312-276312msec
WRITE: bw=29.8MiB/s (31.2MB/s), 29.8MiB/s-29.8MiB/s (31.2MB/s-31.2MB/s), io=8192MiB (8590MB), run=274916-274916msec
WRITE: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=8192MiB (8590MB), run=269030-269030msec
buffered-misc-next-no-josef:
WRITE: bw=20.2MiB/s (21.2MB/s), 20.2MiB/s-20.2MiB/s (21.2MB/s-21.2MB/s), io=8192MiB (8590MB), run=404831-404831msec
WRITE: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=8192MiB (8590MB), run=394749-394749msec
WRITE: bw=20.8MiB/s (21.8MB/s), 20.8MiB/s-20.8MiB/s (21.8MB/s-21.8MB/s), io=8192MiB (8590MB), run=393291-393291msec
WRITE: bw=20.7MiB/s (21.8MB/s), 20.7MiB/s-20.7MiB/s (21.8MB/s-21.8MB/s), io=8192MiB (8590MB), run=394918-394918msec
WRITE: bw=21.1MiB/s (22.1MB/s), 21.1MiB/s-21.1MiB/s (22.1MB/s-22.1MB/s), io=8192MiB (8590MB), run=388499-388499msec
buffered-4.19.x:
WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=6387MiB (6697MB), run=274460-274460msec
WRITE: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=6643MiB (6966MB), run=284518-284518msec
WRITE: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=6643MiB (6966MB), run=284372-284372msec
WRITE: bw=23.6MiB/s (24.7MB/s), 23.6MiB/s-23.6MiB/s (24.7MB/s-24.7MB/s), io=6387MiB (6697MB), run=271200-271200msec
WRITE: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=6387MiB (6697MB), run=272670-272670msec
And dio:
dio-josef-v1:
WRITE: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=8192MiB (8590MB), run=174049-174049msec
WRITE: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=8192MiB (8590MB), run=169045-169045msec
WRITE: bw=45.0MiB/s (48.2MB/s), 45.0MiB/s-45.0MiB/s (48.2MB/s-48.2MB/s), io=8192MiB (8590MB), run=178196-178196msec
WRITE: bw=46.1MiB/s (48.3MB/s), 46.1MiB/s-46.1MiB/s (48.3MB/s-48.3MB/s), io=8192MiB (8590MB), run=177861-177861msec
WRITE: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=8192MiB (8590MB), run=176376-176376msec
dio-misc-next-no-josef:
WRITE: bw=50.1MiB/s (52.6MB/s), 50.1MiB/s-50.1MiB/s (52.6MB/s-52.6MB/s), io=8192MiB (8590MB), run=163365-163365msec
WRITE: bw=50.3MiB/s (52.8MB/s), 50.3MiB/s-50.3MiB/s (52.8MB/s-52.8MB/s), io=8192MiB (8590MB), run=162753-162753msec
WRITE: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=8192MiB (8590MB), run=161766-161766msec
WRITE: bw=50.2MiB/s (52.7MB/s), 50.2MiB/s-50.2MiB/s (52.7MB/s-52.7MB/s), io=8192MiB (8590MB), run=163074-163074msec
WRITE: bw=50.5MiB/s (52.9MB/s), 50.5MiB/s-50.5MiB/s (52.9MB/s-52.9MB/s), io=8192MiB (8590MB), run=162252-162252msec
With this:
Tested-by: Nikolay Borisov <nborisov@suse.com>