From: Niklas Cassel <cassel@kernel.org>
To: John Meneghini <jmeneghi@redhat.com>
Cc: tj@kernel.org, josef@toxicpanda.com, axboe@kernel.dk,
kbusch@kernel.org, hch@lst.de, sagi@grimberg.me,
emilne@redhat.com, hare@kernel.org, linux-block@vger.kernel.org,
cgroups@vger.kernel.org, linux-nvme@lists.infradead.org,
linux-kernel@vger.kernel.org, jrani@purestorage.com,
randyj@purestorage.com, aviv.coro@ibm.com
Subject: Re: [PATCH v3 0/3] block,nvme: latency-based I/O scheduler
Date: Fri, 10 May 2024 11:34:28 +0200
Message-ID: <Zj3qJP_J-D3DEP6W@ryzen.lan>
In-Reply-To: <20240509204324.832846-1-jmeneghi@redhat.com>
On Thu, May 09, 2024 at 04:43:21PM -0400, John Meneghini wrote:
> I'm re-issuing Hannes's latency patches in preparation for LSFMM
Hello John,
Just a small note: please don't send a v3 as a reply to the previous
version of the series (v2). It creates "an unmanageable forest of
references in email clients". See:
https://www.kernel.org/doc/html/latest/process/submitting-patches.html#explicit-in-reply-to-headers
Instead, just add the URL of the v2 on lore.kernel.org to the cover letter.
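For example, the v3 cover letter could simply carry a link back to the
previous posting instead of threading under it (the message-id below is a
placeholder, not the real one):

```
Changes since v2:
  - ...

v2: https://lore.kernel.org/linux-nvme/<v2-message-id>/
```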
See you at LSFMM!
Kind regards,
Niklas
>
> Changes since V2:
>
> I've done quite a bit of work cleaning up these patches. There were a
> number of checkpatch.pl problems, as well as some compile-time errors
> when CONFIG_BLK_NODE_LATENCY was turned off. After the clean-up I
> rebased these patches onto Ewan's "nvme: queue-depth multipath iopolicy"
> patches. This allowed me to test both iopolicy changes together.
>
> All of my test results, together with the scripts I used to generate these
> graphs, are available at:
>
> https://github.com/johnmeneghini/iopolicy
>
> Please use the scripts in this repository to do your own testing.
>
> Changes since V1:
>
> Hi all,
>
> There have been several attempts to implement a latency-based I/O
> scheduler for native NVMe multipath, all of which had their issues.
>
> So it's time to start afresh, this time using the QoS framework
> already present in the block layer.
> It consists of two parts:
> - a new 'blk-nlatency' QoS module, which is just a simple per-node
> latency tracker
> - a 'latency' nvme I/O policy
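[For context, not part of the quoted cover letter: native NVMe multipath
iopolicies are selected per subsystem through sysfs, so with these patches
applied the new policy would presumably be enabled the same way. The
subsystem name and the printed default below are illustrative and will
vary per system:]

```
# cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
numa
# echo latency > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
```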
>
> Using the 'tiobench' fio script with a 512-byte blocksize, I'm getting
> the following latencies (in usecs) as a baseline:
> - seq write: avg 186 stddev 331
> - rand write: avg 4598 stddev 7903
> - seq read: avg 149 stddev 65
> - rand read: avg 150 stddev 68
>
> Enabling the 'latency' iopolicy:
> - seq write: avg 178 stddev 113
> - rand write: avg 3427 stddev 6703
> - seq read: avg 140 stddev 59
> - rand read: avg 141 stddev 58
>
> Setting the 'decay' parameter to 10:
> - seq write: avg 182 stddev 65
> - rand write: avg 2619 stddev 5894
> - seq read: avg 142 stddev 57
> - rand read: avg 140 stddev 57
>
> That's on a 32G FC testbed running against a brd target, with fio
> running 48 threads. So the promises are met: latency goes down, and
> we're even able to control the standard deviation via the 'decay'
> parameter.
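[Editorial aside: a per-node latency tracker of this kind is typically
built around an exponentially weighted moving average, where a 'decay'
parameter controls how strongly old samples are weighted. The sketch below
is purely illustrative; it is not the blk-nlatency code, and the class
name, integer arithmetic, and sample values are invented for the example:]

```python
# Illustrative EWMA latency tracker; NOT the actual blk-nlatency code.
# 'decay' is a hypothetical smoothing divisor: larger values weight
# history more heavily, which smooths out latency spikes.
class NodeLatency:
    def __init__(self, decay):
        self.decay = decay
        self.avg = None  # smoothed latency estimate, in usecs

    def update(self, sample):
        if self.avg is None:
            self.avg = sample  # seed with the first sample
        else:
            # Move a 1/decay fraction of the way toward the new sample.
            self.avg += (sample - self.avg) // self.decay
        return self.avg

tracker = NodeLatency(decay=10)
for lat in [150, 148, 152, 900, 151, 149]:  # one outlier spike
    tracker.update(lat)
print(tracker.avg)
```

With decay=10, the 900 usec outlier shifts the running estimate by only a
tenth of the distance to the spike, which matches the observation above
that a larger decay shrinks the standard deviation.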
>
> As usual, comments and reviews are welcome.
>
> Changes to the original version:
> - split the rqos debugfs entries
> - Modify commit message to indicate latency
> - rename to blk-nlatency
>
> Hannes Reinecke (2):
> block: track per-node I/O latency
> nvme: add 'latency' iopolicy
>
> John Meneghini (1):
> nvme: multipath: pr_notice when iopolicy changes
>
> MAINTAINERS | 1 +
> block/Kconfig | 9 +
> block/Makefile | 1 +
> block/blk-mq-debugfs.c | 2 +
> block/blk-nlatency.c | 389 ++++++++++++++++++++++++++++++++++
> block/blk-rq-qos.h | 6 +
> drivers/nvme/host/multipath.c | 73 ++++++-
> drivers/nvme/host/nvme.h | 1 +
> include/linux/blk-mq.h | 11 +
> 9 files changed, 484 insertions(+), 9 deletions(-)
> create mode 100644 block/blk-nlatency.c
>
> --
> 2.39.3
>
>
Thread overview: 22+ messages
2024-04-03 14:17 [PATCHv2 0/2] block,nvme: latency-based I/O scheduler Hannes Reinecke
2024-04-03 14:17 ` [PATCH 1/2] block: track per-node I/O latency Hannes Reinecke
2024-04-04 2:22 ` kernel test robot
2024-04-04 2:55 ` kernel test robot
2024-04-04 18:47 ` kernel test robot
2024-04-03 14:17 ` [PATCH 2/2] nvme: add 'latency' iopolicy Hannes Reinecke
2024-04-04 21:14 ` [PATCHv2 0/2] block,nvme: latency-based I/O scheduler Keith Busch
2024-04-05 6:21 ` Hannes Reinecke
2024-04-05 15:03 ` Keith Busch
2024-04-05 15:36 ` Hannes Reinecke
2024-04-07 19:55 ` Sagi Grimberg
2024-05-09 20:43 ` [PATCH v3 0/3] " John Meneghini
2024-05-10 9:34 ` Niklas Cassel [this message]
2024-05-09 20:43 ` [PATCH v3 1/3] block: track per-node I/O latency John Meneghini
2024-05-10 7:11 ` Damien Le Moal
2024-05-10 9:28 ` Niklas Cassel
2024-05-10 10:00 ` Hannes Reinecke
2024-05-09 20:43 ` [PATCH v3 2/3] nvme: add 'latency' iopolicy John Meneghini
2024-05-10 7:17 ` Damien Le Moal
2024-05-10 10:03 ` Hannes Reinecke
2024-05-09 20:43 ` [PATCH v3 3/3] nvme: multipath: pr_notice when iopolicy changes John Meneghini
2024-05-10 7:19 ` Damien Le Moal