From: John Meneghini <jmeneghi@redhat.com>
To: kbusch@kernel.org, hch@lst.de, sagi@grimberg.me, emilne@redhat.com
Cc: linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
jmeneghi@redhat.com, jrani@purestorage.com,
randyj@purestorage.com, hare@kernel.org
Subject: [PATCH v3 0/1] nvme: queue-depth multipath iopolicy
Date: Mon, 20 May 2024 16:20:44 -0400
Message-ID: <20240520202045.427110-1-jmeneghi@redhat.com>
Submitting for final review. As agreed at LSFMM, I've squashed this series
into a single patch and addressed all review comments. Please merge this
into nvme-6.10.

Changes since V2:
Add the NVME_MPATH_CNT_ACTIVE flag to eliminate a READ_ONCE in the
completion path, and increment/decrement the active_nr count on all mpath
IOs, including passthru commands.
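
For context, a minimal sketch of the accounting this describes. It is
illustrative, not the literal patch: NVME_MPATH_CNT_ACTIVE and the
active_nr counter come from the changelog above, but the function names,
the NVME_IOPOLICY_QD enum value, and the surrounding structure layout are
my assumptions.

/* Sketch: per-controller in-flight accounting for the queue-depth
 * iopolicy; only the paths relevant to the new flag are shown. */
void nvme_mpath_start_request(struct request *rq)
{
        struct nvme_ns *ns = rq->q->queuedata;

        /* Pay the atomic cost only while the queue-depth policy is active. */
        if (READ_ONCE(ns->head->subsys->iopolicy) == NVME_IOPOLICY_QD) {
                atomic_inc(&ns->ctrl->active_nr);
                nvme_req(rq)->flags |= NVME_MPATH_CNT_ACTIVE;
        }
}

void nvme_mpath_end_request(struct request *rq)
{
        struct nvme_ns *ns = rq->q->queuedata;

        /* The flag records that we incremented, so completion need not
         * re-read the (possibly since-changed) iopolicy. */
        if (nvme_req(rq)->flags & NVME_MPATH_CNT_ACTIVE)
                atomic_dec_if_positive(&ns->ctrl->active_nr);
}

Because the flag travels with the request, the counter stays balanced even
if the iopolicy is switched between submission and completion.
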
Send a pr_notice whenever the iopolicy on a subsystem is changed. This is
important for support reasons: it is fully expected that users will change
the iopolicy with active IO in progress.
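
Roughly, the notification looks like the sketch below; the function name
and the nvme_iopolicy_names[] table are assumptions on my part, while the
pr_notice and the subsystem NQN are what a support engineer would actually
grep for in the logs.

/* Sketch: log every runtime iopolicy change on a subsystem. */
static void nvme_subsys_iopolicy_update(struct nvme_subsystem *subsys,
                                        int iopolicy)
{
        int old_iopolicy = READ_ONCE(subsys->iopolicy);

        if (old_iopolicy == iopolicy)
                return;

        WRITE_ONCE(subsys->iopolicy, iopolicy);

        /* Breadcrumb for support: which subsystem changed, from what,
         * to what. */
        pr_notice("%s: iopolicy changed from %s to %s\n", subsys->subnqn,
                  nvme_iopolicy_names[old_iopolicy],
                  nvme_iopolicy_names[iopolicy]);
}

The policy itself is switched at runtime through the subsystem's iopolicy
sysfs attribute, e.g. by writing "queue-depth" to
/sys/class/nvme-subsystem/nvme-subsysN/iopolicy.
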
Squashed everything and rebased onto nvme-v6.10.

Changes since V1:
I'm re-issuing Ewan's queue-depth patches in preparation for LSFMM.
These patches were first shown at ALPSS 2023, where I shared the following
graphs measuring the IO distribution across 4 active-optimized controllers
under the round-robin versus queue-depth iopolicy:

https://people.redhat.com/jmeneghi/ALPSS_2023/NVMe_QD_Multipathing.pdf
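
For those who didn't see the ALPSS talk: round-robin rotates across paths
regardless of load, so one slow controller accumulates a deep queue while
faster ones sit idle; queue-depth instead sends each IO down the path with
the fewest requests currently in flight. A simplified selector might look
like the following sketch (names and the disabled-path check are
illustrative; the ANA non-optimized fallback is shown only minimally):

/* Sketch: pick the least-loaded usable path, preferring optimized ones. */
static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
{
        struct nvme_ns *ns, *best_opt = NULL, *best_nonopt = NULL;
        unsigned int depth, min_opt = UINT_MAX, min_nonopt = UINT_MAX;

        list_for_each_entry_rcu(ns, &head->list, siblings) {
                if (nvme_path_is_disabled(ns))
                        continue;

                depth = atomic_read(&ns->ctrl->active_nr);

                switch (ns->ana_state) {
                case NVME_ANA_OPTIMIZED:
                        if (depth < min_opt) {
                                min_opt = depth;
                                best_opt = ns;
                        }
                        break;
                case NVME_ANA_NONOPTIMIZED:
                        if (depth < min_nonopt) {
                                min_nonopt = depth;
                                best_nonopt = ns;
                        }
                        break;
                default:
                        break;
                }
        }

        /* Fall back to a non-optimized path only when no optimized path
         * is usable. */
        return best_opt ? best_opt : best_nonopt;
}
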
Since that time we have continued testing these patches with a number of
different NVMe-oF storage arrays and test bed configurations, and I've
codified the tests and methods we use to measure IO distribution.
All of my test results, together with the scripts I used to generate these
graphs, are available at:
https://github.com/johnmeneghini/iopolicy
Please use the scripts in this repository to do your own testing.
These patches are based on nvme-v6.9.
Ewan D. Milne (1):
nvme: multipath: Implemented new iopolicy "queue-depth"
drivers/nvme/host/core.c | 2 +-
drivers/nvme/host/multipath.c | 86 +++++++++++++++++++++++++++++++++--
drivers/nvme/host/nvme.h | 9 ++++
3 files changed, 92 insertions(+), 5 deletions(-)
--
2.39.3
Thread overview: 21+ messages
2024-05-20 20:20 John Meneghini [this message]
2024-05-20 20:20 ` [PATCH v3 1/1] nvme: multipath: Implemented new iopolicy "queue-depth" John Meneghini
2024-05-20 20:50 ` Keith Busch
2024-05-21 14:20 ` John Meneghini
2024-05-21 6:46 ` Hannes Reinecke
2024-05-21 13:58 ` John Meneghini
2024-05-21 14:10 ` Keith Busch
2024-05-21 14:23 ` Hannes Reinecke
2024-05-21 16:35 ` Caleb Sander
2024-05-21 8:48 ` Nilay Shroff
2024-05-21 9:45 ` Sagi Grimberg
2024-05-21 10:07 ` Nilay Shroff
2024-05-21 10:11 ` Sagi Grimberg
2024-05-21 10:15 ` Sagi Grimberg
2024-05-21 10:16 ` Sagi Grimberg
2024-05-21 14:44 ` John Meneghini
2024-05-22 10:48 ` Nilay Shroff
2024-05-22 10:52 ` Sagi Grimberg
2024-05-22 13:12 ` John Meneghini
2024-05-21 10:22 ` Nilay Shroff
2024-05-21 13:05 ` Keith Busch