From: "Melanie Plageman (Microsoft)" <melanieplageman@gmail.com>
To: mikelley@microsoft.com, jejb@linux.ibm.com, kys@microsoft.com,
martin.petersen@oracle.com, mst@redhat.com,
benh@kernel.crashing.org, decui@microsoft.com,
don.brace@microchip.com, R-QLogic-Storage-Upstream@marvell.com,
haiyangz@microsoft.com, jasowang@redhat.com,
john.garry@huawei.com, kashyap.desai@broadcom.com,
mpe@ellerman.id.au, njavali@marvell.com, pbonzini@redhat.com,
paulus@samba.org, sathya.prakash@broadcom.com,
shivasharan.srikanteshwara@broadcom.com,
sreekanth.reddy@broadcom.com, stefanha@redhat.com,
sthemmin@microsoft.com, suganath-prabu.subramani@broadcom.com,
sumit.saxena@broadcom.com, tyreld@linux.ibm.com,
wei.liu@kernel.org, linuxppc-dev@lists.ozlabs.org,
megaraidlinux.pdl@broadcom.com, mpi3mr-linuxdrv.pdl@broadcom.com,
storagedev@microchip.com,
virtualization@lists.linux-foundation.org,
linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-scsi@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com
Cc: andres@anarazel.de
Subject: [PATCH RFC v1 5/5] scsi: storvsc: Hardware queues share blk_mq_tags
Date: Fri, 18 Feb 2022 18:41:57 +0000
Message-ID: <20220218184157.176457-6-melanieplageman@gmail.com>
In-Reply-To: <20220218184157.176457-1-melanieplageman@gmail.com>
Decouple the number of tags available from the number of hardware queues
by sharing a single blk_mq_tags amongst all hardware queues.
When storage latency is relatively high, having too many tags available
can harm the performance of mixed workloads.
By sharing blk_mq_tags amongst hardware queues, nr_requests can be set to
the number of tags appropriate for the device as a whole, rather than
being multiplied by the number of hardware queues.
Signed-off-by: Melanie Plageman <melanieplageman@gmail.com>
---
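For context, the hctx_share_tags template flag (the renamed host_tagset from
earlier in this series) is intended to be consumed by the SCSI midlayer when
it sets up the host's tag set. The sketch below is illustrative only: it
assumes the existing BLK_MQ_F_TAG_HSHARED handling in scsi_mq_setup_tags()
is reused, and example_setup_tags() plus the omitted fields are hypothetical.

#include <linux/blk-mq.h>
#include <scsi/scsi_host.h>

/*
 * Illustrative sketch: a driver that sets hctx_share_tags in its
 * scsi_host_template ends up with a tag set flagged for host-wide tag
 * sharing, so blk-mq allocates one blk_mq_tags used by every hardware
 * queue instead of one blk_mq_tags per hardware queue.
 */
static int example_setup_tags(struct Scsi_Host *shost)
{
	struct blk_mq_tag_set *tag_set = &shost->tag_set;

	tag_set->nr_hw_queues = shost->nr_hw_queues ? shost->nr_hw_queues : 1;
	tag_set->queue_depth = shost->can_queue;

	/* One shared blk_mq_tags bounds in-flight requests for the host. */
	if (shost->hostt->hctx_share_tags)
		tag_set->flags |= BLK_MQ_F_TAG_HSHARED;

	/* ops, cmd_size, numa_node, error handling omitted for brevity. */
	return blk_mq_alloc_tag_set(tag_set);
}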
As an example, consider a 16-core VM attached to a 1 TiB storage device with a
combined (VM + disk) maximum bandwidth of 200 MB/s and 5000 IOPS, configured
with 16 hardware queues, nr_requests set to 56, and queue_depth set to 15. The
following fio job description illustrates the benefit of hardware queues sharing
blk_mq_tags:
[global]
time_based=1
ioengine=io_uring
direct=1
runtime=60
[read_hogs]
bs=16k
iodepth=10000
rw=randread
filesize=10G
numjobs=15
directory=/mnt/test
[wal]
bs=8k
iodepth=3
filesize=4G
rw=write
numjobs=1
directory=/mnt/test
With hctx_share_tags set, the "wal" job does 271 IOPS with an average
completion latency of 13120 usec, and the "read_hogs" jobs average around 4700 IOPS.
Without hctx_share_tags set, the "wal" job does 85 IOPS with an average
completion latency of around 45308 usec, and the "read_hogs" jobs average around 4900 IOPS.
Note that reducing nr_requests to a value low enough to increase the "wal" job's
IOPS instead results in unacceptably low IOPS for the random reads when only one
random read job is running.
drivers/scsi/storvsc_drv.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 0ed764bcabab..5048e7fcf959 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1997,6 +1997,7 @@ static struct scsi_host_template scsi_driver = {
 	.track_queue_depth = 1,
 	.change_queue_depth = storvsc_change_queue_depth,
 	.per_device_tag_set = 1,
+	.hctx_share_tags = 1,
 };
 
 enum {
--
2.25.1