From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fam Zheng <fam@euphon.net>,
Peter Maydell <peter.maydell@linaro.org>,
"Michael S. Tsirkin" <mst@redhat.com>,
Max Reitz <mreitz@redhat.com>,
qemu-block@nongnu.org, David Hildenbrand <david@redhat.com>,
Halil Pasic <pasic@linux.ibm.com>,
Christian Borntraeger <borntraeger@de.ibm.com>,
Richard Henderson <rth@twiddle.net>,
Thomas Huth <thuth@redhat.com>,
Eduardo Habkost <ehabkost@redhat.com>,
qemu-s390x@nongnu.org, qemu-arm@nongnu.org,
Stefan Hajnoczi <stefanha@redhat.com>,
David Gibson <david@gibson.dropbear.id.au>,
Kevin Wolf <kwolf@redhat.com>,
Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
cohuck@redhat.com, Raphael Norwitz <raphael.norwitz@nutanix.com>,
qemu-ppc@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>
Subject: [PATCH v6 0/7] virtio-pci: enable blk and scsi multi-queue by default
Date: Tue, 18 Aug 2020 15:33:41 +0100
Message-ID: <20200818143348.310613-1-stefanha@redhat.com>
v6:
* Rebased onto QEMU 5.1 and added the now-necessary machine compat opts.
v4:
* Sorry for the long delay. I considered replacing this series with a simpler
  approach. Real hardware ships with a fixed number of queues (e.g. 128). The
  equivalent can be done in QEMU too. That way we don't need to magically size
  num_queues. In the end I decided against this approach because the Linux
  virtio_blk.ko and virtio_scsi.ko guest drivers unconditionally initialized
  all available queues until recently (they were written with
  num_queues=num_vcpus in mind). It doesn't make sense for a 1 CPU guest to
  bring up 128 virtqueues (a waste of resources and possibly weird performance
  effects with blk-mq).
* Honor the maximum number of MSI-X vectors and virtqueues [Daniel Berrange]
* Update commit descriptions to mention the maximum MSI-X vector and virtqueue
  caps [Raphael]
v3:
* Introduce a virtio_pci_optimal_num_queues() helper to enforce
  VIRTIO_QUEUE_MAX in one place
* Use the VIRTIO_SCSI_VQ_NUM_FIXED constant in all cases [Cornelia]
* Update hw/core/machine.c compat properties for QEMU 5.0 [Michael]
v2:
* Add new performance results that demonstrate the scalability
* Mention that this is PCI-specific [Cornelia]
v1:
* Let the virtio-DEVICE-pci device select num-queues because the optimal
  multi-queue configuration may differ between virtio-pci, virtio-mmio, and
  virtio-ccw [Cornelia]
Enabling multi-queue on virtio-pci storage devices improves performance on SMP
guests because the completion interrupt is handled on the vCPU that submitted
the I/O request. This avoids IPIs inside the guest.
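As an illustration, an explicit multi-queue configuration can already be
requested on the command line with the num-queues property. This is a
hypothetical invocation (the disk image path, drive ID, and machine options
are placeholders, not taken from this series); with this series applied,
omitting num-queues on a new machine type would default it to the vCPU count
(4 here) rather than 1:

```
# Placeholder invocation: disk.img, drive0, and the machine options are
# illustrative only.
qemu-system-x86_64 \
    -machine q35 -smp 4 -m 4G \
    -drive if=none,id=drive0,file=disk.img,format=raw \
    -device virtio-blk-pci,drive=drive0,num-queues=4
```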
Note that performance is unchanged in these cases:
1. Uniprocessor guests. They don't have IPIs.
2. Application threads might be scheduled on the sole vCPU that handles
   completion interrupts purely by chance. (This is one reason why benchmark
   results can vary noticeably between runs.)
3. Users may bind the application to the vCPU that handles completion
interrupts.
Set the number of queues to the number of vCPUs by default on virtio-blk and
virtio-scsi PCI devices. Older machine types continue to default to 1 queue
for live migration compatibility.
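The sizing rule can be sketched as a standalone function. This is a
simplified model of what a virtio_pci_optimal_num_queues() helper might do,
not QEMU's actual code: the constant value, the function signature, and the
clamping details are illustrative assumptions.

```c
#include <assert.h>

/* Illustrative value; QEMU defines VIRTIO_QUEUE_MAX in its own headers. */
#define VIRTIO_QUEUE_MAX 1024

/*
 * Simplified model: one request queue per vCPU, clamped so that the fixed
 * virtqueues (e.g. virtio-scsi's control and event queues) plus the request
 * queues never exceed VIRTIO_QUEUE_MAX, and so that each queue can still get
 * its own MSI-X vector (one vector is reserved here for config interrupts).
 */
static unsigned optimal_num_queues(unsigned num_vcpus,
                                   unsigned fixed_queues,
                                   unsigned msix_vectors)
{
    unsigned n = num_vcpus;

    if (msix_vectors > 1 && n > msix_vectors - 1) {
        n = msix_vectors - 1; /* leave one vector for config interrupts */
    }
    if (n > VIRTIO_QUEUE_MAX - fixed_queues) {
        n = VIRTIO_QUEUE_MAX - fixed_queues;
    }
    return n ? n : 1; /* always expose at least one request queue */
}
```

In this model a 1-vCPU guest gets a single request queue, while a 32-vCPU
guest with plenty of MSI-X vectors gets 32, matching the intent of the series.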
Random read performance:

          IOPS
  q=1      78k
  q=32    104k  +33%

Boot time:

        Duration
  q=1        51s
  q=32     1m41s  +98%
Guest configuration: 32 vCPUs, 101 virtio-blk-pci disks
Previously measured results on a 4 vCPU guest were also positive but showed a
smaller 1-4% performance improvement. They are no longer valid because
significant event loop optimizations have been merged.
Peter Maydell (1):
Open 5.2 development tree
Stefan Hajnoczi (6):
hw: add 5.2 machine types and 5.1 compat options
virtio-pci: add virtio_pci_optimal_num_queues() helper
virtio-scsi: introduce a constant for fixed virtqueues
virtio-scsi-pci: default num_queues to -smp N
virtio-blk-pci: default num_queues to -smp N
vhost-user-blk-pci: default num_queues to -smp N
hw/virtio/virtio-pci.h | 9 +++++++++
include/hw/boards.h | 3 +++
include/hw/i386/pc.h | 3 +++
include/hw/virtio/vhost-user-blk.h | 2 ++
include/hw/virtio/virtio-blk.h | 2 ++
include/hw/virtio/virtio-scsi.h | 5 +++++
hw/arm/virt.c | 9 ++++++++-
hw/block/vhost-user-blk.c | 6 +++++-
hw/block/virtio-blk.c | 6 +++++-
hw/core/machine.c | 9 +++++++++
hw/i386/pc.c | 4 ++++
hw/i386/pc_piix.c | 14 ++++++++++++-
hw/i386/pc_q35.c | 13 +++++++++++-
hw/ppc/spapr.c | 15 ++++++++++++--
hw/s390x/s390-virtio-ccw.c | 14 ++++++++++++-
hw/scsi/vhost-scsi.c | 3 ++-
hw/scsi/vhost-user-scsi.c | 5 +++--
hw/scsi/virtio-scsi.c | 13 ++++++++----
hw/virtio/vhost-scsi-pci.c | 9 +++++++--
hw/virtio/vhost-user-blk-pci.c | 4 ++++
hw/virtio/vhost-user-scsi-pci.c | 9 +++++++--
hw/virtio/virtio-blk-pci.c | 7 ++++++-
hw/virtio/virtio-pci.c | 32 ++++++++++++++++++++++++++++++
hw/virtio/virtio-scsi-pci.c | 9 +++++++--
VERSION | 2 +-
25 files changed, 184 insertions(+), 23 deletions(-)
--
2.26.2
Thread overview: 18+ messages
2020-08-18 14:33 Stefan Hajnoczi [this message]
2020-08-18 14:33 ` [PATCH v6 1/7] Open 5.2 development tree Stefan Hajnoczi
2020-08-19 14:13 ` Stefan Hajnoczi
2020-08-18 14:33 ` [PATCH v6 2/7] hw: add 5.2 machine types and 5.1 compat options Stefan Hajnoczi
2020-08-18 15:11 ` Cornelia Huck
2020-08-19 12:54 ` Igor Mammedov
2020-08-19 14:10 ` Cornelia Huck
2020-08-19 14:12 ` Stefan Hajnoczi
2020-08-19 14:38 ` Laszlo Ersek
2020-08-19 15:27 ` Philippe Mathieu-Daudé
2020-08-19 13:06 ` Igor Mammedov
2020-08-18 14:33 ` [PATCH v6 3/7] virtio-pci: add virtio_pci_optimal_num_queues() helper Stefan Hajnoczi
2020-08-18 14:33 ` [PATCH v6 4/7] virtio-scsi: introduce a constant for fixed virtqueues Stefan Hajnoczi
2020-08-18 14:33 ` [PATCH v6 5/7] virtio-scsi-pci: default num_queues to -smp N Stefan Hajnoczi
2020-08-18 15:16 ` Cornelia Huck
2020-08-23 2:10 ` Raphael Norwitz
2020-08-18 14:33 ` [PATCH v6 6/7] virtio-blk-pci: " Stefan Hajnoczi
2020-08-18 14:33 ` [PATCH v6 7/7] vhost-user-blk-pci: " Stefan Hajnoczi