From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, "Paolo Bonzini" <pbonzini@redhat.com>,
	"David Hildenbrand" <david@redhat.com>,
	qemu-block@nongnu.org,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Peter Xu" <peterx@redhat.com>, "Fam Zheng" <fam@euphon.net>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	"Hanna Reitz" <hreitz@redhat.com>,
	pkrempa@redhat.com, "John Snow" <jsnow@redhat.com>
Subject: Re: [PATCH v2 11/13] virtio-scsi: add iothread-vq-mapping parameter
Date: Tue, 11 Mar 2025 13:06:11 +0100	[thread overview]
Message-ID: <Z9AnM4D6KZqHTakM@redhat.com> (raw)
In-Reply-To: <20250311101145.1037388-12-stefanha@redhat.com>

On 11.03.2025 at 11:11, Stefan Hajnoczi wrote:
> Allow virtio-scsi virtqueues to be assigned to different IOThreads. This
> makes it possible to take advantage of host multi-queue block layer
> scalability by assigning virtqueues that have affinity with vCPUs to
> different IOThreads that have affinity with host CPUs. The same feature
> was introduced for virtio-blk in the past:
> https://developers.redhat.com/articles/2024/09/05/scaling-virtio-blk-disk-io-iothread-virtqueue-mapping
> 
> Here are fio randread 4k iodepth=64 results from a 4 vCPU guest with an
> Intel P4800X SSD:
> iothreads IOPS
> ------------------------------
> 1         189576
> 2         312698
> 4         346744
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
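[For readers unfamiliar with the parameter: the virtio-blk article linked
above documents the JSON device syntax, and the virtio-scsi form added by
this series is presumably analogous. Purely illustrative; the iothread
names are made up:]

```sh
# Illustrative sketch only: two IOThreads, cmd virtqueues split between
# them. Modeled on the virtio-blk iothread-vq-mapping syntax from the
# article above; not taken verbatim from this series.
qemu-system-x86_64 \
  -object iothread,id=iothread0 \
  -object iothread,id=iothread1 \
  -device '{"driver":"virtio-scsi-pci","id":"scsi0",
            "iothread-vq-mapping":[{"iothread":"iothread0"},
                                   {"iothread":"iothread1"}]}'
```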

> @@ -1218,14 +1224,16 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>  {
>      VirtIODevice *vdev = VIRTIO_DEVICE(hotplug_dev);
>      VirtIOSCSI *s = VIRTIO_SCSI(vdev);
> +    AioContext *ctx = s->vq_aio_context[0];

At the end of the series, this is always qemu_aio_context...

>      SCSIDevice *sd = SCSI_DEVICE(dev);
> -    int ret;
>  
> -    if (s->ctx && !s->dataplane_fenced) {
> -        ret = blk_set_aio_context(sd->conf.blk, s->ctx, errp);
> -        if (ret < 0) {
> -            return;
> -        }
> +    if (ctx != qemu_get_aio_context() && !s->dataplane_fenced) {
> +        /*
> +         * Try to make the BlockBackend's AioContext match ours. Ignore failure
> +         * because I/O will still work although block jobs and other users
> +         * might be slower when multiple AioContexts use a BlockBackend.
> +         */
> +        blk_set_aio_context(sd->conf.blk, ctx, errp);
>      }

...so this becomes dead code. With multiple AioContexts, it's not clear
which one should be used; virtio-blk just takes the first one. The
equivalent thing here would be to use the AioContext of the first
command virtqueue.

>      if (virtio_vdev_has_feature(vdev, VIRTIO_SCSI_F_HOTPLUG)) {
> @@ -1260,7 +1268,7 @@ static void virtio_scsi_hotunplug(HotplugHandler *hotplug_dev, DeviceState *dev,
>  
>      qdev_simple_device_unplug_cb(hotplug_dev, dev, errp);
>  
> -    if (s->ctx) {
> +    if (s->vq_aio_context[0] != qemu_get_aio_context()) {

Same problem here.

>          /* If other users keep the BlockBackend in the iothread, that's ok */
>          blk_set_aio_context(sd->conf.blk, qemu_get_aio_context(), NULL);
>      }

As you wanted to avoid squashing patches anyway, I think this can be
fixed on top of this series.

Kevin



Thread overview:
2025-03-11 10:11 [PATCH v2 00/13] virtio-scsi: add iothread-vq-mapping parameter Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 01/13] scsi-disk: drop unused SCSIDiskState->bh field Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 02/13] dma: use current AioContext for dma_blk_io() Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 03/13] scsi: track per-SCSIRequest AioContext Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 04/13] scsi: introduce requests_lock Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 05/13] virtio-scsi: introduce event and ctrl virtqueue locks Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 06/13] virtio-scsi: protect events_dropped field Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 07/13] virtio-scsi: perform TMFs in appropriate AioContexts Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 08/13] virtio-blk: extract cleanup_iothread_vq_mapping() function Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 09/13] virtio-blk: tidy up iothread_vq_mapping functions Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 10/13] virtio: extract iothread-vq-mapping.h API Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 11/13] virtio-scsi: add iothread-vq-mapping parameter Stefan Hajnoczi
2025-03-11 12:06   ` Kevin Wolf [this message]
2025-03-11 10:11 ` [PATCH v2 12/13] virtio-scsi: handle ctrl virtqueue in main loop Stefan Hajnoczi
2025-03-11 10:11 ` [PATCH v2 13/13] virtio-scsi: only expose cmd vqs via iothread-vq-mapping Stefan Hajnoczi
