From: Kevin Wolf <kwolf@redhat.com>
To: Peter Krempa <pkrempa@redhat.com>
Cc: "Stefan Hajnoczi" <stefanha@redhat.com>,
	qemu-devel@nongnu.org, "Paolo Bonzini" <pbonzini@redhat.com>,
	"John Snow" <jsnow@redhat.com>, "Fam Zheng" <fam@euphon.net>,
	"Peter Xu" <peterx@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	qemu-block@nongnu.org, "David Hildenbrand" <david@redhat.com>,
	"Hanna Reitz" <hreitz@redhat.com>
Subject: Re: [PATCH 11/12] virtio-scsi: add iothread-vq-mapping parameter
Date: Mon, 10 Mar 2025 16:17:57 +0100
Message-ID: <Z88CpZelTBC2DbCR@redhat.com>
In-Reply-To: <Z8744QIGJUEykuDd@angien.pipo.sk>

On 10.03.2025 at 15:37, Peter Krempa wrote:
> On Mon, Mar 10, 2025 at 15:33:02 +0100, Kevin Wolf wrote:
> > On 13.02.2025 at 19:00, Stefan Hajnoczi wrote:
> > > Allow virtio-scsi virtqueues to be assigned to different IOThreads. This
> > > makes it possible to take advantage of host multi-queue block layer
> > > scalability by assigning virtqueues that have affinity with vCPUs to
> > > different IOThreads that have affinity with host CPUs. The same feature
> > > was introduced for virtio-blk in the past:
> > > https://developers.redhat.com/articles/2024/09/05/scaling-virtio-blk-disk-io-iothread-virtqueue-mapping
> > > 
> > > Here are fio randread 4k iodepth=64 results from a 4 vCPU guest with an
> > > Intel P4800X SSD:
> > > iothreads IOPS
> > > ------------------------------
> > > 1         189576
> > > 2         312698
> > > 4         346744
> > > 
> > > Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
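
For reference, the virtio-blk feature in the linked article is
configured roughly as follows, and this series adds the same property
to virtio-scsi. This is a sketch with made-up iothread names; when no
explicit "vqs" lists are given, virtqueues are assigned to the listed
iothreads round-robin:

    -object iothread,id=iothread0 \
    -object iothread,id=iothread1 \
    -device '{"driver": "virtio-scsi-pci",
              "iothread-vq-mapping": [
                  {"iothread": "iothread0"},
                  {"iothread": "iothread1"}
              ]}'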
> > 
> > As Peter already noted, the interface is a bit confusing in that it
> > treats the control and event queues as normal queues like any other,
> > so you need to specify a mapping for them, too (even though you
> > probably don't care about them).
> > 
> > I wonder if it wouldn't be better to use the iothread-vq-mapping
> > property only for command queues and to have separate properties for the
> > event and control queue. I think this would be less surprising to users.
> 
> In v2 of the libvirt patches I've proposed:
> 
>         <driver queues='3'>
>           <iothreads>
>             <iothread id='2'>
>               <queue id='ctrl'/>
>               <queue id='event'/>
>               <queue id='1'/>
>             </iothread>
>             <iothread id='3'>
>               <queue id='0'/>
>               <queue id='2'/>
>             </iothread>
>           </iothreads>
>         </driver>
> 
> To map the queues by name explicitly so that it's clear what's
> happening.
> 
> In my proposal it auto-translates 'ctrl' and 'event' into indices 0
> and 1, and command queue N into index N+2.
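
Assuming that index scheme (ctrl = 0, event = 1, command queue N at
N + 2) and libvirt's usual iothreadN object naming, the XML above would
translate to roughly this QEMU mapping - a sketch, not exact libvirt
output:

    -device '{"driver": "virtio-scsi-pci",
              "num_queues": 3,
              "iothread-vq-mapping": [
                  {"iothread": "iothread2", "vqs": [0, 1, 3]},
                  {"iothread": "iothread3", "vqs": [2, 4]}
              ]}'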

Note that if I understand patch 12 correctly, the 'ctrl' queue setting
will never actually take effect. So libvirt probably shouldn't even offer
it (and neither should QEMU).

> > It would also allow you to use the round-robin allocation for command
> > queues while using a different setting for the special queues - in
> > particular, the event queue is currently no_poll, which disables polling
> > for the whole AioContext, so you probably want to place it anywhere
> > else, but not in the iothreads you use for command queues. This should
> > probably also be the default.
> 
> This sounds like an important bit of information. If that stays like
> this I think libvirt should also document this.
> 
> The proposed libvirt patch also recommends using the round-robin
> approach unless specific needs arise, so it would be great if qemu did
> the correct thing here.

Yes, I consider this a QEMU bug that should be fixed. The event queue is
no_poll not in the sense that we must use the eventfd because we
couldn't otherwise tell that it's ready, but in the sense that we don't
usually care about new things becoming ready in the queue.

But if we always tie the event queue to the main loop, too, it would
already be worked around for most cases - the main loop generally won't
be able to poll anyway because of other fd handlers that don't support
polling.
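
To illustrate the distinction with QEMU's existing notifier helpers
(a sketch: virtio_queue_aio_attach_host_notifier() and its _no_poll
variant are the real functions, the surrounding variable names are
made up):

    /* Event queue: attach without polling and keep it in the main
     * loop, where polling is rarely possible anyway, so its no_poll
     * property doesn't disable polling in the AioContexts that serve
     * command queues. */
    virtio_queue_aio_attach_host_notifier_no_poll(vs->event_vq,
                                                  qemu_get_aio_context());

    /* Command queues: attach with polling enabled in whatever
     * IOThread AioContext the mapping assigned to each queue. */
    for (i = 0; i < vs->conf.num_queues; i++) {
        virtio_queue_aio_attach_host_notifier(vs->cmd_vqs[i],
                                              cmd_queue_ctx[i]);
    }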

Kevin


