From: Christian Schoenebeck <qemu_oss@crudebyte.com>
To: qemu-devel@nongnu.org
Cc: "Stefan Hajnoczi" <stefanha@redhat.com>,
"Kevin Wolf" <kwolf@redhat.com>,
"Laurent Vivier" <lvivier@redhat.com>,
qemu-block@nongnu.org, "Michael S. Tsirkin" <mst@redhat.com>,
"Jason Wang" <jasowang@redhat.com>, "Amit Shah" <amit@kernel.org>,
"David Hildenbrand" <david@redhat.com>,
"Greg Kurz" <groug@kaod.org>,
"Raphael Norwitz" <raphael.norwitz@nutanix.com>,
virtio-fs@redhat.com, "Eric Auger" <eric.auger@redhat.com>,
"Hanna Reitz" <hreitz@redhat.com>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>,
"Gerd Hoffmann" <kraxel@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@redhat.com>,
"Fam Zheng" <fam@euphon.net>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>
Subject: Re: [PATCH v2 1/3] virtio: turn VIRTQUEUE_MAX_SIZE into a variable
Date: Tue, 05 Oct 2021 15:15:26 +0200
Message-ID: <14205148.YOBg3JvQBA@silver>
In-Reply-To: <YVxJBKqsytKlos6M@stefanha-x1.localdomain>
On Tuesday, 5 October 2021 14:45:56 CEST Stefan Hajnoczi wrote:
> On Mon, Oct 04, 2021 at 09:38:04PM +0200, Christian Schoenebeck wrote:
> > Refactor VIRTQUEUE_MAX_SIZE to effectively become a runtime
> > variable per virtio user.
>
> virtio user == virtio device model?
Yes
> > Reasons:
> >
> > (1) VIRTQUEUE_MAX_SIZE should reflect the absolute theoretical
> >     maximum queue size possible, which is the maximum queue size
> >     allowed by the virtio protocol. The appropriate value for
> >     VIRTQUEUE_MAX_SIZE would therefore be 32768:
> >
> >     https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-240006
> >
> >     Apparently VIRTQUEUE_MAX_SIZE was instead defined with a
> >     more or less arbitrary value of 1024 in the past, which
> >     limits the maximum transfer size with virtio to 4M
> >     (more precisely: 1024 * PAGE_SIZE, with the latter typically
> >     being 4k).
>
> Being equal to IOV_MAX is a likely reason. Buffers with more iovecs than
> that cannot be passed to host system calls (sendmsg(2), pwritev(2),
> etc).
Yes, that's use-case dependent. Hence the solution to make this opt-in, so
the higher limit is only enabled where it is desired and feasible.
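To illustrate the host-side ceiling Stefan mentions: a vectored syscall
such as writev(2) rejects more than IOV_MAX iovecs outright. A minimal,
purely illustrative C sketch (the +1 overshoot and all names are mine,
not from the patch):

    /* writev(2) fails with EINVAL once iovcnt exceeds IOV_MAX
     * (typically 1024 on Linux) -- a likely origin of the 1024. */
    #include <errno.h>
    #include <limits.h>    /* IOV_MAX */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/uio.h>   /* struct iovec, writev */

    int main(void)
    {
        int n = IOV_MAX + 1;                /* one more than allowed */
        struct iovec *iov = calloc(n, sizeof(*iov));
        static char byte;

        for (int i = 0; i < n; i++) {
            iov[i].iov_base = &byte;
            iov[i].iov_len  = 1;
        }
        if (writev(1, iov, n) < 0) {
            fprintf(stderr, "writev: %s\n", strerror(errno)); /* EINVAL */
        }
        free(iov);
        return 0;
    }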
> > (2) Additionally the current value of 1024 poses a hidden limit,
> >     invisible to guest, which causes a system hang with the
> >     following QEMU error if guest tries to exceed it:
> >
> >     virtio: too many write descriptors in indirect table
>
> I don't understand this point. 2.6.5 The Virtqueue Descriptor Table says:
>
> The number of descriptors in the table is defined by the queue size for
> this virtqueue: this is the maximum possible descriptor chain length.
>
> and 2.6.5.3.1 Driver Requirements: Indirect Descriptors says:
>
> A driver MUST NOT create a descriptor chain longer than the Queue Size of
> the device.
>
> Do you mean a broken/malicious guest driver that is violating the spec?
> That's not a hidden limit, it's defined by the spec.
https://lists.gnu.org/archive/html/qemu-devel/2021-10/msg00781.html
https://lists.gnu.org/archive/html/qemu-devel/2021-10/msg00788.html
You can already go beyond that queue size at runtime with the indirection
table. The only actual limit is the currently hard-coded value of 1k pages.
Hence the suggestion to turn that into a variable.
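For reference, the split-ring descriptor layout this revolves around,
transcribed from the virtio 1.1 spec (section 2.6.5, cited above); the
field comments are my own paraphrase:

    #include <stdint.h>

    #define VIRTQ_DESC_F_NEXT     1  /* chain continues at 'next' */
    #define VIRTQ_DESC_F_WRITE    2  /* device write-only buffer */
    #define VIRTQ_DESC_F_INDIRECT 4  /* 'addr' points to a further table */

    struct virtq_desc {
        uint64_t addr;   /* guest-physical buffer address, or, with
                            F_INDIRECT, the address of a separate
                            descriptor table */
        uint32_t len;    /* buffer length; for an indirect descriptor,
                            the table size in bytes */
        uint16_t flags;
        uint16_t next;   /* index of the next descriptor in the chain */
    };

So a single ring slot with F_INDIRECT set fans out to a table whose
length is independent of the ring's advertised queue size; the 1024
constant is what QEMU's indirect-table walk currently trips over.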
> > (3) Unfortunately not all virtio users in QEMU would currently
> >     work correctly with the new value of 32768.
> >
> > So let's turn this hard coded global value into a runtime
> > variable as a first step in this commit, configurable for each
> > virtio user by passing a corresponding value with virtio_init()
> > call.
>
> virtio_add_queue() already has an int queue_size argument, why isn't
> that enough to deal with the maximum queue size? There's probably a good
> reason for it, but please include it in the commit description.
[...]
> Can you make this value per-vq instead of per-vdev since virtqueues can
> have different queue sizes?
>
> The same applies to the rest of this patch. Anything using
> vdev->queue_max_size should probably use vq->vring.num instead.
I would like to avoid that and keep it per device. The maximum size stored
there is the maximum size supported by the virtio user (or virtio device
model, whichever term you prefer). So it's really a limit per device, not
per queue, as no queue of the device would ever exceed that limit.
Plus a lot more code would need to be refactored, which I think is
unnecessary.
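To make the direction concrete, a rough sketch of the shape of the change
(pared-down types, illustrative only, not the literal patch):

    #include <stddef.h>
    #include <stdint.h>

    typedef struct VirtIODevice {
        const char *name;
        uint16_t device_id;
        uint16_t queue_max_size;  /* was: #define VIRTQUEUE_MAX_SIZE 1024 */
        /* ... */
    } VirtIODevice;

    static void virtio_init(VirtIODevice *vdev, const char *name,
                            uint16_t device_id, size_t config_size,
                            uint16_t queue_max_size)
    {
        vdev->name = name;
        vdev->device_id = device_id;
        vdev->queue_max_size = queue_max_size;
        (void)config_size;
        /* ... remaining device initialization ... */
    }

    /* A device model that copes with the spec maximum opts in: */
    static void virtio_9p_realize_sketch(VirtIODevice *vdev)
    {
        virtio_init(vdev, "virtio-9p", 9 /* VIRTIO_ID_9P */, 0, 32768);
    }

Every queue a device creates stays at or below its device's
queue_max_size, which is why a per-device field suffices here.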
Best regards,
Christian Schoenebeck