From: Stefano Garzarella <sgarzare@redhat.com>
To: Jiang Wang <jiang.wang@bytedance.com>
Cc: arseny.krasnov@kaspersky.com, jasowang@redhat.com,
qemu-devel@nongnu.org, stefanha@redhat.com, mst@redhat.com
Subject: Re: [RFC v7] virtio/vsock: add two more queues for datagram types
Date: Wed, 22 Sep 2021 11:23:46 +0200 [thread overview]
Message-ID: <20210922092346.gxpc3w62ofimyhhh@steredhat> (raw)
In-Reply-To: <20210922000024.524646-1-jiang.wang@bytedance.com>
On Wed, Sep 22, 2021 at 12:00:24AM +0000, Jiang Wang wrote:
>Datagram sockets are connectionless and unreliable.
>The sender does not know the capacity of the receiver
>and may send more packets than the receiver can handle.
>
>Add two more dedicated virtqueues for datagram sockets,
>so that it will not unfairly steal resources from
>stream and future connection-oriented sockets.
>
>The two new virtqueues are enabled by default and will
>be removed if the guest does not support them. This helps
>with migration.
>
>btw: enable_dgram argument in vhost_vsock_common_realize
>is redundant for now, but will be used later when we
>want to disable DGRAM feature bit for old versions.
>
>Signed-off-by: Jiang Wang <jiang.wang@bytedance.com>
>---
>v1 -> v2: use qemu cmd option to control number of queues,
> removed configuration settings for dgram.
>v2 -> v3: use ioctl to get features and decide number of
> virt queues, instead of qemu cmd option.
>v3 -> v4: change DGRAM feature bit value to 2. Add an argument
> in vhost_vsock_common_realize to indicate dgram is supported or not.
>v4 -> v5: don't open dev to get vhostfd. Removed leftover definition of
> enable_dgram
>v5 -> v6: fix style errors. Improve error handling of
> vhost_vsock_dgram_supported. Rename MAX_VQS_WITH_DGRAM and another one.
>v6 -> v7: Always enable dgram for vhost-user and vhost kernel.
> Delete unused virtqueues at the beginning of
> vhost_vsock_common_start for migration. Otherwise, migration will fail.
>
> hw/virtio/vhost-user-vsock.c | 2 +-
> hw/virtio/vhost-vsock-common.c | 32 +++++++++++++++++--
> hw/virtio/vhost-vsock.c | 6 +++-
> include/hw/virtio/vhost-vsock-common.h | 6 ++--
> include/hw/virtio/vhost-vsock.h | 3 ++
> include/standard-headers/linux/virtio_vsock.h | 1 +
> 6 files changed, 43 insertions(+), 7 deletions(-)
>
>diff --git a/hw/virtio/vhost-user-vsock.c b/hw/virtio/vhost-user-vsock.c
>index 6095ed7349..9823a2f3bd 100644
>--- a/hw/virtio/vhost-user-vsock.c
>+++ b/hw/virtio/vhost-user-vsock.c
>@@ -105,7 +105,7 @@ static void vuv_device_realize(DeviceState *dev, Error **errp)
> return;
> }
>
>- vhost_vsock_common_realize(vdev, "vhost-user-vsock");
>+ vhost_vsock_common_realize(vdev, "vhost-user-vsock", true);
>
> vhost_dev_set_config_notifier(&vvc->vhost_dev, &vsock_ops);
>
>diff --git a/hw/virtio/vhost-vsock-common.c b/hw/virtio/vhost-vsock-common.c
>index 4ad6e234ad..7d89b4d242 100644
>--- a/hw/virtio/vhost-vsock-common.c
>+++ b/hw/virtio/vhost-vsock-common.c
>@@ -26,6 +26,18 @@ int vhost_vsock_common_start(VirtIODevice *vdev)
> int ret;
> int i;
>
>+ if (!virtio_has_feature(vdev->guest_features, VIRTIO_VSOCK_F_DGRAM)) {
>+ struct vhost_virtqueue *vqs;
>+ virtio_delete_queue(vvc->dgram_recv_vq);
>+ virtio_delete_queue(vvc->dgram_trans_vq);
>+
Are you sure removing the queues here works?
The event queue would still be #4 on the device side, but a guest
that didn't ack F_DGRAM will use #2 for events, which is one of the
queues we're removing here.
>+ vqs = vvc->vhost_dev.vqs;
>+ vvc->vhost_dev.nvqs = MAX_VQS_WITHOUT_DGRAM;
>+ vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue,
>+ vvc->vhost_dev.nvqs);
>+ g_free(vqs);
>+ }
>+
> if (!k->set_guest_notifiers) {
> error_report("binding does not support guest notifiers");
> return -ENOSYS;
>@@ -196,9 +208,11 @@ int vhost_vsock_common_post_load(void *opaque, int version_id)
> return 0;
> }
>
>-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
>+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name,
>+ bool enable_dgram)
^
This parameter seems to be always true, and it is also unused.
> {
> VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
>+ int nvqs = MAX_VQS_WITH_DGRAM;
>
> virtio_init(vdev, name, VIRTIO_ID_VSOCK,
> sizeof(struct virtio_vsock_config));
>@@ -209,12 +223,17 @@ void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name)
> vvc->trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> vhost_vsock_common_handle_output);
>
>+ vvc->dgram_recv_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>+ vhost_vsock_common_handle_output);
>+ vvc->dgram_trans_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
>+ vhost_vsock_common_handle_output);
>+
> /* The event queue belongs to QEMU */
> vvc->event_vq = virtio_add_queue(vdev, VHOST_VSOCK_QUEUE_SIZE,
> vhost_vsock_common_handle_output);
>
>- vvc->vhost_dev.nvqs = ARRAY_SIZE(vvc->vhost_vqs);
>- vvc->vhost_dev.vqs = vvc->vhost_vqs;
>+ vvc->vhost_dev.nvqs = nvqs;
>+ vvc->vhost_dev.vqs = g_new0(struct vhost_virtqueue, vvc->vhost_dev.nvqs);
>
> vvc->post_load_timer = NULL;
> }
>@@ -227,6 +246,13 @@ void vhost_vsock_common_unrealize(VirtIODevice *vdev)
>
> virtio_delete_queue(vvc->recv_vq);
> virtio_delete_queue(vvc->trans_vq);
>+ if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM) {
>+ virtio_delete_queue(vvc->dgram_recv_vq);
>+ virtio_delete_queue(vvc->dgram_trans_vq);
>+ }
>+
>+ g_free(vvc->vhost_dev.vqs);
>+
> virtio_delete_queue(vvc->event_vq);
> virtio_cleanup(vdev);
> }
>diff --git a/hw/virtio/vhost-vsock.c b/hw/virtio/vhost-vsock.c
>index 1b1a5c70ed..6e315ecf23 100644
>--- a/hw/virtio/vhost-vsock.c
>+++ b/hw/virtio/vhost-vsock.c
>@@ -23,6 +23,7 @@
>
> const int feature_bits[] = {
> VIRTIO_VSOCK_F_SEQPACKET,
>+ VIRTIO_VSOCK_F_DGRAM,
> VHOST_INVALID_FEATURE_BIT
> };
>
>@@ -116,6 +117,9 @@ static uint64_t vhost_vsock_get_features(VirtIODevice *vdev,
> VHostVSockCommon *vvc = VHOST_VSOCK_COMMON(vdev);
>
> virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_SEQPACKET);
>+ if (vvc->vhost_dev.nvqs == MAX_VQS_WITH_DGRAM) {
>+ virtio_add_feature(&requested_features, VIRTIO_VSOCK_F_DGRAM);
>+ }
Take a look at
https://lore.kernel.org/qemu-devel/20210921161642.206461-1-sgarzare@redhat.com/
If it is accepted, we should use something similar (e.g. a
`datagram` prop) and handle this flag in vhost-vsock-common.
And we should change the queue handling a bit:
- if QEMU (new `datagram` prop is `on` or `auto`) and the kernel
supports F_DGRAM, we can allocate 5 queues.
- if the guest acked F_DGRAM we can use queue #4 for events, otherwise
switch back to queue #2, without removing them.
> return vhost_get_features(&vvc->vhost_dev, feature_bits,
> requested_features);
> }
>@@ -175,7 +179,7 @@ static void vhost_vsock_device_realize(DeviceState *dev, Error **errp)
> qemu_set_nonblock(vhostfd);
> }
>
>- vhost_vsock_common_realize(vdev, "vhost-vsock");
>+ vhost_vsock_common_realize(vdev, "vhost-vsock", true);
>
> ret = vhost_dev_init(&vvc->vhost_dev, (void *)(uintptr_t)vhostfd,
> VHOST_BACKEND_TYPE_KERNEL, 0, errp);
>diff --git a/include/hw/virtio/vhost-vsock-common.h b/include/hw/virtio/vhost-vsock-common.h
>index e412b5ee98..80151aee35 100644
>--- a/include/hw/virtio/vhost-vsock-common.h
>+++ b/include/hw/virtio/vhost-vsock-common.h
>@@ -27,12 +27,13 @@ enum {
> struct VHostVSockCommon {
> VirtIODevice parent;
>
>- struct vhost_virtqueue vhost_vqs[2];
> struct vhost_dev vhost_dev;
>
> VirtQueue *event_vq;
> VirtQueue *recv_vq;
> VirtQueue *trans_vq;
>+ VirtQueue *dgram_recv_vq;
>+ VirtQueue *dgram_trans_vq;
>
> QEMUTimer *post_load_timer;
> };
>@@ -41,7 +42,8 @@ int vhost_vsock_common_start(VirtIODevice *vdev);
> void vhost_vsock_common_stop(VirtIODevice *vdev);
> int vhost_vsock_common_pre_save(void *opaque);
> int vhost_vsock_common_post_load(void *opaque, int version_id);
>-void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name);
>+void vhost_vsock_common_realize(VirtIODevice *vdev, const char *name,
>+ bool enable_dgram);
> void vhost_vsock_common_unrealize(VirtIODevice *vdev);
>
> #endif /* _QEMU_VHOST_VSOCK_COMMON_H */
>diff --git a/include/hw/virtio/vhost-vsock.h b/include/hw/virtio/vhost-vsock.h
>index 84f4e727c7..7d16c0e218 100644
>--- a/include/hw/virtio/vhost-vsock.h
>+++ b/include/hw/virtio/vhost-vsock.h
>@@ -33,4 +33,7 @@ struct VHostVSock {
> /*< public >*/
> };
>
>+#define MAX_VQS_WITHOUT_DGRAM 2
>+#define MAX_VQS_WITH_DGRAM 4
I think the naming in the previous version was better.
Thanks,
Stefano