From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Cornelia Huck <cohuck@redhat.com>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>,
Stefano Garzarella <sgarzare@redhat.com>,
Eli Cohen <eli@mellanox.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>, Cindy Lu <lulu@redhat.com>,
Zhu Lingshan <lingshan.zhu@intel.com>,
Parav Pandit <parav@mellanox.com>,
Markus Armbruster <armbru@redhat.com>,
Liuxiangdong <liuxiangdong5@huawei.com>,
Laurent Vivier <lvivier@redhat.com>,
Gautam Dawar <gdawar@xilinx.com>, Eric Blake <eblake@redhat.com>,
Harpreet Singh Anand <hanand@xilinx.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: [PATCH v3 07/19] vhost: Decouple vhost_svq_add from VirtQueueElement
Date: Fri, 15 Jul 2022 19:18:22 +0200
Message-ID: <20220715171834.2666455-8-eperezma@redhat.com>
In-Reply-To: <20220715171834.2666455-1-eperezma@redhat.com>
VirtQueueElement comes from the guest, but SVQ is headed towards being able
to modify the buffers presented to the device without the guest's
knowledge.

To make that possible, have SVQ accept scatter-gather (sg) buffers directly
instead of a VirtQueueElement.

Add vhost_svq_add_element as a convenience wrapper for callers that still
start from a guest element.
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
hw/virtio/vhost-shadow-virtqueue.c | 33 ++++++++++++++++++++----------
1 file changed, 22 insertions(+), 11 deletions(-)
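Note for reviewers (not part of the patch): the point of taking iovec arrays
here is that a future caller can feed SVQ buffers that QEMU itself owns, with
no guest VirtQueueElement behind them, as the shadowed CVQ later in this
series does. A rough sketch of what such a caller could look like once
vhost_svq_add is exposed outside this file (patch 11/19); the function name,
buffer names and the NULL element are made up for illustration only:

    /*
     * Hypothetical caller, for illustration: submit a QEMU-owned
     * command/status buffer pair through the SVQ.  No guest
     * VirtQueueElement is involved, which is what the new iovec-based
     * signature enables.
     */
    static int svq_inject_example(VhostShadowVirtqueue *svq,
                                  void *cmd, size_t cmd_len,
                                  void *status, size_t status_len)
    {
        /* out_sg: read by the device; in_sg: written by the device */
        const struct iovec out = { .iov_base = cmd, .iov_len = cmd_len };
        const struct iovec in  = { .iov_base = status, .iov_len = status_len };

        /* One out descriptor and one in descriptor, no VirtQueueElement */
        return vhost_svq_add(svq, &out, 1, &in, 1, NULL);
    }

The guest path keeps working through vhost_svq_add_element, which simply
unpacks the element's sg arrays, as the diff below shows.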
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index aee9891a67..b005a457c6 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -172,30 +172,31 @@ static bool vhost_svq_vring_write_descs(VhostShadowVirtqueue *svq, hwaddr *sg,
}
static bool vhost_svq_add_split(VhostShadowVirtqueue *svq,
- VirtQueueElement *elem, unsigned *head)
+ const struct iovec *out_sg, size_t out_num,
+ const struct iovec *in_sg, size_t in_num,
+ unsigned *head)
{
unsigned avail_idx;
vring_avail_t *avail = svq->vring.avail;
bool ok;
- g_autofree hwaddr *sgs = g_new(hwaddr, MAX(elem->out_num, elem->in_num));
+ g_autofree hwaddr *sgs = g_new(hwaddr, MAX(out_num, in_num));
*head = svq->free_head;
/* We need some descriptors here */
- if (unlikely(!elem->out_num && !elem->in_num)) {
+ if (unlikely(!out_num && !in_num)) {
qemu_log_mask(LOG_GUEST_ERROR,
"Guest provided element with no descriptors");
return false;
}
- ok = vhost_svq_vring_write_descs(svq, sgs, elem->out_sg, elem->out_num,
- elem->in_num > 0, false);
+ ok = vhost_svq_vring_write_descs(svq, sgs, out_sg, out_num, in_num > 0,
+ false);
if (unlikely(!ok)) {
return false;
}
- ok = vhost_svq_vring_write_descs(svq, sgs, elem->in_sg, elem->in_num, false,
- true);
+ ok = vhost_svq_vring_write_descs(svq, sgs, in_sg, in_num, false, true);
if (unlikely(!ok)) {
return false;
}
@@ -237,17 +238,19 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
*
* Return -EINVAL if element is invalid, -ENOSPC if dev queue is full
*/
-static int vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
+static int vhost_svq_add(VhostShadowVirtqueue *svq, const struct iovec *out_sg,
+ size_t out_num, const struct iovec *in_sg,
+ size_t in_num, VirtQueueElement *elem)
{
unsigned qemu_head;
- unsigned ndescs = elem->in_num + elem->out_num;
+ unsigned ndescs = in_num + out_num;
bool ok;
if (unlikely(ndescs > vhost_svq_available_slots(svq))) {
return -ENOSPC;
}
- ok = vhost_svq_add_split(svq, elem, &qemu_head);
+ ok = vhost_svq_add_split(svq, out_sg, out_num, in_sg, in_num, &qemu_head);
if (unlikely(!ok)) {
g_free(elem);
return -EINVAL;
@@ -258,6 +261,14 @@ static int vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
return 0;
}
+/* Convenience wrapper to add a guest's element to SVQ */
+static int vhost_svq_add_element(VhostShadowVirtqueue *svq,
+ VirtQueueElement *elem)
+{
+ return vhost_svq_add(svq, elem->out_sg, elem->out_num, elem->in_sg,
+ elem->in_num, elem);
+}
+
/**
* Forward available buffers.
*
@@ -294,7 +305,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
break;
}
- r = vhost_svq_add(svq, elem);
+ r = vhost_svq_add_element(svq, elem);
if (unlikely(r != 0)) {
if (r == -ENOSPC) {
/*
--
2.31.1
Thread overview: 23+ messages
2022-07-15 17:18 [PATCH v3 00/19] vdpa net devices Rx filter change notification with Shadow VQ Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 01/19] vhost: move descriptor translation to vhost_svq_vring_write_descs Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 02/19] virtio-net: Expose MAC_TABLE_ENTRIES Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 03/19] virtio-net: Expose ctrl virtqueue logic Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 04/19] vhost: Reorder vhost_svq_kick Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 05/19] vhost: Move vhost_svq_kick call to vhost_svq_add Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 06/19] vhost: Check for queue full at vhost_svq_add Eugenio Pérez
2022-07-15 17:18 ` Eugenio Pérez [this message]
2022-07-15 17:18 ` [PATCH v3 08/19] vhost: Add SVQDescState Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 09/19] vhost: Track number of descs in SVQDescState Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 10/19] vhost: add vhost_svq_push_elem Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 11/19] vhost: Expose vhost_svq_add Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 12/19] vhost: add vhost_svq_poll Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 13/19] vhost: Add svq avail_handler callback Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 14/19] vdpa: Export vhost_vdpa_dma_map and unmap calls Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 15/19] vdpa: manual forward CVQ buffers Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 16/19] vdpa: Buffer CVQ support on shadow virtqueue Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 17/19] vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 18/19] vdpa: Add device migration blocker Eugenio Pérez
2022-07-15 17:18 ` [PATCH v3 19/19] vdpa: Add x-svq to NetdevVhostVDPAOptions Eugenio Pérez
2022-07-18 3:32 ` [PATCH v3 00/19] vdpa net devices Rx filter change notification with Shadow VQ Jason Wang
2022-07-19 11:28 ` Michael S. Tsirkin
2022-07-19 11:28 ` Michael S. Tsirkin