From: "Eugenio Pérez" <eperezma@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
Gautam Dawar <gdawar@xilinx.com>,
Liuxiangdong <liuxiangdong5@huawei.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Markus Armbruster <armbru@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Cornelia Huck <cohuck@redhat.com>,
Parav Pandit <parav@mellanox.com>, Eric Blake <eblake@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
Zhu Lingshan <lingshan.zhu@intel.com>,
Eli Cohen <eli@mellanox.com>, Cindy Lu <lulu@redhat.com>,
"Gonglei (Arei)" <arei.gonglei@huawei.com>,
Stefano Garzarella <sgarzare@redhat.com>,
Harpreet Singh Anand <hanand@xilinx.com>
Subject: [PATCH v5 07/20] vhost: Check for queue full at vhost_svq_add
Date: Tue, 19 Jul 2022 11:56:16 +0200
Message-ID: <20220719095629.3031338-8-eperezma@redhat.com>
In-Reply-To: <20220719095629.3031338-1-eperezma@redhat.com>
The series needs to expose vhost_svq_add with full functionality,
including the check for a full queue.
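For reference, a minimal sketch of how a caller might act on the new return
codes once vhost_svq_add is exposed later in the series (this snippet is not
part of the patch; the prior setup of "svq" and "elem" is assumed):

    int r = vhost_svq_add(svq, elem);
    if (r == -ENOSPC) {
        /* Queue full: elem is still owned by the caller and can be retried */
    } else if (r < 0) {
        /* Invalid element: vhost_svq_add() has already freed elem */
    }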
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
hw/virtio/vhost-shadow-virtqueue.c | 59 +++++++++++++++++-------------
1 file changed, 33 insertions(+), 26 deletions(-)
diff --git a/hw/virtio/vhost-shadow-virtqueue.c b/hw/virtio/vhost-shadow-virtqueue.c
index e272c3318a..11302ea1f2 100644
--- a/hw/virtio/vhost-shadow-virtqueue.c
+++ b/hw/virtio/vhost-shadow-virtqueue.c
@@ -233,21 +233,29 @@ static void vhost_svq_kick(VhostShadowVirtqueue *svq)
* Add an element to a SVQ.
*
* The caller must check that there is enough slots for the new element. It
- * takes ownership of the element: In case of failure, it is free and the SVQ
- * is considered broken.
+ * takes ownership of the element: on failure other than -ENOSPC it is freed.
+ *
+ * Return 0 on success, -EINVAL if elem is invalid, -ENOSPC if queue is full
*/
-static bool vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
+static int vhost_svq_add(VhostShadowVirtqueue *svq, VirtQueueElement *elem)
{
unsigned qemu_head;
- bool ok = vhost_svq_add_split(svq, elem, &qemu_head);
+ unsigned ndescs = elem->in_num + elem->out_num;
+ bool ok;
+
+ if (unlikely(ndescs > vhost_svq_available_slots(svq))) {
+ return -ENOSPC;
+ }
+
+ ok = vhost_svq_add_split(svq, elem, &qemu_head);
if (unlikely(!ok)) {
g_free(elem);
- return false;
+ return -EINVAL;
}
svq->ring_id_maps[qemu_head] = elem;
vhost_svq_kick(svq);
- return true;
+ return 0;
}
/**
@@ -274,7 +282,7 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
while (true) {
VirtQueueElement *elem;
- bool ok;
+ int r;
if (svq->next_guest_avail_elem) {
elem = g_steal_pointer(&svq->next_guest_avail_elem);
@@ -286,25 +294,24 @@ static void vhost_handle_guest_kick(VhostShadowVirtqueue *svq)
break;
}
- if (elem->out_num + elem->in_num > vhost_svq_available_slots(svq)) {
- /*
- * This condition is possible since a contiguous buffer in GPA
- * does not imply a contiguous buffer in qemu's VA
- * scatter-gather segments. If that happens, the buffer exposed
- * to the device needs to be a chain of descriptors at this
- * moment.
- *
- * SVQ cannot hold more available buffers if we are here:
- * queue the current guest descriptor and ignore further kicks
- * until some elements are used.
- */
- svq->next_guest_avail_elem = elem;
- return;
- }
-
- ok = vhost_svq_add(svq, elem);
- if (unlikely(!ok)) {
- /* VQ is broken, just return and ignore any other kicks */
+ r = vhost_svq_add(svq, elem);
+ if (unlikely(r != 0)) {
+ if (r == -ENOSPC) {
+ /*
+ * This condition is possible since a contiguous buffer in
+ * GPA does not imply a contiguous buffer in qemu's VA
+ * scatter-gather segments. If that happens, the buffer
+ * exposed to the device needs to be a chain of descriptors
+ * at this moment.
+ *
+ * SVQ cannot hold more available buffers if we are here:
+ * queue the current guest descriptor and ignore kicks
+ * until some elements are used.
+ */
+ svq->next_guest_avail_elem = elem;
+ }
+
+ /* VQ is full or broken, just return and ignore kicks */
return;
}
}
--
2.31.1
Thread overview: 22+ messages
2022-07-19 9:56 [PATCH v5 00/20] vdpa net devices Rx filter change notification with Shadow VQ Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 01/20] vhost: move descriptor translation to vhost_svq_vring_write_descs Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 02/20] virtio-net: Expose MAC_TABLE_ENTRIES Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 03/20] virtio-net: Expose ctrl virtqueue logic Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 04/20] vdpa: Avoid compiler to squash reads to used idx Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 05/20] vhost: Reorder vhost_svq_kick Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 06/20] vhost: Move vhost_svq_kick call to vhost_svq_add Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 07/20] vhost: Check for queue full at vhost_svq_add Eugenio Pérez [this message]
2022-07-19 9:56 ` [PATCH v5 08/20] vhost: Decouple vhost_svq_add from VirtQueueElement Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 09/20] vhost: Add SVQDescState Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 10/20] vhost: Track number of descs in SVQDescState Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 11/20] vhost: add vhost_svq_push_elem Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 12/20] vhost: Expose vhost_svq_add Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 13/20] vhost: add vhost_svq_poll Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 14/20] vhost: Add svq avail_handler callback Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 15/20] vdpa: Export vhost_vdpa_dma_map and unmap calls Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 16/20] vdpa: manual forward CVQ buffers Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 17/20] vdpa: Buffer CVQ support on shadow virtqueue Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 18/20] vdpa: Extract get features part from vhost_vdpa_get_max_queue_pairs Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 19/20] vdpa: Add device migration blocker Eugenio Pérez
2022-07-19 9:56 ` [PATCH v5 20/20] vdpa: Add x-svq to NetdevVhostVDPAOptions Eugenio Pérez
2022-07-19 10:04 ` [PATCH v5 00/20] vdpa net devices Rx filter change notification with Shadow VQ Eugenio Perez Martin