From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fam Zheng <famz@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>
Subject: [Qemu-devel] [RFC 4/7] virtio: handle virtqueue_get_avail_bytes() errors
Date: Thu, 24 Mar 2016 17:56:51 +0000
Message-ID: <1458842214-11450-5-git-send-email-stefanha@redhat.com>
In-Reply-To: <1458842214-11450-1-git-send-email-stefanha@redhat.com>
If the vring is invalid, tell the caller no bytes are available and mark
the device broken.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
hw/virtio/virtio.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index 86352c8..4758fe3 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -399,14 +399,14 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
 
         if (desc.flags & VRING_DESC_F_INDIRECT) {
             if (desc.len % sizeof(VRingDesc)) {
-                error_report("Invalid size for indirect buffer table");
-                exit(1);
+                virtio_error(vdev, "Invalid size for indirect buffer table");
+                goto err;
             }
 
             /* If we've got too many, that implies a descriptor loop. */
             if (num_bufs >= max) {
-                error_report("Looped descriptor");
-                exit(1);
+                virtio_error(vdev, "Looped descriptor");
+                goto err;
             }
 
             /* loop over the indirect descriptor table */
@@ -420,8 +420,8 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
         do {
             /* If we've got too many, that implies a descriptor loop. */
             if (++num_bufs > max) {
-                error_report("Looped descriptor");
-                exit(1);
+                virtio_error(vdev, "Looped descriptor");
+                goto err;
             }
 
             if (desc.flags & VRING_DESC_F_WRITE) {
@@ -446,6 +446,11 @@ done:
     if (out_bytes) {
         *out_bytes = out_total;
     }
+    return;
+
+err:
+    in_total = out_total = 0;
+    goto done;
 }
 
 int virtqueue_avail_bytes(VirtQueue *vq, unsigned int in_bytes,
--
2.5.5
Thread overview: 16+ messages
2016-03-24 17:56 [Qemu-devel] [RFC 0/7] virtio: avoid exit() when device enters invalid states Stefan Hajnoczi
2016-03-24 17:56 ` [Qemu-devel] [RFC 1/7] virtio: fix stray tab character Stefan Hajnoczi
2016-03-25  6:45   ` Fam Zheng
2016-03-24 17:56 ` [Qemu-devel] [RFC 2/7] virtio: stop virtqueue processing if device is broken Stefan Hajnoczi
2016-03-25  6:48   ` Fam Zheng
2016-03-29 11:12     ` Stefan Hajnoczi
2016-03-29  7:52   ` Cornelia Huck
2016-03-29 11:14     ` Stefan Hajnoczi
2016-03-24 17:56 ` [Qemu-devel] [RFC 3/7] virtio: handle virtqueue_map_desc() errors Stefan Hajnoczi
2016-03-24 17:56 ` Stefan Hajnoczi [this message]
2016-03-24 17:56 ` [Qemu-devel] [RFC 5/7] virtio: handle virtqueue_read_next_desc() errors Stefan Hajnoczi
2016-03-25  7:01   ` Fam Zheng
2016-03-29 11:14     ` Stefan Hajnoczi
2016-03-24 17:56 ` [Qemu-devel] [RFC 6/7] virtio: handle virtqueue_num_heads() errors Stefan Hajnoczi
2016-03-25  7:03   ` Fam Zheng
2016-03-24 17:56 ` [Qemu-devel] [RFC 7/7] virtio: handle virtqueue_get_head() errors Stefan Hajnoczi