From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: "Marc-André Lureau" <marcandre.lureau@redhat.com>
Cc: qemu-devel@nongnu.org, mst@redhat.com
Subject: Re: [Qemu-devel] [PATCH v2] libvhost-user: fix crash when rings aren't ready
Date: Wed, 3 May 2017 18:45:25 +0100
Message-ID: <20170503174524.GI2657@work-vm>
In-Reply-To: <20170503165412.27766-1-marcandre.lureau@redhat.com>
* Marc-André Lureau (marcandre.lureau@redhat.com) wrote:
> Calling libvhost-user functions like vu_queue_get_avail_bytes() when the
> queue doesn't yet have addresses will result in the crashes like the
> following:
>
> Program received signal SIGSEGV, Segmentation fault.
> 0x000055c414112ce4 in vring_avail_idx (vq=0x55c41582fd68, vq=0x55c41582fd68)
> at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:940
> 940 vq->shadow_avail_idx = vq->vring.avail->idx;
> (gdb) p vq
> $1 = (VuVirtq *) 0x55c41582fd68
> (gdb) p vq->vring
> $2 = {num = 0, desc = 0x0, avail = 0x0, used = 0x0, log_guest_addr = 0, flags = 0}
>
> #0  vring_avail_idx (vq=0x55c41582fd68, vq=0x55c41582fd68)
>     at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:940
> No locals.
> #1  virtqueue_num_heads (...)
>     at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:960
>         num_heads = <optimized out>
> #2  vu_queue_get_avail_bytes (..., out_bytes=out_bytes@entry=0x7fffd035d7c4, max_in_bytes=max_in_bytes@entry=0,
>     max_out_bytes=max_out_bytes@entry=0)
>     at /home/dgilbert/git/qemu/contrib/libvhost-user/libvhost-user.c:1034
>
> Add pre-condition checks on vring.avail before accessing it.
>
> Fix documentation and return type of vu_queue_empty() while at it.
>
> Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Yep, that's much happier.
I've successfully done a migrate while running some guest networking, so:
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Thanks,
Dave
> ---
> contrib/libvhost-user/libvhost-user.h | 6 +++---
> contrib/libvhost-user/libvhost-user.c | 26 ++++++++++++++++++++------
> 2 files changed, 23 insertions(+), 9 deletions(-)
>
> diff --git a/contrib/libvhost-user/libvhost-user.h b/contrib/libvhost-user/libvhost-user.h
> index 156b50e989..af02a31ebe 100644
> --- a/contrib/libvhost-user/libvhost-user.h
> +++ b/contrib/libvhost-user/libvhost-user.h
> @@ -327,13 +327,13 @@ void vu_queue_set_notification(VuDev *dev, VuVirtq *vq, int enable);
> bool vu_queue_enabled(VuDev *dev, VuVirtq *vq);
>
> /**
> - * vu_queue_enabled:
> + * vu_queue_empty:
> * @dev: a VuDev context
> * @vq: a VuVirtq queue
> *
> - * Returns: whether the queue is empty.
> + * Returns: true if the queue is empty or not ready.
> */
> -int vu_queue_empty(VuDev *dev, VuVirtq *vq);
> +bool vu_queue_empty(VuDev *dev, VuVirtq *vq);
>
> /**
> * vu_queue_notify:
> diff --git a/contrib/libvhost-user/libvhost-user.c b/contrib/libvhost-user/libvhost-user.c
> index af4faad60b..e1bf9644b5 100644
> --- a/contrib/libvhost-user/libvhost-user.c
> +++ b/contrib/libvhost-user/libvhost-user.c
> @@ -1031,6 +1031,11 @@ vu_queue_get_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int *in_bytes,
> idx = vq->last_avail_idx;
>
> total_bufs = in_total = out_total = 0;
> + if (unlikely(dev->broken) ||
> + unlikely(!vq->vring.avail)) {
> + goto done;
> + }
> +
> while ((rc = virtqueue_num_heads(dev, vq, idx)) > 0) {
> unsigned int max, num_bufs, indirect = 0;
> struct vring_desc *desc;
> @@ -1121,11 +1126,16 @@ vu_queue_avail_bytes(VuDev *dev, VuVirtq *vq, unsigned int in_bytes,
>
> /* Fetch avail_idx from VQ memory only when we really need to know if
> * guest has added some buffers. */
> -int
> +bool
> vu_queue_empty(VuDev *dev, VuVirtq *vq)
> {
> + if (unlikely(dev->broken) ||
> + unlikely(!vq->vring.avail)) {
> + return true;
> + }
> +
> if (vq->shadow_avail_idx != vq->last_avail_idx) {
> - return 0;
> + return false;
> }
>
> return vring_avail_idx(vq) == vq->last_avail_idx;
> @@ -1174,7 +1184,8 @@ vring_notify(VuDev *dev, VuVirtq *vq)
> void
> vu_queue_notify(VuDev *dev, VuVirtq *vq)
> {
> - if (unlikely(dev->broken)) {
> + if (unlikely(dev->broken) ||
> + unlikely(!vq->vring.avail)) {
> return;
> }
>
> @@ -1291,7 +1302,8 @@ vu_queue_pop(VuDev *dev, VuVirtq *vq, size_t sz)
> struct vring_desc *desc;
> int rc;
>
> - if (unlikely(dev->broken)) {
> + if (unlikely(dev->broken) ||
> + unlikely(!vq->vring.avail)) {
> return NULL;
> }
>
> @@ -1445,7 +1457,8 @@ vu_queue_fill(VuDev *dev, VuVirtq *vq,
> {
> struct vring_used_elem uelem;
>
> - if (unlikely(dev->broken)) {
> + if (unlikely(dev->broken) ||
> + unlikely(!vq->vring.avail)) {
> return;
> }
>
> @@ -1474,7 +1487,8 @@ vu_queue_flush(VuDev *dev, VuVirtq *vq, unsigned int count)
> {
> uint16_t old, new;
>
> - if (unlikely(dev->broken)) {
> + if (unlikely(dev->broken) ||
> + unlikely(!vq->vring.avail)) {
> return;
> }
>
> --
> 2.12.0.191.gc5d8de91d
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
Thread overview: 3+ messages
2017-05-03 16:54 [Qemu-devel] [PATCH v2] libvhost-user: fix crash when rings aren't ready Marc-André Lureau
2017-05-03 17:45 ` Dr. David Alan Gilbert [this message]
2017-05-03 19:07 ` Philippe Mathieu-Daudé