qemu-devel.nongnu.org archive mirror
From: Ilya Maximets <i.maximets@samsung.com>
To: "Michael S . Tsirkin" <mst@redhat.com>
Cc: qemu-devel@nongnu.org,
	Marc-Andre Lureau <marcandre.lureau@redhat.com>,
	Heetae Ahn <heetae82.ahn@samsung.com>,
	Maxime Coquelin <maxime.coquelin@redhat.com>,
	Ilya Maximets <i.maximets@samsung.com>
Subject: [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop
Date: Wed, 06 Dec 2017 16:06:18 +0300	[thread overview]
Message-ID: <1512565578-12078-1-git-send-email-i.maximets@samsung.com> (raw)
In-Reply-To: CGME20171206130624eucas1p1f73fd4cf613eaf3ce4ce6918c807f8e1@eucas1p1.samsung.com

If a virtio error occurs after vhost_dev_stop(), QEMU crashes in the
nested cleanup while checking the IOMMU flag, because dev->vdev is
already set to NULL and the resources are already freed.

Example:

Program received signal SIGSEGV, Segmentation fault.
vhost_virtqueue_stop at hw/virtio/vhost.c:1155

    #0  vhost_virtqueue_stop at hw/virtio/vhost.c:1155
    #1  vhost_dev_stop at hw/virtio/vhost.c:1594
    #2  vhost_net_stop_one at hw/net/vhost_net.c:289
    #3  vhost_net_stop at hw/net/vhost_net.c:368

            Nested call to vhost_net_stop(). First time was at #14.

    #4  virtio_net_vhost_status at hw/net/virtio-net.c:180
    #5  virtio_net_set_status (status=79) at hw/net/virtio-net.c:254
    #6  virtio_set_status at hw/virtio/virtio.c:1146
    #7  virtio_error at hw/virtio/virtio.c:2455

        virtqueue_get_head() failed here.

    #8  virtqueue_get_head at hw/virtio/virtio.c:543
    #9  virtqueue_drop_all at hw/virtio/virtio.c:984
    #10 virtio_net_drop_tx_queue_data at hw/net/virtio-net.c:240
    #11 virtio_bus_set_host_notifier at hw/virtio/virtio-bus.c:297
    #12 vhost_dev_disable_notifiers at hw/virtio/vhost.c:1431

        vhost_dev_stop() was executed here. dev->vdev == NULL now.

    #13 vhost_net_stop_one at hw/net/vhost_net.c:290
    #14 vhost_net_stop at hw/net/vhost_net.c:368
    #15 virtio_net_vhost_status at hw/net/virtio-net.c:180
    #16 virtio_net_set_status (status=15) at hw/net/virtio-net.c:254
    #17 qmp_set_link ("hostnet0", up=false) at net/net.c:1430
    #18 chr_closed_bh at net/vhost-user.c:214
    #19 aio_bh_call at util/async.c:90
    #20 aio_bh_poll at util/async.c:118
    #21 aio_dispatch at util/aio-posix.c:429
    #22 aio_ctx_dispatch at util/async.c:261
    #23 g_main_context_dispatch
    #24 glib_pollfds_poll at util/main-loop.c:213
    #25 os_host_main_loop_wait at util/main-loop.c:261
    #26 main_loop_wait at util/main-loop.c:515
    #27 main_loop () at vl.c:1917
    #28 main at vl.c:4795

The backtrace above was captured from a QEMU crash on vhost disconnect
while the virtio driver in the guest was in a failed state.

We could simply add a check for 'vdev' in 'vhost_dev_has_iommu()', but
the code would then assert further down while trying to free already
freed ioeventfds. The real problem is that nested calls to
'vhost_net_stop' are allowed at all.

This patch forbids nested calls to 'vhost_net_stop' by setting the
'vhost_started' flag to zero prior to the 'vhost_net_stop' call, which
avoids possible double frees and segmentation faults due to the use of
already freed resources.

Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
---

This issue was already addressed more than a year ago by the following
patch: https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg06732.html
but it was dropped without review because vhost-user re-connection was
not yet implemented. The re-connection implementation later fixed most
of the nested calls, but a few of them still exist. For example, the
backtrace above was captured after a 'virtqueue_get_head()' failure on
vhost-user disconnection.

 hw/net/virtio-net.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 38674b0..4d95a18 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -177,8 +177,8 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
             n->vhost_started = 0;
         }
     } else {
-        vhost_net_stop(vdev, n->nic->ncs, queues);
         n->vhost_started = 0;
+        vhost_net_stop(vdev, n->nic->ncs, queues);
     }
 }
 
-- 
2.7.4


Thread overview: 13+ messages
     [not found] <CGME20171206130624eucas1p1f73fd4cf613eaf3ce4ce6918c807f8e1@eucas1p1.samsung.com>
2017-12-06 13:06 ` Ilya Maximets [this message]
2017-12-06 16:45   ` [Qemu-devel] [PATCH] vhost: fix crash on virtio_error while device stop Michael S. Tsirkin
2017-12-07  6:39     ` Ilya Maximets
2017-12-07 17:06       ` Michael S. Tsirkin
2017-12-07 17:27       ` Michael S. Tsirkin
2017-12-08 14:54         ` Ilya Maximets
2017-12-11  4:35           ` Michael S. Tsirkin
2017-12-13 13:45             ` Ilya Maximets
2017-12-13 19:48               ` Michael S. Tsirkin
2017-12-14  7:06                 ` Ilya Maximets
2017-12-14  9:01                   ` Maxime Coquelin
2017-12-14 14:31                     ` Ilya Maximets
2017-12-14 14:54                       ` Ilya Maximets
