qemu-devel.nongnu.org archive mirror
From: Paolo Bonzini <pbonzini@redhat.com>
To: Cornelia Huck <cornelia.huck@de.ibm.com>,
	Stefan Hajnoczi <stefanha@gmail.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>,
	qemu-devel <qemu-devel@nongnu.org>,
	Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [Qemu-devel] [PATCH for-2.4 0/2] AioContext: fix deadlock after aio_context_acquire() race
Date: Tue, 28 Jul 2015 14:18:24 +0200	[thread overview]
Message-ID: <55B77310.6050808@redhat.com> (raw)
In-Reply-To: <20150728125857.174d2887.cornelia.huck@de.ibm.com>



On 28/07/2015 12:58, Cornelia Huck wrote:
> > > Thanks.  I understand how to reproduce it now: use -drive aio=threads
> > > and do I/O during managedsave.
> > >
> > > I suspect there are more cases of this.  We need to clean it up during QEMU 2.5.
> > >
> > > For now let's continue leaking these BHs as we've always done.
> > 
> > Actually, this case can be fixed in the patch by moving
> > thread_pool_free() before the BH cleanup loop.
>
> Tried that, may have done it wrong, because the assertion still hits.

If you're doing savevm with a dataplane disk as the destination, that 
cannot work: savevm does not attempt to acquire the AioContext, so it 
is not thread-safe.

An even simpler reproducer for this bug, however, is to hot-unplug a 
disk created with x-data-plane.  It also shows another bug, fixed by 
this patch:

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 3db139b..6106e46 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -223,8 +223,8 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     virtio_blk_data_plane_stop(s);
     blk_op_unblock_all(s->conf->conf.blk, s->blocker);
     error_free(s->blocker);
-    object_unref(OBJECT(s->iothread));
     qemu_bh_delete(s->bh);
+    object_unref(OBJECT(s->iothread));
     g_free(s);
 }
 
which I'll formally send shortly.

I would prefer to fix them all in 2.4 and risk regressions, because the
bugs are use-after-frees, i.e. pretty bad.

Paolo


Thread overview: 13+ messages
2015-07-27 16:33 [Qemu-devel] [PATCH for-2.4 0/2] AioContext: fix deadlock after aio_context_acquire() race Stefan Hajnoczi
2015-07-27 16:33 ` [Qemu-devel] [PATCH for-2.4 1/2] AioContext: avoid leaking BHs on cleanup Stefan Hajnoczi
2015-07-27 16:33 ` [Qemu-devel] [PATCH for-2.4 2/2] AioContext: force event loop iteration using BH Stefan Hajnoczi
2015-07-27 17:49   ` Paolo Bonzini
2015-07-28  7:07 ` [Qemu-devel] [PATCH for-2.4 0/2] AioContext: fix deadlock after aio_context_acquire() race Cornelia Huck
2015-07-28  8:02   ` Cornelia Huck
2015-07-28  8:34     ` Stefan Hajnoczi
2015-07-28 10:26       ` Cornelia Huck
2015-07-28 10:31         ` Stefan Hajnoczi
2015-07-28 10:34           ` Stefan Hajnoczi
2015-07-28 10:58             ` Cornelia Huck
2015-07-28 12:18               ` Paolo Bonzini [this message]
2015-07-28 13:58                 ` Stefan Hajnoczi
