From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: cornelia.huck@de.ibm.com, stefanha@redhat.com
Subject: [Qemu-devel] [PATCH for-2.4 v3 1/3] virtio-blk-dataplane: delete bottom half before the AioContext is freed
Date: Tue, 28 Jul 2015 18:34:07 +0200
Message-ID: <1438101249-25166-2-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1438101249-25166-1-git-send-email-pbonzini@redhat.com>

virtio-blk dataplane creates its bottom half in the iothread's
AioContext, so the BH must be deleted before the last reference to
the iothread is dropped and the AioContext freed; this patch reorders
virtio_blk_data_plane_destroy accordingly.  Other uses of aio_bh_new
are safe as long as all scheduled bottom halves are run before an
iothread is destroyed, which bdrv_drain ensures.  Each case is
audited below; a sketch of the recurring self-deleting pattern
follows the list:

- archipelago_finish_aiocb: BH deletes itself

- inject_error: BH deletes itself

- blkverify_aio_bh: BH deletes itself

- abort_aio_request: BH deletes itself

- curl_aio_readv: BH deletes itself

- gluster_finish_aiocb: BH deletes itself

- bdrv_aio_rw_vector: BH deletes itself

- bdrv_co_maybe_schedule_bh: BH deletes itself

- iscsi_schedule_bh, iscsi_co_generic_cb: BH deletes itself

- laio_attach_aio_context: the BH is deleted in laio_detach_aio_context,
which bdrv_detach_aio_context calls before the iothread is destroyed

- nfs_co_generic_cb: BH deletes itself

- null_aio_common: BH deletes itself

- qed_aio_complete: BH deletes itself

- rbd_finish_aiocb: BH deletes itself

- dma_blk_cb: BH deletes itself

- virtio_blk_dma_restart_cb: BH deletes itself

- qemu_bh_new: main loop AioContext is never destroyed

- test-aio.c: bh_delete_cb deletes its own BH; every other BH is
deleted in the same function that calls aio_bh_new
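
To make the recurring "BH deletes itself" entry concrete, here is a
minimal sketch of the pattern.  The names ExampleCB, example_bh_cb and
example_complete_in_bh are invented for illustration and do not appear
in the tree:

    /* Self-deleting one-shot bottom half, as in the callers listed
     * above.  Illustrative names, editorial comments. */
    #include "qemu/osdep.h"
    #include "block/aio.h"

    typedef struct ExampleCB {
        QEMUBH *bh;
        /* ... per-request completion state ... */
    } ExampleCB;

    static void example_bh_cb(void *opaque)
    {
        ExampleCB *acb = opaque;

        /* The BH fires exactly once and deletes itself before doing
         * anything else, so once bdrv_drain() has run all scheduled
         * bottom halves, no QEMUBH created here outlives its
         * AioContext. */
        qemu_bh_delete(acb->bh);
        acb->bh = NULL;

        /* ... complete the request ... */
        g_free(acb);
    }

    static void example_complete_in_bh(AioContext *ctx, ExampleCB *acb)
    {
        acb->bh = aio_bh_new(ctx, example_bh_cb, acb);
        qemu_bh_schedule(acb->bh);
    }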

Reported-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1438086628-13000-1-git-send-email-pbonzini@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 3db139b..6106e46 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -223,8 +223,8 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     virtio_blk_data_plane_stop(s);
     blk_op_unblock_all(s->conf->conf.blk, s->blocker);
     error_free(s->blocker);
-    object_unref(OBJECT(s->iothread));
     qemu_bh_delete(s->bh);
+    object_unref(OBJECT(s->iothread));
     g_free(s);
 }
 
-- 
2.4.3
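
As an editorial note on the hunk above: the reordering matters because
object_unref() may drop the last reference to the iothread, whose
finalizer releases the AioContext that owns s->bh (and, with patch 2/3
of this series, frees any bottom halves still attached to it).  Below
is the patched function again with the reasoning spelled out; the
comments are editorial and not part of the source:

    void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
    {
        /* ... earlier teardown elided ... */
        virtio_blk_data_plane_stop(s);
        blk_op_unblock_all(s->conf->conf.blk, s->blocker);
        error_free(s->blocker);

        /* Delete the BH while the AioContext that owns it is still
         * alive... */
        qemu_bh_delete(s->bh);

        /* ...because this unref may drop the iothread's last
         * reference; the AioContext is then freed, and deleting s->bh
         * afterwards would touch freed memory. */
        object_unref(OBJECT(s->iothread));
        g_free(s);
    }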

Thread overview: 6+ messages
2015-07-28 16:34 [Qemu-devel] [PATCH for-2.4 v3 0/3] AioContext: fix deadlock after aio_context_acquire() race Paolo Bonzini
2015-07-28 16:34 ` [Qemu-devel] [PATCH for-2.4 v3 1/3] virtio-blk-dataplane: delete bottom half before the AioContext is freed Paolo Bonzini [this message]
2015-07-28 16:34 ` [Qemu-devel] [PATCH for-2.4 v3 2/3] AioContext: avoid leaking BHs on cleanup Paolo Bonzini
2015-07-28 16:34 ` [Qemu-devel] [PATCH for-2.4 v3 3/3] AioContext: force event loop iteration using BH Paolo Bonzini
2015-07-29  9:02 ` [Qemu-devel] [PATCH for-2.4 v3 0/3] AioContext: fix deadlock after aio_context_acquire() race Stefan Hajnoczi
2015-07-29  9:08 ` Cornelia Huck
