From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:49371)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Wswmt-0004EL-4q for qemu-devel@nongnu.org;
	Fri, 06 Jun 2014 12:14:45 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1Wswml-0004Pn-M5 for qemu-devel@nongnu.org;
	Fri, 06 Jun 2014 12:14:39 -0400
Received: from mx1.redhat.com ([209.132.183.28]:59685)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1Wswml-0004Ph-D3 for qemu-devel@nongnu.org;
	Fri, 06 Jun 2014 12:14:31 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Fri, 6 Jun 2014 18:13:30 +0200
Message-Id: <1402071243-16702-10-git-send-email-stefanha@redhat.com>
In-Reply-To: <1402071243-16702-1-git-send-email-stefanha@redhat.com>
References: <1402071243-16702-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL 09/42] gluster: use BlockDriverState's AioContext
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Stefan Hajnoczi, Bharata B Rao

Drop the assumption that we're using the main AioContext.  Use
aio_bh_new() instead of qemu_bh_new().

The .bdrv_detach_aio_context() and .bdrv_attach_aio_context()
interfaces are not needed since no fd handlers, timers, or BHs stay
registered when requests have been drained.

Cc: Bharata B Rao
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/gluster.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/gluster.c b/block/gluster.c
index d0726ec..114689e 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -16,6 +16,7 @@ typedef struct GlusterAIOCB {
     int ret;
     QEMUBH *bh;
     Coroutine *coroutine;
+    AioContext *aio_context;
 } GlusterAIOCB;
 
 typedef struct BDRVGlusterState {
@@ -249,7 +250,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         acb->ret = -EIO; /* Partial read/write - fail it */
     }
 
-    acb->bh = qemu_bh_new(qemu_gluster_complete_aio, acb);
+    acb->bh = aio_bh_new(acb->aio_context, qemu_gluster_complete_aio, acb);
     qemu_bh_schedule(acb->bh);
 }
 
@@ -436,6 +437,7 @@ static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
     acb->size = size;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     ret = glfs_zerofill_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
     if (ret < 0) {
@@ -549,6 +551,7 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
     acb->size = size;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     if (write) {
         ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
@@ -605,6 +608,7 @@ static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
     acb->size = 0;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
     if (ret < 0) {
@@ -633,6 +637,7 @@ static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
     acb->size = 0;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     ret = glfs_discard_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
     if (ret < 0) {
-- 
1.9.3
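
The pattern behind this patch: glfs_*_async() completion callbacks run
in a GlusterFS thread, so gluster_finish_aiocb() cannot re-enter the
request coroutine directly; it defers completion to a bottom half, and
the fix is to create that BH in the AioContext captured at submission
time instead of the global main loop. The diff does not include the
body of qemu_gluster_complete_aio(); the sketch below reconstructs it
from the usual QEMU 2.x coroutine-completion idiom and is an assumed
illustration, not necessarily the exact code in block/gluster.c:

/* Assumed reconstruction of the completion BH (QEMU 2.x APIs).
 * Runs in acb->aio_context, i.e. in the IOThread that owns the
 * BlockDriverState when dataplane is in use. */
static void qemu_gluster_complete_aio(void *opaque)
{
    GlusterAIOCB *acb = opaque;

    qemu_bh_delete(acb->bh);    /* one-shot: BH is gone after this */
    acb->bh = NULL;

    /* Resume the request coroutine that is waiting in
     * qemu_gluster_co_rw() and friends; the two-argument form is the
     * 2014-era signature of qemu_coroutine_enter(). */
    qemu_coroutine_enter(acb->coroutine, NULL);
}

Because the BH is deleted before the coroutine resumes and the driver
registers no fd handlers or timers, nothing stays attached to the
AioContext once requests are drained, which is why the commit message
notes that .bdrv_attach_aio_context()/.bdrv_detach_aio_context()
callbacks are unnecessary here.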