From: Stefan Hajnoczi
Date: Thu, 8 May 2014 16:34:41 +0200
Message-Id: <1399559698-31900-9-git-send-email-stefanha@redhat.com>
In-Reply-To: <1399559698-31900-1-git-send-email-stefanha@redhat.com>
References: <1399559698-31900-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PATCH v3 08/25] gluster: use BlockDriverState's AioContext
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, Paolo Bonzini, Bharata B Rao, Stefan Hajnoczi, Christian Borntraeger

Drop the assumption that we're using the main AioContext.  Use
aio_bh_new() instead of qemu_bh_new().

The .bdrv_detach_aio_context() and .bdrv_attach_aio_context()
interfaces are not needed since no fd handlers, timers, or BHs stay
registered when requests have been drained.

Cc: Bharata B Rao
Signed-off-by: Stefan Hajnoczi
---
 block/gluster.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/block/gluster.c b/block/gluster.c
index 8836085..b358bdc 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -16,6 +16,7 @@ typedef struct GlusterAIOCB {
     int ret;
     QEMUBH *bh;
     Coroutine *coroutine;
+    AioContext *aio_context;
 } GlusterAIOCB;
 
 typedef struct BDRVGlusterState {
@@ -244,7 +245,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         acb->ret = -EIO; /* Partial read/write - fail it */
     }
 
-    acb->bh = qemu_bh_new(qemu_gluster_complete_aio, acb);
+    acb->bh = aio_bh_new(acb->aio_context, qemu_gluster_complete_aio, acb);
     qemu_bh_schedule(acb->bh);
 }
 
@@ -431,6 +432,7 @@ static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
     acb->size = size;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     ret = glfs_zerofill_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
     if (ret < 0) {
@@ -544,6 +546,7 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
     acb->size = size;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     if (write) {
         ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
@@ -600,6 +603,7 @@ static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
     acb->size = 0;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
     if (ret < 0) {
@@ -628,6 +632,7 @@ static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
     acb->size = 0;
     acb->ret = 0;
     acb->coroutine = qemu_coroutine_self();
+    acb->aio_context = bdrv_get_aio_context(bs);
 
     ret = glfs_discard_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
     if (ret < 0) {
-- 
1.9.0
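
[Editor's note: for readers who would rather not reassemble the pattern from
the hunks above, the following is a minimal sketch of what the patch does in
one place.  It is not taken from block/gluster.c; the My*/my_* names are
hypothetical, and it assumes QEMU's 2.x-era internal APIs (aio_bh_new(),
bdrv_get_aio_context(), the coroutine helpers).  The request records the
BlockDriverState's AioContext when it is submitted, and the completion path
creates its BH in that context instead of in the global main loop.]

#include "qemu-common.h"
#include "block/block_int.h"

typedef struct MyAIOCB {
    int ret;
    QEMUBH *bh;
    Coroutine *coroutine;
    AioContext *aio_context;    /* captured when the request is issued */
} MyAIOCB;

static void my_complete_bh(void *opaque)
{
    MyAIOCB *acb = opaque;

    qemu_bh_delete(acb->bh);
    acb->bh = NULL;
    qemu_coroutine_enter(acb->coroutine, NULL); /* wake the waiting coroutine */
}

/* Runs in the library's completion thread, outside any QEMU event loop */
static void my_finish(MyAIOCB *acb, ssize_t ret)
{
    acb->ret = ret < 0 ? -EIO : 0;

    /* Schedule completion in the BDS's event loop (main loop or IOThread),
     * rather than unconditionally in the main loop via qemu_bh_new(). */
    acb->bh = aio_bh_new(acb->aio_context, my_complete_bh, acb);
    qemu_bh_schedule(acb->bh);
}

static coroutine_fn int my_co_request(BlockDriverState *bs, MyAIOCB *acb)
{
    acb->coroutine = qemu_coroutine_self();
    acb->aio_context = bdrv_get_aio_context(bs);

    /* ... hand acb to the asynchronous backend here ... */

    qemu_coroutine_yield();     /* resumed by my_complete_bh() */
    return acb->ret;
}

[Because nothing except the per-request BH is ever registered with the
AioContext, there is no per-device state to move when the context changes,
which is why no .bdrv_attach_aio_context()/.bdrv_detach_aio_context()
callbacks are needed here.]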