From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:59937)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1TYi03-0008AN-R2
	for qemu-devel@nongnu.org; Wed, 14 Nov 2012 13:47:50 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1TYi00-0000Sw-M6
	for qemu-devel@nongnu.org; Wed, 14 Nov 2012 13:47:47 -0500
Received: from mx1.redhat.com ([209.132.183.28]:42219)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1TYi00-0000Sk-Ce
	for qemu-devel@nongnu.org; Wed, 14 Nov 2012 13:47:44 -0500
From: Kevin Wolf
Date: Wed, 14 Nov 2012 19:47:09 +0100
Message-Id: <1352918847-3696-9-git-send-email-kwolf@redhat.com>
In-Reply-To: <1352918847-3696-1-git-send-email-kwolf@redhat.com>
References: <1352918847-3696-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 08/26] aio: use g_slice_alloc() for AIOCB pooling
To: anthony@codemonkey.ws
Cc: kwolf@redhat.com, qemu-devel@nongnu.org

From: Stefan Hajnoczi

AIO control blocks are frequently acquired and released because each aio
request involves at least one AIOCB.  Therefore, we pool them to avoid heap
allocation overhead.

The problem with the freelist approach in AIOPool is thread-safety.  If we
want BlockDriverStates to associate with AioContexts that execute in
multiple threads, then a global freelist becomes a problem.

This patch drops the freelist and instead uses g_slice_alloc(), which is
tuned for per-thread fixed-size object pools.  qemu_aio_get() and
qemu_aio_release() are now thread-safe.

Note that the change from g_malloc0() to g_slice_alloc() should be safe
since the freelist reuse case doesn't zero the AIOCB either.

Signed-off-by: Stefan Hajnoczi
Reviewed-by: Paolo Bonzini
Signed-off-by: Kevin Wolf
---
 block.c    | 15 ++++-----------
 qemu-aio.h |  2 --
 2 files changed, 4 insertions(+), 13 deletions(-)

diff --git a/block.c b/block.c
index da1fdca..ea0f7d8 100644
--- a/block.c
+++ b/block.c
@@ -3909,13 +3909,8 @@ void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
 {
     BlockDriverAIOCB *acb;
 
-    if (pool->free_aiocb) {
-        acb = pool->free_aiocb;
-        pool->free_aiocb = acb->next;
-    } else {
-        acb = g_malloc0(pool->aiocb_size);
-        acb->pool = pool;
-    }
+    acb = g_slice_alloc(pool->aiocb_size);
+    acb->pool = pool;
     acb->bs = bs;
     acb->cb = cb;
     acb->opaque = opaque;
@@ -3924,10 +3919,8 @@ void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
 
 void qemu_aio_release(void *p)
 {
-    BlockDriverAIOCB *acb = (BlockDriverAIOCB *)p;
-    AIOPool *pool = acb->pool;
-    acb->next = pool->free_aiocb;
-    pool->free_aiocb = acb;
+    BlockDriverAIOCB *acb = p;
+    g_slice_free1(acb->pool->aiocb_size, acb);
 }
 
 /**************************************************************/
diff --git a/qemu-aio.h b/qemu-aio.h
index 111b0b3..b29c509 100644
--- a/qemu-aio.h
+++ b/qemu-aio.h
@@ -24,7 +24,6 @@ typedef void BlockDriverCompletionFunc(void *opaque, int ret);
 typedef struct AIOPool {
     void (*cancel)(BlockDriverAIOCB *acb);
     size_t aiocb_size;
-    BlockDriverAIOCB *free_aiocb;
 } AIOPool;
 
 struct BlockDriverAIOCB {
@@ -32,7 +31,6 @@ struct BlockDriverAIOCB {
     BlockDriverState *bs;
     BlockDriverCompletionFunc *cb;
     void *opaque;
-    BlockDriverAIOCB *next;
 };
 
 void *qemu_aio_get(AIOPool *pool, BlockDriverState *bs,
-- 
1.7.6.5
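
[Editorial aside, not part of the patch] For readers unfamiliar with the GLib
slice allocator, the following is a minimal standalone sketch of the
allocation pattern that qemu_aio_get()/qemu_aio_release() switch to above.
The MyAIOCB struct and the cb_get()/cb_release() helpers are hypothetical,
illustration-only names; only g_slice_alloc() and g_slice_free1() are real
GLib calls.

/* Illustrative sketch only -- not QEMU code.  Shows the g_slice_alloc()/
 * g_slice_free1() pairing that replaces the AIOPool freelist.
 * Build with: gcc demo.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <stdio.h>

/* Hypothetical fixed-size control block, standing in for an AIOCB. */
typedef struct {
    size_t size;     /* mirrors AIOPool.aiocb_size: needed again at free time */
    void *opaque;
} MyAIOCB;

static MyAIOCB *cb_get(size_t size, void *opaque)
{
    /* g_slice_alloc() draws from per-thread magazines, so no global lock or
     * shared freelist is involved.  The memory is NOT zeroed (that would be
     * g_slice_alloc0()), matching the old freelist-reuse behaviour. */
    MyAIOCB *acb = g_slice_alloc(size);
    acb->size = size;
    acb->opaque = opaque;
    return acb;
}

static void cb_release(MyAIOCB *acb)
{
    /* The slice allocator needs the original allocation size back, which is
     * why the patch keeps aiocb_size reachable through acb->pool. */
    g_slice_free1(acb->size, acb);
}

int main(void)
{
    MyAIOCB *acb = cb_get(sizeof(*acb), NULL);
    printf("allocated %zu-byte control block at %p\n", acb->size, (void *)acb);
    cb_release(acb);
    return 0;
}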