Message-ID: <55536C6B.4040400@redhat.com>
Date: Wed, 13 May 2015 17:23:23 +0200
From: Paolo Bonzini
In-Reply-To: <1431530311-21647-1-git-send-email-yarygin@linux.vnet.ibm.com>
Subject: Re: [Qemu-devel] [PATCH] block: Let bdrv_drain_all() to call aio_poll() for each AioContext
To: Alexander Yarygin, qemu-devel@nongnu.org
Cc: Cornelia Huck, Christian Borntraeger, Kevin Wolf, Stefan Hajnoczi, qemu-block@nongnu.org

On 13/05/2015 17:18, Alexander Yarygin wrote:
> After commit 9b536adc ("block: acquire AioContext in bdrv_drain_all()"),
> the aio_poll() function gets called for every BlockDriverState, on the
> assumption that every device may have its own AioContext. The
> bdrv_drain_all() function is called in each virtio_reset() call,

... which should actually call bdrv_drain(). Can you fix that?

> which in turn is called for every virtio-blk device on initialization,
> so aio_poll() gets called 'length(device_list)^2' times.
>
> If we have thousands of disks attached, there are a lot of
> BlockDriverStates but only a few AioContexts, leading to tons of
> unnecessary aio_poll() calls. For example, startup with 1000 disks
> takes over 13 minutes.
>
> This patch changes the bdrv_drain_all() function, allowing it to find
> shared AioContexts and to call aio_poll() only for unique ones. This
> results in much better startup times; e.g. 1000 disks come up within
> 5 seconds.

I'm not sure this patch is correct. You may have to call aio_poll()
multiple times before a BlockDriverState is drained.
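
To make that concrete (only a rough sketch of the idea, not code from
the tree, and the helper name is made up): whatever the final shape of
the loop is, each AioContext needs something along these lines

    #include "block/aio.h"   /* AioContext, aio_poll() */

    /* Sketch: poll one context until aio_poll() stops reporting
     * progress.  A completion can schedule more work (bottom halves,
     * new requests), so a single aio_poll() call per context per pass
     * is not necessarily enough.  Non-blocking polling is used here
     * only to keep the sketch simple. */
    static void poll_until_quiescent(AioContext *ctx)
    {
        while (aio_poll(ctx, false)) {
            /* keep polling while progress is being made */
        }
    }

together with re-checking bdrv_requests_pending() on the
BlockDriverStates in that context between passes.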
Paolo

> Cc: Christian Borntraeger
> Cc: Cornelia Huck
> Cc: Kevin Wolf
> Cc: Paolo Bonzini
> Cc: Stefan Hajnoczi
> Signed-off-by: Alexander Yarygin
> ---
>  block.c | 13 +++++++++++--
>  1 file changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/block.c b/block.c
> index f2f8ae7..7414815 100644
> --- a/block.c
> +++ b/block.c
> @@ -1994,7 +1994,6 @@ static bool bdrv_drain_one(BlockDriverState *bs)
>      bdrv_flush_io_queue(bs);
>      bdrv_start_throttled_reqs(bs);
>      bs_busy = bdrv_requests_pending(bs);
> -    bs_busy |= aio_poll(bdrv_get_aio_context(bs), bs_busy);
>      return bs_busy;
>  }
>
> @@ -2010,8 +2009,12 @@ static bool bdrv_drain_one(BlockDriverState *bs)
>   */
>  void bdrv_drain(BlockDriverState *bs)
>  {
> -    while (bdrv_drain_one(bs)) {
> +    bool busy = true;
> +
> +    while (busy) {
>          /* Keep iterating */
> +        busy = bdrv_drain_one(bs);
> +        busy |= aio_poll(bdrv_get_aio_context(bs), busy);
>      }
>  }
>
> @@ -2032,6 +2035,7 @@ void bdrv_drain_all(void)
>      /* Always run first iteration so any pending completion BHs run */
>      bool busy = true;
>      BlockDriverState *bs;
> +    GList *aio_ctxs = NULL;
>
>      while (busy) {
>          busy = false;
> @@ -2041,9 +2045,14 @@ void bdrv_drain_all(void)
>
>              aio_context_acquire(aio_context);
>              busy |= bdrv_drain_one(bs);
> +            if (!aio_ctxs || !g_list_find(aio_ctxs, aio_context)) {
> +                busy |= aio_poll(aio_context, busy);
> +                aio_ctxs = g_list_append(aio_ctxs, aio_context);
> +            }
>              aio_context_release(aio_context);
>          }
>      }
> +    g_list_free(aio_ctxs);
>  }
>
>  /* make a BlockDriverState anonymous by removing from bdrv_state and
>
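
(For whatever it's worth, here is roughly how I would picture the
"one aio_poll() per AioContext" idea combined with polling each
context until it stops making progress, as above.  This is only a
sketch, not the patch above and not code from the tree; the
bdrv_next() iteration is a simplification and the per-BDS drain
steps are left out.)

    #include <glib.h>
    #include "block/aio.h"
    #include "block/block.h"

    /* Sketch: collect the distinct AioContexts once, then poll per
     * context rather than per BlockDriverState on every pass of the
     * drain loop.  The per-BDS steps (flushing the I/O queue,
     * restarting throttled requests) are omitted. */
    static void drain_all_sketch(void)
    {
        GSList *ctxs = NULL, *it;
        BlockDriverState *bs = NULL;
        bool busy = true;

        while ((bs = bdrv_next(bs)) != NULL) {
            AioContext *ctx = bdrv_get_aio_context(bs);

            if (!g_slist_find(ctxs, ctx)) {
                ctxs = g_slist_prepend(ctxs, ctx);
            }
        }

        while (busy) {
            busy = false;
            for (it = ctxs; it; it = it->next) {
                AioContext *ctx = it->data;

                aio_context_acquire(ctx);
                /* poll until this context stops making progress */
                while (aio_poll(ctx, false)) {
                    busy = true;
                }
                aio_context_release(ctx);
            }
        }
        g_slist_free(ctxs);
    }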