From: Alexander Yarygin
Date: Thu, 14 May 2015 17:29:21 +0300
Subject: Re: [Qemu-devel] [PATCH] block: Let bdrv_drain_all() to call aio_poll() for each AioContext
To: Paolo Bonzini
Cc: Cornelia Huck, Christian Borntraeger, Stefan Hajnoczi, qemu-devel@nongnu.org, qemu-block@nongnu.org
Message-ID: <878ucr9qri.fsf@linux.vnet.ibm.com>
In-Reply-To: <55548F90.8010709@redhat.com> (Paolo Bonzini's message of "Thu, 14 May 2015 14:05:36 +0200")
References: <1431530311-21647-1-git-send-email-yarygin@linux.vnet.ibm.com> <55536C6B.4040400@redhat.com> <87vbfw77xb.fsf@linux.vnet.ibm.com> <55548F90.8010709@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain

Paolo Bonzini writes:

> On 13/05/2015 18:34, Alexander Yarygin wrote:
>> Ah, right.
>> We need a second loop, something like this:
>>
>> @@ -2030,20 +2033,33 @@ void bdrv_drain(BlockDriverState *bs)
>>  void bdrv_drain_all(void)
>>  {
>>      /* Always run first iteration so any pending completion BHs run */
>> -    bool busy = true;
>> +    bool busy = true, pending = false;
>>      BlockDriverState *bs;
>> +    GList *aio_ctxs = NULL, *ctx;
>> +    AioContext *aio_context;
>>
>>      while (busy) {
>>          busy = false;
>>
>>          QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
>> -            AioContext *aio_context = bdrv_get_aio_context(bs);
>> +            aio_context = bdrv_get_aio_context(bs);
>>
>>              aio_context_acquire(aio_context);
>>              busy |= bdrv_drain_one(bs);
>>              aio_context_release(aio_context);
>> +            if (!aio_ctxs || !g_list_find(aio_ctxs, aio_context)) {
>> +                aio_ctxs = g_list_append(aio_ctxs, aio_context);
>> +            }
>>          }
>> +        pending = busy;
>> +
>> +        for (ctx = aio_ctxs; ctx != NULL; ctx = ctx->next) {
>> +            aio_context = ctx->data;
>> +            aio_context_acquire(aio_context);
>> +            busy |= aio_poll(aio_context, pending);
>> +            aio_context_release(aio_context);
>> +        }
>>      }
>> +    g_list_free(aio_ctxs);
>>  }
>>
>> That looks quite ugly to me and breaks the consistency of
>> bdrv_drain_one(), since it doesn't call aio_poll() anymore...
>
> It's not ugly. After your patch bdrv_drain_one doesn't call aio_poll,
> while bdrv_drain and bdrv_drain_all call bdrv_drain_one + aio_poll.
> All callers of bdrv_drain_one are consistent.
>
> Perhaps you can rename bdrv_drain_one to bdrv_flush_io_queue (inlining
> the existing bdrv_flush_io_queue into it)? That would work very well
> for me.
>
> Paolo

Hmm, bdrv_flush_io_queue() is public, but has no users. How about a
different name, maybe something like "bdrv_drain_requests_one"?
Otherwise, here is a patch for the rename to bdrv_flush_io_queue()
below. If it's ok, I will respin the whole patch. Thanks.
--- a/block.c
+++ b/block.c
@@ -1987,11 +1987,17 @@ static bool bdrv_requests_pending(BlockDriverState *bs)
     return false;
 }
 
-static bool bdrv_drain_one(BlockDriverState *bs)
+static bool bdrv_flush_io_queue(BlockDriverState *bs)
 {
     bool bs_busy;
+    BlockDriver *drv = bs->drv;
+
+    if (drv && drv->bdrv_flush_io_queue) {
+        drv->bdrv_flush_io_queue(bs);
+    } else if (bs->file) {
+        bdrv_flush_io_queue(bs->file);
+    }
 
-    bdrv_flush_io_queue(bs);
     bdrv_start_throttled_reqs(bs);
     bs_busy = bdrv_requests_pending(bs);
     return bs_busy;
@@ -2013,7 +2019,7 @@ void bdrv_drain(BlockDriverState *bs)
 
     while (busy) {
         /* Keep iterating */
-        busy = bdrv_drain_one(bs);
+        busy = bdrv_flush_io_queue(bs);
         busy |= aio_poll(bdrv_get_aio_context(bs), busy);
     }
 }
@@ -2044,7 +2050,7 @@ void bdrv_drain_all(void)
             AioContext *aio_context = bdrv_get_aio_context(bs);
 
             aio_context_acquire(aio_context);
-            busy |= bdrv_drain_one(bs);
+            busy |= bdrv_flush_io_queue(bs);
             if (!aio_ctxs || !g_slist_find(aio_ctxs, aio_context)) {
                 busy |= aio_poll(aio_context, busy);
                 aio_ctxs = g_slist_prepend(aio_ctxs, aio_context);
@@ -6088,16 +6094,6 @@ void bdrv_io_unplug(BlockDriverState *bs)
     }
 }
 
-void bdrv_flush_io_queue(BlockDriverState *bs)
-{
-    BlockDriver *drv = bs->drv;
-    if (drv && drv->bdrv_flush_io_queue) {
-        drv->bdrv_flush_io_queue(bs);
-    } else if (bs->file) {
-        bdrv_flush_io_queue(bs->file);
-    }
-}
-
 static bool append_open_options(QDict *d, BlockDriverState *bs)
 {
     const QDictEntry *entry;
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -565,7 +565,6 @@ int bdrv_probe_geometry(BlockDriverState *bs, HDGeometry *geo);
 
 void bdrv_io_plug(BlockDriverState *bs);
 void bdrv_io_unplug(BlockDriverState *bs);
-void bdrv_flush_io_queue(BlockDriverState *bs);
 
 BlockAcctStats *bdrv_get_stats(BlockDriverState *bs);