Date: Fri, 14 Jun 2013 12:58:24 +0200
From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH v3 01/17] block: stop relying on io_flush() in bdrv_drain_all()
To: Paolo Bonzini
Cc: Kevin Wolf, Anthony Liguori, Ping Fan Liu, qemu-devel@nongnu.org, Stefan Hajnoczi

On Thu, Jun 13, 2013 at 10:33:58AM -0400, Paolo Bonzini wrote:
> On 10/06/2013 10:38, Stefan Hajnoczi wrote:
> > On Mon, Jun 10, 2013 at 02:25:57PM +0200, Stefan Hajnoczi wrote:
> >> @@ -1427,26 +1456,18 @@ void bdrv_close_all(void)
> >>  void bdrv_drain_all(void)
> >>  {
> >>      BlockDriverState *bs;
> >> -    bool busy;
> >> -
> >> -    do {
> >> -        busy = qemu_aio_wait();
> >>
> >> +    while (bdrv_requests_pending_all()) {
> >>          /* FIXME: We do not have timer support here, so this is effectively
> >>           * a busy wait.
> >>           */
> >>          QTAILQ_FOREACH(bs, &bdrv_states, list) {
> >>              if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
> >>                  qemu_co_queue_restart_all(&bs->throttled_reqs);
> >> -                busy = true;
> >>              }
> >>          }
> >> -    } while (busy);
> >>
> >> -    /* If requests are still pending there is a bug somewhere */
> >> -    QTAILQ_FOREACH(bs, &bdrv_states, list) {
> >> -        assert(QLIST_EMPTY(&bs->tracked_requests));
> >> -        assert(qemu_co_queue_empty(&bs->throttled_reqs));
> >> +        qemu_aio_wait();
> >>      }
> >>  }
> >
> > tests/ide-test found an issue here: block.c invokes callbacks from a BH
> > so we may not yet have completed the request when this loop terminates.
> >
> > Kevin: can you fold in this patch?
> >
> > diff --git a/block.c b/block.c
> > index 31f7231..e176215 100644
> > --- a/block.c
> > +++ b/block.c
> > @@ -1469,6 +1469,9 @@ void bdrv_drain_all(void)
> >
> >          qemu_aio_wait();
> >      }
> > +
> > +    /* Process pending completion BHs */
> > +    aio_poll(qemu_get_aio_context(), false);
> >  }
>
> aio_poll() could require multiple iterations; that is why the old code
> started with "busy = qemu_aio_wait()".  You can add a while loop around
> the new call, or do something closer to the old code, along the lines of
> "do { QTAILQ_FOREACH(...) ...; busy = bdrv_requests_pending_all();
> busy |= aio_poll(qemu_get_aio_context(), busy); } while (busy);".

Good idea; the trick is to use busy as the blocking flag.

I will resend with the fix to avoid making this email thread too messy.

Stefan
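
P.S. For concreteness, an untested sketch of the combined loop I have in
mind, along the lines you suggested (bdrv_requests_pending_all() is the
helper introduced earlier in this series):

    void bdrv_drain_all(void)
    {
        BlockDriverState *bs;
        /* Always run at least one iteration so pending completion BHs
         * are processed even when no requests are in flight.
         */
        bool busy = true;

        while (busy) {
            /* FIXME: We do not have timer support here, so this is
             * effectively a busy wait.
             */
            QTAILQ_FOREACH(bs, &bdrv_states, list) {
                if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                    qemu_co_queue_restart_all(&bs->throttled_reqs);
                }
            }

            busy = bdrv_requests_pending_all();
            /* Block in aio_poll() only while requests are still pending;
             * the final non-blocking call just flushes completion BHs.
             */
            busy |= aio_poll(qemu_get_aio_context(), busy);
        }
    }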