From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:36065)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1UQJiN-0005tE-Kh
	for qemu-devel@nongnu.org; Thu, 11 Apr 2013 11:47:10 -0400
Received: from Debian-exim by eggs.gnu.org
	with spam-scanned (Exim 4.71) (envelope-from )
	id 1UQJiH-0003CX-9X
	for qemu-devel@nongnu.org; Thu, 11 Apr 2013 11:47:07 -0400
Received: from mx1.redhat.com ([209.132.183.28]:39778)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1UQJiH-0003Bz-0m
	for qemu-devel@nongnu.org; Thu, 11 Apr 2013 11:47:01 -0400
From: Stefan Hajnoczi
Date: Thu, 11 Apr 2013 17:44:33 +0200
Message-Id: <1365695085-27970-2-git-send-email-stefanha@redhat.com>
In-Reply-To: <1365695085-27970-1-git-send-email-stefanha@redhat.com>
References: <1365695085-27970-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [RFC 01/13] block: stop relying on io_flush() in
 bdrv_drain_all()
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org
Cc: Kevin Wolf , Paolo Bonzini , Anthony Liguori ,
 pingfank@linux.vnet.ibm.com

If a block driver has no file descriptors to monitor but there are still
active requests, it can return 1 from .io_flush().  This is used to spin
during synchronous I/O.

Stop relying on .io_flush() and instead check
QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
requests.

This is the first step in removing .io_flush() so that event loops no
longer need to have the concept of synchronous I/O.  Eventually we may be
able to kill synchronous I/O completely by running everything in a
coroutine, but that is future work.
Signed-off-by: Stefan Hajnoczi
---
 block.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/block.c b/block.c
index 602d8a4..93d676b 100644
--- a/block.c
+++ b/block.c
@@ -1360,6 +1360,22 @@ void bdrv_close_all(void)
     }
 }
 
+/* Check if any requests are in-flight (including throttled requests) */
+static bool bdrv_requests_pending(void)
+{
+    BlockDriverState *bs;
+
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        if (!QLIST_EMPTY(&bs->tracked_requests)) {
+            return true;
+        }
+        if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+            return true;
+        }
+    }
+    return false;
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -1375,26 +1391,18 @@ void bdrv_close_all(void)
 void bdrv_drain_all(void)
 {
     BlockDriverState *bs;
-    bool busy;
-
-    do {
-        busy = qemu_aio_wait();
 
+    while (bdrv_requests_pending()) {
         /* FIXME: We do not have timer support here, so this is effectively
          * a busy wait.
          */
         QTAILQ_FOREACH(bs, &bdrv_states, list) {
             if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                 qemu_co_queue_restart_all(&bs->throttled_reqs);
-                busy = true;
             }
         }
-    } while (busy);
 
-    /* If requests are still pending there is a bug somewhere */
-    QTAILQ_FOREACH(bs, &bdrv_states, list) {
-        assert(QLIST_EMPTY(&bs->tracked_requests));
-        assert(qemu_co_queue_empty(&bs->throttled_reqs));
+        qemu_aio_wait();
     }
 }
-- 
1.8.1.4