From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:49916)
	by lists.gnu.org with esmtp (Exim 4.71)
	(envelope-from ) id 1SSRsO-0000k5-4q
	for qemu-devel@nongnu.org; Thu, 10 May 2012 07:49:49 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1SSRsK-0006At-C1
	for qemu-devel@nongnu.org; Thu, 10 May 2012 07:49:43 -0400
Received: from mx1.redhat.com ([209.132.183.28]:37006)
	by eggs.gnu.org with esmtp (Exim 4.71)
	(envelope-from ) id 1SSRsK-0006AO-2W
	for qemu-devel@nongnu.org; Thu, 10 May 2012 07:49:40 -0400
From: Kevin Wolf
Date: Thu, 10 May 2012 13:49:05 +0200
Message-Id: <1336650574-12835-2-git-send-email-kwolf@redhat.com>
In-Reply-To: <1336650574-12835-1-git-send-email-kwolf@redhat.com>
References: <1336650574-12835-1-git-send-email-kwolf@redhat.com>
Subject: [Qemu-devel] [PATCH 01/30] block: add the support to drain throttled requests
List-Id: 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: anthony@codemonkey.ws
Cc: kwolf@redhat.com, qemu-devel@nongnu.org

From: Zhi Yong Wu

Signed-off-by: Zhi Yong Wu
[ Iterate until all block devices have processed all requests,
  add comments. - Paolo ]
Signed-off-by: Paolo Bonzini
Signed-off-by: Kevin Wolf
---
 block.c |   21 ++++++++++++++++++++-
 1 files changed, 20 insertions(+), 1 deletions(-)

diff --git a/block.c b/block.c
index ee7d8f2..a307fe1 100644
--- a/block.c
+++ b/block.c
@@ -906,12 +906,31 @@ void bdrv_close_all(void)
  *
  * This function does not flush data to disk, use bdrv_flush_all() for that
  * after calling this function.
+ *
+ * Note that completion of an asynchronous I/O operation can trigger any
+ * number of other I/O operations on other devices---for example a coroutine
+ * can be arbitrarily complex and a constant flow of I/O can come until the
+ * coroutine is complete.  Because of this, it is not possible to have a
+ * function to drain a single device's I/O queue.
  */
 void bdrv_drain_all(void)
 {
     BlockDriverState *bs;
+    bool busy;
 
-    qemu_aio_flush();
+    do {
+        busy = qemu_aio_wait();
+
+        /* FIXME: We do not have timer support here, so this is effectively
+         * a busy wait.
+         */
+        QTAILQ_FOREACH(bs, &bdrv_states, list) {
+            if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+                qemu_co_queue_restart_all(&bs->throttled_reqs);
+                busy = true;
+            }
+        }
+    } while (busy);
 
     /* If requests are still pending there is a bug somewhere */
     QTAILQ_FOREACH(bs, &bdrv_states, list) {
-- 
1.7.6.5