From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stefan Hajnoczi <stefanha@redhat.com>
Date: Mon, 3 Nov 2014 11:50:51 +0000
Message-Id: <1415015456-25086-49-git-send-email-stefanha@redhat.com>
In-Reply-To: <1415015456-25086-1-git-send-email-stefanha@redhat.com>
References: <1415015456-25086-1-git-send-email-stefanha@redhat.com>
Subject: [Qemu-devel] [PULL 48/53] block: add bdrv_drain()
To: qemu-devel@nongnu.org
Cc: Peter Maydell, Stefan Hajnoczi <stefanha@redhat.com>

Now that op blockers are in use, we can ensure that no other sources
are generating I/O on a BlockDriverState.  Therefore it is possible to
drain requests for a single BDS.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Max Reitz
Message-id: 1413889440-32577-7-git-send-email-stefanha@redhat.com
---
 block.c               | 36 +++++++++++++++++++++++++++++-------
 include/block/block.h |  1 +
 2 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/block.c b/block.c
index c5ff560..a909b9d 100644
--- a/block.c
+++ b/block.c
@@ -1904,6 +1904,34 @@ static bool bdrv_requests_pending(BlockDriverState *bs)
     return false;
 }
 
+static bool bdrv_drain_one(BlockDriverState *bs)
+{
+    bool bs_busy;
+
+    bdrv_flush_io_queue(bs);
+    bdrv_start_throttled_reqs(bs);
+    bs_busy = bdrv_requests_pending(bs);
+    bs_busy |= aio_poll(bdrv_get_aio_context(bs), bs_busy);
+    return bs_busy;
+}
+
+/*
+ * Wait for pending requests to complete on a single BlockDriverState subtree
+ *
+ * See the warning in bdrv_drain_all().  This function can only be called if
+ * you are sure nothing can generate I/O because you have op blockers
+ * installed.
+ *
+ * Note that unlike bdrv_drain_all(), the caller must hold the BlockDriverState
+ * AioContext.
+ */
+void bdrv_drain(BlockDriverState *bs)
+{
+    while (bdrv_drain_one(bs)) {
+        /* Keep iterating */
+    }
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -1927,16 +1955,10 @@ void bdrv_drain_all(void)
 
         QTAILQ_FOREACH(bs, &bdrv_states, device_list) {
             AioContext *aio_context = bdrv_get_aio_context(bs);
-            bool bs_busy;
 
             aio_context_acquire(aio_context);
-            bdrv_flush_io_queue(bs);
-            bdrv_start_throttled_reqs(bs);
-            bs_busy = bdrv_requests_pending(bs);
-            bs_busy |= aio_poll(aio_context, bs_busy);
+            busy |= bdrv_drain_one(bs);
            aio_context_release(aio_context);
-
-            busy |= bs_busy;
         }
     }
 }
diff --git a/include/block/block.h b/include/block/block.h
index 5d13282..13e4537 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -334,6 +334,7 @@ int bdrv_flush(BlockDriverState *bs);
 int coroutine_fn bdrv_co_flush(BlockDriverState *bs);
 int bdrv_flush_all(void);
 void bdrv_close_all(void);
+void bdrv_drain(BlockDriverState *bs);
 void bdrv_drain_all(void);
 
 int bdrv_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
-- 
1.9.3
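
The doc comment above spells out the calling convention: the caller must have op
blockers installed and must hold the BlockDriverState's AioContext. A minimal
sketch of a hypothetical call site follows; the helper name quiesce_device is
illustrative only and is not part of this patch:

    #include "block/block.h"
    #include "block/aio.h"

    /* Hypothetical helper: drain one device before operating on it.
     * Assumes op blockers already prevent other users from submitting
     * I/O, as the bdrv_drain() doc comment in this patch requires. */
    static void quiesce_device(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);

        /* Unlike bdrv_drain_all(), bdrv_drain() does not acquire the
         * AioContext itself, so the caller takes it explicitly. */
        aio_context_acquire(ctx);
        bdrv_drain(bs);  /* loops over bdrv_drain_one() until no requests remain */
        aio_context_release(ctx);
    }

Note how bdrv_drain_all() after this patch is just this same pattern applied to
every BlockDriverState, repeated until all of them report no pending requests.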