From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 13 May 2015 19:08:43 +0800
From: Fam Zheng
Message-ID: <20150513110843.GB30644@ad.nay.redhat.com>
References: <1431538099-3286-1-git-send-email-famz@redhat.com> <1431538099-3286-12-git-send-email-famz@redhat.com> <555326ED.3050609@redhat.com>
In-Reply-To: <555326ED.3050609@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 11/11] block: Block "device IO" during bdrv_drain and bdrv_drain_all
To: Paolo Bonzini
Cc: Kevin Wolf, qemu-block@nongnu.org, armbru@redhat.com, jcody@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, Stefan Hajnoczi

On Wed, 05/13 12:26, Paolo Bonzini wrote:
> 
> 
> On 13/05/2015 19:28, Fam Zheng wrote:
> > We don't want new requests from guest, so block the operation around the
> > nested poll.
> > 
> > Signed-off-by: Fam Zheng
> > ---
> >  block/io.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> > 
> > diff --git a/block/io.c b/block/io.c
> > index 1ce62c4..d369de3 100644
> > --- a/block/io.c
> > +++ b/block/io.c
> > @@ -289,9 +289,15 @@ static bool bdrv_drain_one(BlockDriverState *bs)
> >   */
> >  void bdrv_drain(BlockDriverState *bs)
> >  {
> > +    Error *blocker = NULL;
> > +
> > +    error_setg(&blocker, "bdrv_drain in progress");
> > +    bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> >      while (bdrv_drain_one(bs)) {
> >          /* Keep iterating */
> >      }
> > +    bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> > +    error_free(blocker);
> >  }
> >  
> >  /*
> > @@ -311,6 +317,9 @@ void bdrv_drain_all(void)
> >      /* Always run first iteration so any pending completion BHs run */
> >      bool busy = true;
> >      BlockDriverState *bs = NULL;
> > +    Error *blocker = NULL;
> > +
> > +    error_setg(&blocker, "bdrv_drain_all in progress");
> >  
> >      while ((bs = bdrv_next(bs))) {
> >          AioContext *aio_context = bdrv_get_aio_context(bs);
> > @@ -319,6 +328,7 @@ void bdrv_drain_all(void)
> >          if (bs->job) {
> >              block_job_pause(bs->job);
> >          }
> > +        bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> >          aio_context_release(aio_context);
> >      }
> >  
> > @@ -343,8 +353,10 @@ void bdrv_drain_all(void)
> >          if (bs->job) {
> >              block_job_resume(bs->job);
> >          }
> > +        bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> >          aio_context_release(aio_context);
> >      }
> > +    error_free(blocker);
> >  }
> >  
> >  /**
> > 
> 
> I think this isn't enough.  It's the callers of bdrv_drain and
> bdrv_drain_all that need to block before drain and unblock before
> aio_context_release.

Which callers do you mean? qmp_transaction is covered in this series.

Fam