From: Max Reitz <mreitz@redhat.com>
To: Fam Zheng <famz@redhat.com>, qemu-devel@nongnu.org
Cc: Kevin Wolf <kwolf@redhat.com>,
qemu-block@nongnu.org, jcody@redhat.com, armbru@redhat.com,
Stefan Hajnoczi <stefanha@redhat.com>,
amit.shah@redhat.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [PATCH v6 12/13] block: Block "device IO" during bdrv_drain and bdrv_drain_all
Date: Sat, 23 May 2015 19:11:54 +0200
Message-ID: <5560B4DA.8000908@redhat.com>
In-Reply-To: <1432190583-10518-13-git-send-email-famz@redhat.com>
On 21.05.2015 08:43, Fam Zheng wrote:
> We don't want new requests from guest, so block the operation around the
> nested poll.
>
> It also avoids looping forever when iothread is submitting a lot of requests.
>
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
> block/io.c | 22 ++++++++++++++++++++--
> 1 file changed, 20 insertions(+), 2 deletions(-)
Hm, I don't know about this. When I see someone calling
bdrv_drain()/bdrv_drain_all(), I expect every request to have been
drained afterwards. This patch implies that this is not necessarily the
case: apparently, in some configurations the guest can still submit I/O
while bdrv_drain() is running. But then the same problem exists even
after this patch, because I/O can still be submitted after
bdrv_op_unblock() and before the caller of bdrv_drain() has done
whatever it wanted to do while the BDS was still drained. So to me this
looks more like the caller must ensure that the BDS won't receive new
requests, and must do so before bdrv_drain() is called.
Maybe it works anyway because I'm once again just confused by qemu's
threading model, and the problem here is only that bdrv_drain_one() may
yield, which may result in new I/O requests being submitted. If no new
I/O can be submitted except at those yield points, I guess this patch
is good.
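For reference, my understanding of the device-side half of this, going
by the virtio-blk patch earlier in this series, is roughly the
following. The details are my own sketch, not the actual patch:

static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    VirtIOBlock *s = VIRTIO_BLK(vdev);

    /* While a "device IO" blocker is in place (e.g. during
     * bdrv_drain()), do not pick up new requests from the virtqueue;
     * they will be handled once the blocker has been removed. */
    if (bdrv_op_is_blocked(blk_bs(s->blk), BLOCK_OP_TYPE_DEVICE_IO, NULL)) {
        return;
    }

    /* ... normal request handling ... */
}

If that check is reliably hit whenever bdrv_drain_one() yields, then
the blocker indeed closes the window inside the drain loop that I was
worried about.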
Max
> diff --git a/block/io.c b/block/io.c
> index 1ce62c4..b23a83f 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -286,12 +286,21 @@ static bool bdrv_drain_one(BlockDriverState *bs)
> *
> * Note that unlike bdrv_drain_all(), the caller must hold the BlockDriverState
> * AioContext.
> + *
> + * Devices are paused to avoid looping forever because otherwise they could
> + * keep submitting more requests.
> */
> void bdrv_drain(BlockDriverState *bs)
> {
> + Error *blocker = NULL;
> +
> + error_setg(&blocker, "bdrv_drain in progress");
> + bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> while (bdrv_drain_one(bs)) {
> /* Keep iterating */
> }
> + bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> + error_free(blocker);
> }
>
> /*
> @@ -303,14 +312,20 @@ void bdrv_drain(BlockDriverState *bs)
> * Note that completion of an asynchronous I/O operation can trigger any
> * number of other I/O operations on other devices---for example a coroutine
> * can be arbitrarily complex and a constant flow of I/O can come until the
> - * coroutine is complete. Because of this, it is not possible to have a
> - * function to drain a single device's I/O queue.
> + * coroutine is complete. Because of this, we must call bdrv_drain_one in a
> + * loop.
> + *
> + * We explicitly pause block jobs and devices to prevent them from submitting
> + * more requests.
> */
> void bdrv_drain_all(void)
> {
> /* Always run first iteration so any pending completion BHs run */
> bool busy = true;
> BlockDriverState *bs = NULL;
> + Error *blocker = NULL;
> +
> + error_setg(&blocker, "bdrv_drain_all in progress");
>
> while ((bs = bdrv_next(bs))) {
> AioContext *aio_context = bdrv_get_aio_context(bs);
> @@ -319,6 +334,7 @@ void bdrv_drain_all(void)
> if (bs->job) {
> block_job_pause(bs->job);
> }
> + bdrv_op_block(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> aio_context_release(aio_context);
> }
>
> @@ -343,8 +359,10 @@ void bdrv_drain_all(void)
> if (bs->job) {
> block_job_resume(bs->job);
> }
> + bdrv_op_unblock(bs, BLOCK_OP_TYPE_DEVICE_IO, blocker);
> aio_context_release(aio_context);
> }
> + error_free(blocker);
> }
>
> /**