From: Fam Zheng <famz@redhat.com>
To: Max Reitz <mreitz@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
qemu-block@nongnu.org, jcody@redhat.com, qemu-devel@nongnu.org,
armbru@redhat.com, Stefan Hajnoczi <stefanha@redhat.com>,
amit.shah@redhat.com, pbonzini@redhat.com
Subject: Re: [Qemu-devel] [PATCH v6 12/13] block: Block "device IO" during bdrv_drain and bdrv_drain_all
Date: Mon, 25 May 2015 10:48:31 +0800
Message-ID: <20150525024831.GD7135@ad.nay.redhat.com>
In-Reply-To: <5560B4DA.8000908@redhat.com>
On Sat, 05/23 19:11, Max Reitz wrote:
> On 21.05.2015 08:43, Fam Zheng wrote:
> >We don't want new requests from the guest, so block the operation around the
> >nested poll.
> >
> >It also avoids looping forever when the iothread is submitting a lot of requests.
> >
> >Signed-off-by: Fam Zheng <famz@redhat.com>
> >---
> > block/io.c | 22 ++++++++++++++++++++--
> > 1 file changed, 20 insertions(+), 2 deletions(-)
>
> Hm, I don't know about this. When I see someone calling
> bdrv_drain()/bdrv_drain_all(), I'm expecting that every request has been
> drained afterwards. This patch implies that this is not necessarily the
> case, because apparently in some configurations the guest can still submit
> I/O even while bdrv_drain() is running,
In dataplane, the aio_poll() in bdrv_drain_all() will poll the ioeventfd, which
can call the virtio queue handlers. That's how guest I/O sneaks in.
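To make the livelock concrete, here is a toy model of that nested poll (all
names are invented for illustration; this is not the real QEMU code):

```c
#include <stdbool.h>

/* Requests the toy block layer still has in flight. */
static int in_flight;
/* Whether the toy "guest" keeps refilling its queue. */
static bool guest_active;

/* Stand-in for aio_poll(): completes one request, but, like the real
 * ioeventfd dispatch, may also let the guest submit a new one. */
static void toy_aio_poll(void)
{
    if (in_flight > 0) {
        in_flight--;            /* one request completes */
    }
    if (guest_active) {
        in_flight++;            /* ...and the guest submits another */
    }
}

/* Naive drain: poll while requests are pending.  Capped so the
 * demonstration terminates even when the guest stays active. */
static int toy_drain(int max_iterations)
{
    int i;
    for (i = 0; i < max_iterations && in_flight > 0; i++) {
        toy_aio_poll();
    }
    return i;                   /* iterations actually spent */
}
```

With guest_active false, toy_drain() finishes after in_flight polls; with it
true, each completion is matched by a new submission and the loop only stops
at the cap, which is the "looping forever" the commit message mentions.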
> but this means that even after this
> patch, the same can happen if I/O is submitted after bdrv_op_unblock() and
> before anything the caller of bdrv_drain() wants to do while the BDS is
> still drained. So this looks to me more like the caller must ensure that the
> BDS won't receive new requests, and do so before bdrv_drain() is called.
Yes, as you reasoned, callers of bdrv_drain*() should use a blocker. Other
patches in this series take care of qmp_transaction, but there are more
callers, which may still be wrong until they are fixed too.
This patch, however, fixes one of the potential issues of those callers:
> >It also avoids looping forever when iothread is submitting a lot of
> >requests.
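In the same toy terms, the patch's approach amounts to holding a "device IO"
blocker around the nested poll so the queue handlers refuse new guest requests
while draining (again, invented names, not the real bdrv_op_block() API):

```c
#include <stdbool.h>

static int in_flight;              /* toy in-flight request count */
static bool device_io_blocked;     /* the toy "device IO" op blocker */

/* Virtio queue kick handler: refuses new guest requests while the
 * blocker is set. */
static void toy_queue_handler(void)
{
    if (!device_io_blocked) {
        in_flight++;               /* guest submits a request */
    }
}

/* Stand-in for aio_poll(): dispatches the queue handler (as the real
 * ioeventfd dispatch would), then completes one pending request. */
static void toy_aio_poll(void)
{
    toy_queue_handler();
    if (in_flight > 0) {
        in_flight--;
    }
}

/* Drain with the blocker held around the nested poll, mirroring the
 * block/unblock pair the patch adds around bdrv_drain. */
static void toy_drain_blocked(void)
{
    device_io_blocked = true;
    while (in_flight > 0) {        /* now guaranteed to terminate */
        toy_aio_poll();
    }
    device_io_blocked = false;
}
```

Without the blocker, each poll completes one request and admits one more, so
in_flight never falls; with it held, every iteration makes progress and the
drain loop ends.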
Fam