From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org, famz@redhat.com, kwolf@redhat.com,
stefanha@redhat.com
Subject: [Qemu-devel] [PATCH 05/18] block: change drain to look only at one child at a time
Date: Thu, 13 Oct 2016 19:34:09 +0200 [thread overview]
Message-ID: <1476380062-18001-6-git-send-email-pbonzini@redhat.com> (raw)
In-Reply-To: <1476380062-18001-1-git-send-email-pbonzini@redhat.com>
bdrv_requests_pending is checking children to also wait until internal
requests (such as metadata writes) have completed.  However, checking
children is in general overkill.  Children requests can be of two kinds:

- requests caused by an operation on bs, e.g. a bdrv_aio_write to bs
  causing a write to bs->file->bs.  In this case, the parent's in_flight
  count will always be incremented by at least one for every request in
  the child.

- asynchronous metadata writes or flushes.  Such writes can be started
  even if bs's in_flight count is zero, but not after the .bdrv_drain
  callback has been invoked.

This patch therefore changes bdrv_drain to finish I/O in the parent
(after which the parent's in_flight count will be locked to zero), call
the .bdrv_drain callback (after which the parent will not generate I/O
on the child anymore), and then wait for internal I/O in the children
to complete.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
block/io.c | 47 ++++++++++++++++++++++++++---------------------
1 file changed, 26 insertions(+), 21 deletions(-)
diff --git a/block/io.c b/block/io.c
index 8d46d8b..afec968 100644
--- a/block/io.c
+++ b/block/io.c
@@ -156,16 +156,33 @@ bool bdrv_requests_pending(BlockDriverState *bs)
return false;
}
-static void bdrv_drain_recurse(BlockDriverState *bs)
+static bool bdrv_drain_poll(BlockDriverState *bs)
+{
+ bool waited = false;
+
+ while (atomic_read(&bs->in_flight) > 0) {
+ aio_poll(bdrv_get_aio_context(bs), true);
+ waited = true;
+ }
+ return waited;
+}
+
+static bool bdrv_drain_recurse(BlockDriverState *bs)
{
BdrvChild *child;
+ bool waited;
+
+ waited = bdrv_drain_poll(bs);
if (bs->drv && bs->drv->bdrv_drain) {
bs->drv->bdrv_drain(bs);
}
+
QLIST_FOREACH(child, &bs->children, next) {
- bdrv_drain_recurse(child->bs);
+ waited |= bdrv_drain_recurse(child->bs);
}
+
+ return waited;
}
typedef struct {
@@ -174,14 +191,6 @@ typedef struct {
bool done;
} BdrvCoDrainData;
-static void bdrv_drain_poll(BlockDriverState *bs)
-{
- while (bdrv_requests_pending(bs)) {
- /* Keep iterating */
- aio_poll(bdrv_get_aio_context(bs), true);
- }
-}
-
static void bdrv_co_drain_bh_cb(void *opaque)
{
BdrvCoDrainData *data = opaque;
@@ -189,7 +198,7 @@ static void bdrv_co_drain_bh_cb(void *opaque)
BlockDriverState *bs = data->bs;
bdrv_dec_in_flight(bs);
- bdrv_drain_poll(bs);
+ bdrv_drained_begin(bs);
data->done = true;
qemu_coroutine_enter(co);
}
@@ -220,6 +229,11 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs)
void bdrv_drained_begin(BlockDriverState *bs)
{
+ if (qemu_in_coroutine()) {
+ bdrv_co_yield_to_drain(bs);
+ return;
+ }
+
if (!bs->quiesce_counter++) {
aio_disable_external(bdrv_get_aio_context(bs));
bdrv_parent_drained_begin(bs);
@@ -227,11 +241,6 @@ void bdrv_drained_begin(BlockDriverState *bs)
bdrv_io_unplugged_begin(bs);
bdrv_drain_recurse(bs);
- if (qemu_in_coroutine()) {
- bdrv_co_yield_to_drain(bs);
- } else {
- bdrv_drain_poll(bs);
- }
bdrv_io_unplugged_end(bs);
}
@@ -299,7 +308,6 @@ void bdrv_drain_all(void)
aio_context_acquire(aio_context);
bdrv_parent_drained_begin(bs);
bdrv_io_unplugged_begin(bs);
- bdrv_drain_recurse(bs);
aio_context_release(aio_context);
if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -322,10 +330,7 @@ void bdrv_drain_all(void)
aio_context_acquire(aio_context);
for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
if (aio_context == bdrv_get_aio_context(bs)) {
- if (bdrv_requests_pending(bs)) {
- aio_poll(aio_context, true);
- waited = true;
- }
+ waited |= bdrv_drain_recurse(bs);
}
}
aio_context_release(aio_context);
--
2.7.4