qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2
@ 2017-12-20 10:33 Kevin Wolf
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 01/19] block: Remove unused bdrv_requests_pending Kevin Wolf
                   ` (19 more replies)
  0 siblings, 20 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:33 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

This is the second part of my work to fix drain and hopefully to prevent
it from attracting bugs as much as it did in the past. There is
definitely at least a third part coming after this; see below.

In this series, the following improvements are made:
* Fix several bugs and inconsistencies
* Create lots of unit tests for the drain functions
* Introduce bdrv_subtree_drained_begin/end() to drain a whole subtree
  rather than only a single node
* Use this to make bdrv_reopen() safe (graph changes in callbacks
  called during its internal bdrv_drain_all() frequently broke it)

Planned for part three: Make graph modifications in callbacks safe by
avoiding BDRV_POLL_WHILE() calls while we're recursing through the
graph. We can recurse before BDRV_POLL_WHILE(), after it or while
evaluating its condition, but never call any callbacks while we're
iterating child or parent lists.
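The drained-section semantics described above can be sketched as a toy model
in plain C. This is only an illustration of the counter behaviour, not the
real QEMU data structures or APIs: drained_begin() quiesces a single node,
while subtree_drained_begin() recursively quiesces a node and all of its
children, which is what the new bdrv_subtree_drained_begin/end() pair is for.

```c
#include <assert.h>
#include <stddef.h>

/* Toy model: each node carries a quiesce counter and up to two children.
 * Names are illustrative only, not the real BlockDriverState layout. */
typedef struct Node {
    int quiesce_counter;
    struct Node *children[2];
} Node;

/* Quiesce a single node, like bdrv_drained_begin() for one BDS */
static void drained_begin(Node *n)
{
    n->quiesce_counter++;
}

static void drained_end(Node *n)
{
    assert(n->quiesce_counter > 0);
    n->quiesce_counter--;
}

/* Quiesce a whole subtree, like bdrv_subtree_drained_begin() */
static void subtree_drained_begin(Node *n)
{
    drained_begin(n);
    for (int i = 0; i < 2; i++) {
        if (n->children[i]) {
            subtree_drained_begin(n->children[i]);
        }
    }
}

static void subtree_drained_end(Node *n)
{
    drained_end(n);
    for (int i = 0; i < 2; i++) {
        if (n->children[i]) {
            subtree_drained_end(n->children[i]);
        }
    }
}
```

Because the counters nest, a subtree drain can be taken while a single-node
drain is already active and the node only becomes undrained once both
sections have ended.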

Fam Zheng (1):
  block: Remove unused bdrv_requests_pending

Kevin Wolf (18):
  block: Assert drain_all is only called from main AioContext
  block: Make bdrv_drain() driver callbacks non-recursive
  test-bdrv-drain: Test callback for bdrv_drain
  test-bdrv-drain: Test bs->quiesce_counter
  blockjob: Pause job on draining any job BDS
  test-bdrv-drain: Test drain vs. block jobs
  block: Don't block_job_pause_all() in bdrv_drain_all()
  block: Nested drain_end must still call callbacks
  test-bdrv-drain: Test nested drain sections
  block: Don't notify parents in drain call chain
  block: Add bdrv_subtree_drained_begin/end()
  test-bdrv-drain: Tests for bdrv_subtree_drain
  test-bdrv-drain: Test behaviour in coroutine context
  test-bdrv-drain: Recursive draining with multiple parents
  block: Allow graph changes in subtree drained section
  test-bdrv-drain: Test graph changes in drained section
  commit: Simplify reopen of base
  block: Keep nodes drained between reopen_queue/multiple

 include/block/block.h     |  20 +-
 include/block/block_int.h |   3 +-
 block.c                   |  74 +++++--
 block/commit.c            |   8 +-
 block/io.c                | 128 ++++++++----
 block/replication.c       |   6 +
 blockjob.c                |  22 +-
 qemu-io-cmds.c            |   3 +
 tests/test-bdrv-drain.c   | 520 +++++++++++++++++++++++++++++++++++++++++++++-
 9 files changed, 696 insertions(+), 88 deletions(-)

-- 
2.13.6

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [Qemu-devel] [PATCH 01/19] block: Remove unused bdrv_requests_pending
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
@ 2017-12-20 10:33 ` Kevin Wolf
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 02/19] block: Assert drain_all is only called from main AioContext Kevin Wolf
                   ` (18 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:33 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

From: Fam Zheng <famz@redhat.com>

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block_int.h |  1 -
 block/io.c                | 18 ------------------
 2 files changed, 19 deletions(-)

diff --git a/include/block/block_int.h b/include/block/block_int.h
index a5482775ec..e107163594 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1045,7 +1045,6 @@ bool blk_dev_is_tray_open(BlockBackend *blk);
 bool blk_dev_is_medium_locked(BlockBackend *blk);
 
 void bdrv_set_dirty(BlockDriverState *bs, int64_t offset, int64_t bytes);
-bool bdrv_requests_pending(BlockDriverState *bs);
 
 void bdrv_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap **out);
 void bdrv_undo_clear_dirty_bitmap(BdrvDirtyBitmap *bitmap, HBitmap *in);
diff --git a/block/io.c b/block/io.c
index 1e92d2e5b2..cf780c3cb0 100644
--- a/block/io.c
+++ b/block/io.c
@@ -134,24 +134,6 @@ void bdrv_disable_copy_on_read(BlockDriverState *bs)
     assert(old >= 1);
 }
 
-/* Check if any requests are in-flight (including throttled requests) */
-bool bdrv_requests_pending(BlockDriverState *bs)
-{
-    BdrvChild *child;
-
-    if (atomic_read(&bs->in_flight)) {
-        return true;
-    }
-
-    QLIST_FOREACH(child, &bs->children, next) {
-        if (bdrv_requests_pending(child->bs)) {
-            return true;
-        }
-    }
-
-    return false;
-}
-
 typedef struct {
     Coroutine *co;
     BlockDriverState *bs;
-- 
2.13.6

* [Qemu-devel] [PATCH 02/19] block: Assert drain_all is only called from main AioContext
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 01/19] block: Remove unused bdrv_requests_pending Kevin Wolf
@ 2017-12-20 10:33 ` Kevin Wolf
  2017-12-20 12:14   ` Fam Zheng
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive Kevin Wolf
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:33 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/io.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/block/io.c b/block/io.c
index cf780c3cb0..b94740b8ff 100644
--- a/block/io.c
+++ b/block/io.c
@@ -330,6 +330,12 @@ void bdrv_drain_all_begin(void)
     BdrvNextIterator it;
     GSList *aio_ctxs = NULL, *ctx;
 
+    /* BDRV_POLL_WHILE() for a node can only be called from its own I/O thread
+     * or the main loop AioContext. We potentially use BDRV_POLL_WHILE() on
+     * nodes in several different AioContexts, so make sure we're in the main
+     * context. */
+    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
+
     block_job_pause_all();
 
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
-- 
2.13.6

* [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 01/19] block: Remove unused bdrv_requests_pending Kevin Wolf
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 02/19] block: Assert drain_all is only called from main AioContext Kevin Wolf
@ 2017-12-20 10:33 ` Kevin Wolf
  2017-12-20 13:16   ` Fam Zheng
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 04/19] test-bdrv-drain: Test callback for bdrv_drain Kevin Wolf
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:33 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

bdrv_drained_begin() doesn't increase bs->quiesce_counter recursively,
which means that the child nodes are not actually drained. To keep this
consistent, we also shouldn't call the block driver callbacks recursively.
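The shape of this change can be sketched with a small stand-alone model
(illustrative names only, not the real bdrv_drain_invoke()): the driver
callback always fires on the node itself, but descends to the children only
when the caller asked for a recursive drain, as bdrv_drain_all() does.

```c
#include <stddef.h>

/* Toy model of the conditionally recursive driver callback invocation */
typedef struct Node {
    int drain_count;        /* balance seen by the driver callback */
    struct Node *child;
} Node;

static void drain_invoke(Node *n, int begin, int recursive)
{
    /* the driver's own drain_begin/drain_end callback */
    n->drain_count += begin ? 1 : -1;

    /* only bdrv_drain_all()-style callers recurse into children */
    if (recursive && n->child) {
        drain_invoke(n->child, begin, 1);
    }
}
```

With recursive=0 the child's callback is never invoked, matching the fact
that its quiesce counter was never raised either.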

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/io.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/block/io.c b/block/io.c
index b94740b8ff..91a52e2d82 100644
--- a/block/io.c
+++ b/block/io.c
@@ -158,7 +158,7 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
 }
 
 /* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
-static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
+static void bdrv_drain_invoke(BlockDriverState *bs, bool begin, bool recursive)
 {
     BdrvChild *child, *tmp;
     BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
@@ -172,8 +172,10 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
     bdrv_coroutine_enter(bs, data.co);
     BDRV_POLL_WHILE(bs, !data.done);
 
-    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
-        bdrv_drain_invoke(child->bs, begin);
+    if (recursive) {
+        QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
+            bdrv_drain_invoke(child->bs, begin, true);
+        }
     }
 }
 
@@ -265,7 +267,7 @@ void bdrv_drained_begin(BlockDriverState *bs)
         bdrv_parent_drained_begin(bs);
     }
 
-    bdrv_drain_invoke(bs, true);
+    bdrv_drain_invoke(bs, true, false);
     bdrv_drain_recurse(bs);
 }
 
@@ -281,7 +283,7 @@ void bdrv_drained_end(BlockDriverState *bs)
     }
 
     /* Re-enable things in child-to-parent order */
-    bdrv_drain_invoke(bs, false);
+    bdrv_drain_invoke(bs, false, false);
     bdrv_parent_drained_end(bs);
     aio_enable_external(bdrv_get_aio_context(bs));
 }
@@ -345,7 +347,7 @@ void bdrv_drain_all_begin(void)
         aio_context_acquire(aio_context);
         aio_disable_external(aio_context);
         bdrv_parent_drained_begin(bs);
-        bdrv_drain_invoke(bs, true);
+        bdrv_drain_invoke(bs, true, true);
         aio_context_release(aio_context);
 
         if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -388,7 +390,7 @@ void bdrv_drain_all_end(void)
 
         /* Re-enable things in child-to-parent order */
         aio_context_acquire(aio_context);
-        bdrv_drain_invoke(bs, false);
+        bdrv_drain_invoke(bs, false, true);
         bdrv_parent_drained_end(bs);
         aio_enable_external(aio_context);
         aio_context_release(aio_context);
-- 
2.13.6

* [Qemu-devel] [PATCH 04/19] test-bdrv-drain: Test callback for bdrv_drain
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (2 preceding siblings ...)
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive Kevin Wolf
@ 2017-12-20 10:33 ` Kevin Wolf
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 05/19] test-bdrv-drain: Test bs->quiesce_counter Kevin Wolf
                   ` (15 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:33 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

The existing test covers bdrv_drain_all_begin/end() only. Generalise the
test case so that it can be run for the other variants as well. At the
moment this is only bdrv_drained_begin/end(), but we'll add another
variant shortly.

Also, add a backing file to the test node to test whether the operations
work recursively.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 69 ++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 62 insertions(+), 7 deletions(-)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 67541438c1..c05203bba8 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -71,6 +71,8 @@ static BlockDriver bdrv_test = {
 
     .bdrv_co_drain_begin    = bdrv_test_co_drain_begin,
     .bdrv_co_drain_end      = bdrv_test_co_drain_end,
+
+    .bdrv_child_perm        = bdrv_format_default_perms,
 };
 
 static void aio_ret_cb(void *opaque, int ret)
@@ -79,11 +81,34 @@ static void aio_ret_cb(void *opaque, int ret)
     *aio_ret = ret;
 }
 
-static void test_drv_cb_drain_all(void)
+enum drain_type {
+    BDRV_DRAIN_ALL,
+    BDRV_DRAIN,
+};
+
+static void do_drain_begin(enum drain_type drain_type, BlockDriverState *bs)
+{
+    switch (drain_type) {
+    case BDRV_DRAIN_ALL:        bdrv_drain_all_begin(); break;
+    case BDRV_DRAIN:            bdrv_drained_begin(bs); break;
+    default:                    g_assert_not_reached();
+    }
+}
+
+static void do_drain_end(enum drain_type drain_type, BlockDriverState *bs)
+{
+    switch (drain_type) {
+    case BDRV_DRAIN_ALL:        bdrv_drain_all_end(); break;
+    case BDRV_DRAIN:            bdrv_drained_end(bs); break;
+    default:                    g_assert_not_reached();
+    }
+}
+
+static void test_drv_cb_common(enum drain_type drain_type, bool recursive)
 {
     BlockBackend *blk;
-    BlockDriverState *bs;
-    BDRVTestState *s;
+    BlockDriverState *bs, *backing;
+    BDRVTestState *s, *backing_s;
     BlockAIOCB *acb;
     int aio_ret;
 
@@ -100,12 +125,23 @@ static void test_drv_cb_drain_all(void)
     s = bs->opaque;
     blk_insert_bs(blk, bs, &error_abort);
 
+    backing = bdrv_new_open_driver(&bdrv_test, "backing", 0, &error_abort);
+    backing_s = backing->opaque;
+    bdrv_set_backing_hd(bs, backing, &error_abort);
+
     /* Simple bdrv_drain_all_begin/end pair, check that CBs are called */
     g_assert_cmpint(s->drain_count, ==, 0);
-    bdrv_drain_all_begin();
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
+
+    do_drain_begin(drain_type, bs);
+
     g_assert_cmpint(s->drain_count, ==, 1);
-    bdrv_drain_all_end();
+    g_assert_cmpint(backing_s->drain_count, ==, !!recursive);
+
+    do_drain_end(drain_type, bs);
+
     g_assert_cmpint(s->drain_count, ==, 0);
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
 
     /* Now do the same while a request is pending */
     aio_ret = -EINPROGRESS;
@@ -114,16 +150,34 @@ static void test_drv_cb_drain_all(void)
     g_assert_cmpint(aio_ret, ==, -EINPROGRESS);
 
     g_assert_cmpint(s->drain_count, ==, 0);
-    bdrv_drain_all_begin();
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
+
+    do_drain_begin(drain_type, bs);
+
     g_assert_cmpint(aio_ret, ==, 0);
     g_assert_cmpint(s->drain_count, ==, 1);
-    bdrv_drain_all_end();
+    g_assert_cmpint(backing_s->drain_count, ==, !!recursive);
+
+    do_drain_end(drain_type, bs);
+
     g_assert_cmpint(s->drain_count, ==, 0);
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
 
+    bdrv_unref(backing);
     bdrv_unref(bs);
     blk_unref(blk);
 }
 
+static void test_drv_cb_drain_all(void)
+{
+    test_drv_cb_common(BDRV_DRAIN_ALL, true);
+}
+
+static void test_drv_cb_drain(void)
+{
+    test_drv_cb_common(BDRV_DRAIN, false);
+}
+
 int main(int argc, char **argv)
 {
     bdrv_init();
@@ -132,6 +186,7 @@ int main(int argc, char **argv)
     g_test_init(&argc, &argv, NULL);
 
     g_test_add_func("/bdrv-drain/driver-cb/drain_all", test_drv_cb_drain_all);
+    g_test_add_func("/bdrv-drain/driver-cb/drain", test_drv_cb_drain);
 
     return g_test_run();
 }
-- 
2.13.6

* [Qemu-devel] [PATCH 05/19] test-bdrv-drain: Test bs->quiesce_counter
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (3 preceding siblings ...)
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 04/19] test-bdrv-drain: Test callback for bdrv_drain Kevin Wolf
@ 2017-12-20 10:33 ` Kevin Wolf
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 06/19] blockjob: Pause job on draining any job BDS Kevin Wolf
                   ` (14 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:33 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

This currently works correctly only for bdrv_drain(), not for
bdrv_drain_all(). Leave a comment for the drain_all case; we'll address
it later.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index c05203bba8..2aa2b9aa43 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -178,6 +178,48 @@ static void test_drv_cb_drain(void)
     test_drv_cb_common(BDRV_DRAIN, false);
 }
 
+static void test_quiesce_common(enum drain_type drain_type, bool recursive)
+{
+    BlockBackend *blk;
+    BlockDriverState *bs, *backing;
+
+    blk = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs = bdrv_new_open_driver(&bdrv_test, "test-node", BDRV_O_RDWR,
+                              &error_abort);
+    blk_insert_bs(blk, bs, &error_abort);
+
+    backing = bdrv_new_open_driver(&bdrv_test, "backing", 0, &error_abort);
+    bdrv_set_backing_hd(bs, backing, &error_abort);
+
+    g_assert_cmpint(bs->quiesce_counter, ==, 0);
+    g_assert_cmpint(backing->quiesce_counter, ==, 0);
+
+    do_drain_begin(drain_type, bs);
+
+    g_assert_cmpint(bs->quiesce_counter, ==, 1);
+    g_assert_cmpint(backing->quiesce_counter, ==, !!recursive);
+
+    do_drain_end(drain_type, bs);
+
+    g_assert_cmpint(bs->quiesce_counter, ==, 0);
+    g_assert_cmpint(backing->quiesce_counter, ==, 0);
+
+    bdrv_unref(backing);
+    bdrv_unref(bs);
+    blk_unref(blk);
+}
+
+static void test_quiesce_drain_all(void)
+{
+    // XXX drain_all doesn't quiesce
+    //test_quiesce_common(BDRV_DRAIN_ALL, true);
+}
+
+static void test_quiesce_drain(void)
+{
+    test_quiesce_common(BDRV_DRAIN, false);
+}
+
 int main(int argc, char **argv)
 {
     bdrv_init();
@@ -188,5 +230,8 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/driver-cb/drain_all", test_drv_cb_drain_all);
     g_test_add_func("/bdrv-drain/driver-cb/drain", test_drv_cb_drain);
 
+    g_test_add_func("/bdrv-drain/quiesce/drain_all", test_quiesce_drain_all);
+    g_test_add_func("/bdrv-drain/quiesce/drain", test_quiesce_drain);
+
     return g_test_run();
 }
-- 
2.13.6

* [Qemu-devel] [PATCH 06/19] blockjob: Pause job on draining any job BDS
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (4 preceding siblings ...)
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 05/19] test-bdrv-drain: Test bs->quiesce_counter Kevin Wolf
@ 2017-12-20 10:33 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 07/19] test-bdrv-drain: Test drain vs. block jobs Kevin Wolf
                   ` (13 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:33 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Block jobs already paused themselves when their main BlockBackend
entered a drained section. This is not good enough: We also want to
pause a block job so that it doesn't submit new requests if, for
example, the mirror target node should be drained.

This implements .drained_begin/end callbacks in child_job in order to
consider all block nodes related to the job, and removes the
BlockBackend callbacks, which are unnecessary now because the root of
the job's main BlockBackend is always referenced with a child_job, too.
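The mechanism can be sketched with a toy model (illustrative only, not the
real BdrvChildRole): each parent of a node registers drained callbacks, and
draining the node notifies every parent. A job attached to both source and
target therefore pauses no matter which of its nodes is drained.

```c
#include <stddef.h>

/* Toy model of parent notification through child-role callbacks */
typedef struct Job {
    int pause_count;
} Job;

typedef struct Parent {
    void (*drained_begin)(struct Parent *p);
    void (*drained_end)(struct Parent *p);
    Job *job;
} Parent;

static void job_drained_begin(Parent *p) { p->job->pause_count++; }
static void job_drained_end(Parent *p)   { p->job->pause_count--; }

typedef struct Node {
    Parent *parents[2];
} Node;

/* Draining a node notifies all of its parents */
static void node_drained_begin(Node *n)
{
    for (int i = 0; i < 2; i++) {
        if (n->parents[i]) {
            n->parents[i]->drained_begin(n->parents[i]);
        }
    }
}

static void node_drained_end(Node *n)
{
    for (int i = 0; i < 2; i++) {
        if (n->parents[i]) {
            n->parents[i]->drained_end(n->parents[i]);
        }
    }
}
```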

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 blockjob.c | 22 +++++++++-------------
 1 file changed, 9 insertions(+), 13 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 6173e4728c..f5cea84e73 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -234,26 +234,23 @@ static char *child_job_get_parent_desc(BdrvChild *c)
                            job->id);
 }
 
-static const BdrvChildRole child_job = {
-    .get_parent_desc    = child_job_get_parent_desc,
-    .stay_at_node       = true,
-};
-
-static void block_job_drained_begin(void *opaque)
+static void child_job_drained_begin(BdrvChild *c)
 {
-    BlockJob *job = opaque;
+    BlockJob *job = c->opaque;
     block_job_pause(job);
 }
 
-static void block_job_drained_end(void *opaque)
+static void child_job_drained_end(BdrvChild *c)
 {
-    BlockJob *job = opaque;
+    BlockJob *job = c->opaque;
     block_job_resume(job);
 }
 
-static const BlockDevOps block_job_dev_ops = {
-    .drained_begin = block_job_drained_begin,
-    .drained_end = block_job_drained_end,
+static const BdrvChildRole child_job = {
+    .get_parent_desc    = child_job_get_parent_desc,
+    .drained_begin      = child_job_drained_begin,
+    .drained_end        = child_job_drained_end,
+    .stay_at_node       = true,
 };
 
 void block_job_remove_all_bdrv(BlockJob *job)
@@ -715,7 +712,6 @@ void *block_job_create(const char *job_id, const BlockJobDriver *driver,
     block_job_add_bdrv(job, "main node", bs, 0, BLK_PERM_ALL, &error_abort);
     bs->job = job;
 
-    blk_set_dev_ops(blk, &block_job_dev_ops, job);
     bdrv_op_unblock(bs, BLOCK_OP_TYPE_DATAPLANE, job->blocker);
 
     QLIST_INSERT_HEAD(&block_jobs, job, job_list);
-- 
2.13.6

* [Qemu-devel] [PATCH 07/19] test-bdrv-drain: Test drain vs. block jobs
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (5 preceding siblings ...)
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 06/19] blockjob: Pause job on draining any job BDS Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 08/19] block: Don't block_job_pause_all() in bdrv_drain_all() Kevin Wolf
                   ` (12 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Block jobs must be paused if any of the involved nodes are drained.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 121 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 121 insertions(+)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 2aa2b9aa43..019fe9074d 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -24,6 +24,7 @@
 
 #include "qemu/osdep.h"
 #include "block/block.h"
+#include "block/blockjob_int.h"
 #include "sysemu/block-backend.h"
 #include "qapi/error.h"
 
@@ -220,6 +221,123 @@ static void test_quiesce_drain(void)
     test_quiesce_common(BDRV_DRAIN, false);
 }
 
+
+typedef struct TestBlockJob {
+    BlockJob common;
+    bool should_complete;
+} TestBlockJob;
+
+static void test_job_completed(BlockJob *job, void *opaque)
+{
+    block_job_completed(job, 0);
+}
+
+static void coroutine_fn test_job_start(void *opaque)
+{
+    TestBlockJob *s = opaque;
+
+    while (!s->should_complete) {
+        block_job_sleep_ns(&s->common, 100000);
+    }
+
+    block_job_defer_to_main_loop(&s->common, test_job_completed, NULL);
+}
+
+static void test_job_complete(BlockJob *job, Error **errp)
+{
+    TestBlockJob *s = container_of(job, TestBlockJob, common);
+    s->should_complete = true;
+}
+
+BlockJobDriver test_job_driver = {
+    .instance_size  = sizeof(TestBlockJob),
+    .start          = test_job_start,
+    .complete       = test_job_complete,
+};
+
+static void test_blockjob_common(enum drain_type drain_type)
+{
+    BlockBackend *blk_src, *blk_target;
+    BlockDriverState *src, *target;
+    BlockJob *job;
+    int ret;
+
+    src = bdrv_new_open_driver(&bdrv_test, "source", BDRV_O_RDWR,
+                               &error_abort);
+    blk_src = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    blk_insert_bs(blk_src, src, &error_abort);
+
+    target = bdrv_new_open_driver(&bdrv_test, "target", BDRV_O_RDWR,
+                                  &error_abort);
+    blk_target = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    blk_insert_bs(blk_target, target, &error_abort);
+
+    job = block_job_create("job0", &test_job_driver, src, 0, BLK_PERM_ALL, 0,
+                           0, NULL, NULL, &error_abort);
+    block_job_add_bdrv(job, "target", target, 0, BLK_PERM_ALL, &error_abort);
+    block_job_start(job);
+
+    g_assert_cmpint(job->pause_count, ==, 0);
+    g_assert_false(job->paused);
+    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+
+    do_drain_begin(drain_type, src);
+
+    if (drain_type == BDRV_DRAIN_ALL) {
+        /* bdrv_drain_all() drains both src and target, and involves an
+         * additional block_job_pause_all() */
+        g_assert_cmpint(job->pause_count, ==, 3);
+    } else {
+        g_assert_cmpint(job->pause_count, ==, 1);
+    }
+    /* XXX We don't wait until the job is actually paused. Is this okay? */
+    /* g_assert_true(job->paused); */
+    g_assert_false(job->busy); /* The job is paused */
+
+    do_drain_end(drain_type, src);
+
+    g_assert_cmpint(job->pause_count, ==, 0);
+    g_assert_false(job->paused);
+    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+
+    do_drain_begin(drain_type, target);
+
+    if (drain_type == BDRV_DRAIN_ALL) {
+        /* bdrv_drain_all() drains both src and target, and involves an
+         * additional block_job_pause_all() */
+        g_assert_cmpint(job->pause_count, ==, 3);
+    } else {
+        g_assert_cmpint(job->pause_count, ==, 1);
+    }
+    /* XXX We don't wait until the job is actually paused. Is this okay? */
+    /* g_assert_true(job->paused); */
+    g_assert_false(job->busy); /* The job is paused */
+
+    do_drain_end(drain_type, target);
+
+    g_assert_cmpint(job->pause_count, ==, 0);
+    g_assert_false(job->paused);
+    g_assert_false(job->busy); /* We're in block_job_sleep_ns() */
+
+    ret = block_job_complete_sync(job, &error_abort);
+    g_assert_cmpint(ret, ==, 0);
+
+    blk_unref(blk_src);
+    blk_unref(blk_target);
+    bdrv_unref(src);
+    bdrv_unref(target);
+}
+
+static void test_blockjob_drain_all(void)
+{
+    test_blockjob_common(BDRV_DRAIN_ALL);
+}
+
+static void test_blockjob_drain(void)
+{
+    test_blockjob_common(BDRV_DRAIN);
+}
+
 int main(int argc, char **argv)
 {
     bdrv_init();
@@ -233,5 +351,8 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/quiesce/drain_all", test_quiesce_drain_all);
     g_test_add_func("/bdrv-drain/quiesce/drain", test_quiesce_drain);
 
+    g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
+    g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
+
     return g_test_run();
 }
-- 
2.13.6

* [Qemu-devel] [PATCH 08/19] block: Don't block_job_pause_all() in bdrv_drain_all()
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (6 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 07/19] test-bdrv-drain: Test drain vs. block jobs Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-21 23:24   ` Eric Blake
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 09/19] block: Nested drain_end must still call callbacks Kevin Wolf
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Block jobs are already paused using the BdrvChildRole drain callbacks,
so we don't need an additional block_job_pause_all() call.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/io.c              |  4 ----
 tests/test-bdrv-drain.c | 10 ++++------
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/block/io.c b/block/io.c
index 91a52e2d82..74d2e5278e 100644
--- a/block/io.c
+++ b/block/io.c
@@ -338,8 +338,6 @@ void bdrv_drain_all_begin(void)
      * context. */
     assert(qemu_get_current_aio_context() == qemu_get_aio_context());
 
-    block_job_pause_all();
-
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
@@ -395,8 +393,6 @@ void bdrv_drain_all_end(void)
         aio_enable_external(aio_context);
         aio_context_release(aio_context);
     }
-
-    block_job_resume_all();
 }
 
 void bdrv_drain_all(void)
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 019fe9074d..4571137928 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -284,9 +284,8 @@ static void test_blockjob_common(enum drain_type drain_type)
     do_drain_begin(drain_type, src);
 
     if (drain_type == BDRV_DRAIN_ALL) {
-        /* bdrv_drain_all() drains both src and target, and involves an
-         * additional block_job_pause_all() */
-        g_assert_cmpint(job->pause_count, ==, 3);
+        /* bdrv_drain_all() drains both src and target */
+        g_assert_cmpint(job->pause_count, ==, 2);
     } else {
         g_assert_cmpint(job->pause_count, ==, 1);
     }
@@ -303,9 +302,8 @@ static void test_blockjob_common(enum drain_type drain_type)
     do_drain_begin(drain_type, target);
 
     if (drain_type == BDRV_DRAIN_ALL) {
-        /* bdrv_drain_all() drains both src and target, and involves an
-         * additional block_job_pause_all() */
-        g_assert_cmpint(job->pause_count, ==, 3);
+        /* bdrv_drain_all() drains both src and target */
+        g_assert_cmpint(job->pause_count, ==, 2);
     } else {
         g_assert_cmpint(job->pause_count, ==, 1);
     }
-- 
2.13.6

* [Qemu-devel] [PATCH 09/19] block: Nested drain_end must still call callbacks
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (7 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 08/19] block: Don't block_job_pause_all() in bdrv_drain_all() Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 10/19] test-bdrv-drain: Test nested drain sections Kevin Wolf
                   ` (10 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

bdrv_drained_begin() restricts the call of parent callbacks and
aio_disable_external() to the outermost drain section, but the block
driver callbacks are always called. bdrv_drained_end() must match
this behaviour, otherwise nodes stay drained even if begin/end calls
were balanced.
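The required symmetry can be sketched as a small model (illustrative only):
the driver-callback balance must change on every begin and every end, while
the parent/external side only toggles when the outermost section opens or
closes.

```c
#include <assert.h>

/* Toy model of nested drained sections on one node */
typedef struct {
    int quiesce_counter;
    int driver_drain;       /* driver callback balance */
    int external_disabled;  /* outermost-section-only state */
} Node;

static void drained_begin(Node *n)
{
    if (n->quiesce_counter++ == 0) {
        n->external_disabled = 1;   /* outermost begin only */
    }
    n->driver_drain++;              /* always */
}

static void drained_end(Node *n)
{
    int old_quiesce_counter = n->quiesce_counter--;
    assert(old_quiesce_counter > 0);
    n->driver_drain--;              /* always, mirroring begin */
    if (old_quiesce_counter == 1) {
        n->external_disabled = 0;   /* outermost end only */
    }
}
```

Returning early from drained_end() for nested sections, as the old code did,
would leave driver_drain permanently positive even after balanced calls.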

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/io.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/block/io.c b/block/io.c
index 74d2e5278e..6038a16c58 100644
--- a/block/io.c
+++ b/block/io.c
@@ -273,19 +273,21 @@ void bdrv_drained_begin(BlockDriverState *bs)
 
 void bdrv_drained_end(BlockDriverState *bs)
 {
+    int old_quiesce_counter;
+
     if (qemu_in_coroutine()) {
         bdrv_co_yield_to_drain(bs, false);
         return;
     }
     assert(bs->quiesce_counter > 0);
-    if (atomic_fetch_dec(&bs->quiesce_counter) > 1) {
-        return;
-    }
+    old_quiesce_counter = atomic_fetch_dec(&bs->quiesce_counter);
 
     /* Re-enable things in child-to-parent order */
     bdrv_drain_invoke(bs, false, false);
-    bdrv_parent_drained_end(bs);
-    aio_enable_external(bdrv_get_aio_context(bs));
+    if (old_quiesce_counter == 1) {
+        bdrv_parent_drained_end(bs);
+        aio_enable_external(bdrv_get_aio_context(bs));
+    }
 }
 
 /*
-- 
2.13.6

* [Qemu-devel] [PATCH 10/19] test-bdrv-drain: Test nested drain sections
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (8 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 09/19] block: Nested drain_end must still call callbacks Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 11/19] block: Don't notify parents in drain call chain Kevin Wolf
                   ` (9 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 57 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 57 insertions(+)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 4571137928..ede5cf6e64 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -85,6 +85,7 @@ static void aio_ret_cb(void *opaque, int ret)
 enum drain_type {
     BDRV_DRAIN_ALL,
     BDRV_DRAIN,
+    DRAIN_TYPE_MAX,
 };
 
 static void do_drain_begin(enum drain_type drain_type, BlockDriverState *bs)
@@ -221,6 +222,60 @@ static void test_quiesce_drain(void)
     test_quiesce_common(BDRV_DRAIN, false);
 }
 
+static void test_nested(void)
+{
+    BlockBackend *blk;
+    BlockDriverState *bs, *backing;
+    BDRVTestState *s, *backing_s;
+    enum drain_type outer, inner;
+
+    blk = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs = bdrv_new_open_driver(&bdrv_test, "test-node", BDRV_O_RDWR,
+                              &error_abort);
+    s = bs->opaque;
+    blk_insert_bs(blk, bs, &error_abort);
+
+    backing = bdrv_new_open_driver(&bdrv_test, "backing", 0, &error_abort);
+    backing_s = backing->opaque;
+    bdrv_set_backing_hd(bs, backing, &error_abort);
+
+    for (outer = 0; outer < DRAIN_TYPE_MAX; outer++) {
+        for (inner = 0; inner < DRAIN_TYPE_MAX; inner++) {
+            /* XXX bdrv_drain_all() doesn't increase the quiesce_counter */
+            int bs_quiesce      = (outer != BDRV_DRAIN_ALL) +
+                                  (inner != BDRV_DRAIN_ALL);
+            int backing_quiesce = 0;
+            int backing_cb_cnt  = (outer != BDRV_DRAIN) +
+                                  (inner != BDRV_DRAIN);
+
+            g_assert_cmpint(bs->quiesce_counter, ==, 0);
+            g_assert_cmpint(backing->quiesce_counter, ==, 0);
+            g_assert_cmpint(s->drain_count, ==, 0);
+            g_assert_cmpint(backing_s->drain_count, ==, 0);
+
+            do_drain_begin(outer, bs);
+            do_drain_begin(inner, bs);
+
+            g_assert_cmpint(bs->quiesce_counter, ==, bs_quiesce);
+            g_assert_cmpint(backing->quiesce_counter, ==, backing_quiesce);
+            g_assert_cmpint(s->drain_count, ==, 2);
+            g_assert_cmpint(backing_s->drain_count, ==, backing_cb_cnt);
+
+            do_drain_end(inner, bs);
+            do_drain_end(outer, bs);
+
+            g_assert_cmpint(bs->quiesce_counter, ==, 0);
+            g_assert_cmpint(backing->quiesce_counter, ==, 0);
+            g_assert_cmpint(s->drain_count, ==, 0);
+            g_assert_cmpint(backing_s->drain_count, ==, 0);
+        }
+    }
+
+    bdrv_unref(backing);
+    bdrv_unref(bs);
+    blk_unref(blk);
+}
+
 
 typedef struct TestBlockJob {
     BlockJob common;
@@ -349,6 +404,8 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/quiesce/drain_all", test_quiesce_drain_all);
     g_test_add_func("/bdrv-drain/quiesce/drain", test_quiesce_drain);
 
+    g_test_add_func("/bdrv-drain/nested", test_nested);
+
     g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
     g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
 
-- 
2.13.6

* [Qemu-devel] [PATCH 11/19] block: Don't notify parents in drain call chain
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (9 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 10/19] test-bdrv-drain: Test nested drain sections Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-21 23:26   ` Eric Blake
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 12/19] block: Add bdrv_subtree_drained_begin/end() Kevin Wolf
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

This is in preparation for subtree drains, i.e. drained sections that
affect not only a single node, but recursively all child nodes, too.

Calling the parent callbacks for drain is pointless when we just came
from that parent node recursively; it also leads to bs->quiesce_counter
being increased multiple times in a single drain call. Don't do it.

In order for this to work correctly, the parent callback must be called
for every bdrv_drain_begin/end() call, not only for the outermost one:

If we have a node N with two parents A and B, recursive draining of A
should cause the quiesce_counter of B to be increased, because its child
N is drained independently of B. If B is now recursively drained, too, A
must increase its quiesce_counter because N is drained independently of
A only now, even if N is going from quiesce_counter 1 to 2.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block.h |  4 ++--
 block.c               | 13 +++++++++----
 block/io.c            | 47 ++++++++++++++++++++++++++++++++++-------------
 3 files changed, 45 insertions(+), 19 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index c05cac57e5..60c5d11029 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -585,7 +585,7 @@ void bdrv_io_unplug(BlockDriverState *bs);
  * Begin a quiesced section of all users of @bs. This is part of
  * bdrv_drained_begin.
  */
-void bdrv_parent_drained_begin(BlockDriverState *bs);
+void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore);
 
 /**
  * bdrv_parent_drained_end:
@@ -593,7 +593,7 @@ void bdrv_parent_drained_begin(BlockDriverState *bs);
  * End a quiesced section of all users of @bs. This is part of
  * bdrv_drained_end.
  */
-void bdrv_parent_drained_end(BlockDriverState *bs);
+void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore);
 
 /**
  * bdrv_drained_begin:
diff --git a/block.c b/block.c
index ddb6836c52..5867a2a2a0 100644
--- a/block.c
+++ b/block.c
@@ -1972,13 +1972,16 @@ static void bdrv_replace_child_noperm(BdrvChild *child,
                                       BlockDriverState *new_bs)
 {
     BlockDriverState *old_bs = child->bs;
+    int i;
 
     if (old_bs && new_bs) {
         assert(bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs));
     }
     if (old_bs) {
         if (old_bs->quiesce_counter && child->role->drained_end) {
-            child->role->drained_end(child);
+            for (i = 0; i < old_bs->quiesce_counter; i++) {
+                child->role->drained_end(child);
+            }
         }
         if (child->role->detach) {
             child->role->detach(child);
@@ -1991,7 +1994,9 @@ static void bdrv_replace_child_noperm(BdrvChild *child,
     if (new_bs) {
         QLIST_INSERT_HEAD(&new_bs->parents, child, next_parent);
         if (new_bs->quiesce_counter && child->role->drained_begin) {
-            child->role->drained_begin(child);
+            for (i = 0; i < new_bs->quiesce_counter; i++) {
+                child->role->drained_begin(child);
+            }
         }
 
         if (child->role->attach) {
@@ -4750,7 +4755,7 @@ void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
     AioContext *ctx = bdrv_get_aio_context(bs);
 
     aio_disable_external(ctx);
-    bdrv_parent_drained_begin(bs);
+    bdrv_parent_drained_begin(bs, NULL);
     bdrv_drain(bs); /* ensure there are no in-flight requests */
 
     while (aio_poll(ctx, false)) {
@@ -4764,7 +4769,7 @@ void bdrv_set_aio_context(BlockDriverState *bs, AioContext *new_context)
      */
     aio_context_acquire(new_context);
     bdrv_attach_aio_context(bs, new_context);
-    bdrv_parent_drained_end(bs);
+    bdrv_parent_drained_end(bs, NULL);
     aio_enable_external(ctx);
     aio_context_release(new_context);
 }
diff --git a/block/io.c b/block/io.c
index 6038a16c58..09de0a9070 100644
--- a/block/io.c
+++ b/block/io.c
@@ -40,22 +40,28 @@
 static int coroutine_fn bdrv_co_do_pwrite_zeroes(BlockDriverState *bs,
     int64_t offset, int bytes, BdrvRequestFlags flags);
 
-void bdrv_parent_drained_begin(BlockDriverState *bs)
+void bdrv_parent_drained_begin(BlockDriverState *bs, BdrvChild *ignore)
 {
     BdrvChild *c, *next;
 
     QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
+        if (c == ignore) {
+            continue;
+        }
         if (c->role->drained_begin) {
             c->role->drained_begin(c);
         }
     }
 }
 
-void bdrv_parent_drained_end(BlockDriverState *bs)
+void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore)
 {
     BdrvChild *c, *next;
 
     QLIST_FOREACH_SAFE(c, &bs->parents, next_parent, next) {
+        if (c == ignore) {
+            continue;
+        }
         if (c->role->drained_end) {
             c->role->drained_end(c);
         }
@@ -139,6 +145,7 @@ typedef struct {
     BlockDriverState *bs;
     bool done;
     bool begin;
+    BdrvChild *parent;
 } BdrvCoDrainData;
 
 static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
@@ -211,6 +218,9 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
     return waited;
 }
 
+static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent);
+static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent);
+
 static void bdrv_co_drain_bh_cb(void *opaque)
 {
     BdrvCoDrainData *data = opaque;
@@ -219,9 +229,9 @@ static void bdrv_co_drain_bh_cb(void *opaque)
 
     bdrv_dec_in_flight(bs);
     if (data->begin) {
-        bdrv_drained_begin(bs);
+        bdrv_do_drained_begin(bs, data->parent);
     } else {
-        bdrv_drained_end(bs);
+        bdrv_do_drained_end(bs, data->parent);
     }
 
     data->done = true;
@@ -229,7 +239,7 @@ static void bdrv_co_drain_bh_cb(void *opaque)
 }
 
 static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
-                                                bool begin)
+                                                bool begin, BdrvChild *parent)
 {
     BdrvCoDrainData data;
 
@@ -243,6 +253,7 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
         .bs = bs,
         .done = false,
         .begin = begin,
+        .parent = parent,
     };
     bdrv_inc_in_flight(bs);
     aio_bh_schedule_oneshot(bdrv_get_aio_context(bs),
@@ -254,29 +265,34 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     assert(data.done);
 }
 
-void bdrv_drained_begin(BlockDriverState *bs)
+static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent)
 {
     if (qemu_in_coroutine()) {
-        bdrv_co_yield_to_drain(bs, true);
+        bdrv_co_yield_to_drain(bs, true, parent);
         return;
     }
 
     /* Stop things in parent-to-child order */
     if (atomic_fetch_inc(&bs->quiesce_counter) == 0) {
         aio_disable_external(bdrv_get_aio_context(bs));
-        bdrv_parent_drained_begin(bs);
     }
 
+    bdrv_parent_drained_begin(bs, parent);
     bdrv_drain_invoke(bs, true, false);
     bdrv_drain_recurse(bs);
 }
 
-void bdrv_drained_end(BlockDriverState *bs)
+void bdrv_drained_begin(BlockDriverState *bs)
+{
+    bdrv_do_drained_begin(bs, NULL);
+}
+
+static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
 {
     int old_quiesce_counter;
 
     if (qemu_in_coroutine()) {
-        bdrv_co_yield_to_drain(bs, false);
+        bdrv_co_yield_to_drain(bs, false, parent);
         return;
     }
     assert(bs->quiesce_counter > 0);
@@ -284,12 +300,17 @@ void bdrv_drained_end(BlockDriverState *bs)
 
     /* Re-enable things in child-to-parent order */
     bdrv_drain_invoke(bs, false, false);
+    bdrv_parent_drained_end(bs, parent);
     if (old_quiesce_counter == 1) {
-        bdrv_parent_drained_end(bs);
         aio_enable_external(bdrv_get_aio_context(bs));
     }
 }
 
+void bdrv_drained_end(BlockDriverState *bs)
+{
+    bdrv_do_drained_end(bs, NULL);
+}
+
 /*
  * Wait for pending requests to complete on a single BlockDriverState subtree,
  * and suspend block driver's internal I/O until next request arrives.
@@ -346,7 +367,7 @@ void bdrv_drain_all_begin(void)
         /* Stop things in parent-to-child order */
         aio_context_acquire(aio_context);
         aio_disable_external(aio_context);
-        bdrv_parent_drained_begin(bs);
+        bdrv_parent_drained_begin(bs, NULL);
         bdrv_drain_invoke(bs, true, true);
         aio_context_release(aio_context);
 
@@ -391,7 +412,7 @@ void bdrv_drain_all_end(void)
         /* Re-enable things in child-to-parent order */
         aio_context_acquire(aio_context);
         bdrv_drain_invoke(bs, false, true);
-        bdrv_parent_drained_end(bs);
+        bdrv_parent_drained_end(bs, NULL);
         aio_enable_external(aio_context);
         aio_context_release(aio_context);
     }
-- 
2.13.6

* [Qemu-devel] [PATCH 12/19] block: Add bdrv_subtree_drained_begin/end()
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (10 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 11/19] block: Don't notify parents in drain call chain Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 13/19] test-bdrv-drain: Tests for bdrv_subtree_drain Kevin Wolf
                   ` (7 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

bdrv_drained_begin() waits for the completion of requests in the whole
subtree, but it only actually keeps its immediate bs parameter quiesced
until bdrv_drained_end().

Add a version that keeps the whole subtree drained. As of this commit,
graph changes are not allowed during a subtree drained section, but
this will be fixed soon.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block.h | 13 +++++++++++++
 block/io.c            | 54 ++++++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 56 insertions(+), 11 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index 60c5d11029..de9c5a2b9b 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -608,12 +608,25 @@ void bdrv_parent_drained_end(BlockDriverState *bs, BdrvChild *ignore);
 void bdrv_drained_begin(BlockDriverState *bs);
 
 /**
+ * Like bdrv_drained_begin, but recursively begins a quiesced section for
+ * exclusive access to all child nodes as well.
+ *
+ * Graph changes are not allowed during a subtree drain section.
+ */
+void bdrv_subtree_drained_begin(BlockDriverState *bs);
+
+/**
  * bdrv_drained_end:
  *
  * End a quiescent section started by bdrv_drained_begin().
  */
 void bdrv_drained_end(BlockDriverState *bs);
 
+/**
+ * End a quiescent section started by bdrv_subtree_drained_begin().
+ */
+void bdrv_subtree_drained_end(BlockDriverState *bs);
+
 void bdrv_add_child(BlockDriverState *parent, BlockDriverState *child,
                     Error **errp);
 void bdrv_del_child(BlockDriverState *parent, BdrvChild *child, Error **errp);
diff --git a/block/io.c b/block/io.c
index 09de0a9070..6befef166d 100644
--- a/block/io.c
+++ b/block/io.c
@@ -145,6 +145,7 @@ typedef struct {
     BlockDriverState *bs;
     bool done;
     bool begin;
+    bool recursive;
     BdrvChild *parent;
 } BdrvCoDrainData;
 
@@ -218,8 +219,10 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
     return waited;
 }
 
-static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent);
-static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent);
+static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
+                                  BdrvChild *parent);
+static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
+                                BdrvChild *parent);
 
 static void bdrv_co_drain_bh_cb(void *opaque)
 {
@@ -229,9 +232,9 @@ static void bdrv_co_drain_bh_cb(void *opaque)
 
     bdrv_dec_in_flight(bs);
     if (data->begin) {
-        bdrv_do_drained_begin(bs, data->parent);
+        bdrv_do_drained_begin(bs, data->recursive, data->parent);
     } else {
-        bdrv_do_drained_end(bs, data->parent);
+        bdrv_do_drained_end(bs, data->recursive, data->parent);
     }
 
     data->done = true;
@@ -239,7 +242,8 @@ static void bdrv_co_drain_bh_cb(void *opaque)
 }
 
 static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
-                                                bool begin, BdrvChild *parent)
+                                                bool begin, bool recursive,
+                                                BdrvChild *parent)
 {
     BdrvCoDrainData data;
 
@@ -253,6 +257,7 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
         .bs = bs,
         .done = false,
         .begin = begin,
+        .recursive = recursive,
         .parent = parent,
     };
     bdrv_inc_in_flight(bs);
@@ -265,10 +270,13 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     assert(data.done);
 }
 
-static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent)
+static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
+                                  BdrvChild *parent)
 {
+    BdrvChild *child, *next;
+
     if (qemu_in_coroutine()) {
-        bdrv_co_yield_to_drain(bs, true, parent);
+        bdrv_co_yield_to_drain(bs, true, recursive, parent);
         return;
     }
 
@@ -280,19 +288,32 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, BdrvChild *parent)
     bdrv_parent_drained_begin(bs, parent);
     bdrv_drain_invoke(bs, true, false);
     bdrv_drain_recurse(bs);
+
+    if (recursive) {
+        QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
+            bdrv_do_drained_begin(child->bs, true, child);
+        }
+    }
 }
 
 void bdrv_drained_begin(BlockDriverState *bs)
 {
-    bdrv_do_drained_begin(bs, NULL);
+    bdrv_do_drained_begin(bs, false, NULL);
+}
+
+void bdrv_subtree_drained_begin(BlockDriverState *bs)
+{
+    bdrv_do_drained_begin(bs, true, NULL);
 }
 
-static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
+static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
+                                BdrvChild *parent)
 {
+    BdrvChild *child, *next;
     int old_quiesce_counter;
 
     if (qemu_in_coroutine()) {
-        bdrv_co_yield_to_drain(bs, false, parent);
+        bdrv_co_yield_to_drain(bs, false, recursive, parent);
         return;
     }
     assert(bs->quiesce_counter > 0);
@@ -304,11 +325,22 @@ static void bdrv_do_drained_end(BlockDriverState *bs, BdrvChild *parent)
     if (old_quiesce_counter == 1) {
         aio_enable_external(bdrv_get_aio_context(bs));
     }
+
+    if (recursive) {
+        QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
+            bdrv_do_drained_end(child->bs, true, child);
+        }
+    }
 }
 
 void bdrv_drained_end(BlockDriverState *bs)
 {
-    bdrv_do_drained_end(bs, NULL);
+    bdrv_do_drained_end(bs, false, NULL);
+}
+
+void bdrv_subtree_drained_end(BlockDriverState *bs)
+{
+    bdrv_do_drained_end(bs, true, NULL);
 }
 
 /*
-- 
2.13.6

* [Qemu-devel] [PATCH 13/19] test-bdrv-drain: Tests for bdrv_subtree_drain
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (11 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 12/19] block: Add bdrv_subtree_drained_begin/end() Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 14/19] test-bdrv-drain: Test behaviour in coroutine context Kevin Wolf
                   ` (6 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Add a subtree drain version to the existing test cases.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index ede5cf6e64..c00d96bb2f 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -85,6 +85,7 @@ static void aio_ret_cb(void *opaque, int ret)
 enum drain_type {
     BDRV_DRAIN_ALL,
     BDRV_DRAIN,
+    BDRV_SUBTREE_DRAIN,
     DRAIN_TYPE_MAX,
 };
 
@@ -93,6 +94,7 @@ static void do_drain_begin(enum drain_type drain_type, BlockDriverState *bs)
     switch (drain_type) {
     case BDRV_DRAIN_ALL:        bdrv_drain_all_begin(); break;
     case BDRV_DRAIN:            bdrv_drained_begin(bs); break;
+    case BDRV_SUBTREE_DRAIN:    bdrv_subtree_drained_begin(bs); break;
     default:                    g_assert_not_reached();
     }
 }
@@ -102,6 +104,7 @@ static void do_drain_end(enum drain_type drain_type, BlockDriverState *bs)
     switch (drain_type) {
     case BDRV_DRAIN_ALL:        bdrv_drain_all_end(); break;
     case BDRV_DRAIN:            bdrv_drained_end(bs); break;
+    case BDRV_SUBTREE_DRAIN:    bdrv_subtree_drained_end(bs); break;
     default:                    g_assert_not_reached();
     }
 }
@@ -180,6 +183,11 @@ static void test_drv_cb_drain(void)
     test_drv_cb_common(BDRV_DRAIN, false);
 }
 
+static void test_drv_cb_drain_subtree(void)
+{
+    test_drv_cb_common(BDRV_SUBTREE_DRAIN, true);
+}
+
 static void test_quiesce_common(enum drain_type drain_type, bool recursive)
 {
     BlockBackend *blk;
@@ -222,6 +230,11 @@ static void test_quiesce_drain(void)
     test_quiesce_common(BDRV_DRAIN, false);
 }
 
+static void test_quiesce_drain_subtree(void)
+{
+    test_quiesce_common(BDRV_SUBTREE_DRAIN, true);
+}
+
 static void test_nested(void)
 {
     BlockBackend *blk;
@@ -244,7 +257,8 @@ static void test_nested(void)
             /* XXX bdrv_drain_all() doesn't increase the quiesce_counter */
             int bs_quiesce      = (outer != BDRV_DRAIN_ALL) +
                                   (inner != BDRV_DRAIN_ALL);
-            int backing_quiesce = 0;
+            int backing_quiesce = (outer == BDRV_SUBTREE_DRAIN) +
+                                  (inner == BDRV_SUBTREE_DRAIN);
             int backing_cb_cnt  = (outer != BDRV_DRAIN) +
                                   (inner != BDRV_DRAIN);
 
@@ -391,6 +405,11 @@ static void test_blockjob_drain(void)
     test_blockjob_common(BDRV_DRAIN);
 }
 
+static void test_blockjob_drain_subtree(void)
+{
+    test_blockjob_common(BDRV_SUBTREE_DRAIN);
+}
+
 int main(int argc, char **argv)
 {
     bdrv_init();
@@ -400,14 +419,20 @@ int main(int argc, char **argv)
 
     g_test_add_func("/bdrv-drain/driver-cb/drain_all", test_drv_cb_drain_all);
     g_test_add_func("/bdrv-drain/driver-cb/drain", test_drv_cb_drain);
+    g_test_add_func("/bdrv-drain/driver-cb/drain_subtree",
+                    test_drv_cb_drain_subtree);
 
     g_test_add_func("/bdrv-drain/quiesce/drain_all", test_quiesce_drain_all);
     g_test_add_func("/bdrv-drain/quiesce/drain", test_quiesce_drain);
+    g_test_add_func("/bdrv-drain/quiesce/drain_subtree",
+                    test_quiesce_drain_subtree);
 
     g_test_add_func("/bdrv-drain/nested", test_nested);
 
     g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
     g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
+    g_test_add_func("/bdrv-drain/blockjob/drain_subtree",
+                    test_blockjob_drain_subtree);
 
     return g_test_run();
 }
-- 
2.13.6

* [Qemu-devel] [PATCH 14/19] test-bdrv-drain: Test behaviour in coroutine context
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (12 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 13/19] test-bdrv-drain: Tests for bdrv_subtree_drain Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:53   ` Paolo Bonzini
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 15/19] test-bdrv-drain: Recursive draining with multiple parents Kevin Wolf
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

If bdrv_do_drained_begin/end() are called in coroutine context, they
first use a BH to get out of the coroutine context. Call some existing
tests again from a coroutine to cover this code path.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index c00d96bb2f..a1e5693f33 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -188,6 +188,30 @@ static void test_drv_cb_drain_subtree(void)
     test_drv_cb_common(BDRV_SUBTREE_DRAIN, true);
 }
 
+static coroutine_fn void test_drv_cb_coroutine_entry(void *opaque)
+{
+    bool *done = opaque;
+
+    // XXX bdrv_drain_all() doesn't work in coroutine context
+    //test_drv_cb_drain_all();
+    test_drv_cb_drain();
+    test_drv_cb_drain_subtree();
+
+    *done = true;
+}
+
+static void test_drv_cb_coroutine(void)
+{
+    Coroutine *co;
+    bool done;
+
+    co = qemu_coroutine_create(test_drv_cb_coroutine_entry, &done);
+    qemu_coroutine_enter(co);
+    while (!done) {
+        aio_poll(qemu_get_aio_context(), true);
+    }
+}
+
 static void test_quiesce_common(enum drain_type drain_type, bool recursive)
 {
     BlockBackend *blk;
@@ -235,6 +259,30 @@ static void test_quiesce_drain_subtree(void)
     test_quiesce_common(BDRV_SUBTREE_DRAIN, true);
 }
 
+static coroutine_fn void test_quiesce_coroutine_entry(void *opaque)
+{
+    bool *done = opaque;
+
+    // XXX bdrv_drain_all() doesn't work in coroutine context
+    //test_quiesce_drain_all();
+    test_quiesce_drain();
+    test_quiesce_drain_subtree();
+
+    *done = true;
+}
+
+static void test_quiesce_coroutine(void)
+{
+    Coroutine *co;
+    bool done;
+
+    co = qemu_coroutine_create(test_quiesce_coroutine_entry, &done);
+    qemu_coroutine_enter(co);
+    while (!done) {
+        aio_poll(qemu_get_aio_context(), true);
+    }
+}
+
 static void test_nested(void)
 {
     BlockBackend *blk;
@@ -421,11 +469,14 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/driver-cb/drain", test_drv_cb_drain);
     g_test_add_func("/bdrv-drain/driver-cb/drain_subtree",
                     test_drv_cb_drain_subtree);
+    g_test_add_func("/bdrv-drain/driver-cb/coroutine", test_drv_cb_coroutine);
+
 
     g_test_add_func("/bdrv-drain/quiesce/drain_all", test_quiesce_drain_all);
     g_test_add_func("/bdrv-drain/quiesce/drain", test_quiesce_drain);
     g_test_add_func("/bdrv-drain/quiesce/drain_subtree",
                     test_quiesce_drain_subtree);
+    g_test_add_func("/bdrv-drain/quiesce/coroutine", test_quiesce_coroutine);
 
     g_test_add_func("/bdrv-drain/nested", test_nested);
 
-- 
2.13.6

* [Qemu-devel] [PATCH 15/19] test-bdrv-drain: Recursive draining with multiple parents
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (13 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 14/19] test-bdrv-drain: Test behaviour in coroutine context Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 16/19] block: Allow graph changes in subtree drained section Kevin Wolf
                   ` (4 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Test that drain sections are correctly propagated through the graph.

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 74 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index a1e5693f33..b7322c5826 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -338,6 +338,79 @@ static void test_nested(void)
     blk_unref(blk);
 }
 
+static void test_multiparent(void)
+{
+    BlockBackend *blk_a, *blk_b;
+    BlockDriverState *bs_a, *bs_b, *backing;
+    BDRVTestState *a_s, *b_s, *backing_s;
+
+    blk_a = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs_a = bdrv_new_open_driver(&bdrv_test, "test-node-a", BDRV_O_RDWR,
+                                &error_abort);
+    a_s = bs_a->opaque;
+    blk_insert_bs(blk_a, bs_a, &error_abort);
+
+    blk_b = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs_b = bdrv_new_open_driver(&bdrv_test, "test-node-b", BDRV_O_RDWR,
+                                &error_abort);
+    b_s = bs_b->opaque;
+    blk_insert_bs(blk_b, bs_b, &error_abort);
+
+    backing = bdrv_new_open_driver(&bdrv_test, "backing", 0, &error_abort);
+    backing_s = backing->opaque;
+    bdrv_set_backing_hd(bs_a, backing, &error_abort);
+    bdrv_set_backing_hd(bs_b, backing, &error_abort);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 0);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
+    g_assert_cmpint(backing->quiesce_counter, ==, 0);
+    g_assert_cmpint(a_s->drain_count, ==, 0);
+    g_assert_cmpint(b_s->drain_count, ==, 0);
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
+
+    do_drain_begin(BDRV_SUBTREE_DRAIN, bs_a);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 1);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 1);
+    g_assert_cmpint(backing->quiesce_counter, ==, 1);
+    g_assert_cmpint(a_s->drain_count, ==, 1);
+    g_assert_cmpint(b_s->drain_count, ==, 1);
+    g_assert_cmpint(backing_s->drain_count, ==, 1);
+
+    do_drain_begin(BDRV_SUBTREE_DRAIN, bs_b);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 2);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 2);
+    g_assert_cmpint(backing->quiesce_counter, ==, 2);
+    g_assert_cmpint(a_s->drain_count, ==, 2);
+    g_assert_cmpint(b_s->drain_count, ==, 2);
+    g_assert_cmpint(backing_s->drain_count, ==, 2);
+
+    do_drain_end(BDRV_SUBTREE_DRAIN, bs_b);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 1);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 1);
+    g_assert_cmpint(backing->quiesce_counter, ==, 1);
+    g_assert_cmpint(a_s->drain_count, ==, 1);
+    g_assert_cmpint(b_s->drain_count, ==, 1);
+    g_assert_cmpint(backing_s->drain_count, ==, 1);
+
+    do_drain_end(BDRV_SUBTREE_DRAIN, bs_a);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 0);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
+    g_assert_cmpint(backing->quiesce_counter, ==, 0);
+    g_assert_cmpint(a_s->drain_count, ==, 0);
+    g_assert_cmpint(b_s->drain_count, ==, 0);
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
+
+    bdrv_unref(backing);
+    bdrv_unref(bs_a);
+    bdrv_unref(bs_b);
+    blk_unref(blk_a);
+    blk_unref(blk_b);
+}
+
 
 typedef struct TestBlockJob {
     BlockJob common;
@@ -479,6 +552,7 @@ int main(int argc, char **argv)
     g_test_add_func("/bdrv-drain/quiesce/coroutine", test_quiesce_coroutine);
 
     g_test_add_func("/bdrv-drain/nested", test_nested);
+    g_test_add_func("/bdrv-drain/multiparent", test_multiparent);
 
     g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
     g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
-- 
2.13.6

* [Qemu-devel] [PATCH 16/19] block: Allow graph changes in subtree drained section
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (14 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 15/19] test-bdrv-drain: Recursive draining with multiple parents Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:51   ` Paolo Bonzini
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 17/19] test-bdrv-drain: Test graph changes in " Kevin Wolf
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

We need to remember how many of the drain sections a node is in were
recursive (i.e. subtree drains rather than single-node drains), so that
they can be correctly applied when children are added or removed during
the drained section.

With this change, it is safe to modify the graph even inside a
bdrv_subtree_drained_begin/end() section.
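The bookkeeping can be sketched with a small toy model (an illustration
with made-up types, not QEMU code): each node counts its active drain
sections and, separately, how many of them were recursive; attaching a
child inside a recursive section re-applies those sections to the child,
and detaching ends them again.

```c
#include <assert.h>

#define MAX_CHILDREN 4

typedef struct Node {
    int quiesce_counter;
    int recursive_quiesce_counter;
    struct Node *children[MAX_CHILDREN];
    int nb_children;
} Node;

static void drained_begin(Node *n, int recursive)
{
    n->quiesce_counter++;
    if (recursive) {
        n->recursive_quiesce_counter++;
        for (int i = 0; i < n->nb_children; i++) {
            drained_begin(n->children[i], 1);
        }
    }
}

static void drained_end(Node *n, int recursive)
{
    n->quiesce_counter--;
    if (recursive) {
        n->recursive_quiesce_counter--;
        for (int i = 0; i < n->nb_children; i++) {
            drained_end(n->children[i], 1);
        }
    }
}

/* Mirrors the idea of bdrv_child_cb_attach(): the new child inherits
 * one drain per recursive section of the parent. */
static void attach_child(Node *parent, Node *child)
{
    parent->children[parent->nb_children++] = child;
    for (int i = 0; i < parent->recursive_quiesce_counter; i++) {
        drained_begin(child, 1);
    }
}

/* Mirrors bdrv_child_cb_detach(): the inherited drains are ended. */
static void detach_child(Node *parent, Node *child)
{
    int i;
    for (i = 0; i < parent->nb_children; i++) {
        if (parent->children[i] == child) {
            parent->children[i] = parent->children[--parent->nb_children];
            break;
        }
    }
    for (i = 0; i < parent->recursive_quiesce_counter; i++) {
        drained_end(child, 1);
    }
}
```

With this, a child attached while the parent sits in two recursive drain
sections ends up with quiesce_counter == 2, and returns to 0 on detach.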

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/block.h     |  7 +++++--
 include/block/block_int.h |  2 ++
 block.c                   | 40 +++++++++++++++++++++++++++++++++++++---
 block/io.c                | 15 ++++++---------
 4 files changed, 50 insertions(+), 14 deletions(-)

diff --git a/include/block/block.h b/include/block/block.h
index de9c5a2b9b..c8e6163cf7 100644
--- a/include/block/block.h
+++ b/include/block/block.h
@@ -610,8 +610,6 @@ void bdrv_drained_begin(BlockDriverState *bs);
 /**
  * Like bdrv_drained_begin, but recursively begins a quiesced section for
  * exclusive access to all child nodes as well.
- *
- * Graph changes are not allowed during a subtree drain section.
  */
 void bdrv_subtree_drained_begin(BlockDriverState *bs);
 
@@ -627,6 +625,11 @@ void bdrv_drained_end(BlockDriverState *bs);
  */
 void bdrv_subtree_drained_end(BlockDriverState *bs);
 
+void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
+                           BdrvChild *parent);
+void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
+                         BdrvChild *parent);
+
 void bdrv_add_child(BlockDriverState *parent, BlockDriverState *child,
                     Error **errp);
 void bdrv_del_child(BlockDriverState *parent, BdrvChild *child, Error **errp);
diff --git a/include/block/block_int.h b/include/block/block_int.h
index e107163594..c2d3956633 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -717,6 +717,8 @@ struct BlockDriverState {
 
     /* Accessed with atomic ops.  */
     int quiesce_counter;
+    int recursive_quiesce_counter;
+
     unsigned int write_gen;               /* Current data generation */
 
     /* Protected by reqs_lock.  */
diff --git a/block.c b/block.c
index 5867a2a2a0..37f8639fe9 100644
--- a/block.c
+++ b/block.c
@@ -822,6 +822,26 @@ static void bdrv_child_cb_drained_end(BdrvChild *child)
     bdrv_drained_end(bs);
 }
 
+static void bdrv_child_cb_attach(BdrvChild *child)
+{
+    BlockDriverState *bs = child->opaque;
+    int i;
+
+    for (i = 0; i < bs->recursive_quiesce_counter; i++) {
+        bdrv_do_drained_begin(child->bs, true, child);
+    }
+}
+
+static void bdrv_child_cb_detach(BdrvChild *child)
+{
+    BlockDriverState *bs = child->opaque;
+    int i;
+
+    for (i = 0; i < bs->recursive_quiesce_counter; i++) {
+        bdrv_do_drained_end(child->bs, true, child);
+    }
+}
+
 static int bdrv_child_cb_inactivate(BdrvChild *child)
 {
     BlockDriverState *bs = child->opaque;
@@ -889,6 +909,8 @@ const BdrvChildRole child_file = {
     .inherit_options = bdrv_inherited_options,
     .drained_begin   = bdrv_child_cb_drained_begin,
     .drained_end     = bdrv_child_cb_drained_end,
+    .attach          = bdrv_child_cb_attach,
+    .detach          = bdrv_child_cb_detach,
     .inactivate      = bdrv_child_cb_inactivate,
 };
 
@@ -911,6 +933,8 @@ const BdrvChildRole child_format = {
     .inherit_options = bdrv_inherited_fmt_options,
     .drained_begin   = bdrv_child_cb_drained_begin,
     .drained_end     = bdrv_child_cb_drained_end,
+    .attach          = bdrv_child_cb_attach,
+    .detach          = bdrv_child_cb_detach,
     .inactivate      = bdrv_child_cb_inactivate,
 };
 
@@ -953,6 +977,8 @@ static void bdrv_backing_attach(BdrvChild *c)
                     parent->backing_blocker);
     bdrv_op_unblock(backing_hd, BLOCK_OP_TYPE_BACKUP_TARGET,
                     parent->backing_blocker);
+
+    bdrv_child_cb_attach(c);
 }
 
 static void bdrv_backing_detach(BdrvChild *c)
@@ -963,6 +989,8 @@ static void bdrv_backing_detach(BdrvChild *c)
     bdrv_op_unblock_all(c->bs, parent->backing_blocker);
     error_free(parent->backing_blocker);
     parent->backing_blocker = NULL;
+
+    bdrv_child_cb_detach(c);
 }
 
 /*
@@ -1978,14 +2006,17 @@ static void bdrv_replace_child_noperm(BdrvChild *child,
         assert(bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs));
     }
     if (old_bs) {
+        /* Detach first so that the recursive drain sections coming from @child
+         * are already gone and we only end the drain sections that came from
+         * elsewhere. */
+        if (child->role->detach) {
+            child->role->detach(child);
+        }
         if (old_bs->quiesce_counter && child->role->drained_end) {
             for (i = 0; i < old_bs->quiesce_counter; i++) {
                 child->role->drained_end(child);
             }
         }
-        if (child->role->detach) {
-            child->role->detach(child);
-        }
         QLIST_REMOVE(child, next_parent);
     }
 
@@ -1999,6 +2030,9 @@ static void bdrv_replace_child_noperm(BdrvChild *child,
             }
         }
 
+        /* Attach only after starting new drained sections, so that recursive
+         * drain sections coming from @child don't get an extra .drained_begin
+         * callback. */
         if (child->role->attach) {
             child->role->attach(child);
         }
diff --git a/block/io.c b/block/io.c
index 6befef166d..2eb0783506 100644
--- a/block/io.c
+++ b/block/io.c
@@ -219,11 +219,6 @@ static bool bdrv_drain_recurse(BlockDriverState *bs)
     return waited;
 }
 
-static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
-                                  BdrvChild *parent);
-static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
-                                BdrvChild *parent);
-
 static void bdrv_co_drain_bh_cb(void *opaque)
 {
     BdrvCoDrainData *data = opaque;
@@ -270,8 +265,8 @@ static void coroutine_fn bdrv_co_yield_to_drain(BlockDriverState *bs,
     assert(data.done);
 }
 
-static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
-                                  BdrvChild *parent)
+void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
+                           BdrvChild *parent)
 {
     BdrvChild *child, *next;
 
@@ -290,6 +285,7 @@ static void bdrv_do_drained_begin(BlockDriverState *bs, bool recursive,
     bdrv_drain_recurse(bs);
 
     if (recursive) {
+        bs->recursive_quiesce_counter++;
         QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
             bdrv_do_drained_begin(child->bs, true, child);
         }
@@ -306,8 +302,8 @@ void bdrv_subtree_drained_begin(BlockDriverState *bs)
     bdrv_do_drained_begin(bs, true, NULL);
 }
 
-static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
-                                BdrvChild *parent)
+void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
+                         BdrvChild *parent)
 {
     BdrvChild *child, *next;
     int old_quiesce_counter;
@@ -327,6 +323,7 @@ static void bdrv_do_drained_end(BlockDriverState *bs, bool recursive,
     }
 
     if (recursive) {
+        bs->recursive_quiesce_counter--;
         QLIST_FOREACH_SAFE(child, &bs->children, next, next) {
             bdrv_do_drained_end(child->bs, true, child);
         }
-- 
2.13.6


* [Qemu-devel] [PATCH 17/19] test-bdrv-drain: Test graph changes in drained section
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (15 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 16/19] block: Allow graph changes in subtree drained section Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 18/19] commit: Simplify reopen of base Kevin Wolf
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 tests/test-bdrv-drain.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 80 insertions(+)

diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index b7322c5826..6803e64950 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -411,6 +411,85 @@ static void test_multiparent(void)
     blk_unref(blk_b);
 }
 
+static void test_graph_change(void)
+{
+    BlockBackend *blk_a, *blk_b;
+    BlockDriverState *bs_a, *bs_b, *backing;
+    BDRVTestState *a_s, *b_s, *backing_s;
+
+    blk_a = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs_a = bdrv_new_open_driver(&bdrv_test, "test-node-a", BDRV_O_RDWR,
+                                &error_abort);
+    a_s = bs_a->opaque;
+    blk_insert_bs(blk_a, bs_a, &error_abort);
+
+    blk_b = blk_new(BLK_PERM_ALL, BLK_PERM_ALL);
+    bs_b = bdrv_new_open_driver(&bdrv_test, "test-node-b", BDRV_O_RDWR,
+                                &error_abort);
+    b_s = bs_b->opaque;
+    blk_insert_bs(blk_b, bs_b, &error_abort);
+
+    backing = bdrv_new_open_driver(&bdrv_test, "backing", 0, &error_abort);
+    backing_s = backing->opaque;
+    bdrv_set_backing_hd(bs_a, backing, &error_abort);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 0);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
+    g_assert_cmpint(backing->quiesce_counter, ==, 0);
+    g_assert_cmpint(a_s->drain_count, ==, 0);
+    g_assert_cmpint(b_s->drain_count, ==, 0);
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
+
+    do_drain_begin(BDRV_SUBTREE_DRAIN, bs_a);
+    do_drain_begin(BDRV_SUBTREE_DRAIN, bs_a);
+    do_drain_begin(BDRV_SUBTREE_DRAIN, bs_a);
+    do_drain_begin(BDRV_SUBTREE_DRAIN, bs_b);
+    do_drain_begin(BDRV_SUBTREE_DRAIN, bs_b);
+
+    bdrv_set_backing_hd(bs_b, backing, &error_abort);
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 5);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 5);
+    g_assert_cmpint(backing->quiesce_counter, ==, 5);
+    g_assert_cmpint(a_s->drain_count, ==, 5);
+    g_assert_cmpint(b_s->drain_count, ==, 5);
+    g_assert_cmpint(backing_s->drain_count, ==, 5);
+
+    bdrv_set_backing_hd(bs_b, NULL, &error_abort);
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 3);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 2);
+    g_assert_cmpint(backing->quiesce_counter, ==, 3);
+    g_assert_cmpint(a_s->drain_count, ==, 3);
+    g_assert_cmpint(b_s->drain_count, ==, 2);
+    g_assert_cmpint(backing_s->drain_count, ==, 3);
+
+    bdrv_set_backing_hd(bs_b, backing, &error_abort);
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 5);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 5);
+    g_assert_cmpint(backing->quiesce_counter, ==, 5);
+    g_assert_cmpint(a_s->drain_count, ==, 5);
+    g_assert_cmpint(b_s->drain_count, ==, 5);
+    g_assert_cmpint(backing_s->drain_count, ==, 5);
+
+    do_drain_end(BDRV_SUBTREE_DRAIN, bs_b);
+    do_drain_end(BDRV_SUBTREE_DRAIN, bs_b);
+    do_drain_end(BDRV_SUBTREE_DRAIN, bs_a);
+    do_drain_end(BDRV_SUBTREE_DRAIN, bs_a);
+    do_drain_end(BDRV_SUBTREE_DRAIN, bs_a);
+
+    g_assert_cmpint(bs_a->quiesce_counter, ==, 0);
+    g_assert_cmpint(bs_b->quiesce_counter, ==, 0);
+    g_assert_cmpint(backing->quiesce_counter, ==, 0);
+    g_assert_cmpint(a_s->drain_count, ==, 0);
+    g_assert_cmpint(b_s->drain_count, ==, 0);
+    g_assert_cmpint(backing_s->drain_count, ==, 0);
+
+    bdrv_unref(backing);
+    bdrv_unref(bs_a);
+    bdrv_unref(bs_b);
+    blk_unref(blk_a);
+    blk_unref(blk_b);
+}
+
 
 typedef struct TestBlockJob {
     BlockJob common;
@@ -553,6 +632,7 @@ int main(int argc, char **argv)
 
     g_test_add_func("/bdrv-drain/nested", test_nested);
     g_test_add_func("/bdrv-drain/multiparent", test_multiparent);
+    g_test_add_func("/bdrv-drain/graph-change", test_graph_change);
 
     g_test_add_func("/bdrv-drain/blockjob/drain_all", test_blockjob_drain_all);
     g_test_add_func("/bdrv-drain/blockjob/drain", test_blockjob_drain);
-- 
2.13.6


* [Qemu-devel] [PATCH 18/19] commit: Simplify reopen of base
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (16 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 17/19] test-bdrv-drain: Test graph changes in " Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 13:32   ` Fam Zheng
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 19/19] block: Keep nodes drained between reopen_queue/multiple Kevin Wolf
  2017-12-20 10:54 ` [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Paolo Bonzini
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

Since commit bde70715, base is the only node that is reopened in
commit_start(). This means that the code, which still involves an
explicit BlockReopenQueue, can now be simplified by using bdrv_reopen().

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block/commit.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/block/commit.c b/block/commit.c
index c5327551ce..bb6c904704 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -277,7 +277,6 @@ void commit_start(const char *job_id, BlockDriverState *bs,
                   const char *filter_node_name, Error **errp)
 {
     CommitBlockJob *s;
-    BlockReopenQueue *reopen_queue = NULL;
     int orig_base_flags;
     BlockDriverState *iter;
     BlockDriverState *commit_top_bs = NULL;
@@ -299,12 +298,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     /* convert base to r/w, if necessary */
     orig_base_flags = bdrv_get_flags(base);
     if (!(orig_base_flags & BDRV_O_RDWR)) {
-        reopen_queue = bdrv_reopen_queue(reopen_queue, base, NULL,
-                                         orig_base_flags | BDRV_O_RDWR);
-    }
-
-    if (reopen_queue) {
-        bdrv_reopen_multiple(bdrv_get_aio_context(bs), reopen_queue, &local_err);
+        bdrv_reopen(base, orig_base_flags | BDRV_O_RDWR, &local_err);
         if (local_err != NULL) {
             error_propagate(errp, local_err);
             goto fail;
-- 
2.13.6


* [Qemu-devel] [PATCH 19/19] block: Keep nodes drained between reopen_queue/multiple
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (17 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 18/19] commit: Simplify reopen of base Kevin Wolf
@ 2017-12-20 10:34 ` Kevin Wolf
  2017-12-20 13:34   ` Fam Zheng
  2017-12-20 10:54 ` [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Paolo Bonzini
  19 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 10:34 UTC (permalink / raw)
  To: qemu-block; +Cc: kwolf, famz, pbonzini, qemu-devel

The bdrv_reopen*() implementation doesn't like it if the graph is
changed between queuing nodes for reopen and actually reopening them
(one of the reasons is that queuing can be recursive).

So instead of draining the device only in bdrv_reopen_multiple(),
require that callers already drained all affected nodes, and assert this
in bdrv_reopen_queue().
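The new contract can be sketched as a toy model (an illustration with
made-up types, not QEMU's actual API): queuing and reopening both assert
that the caller holds a drained section around the whole sequence.

```c
#include <assert.h>

typedef struct Node {
    int quiesce_counter;
} Node;

static void subtree_drained_begin(Node *n)
{
    n->quiesce_counter++;
}

static void subtree_drained_end(Node *n)
{
    n->quiesce_counter--;
}

static void reopen_queue(Node *n)
{
    /* Mirrors the new assertion in bdrv_reopen_queue_child(): the
     * caller must already have drained the node. */
    assert(n->quiesce_counter > 0);
}

static void reopen_multiple(Node *n)
{
    /* Every queued node must still be drained at this point, instead
     * of reopen_multiple() draining internally as before. */
    assert(n->quiesce_counter > 0);
}
```

A caller therefore follows the pattern begin, queue, reopen, end, as the
qemu-io-cmds.c hunk below does.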

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 block.c             | 23 ++++++++++++++++-------
 block/replication.c |  6 ++++++
 qemu-io-cmds.c      |  3 +++
 3 files changed, 25 insertions(+), 7 deletions(-)

diff --git a/block.c b/block.c
index 37f8639fe9..cfd39bea10 100644
--- a/block.c
+++ b/block.c
@@ -2774,6 +2774,7 @@ BlockDriverState *bdrv_open(const char *filename, const char *reference,
  * returns a pointer to bs_queue, which is either the newly allocated
  * bs_queue, or the existing bs_queue being used.
  *
+ * bs must be drained between bdrv_reopen_queue() and bdrv_reopen_multiple().
  */
 static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
                                                  BlockDriverState *bs,
@@ -2789,6 +2790,11 @@ static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
     BdrvChild *child;
     QDict *old_options, *explicit_options;
 
+    /* Make sure that the caller remembered to use a drained section. This is
+     * important to avoid graph changes between the recursive queuing here and
+     * bdrv_reopen_multiple(). */
+    assert(bs->quiesce_counter > 0);
+
     if (bs_queue == NULL) {
         bs_queue = g_new0(BlockReopenQueue, 1);
         QSIMPLEQ_INIT(bs_queue);
@@ -2913,6 +2919,8 @@ BlockReopenQueue *bdrv_reopen_queue(BlockReopenQueue *bs_queue,
  * If all devices prepare successfully, then the changes are committed
  * to all devices.
  *
+ * All affected nodes must be drained between bdrv_reopen_queue() and
+ * bdrv_reopen_multiple().
  */
 int bdrv_reopen_multiple(AioContext *ctx, BlockReopenQueue *bs_queue, Error **errp)
 {
@@ -2922,11 +2930,8 @@ int bdrv_reopen_multiple(AioContext *ctx, BlockReopenQueue *bs_queue, Error **er
 
     assert(bs_queue != NULL);
 
-    aio_context_release(ctx);
-    bdrv_drain_all_begin();
-    aio_context_acquire(ctx);
-
     QSIMPLEQ_FOREACH(bs_entry, bs_queue, entry) {
+        assert(bs_entry->state.bs->quiesce_counter > 0);
         if (bdrv_reopen_prepare(&bs_entry->state, bs_queue, &local_err)) {
             error_propagate(errp, local_err);
             goto cleanup;
@@ -2955,8 +2960,6 @@ cleanup:
     }
     g_free(bs_queue);
 
-    bdrv_drain_all_end();
-
     return ret;
 }
 
@@ -2966,12 +2969,18 @@ int bdrv_reopen(BlockDriverState *bs, int bdrv_flags, Error **errp)
 {
     int ret = -1;
     Error *local_err = NULL;
-    BlockReopenQueue *queue = bdrv_reopen_queue(NULL, bs, NULL, bdrv_flags);
+    BlockReopenQueue *queue;
 
+    bdrv_subtree_drained_begin(bs);
+
+    queue = bdrv_reopen_queue(NULL, bs, NULL, bdrv_flags);
     ret = bdrv_reopen_multiple(bdrv_get_aio_context(bs), queue, &local_err);
     if (local_err != NULL) {
         error_propagate(errp, local_err);
     }
+
+    bdrv_subtree_drained_end(bs);
+
     return ret;
 }
 
diff --git a/block/replication.c b/block/replication.c
index e41e293d2b..b1ea3caa4b 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -394,6 +394,9 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
         new_secondary_flags = s->orig_secondary_flags;
     }
 
+    bdrv_subtree_drained_begin(s->hidden_disk->bs);
+    bdrv_subtree_drained_begin(s->secondary_disk->bs);
+
     if (orig_hidden_flags != new_hidden_flags) {
         reopen_queue = bdrv_reopen_queue(reopen_queue, s->hidden_disk->bs, NULL,
                                          new_hidden_flags);
@@ -409,6 +412,9 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
                              reopen_queue, &local_err);
         error_propagate(errp, local_err);
     }
+
+    bdrv_subtree_drained_end(s->hidden_disk->bs);
+    bdrv_subtree_drained_end(s->secondary_disk->bs);
 }
 
 static void backup_job_cleanup(BlockDriverState *bs)
diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
index de8e3de726..a6a70fc3dc 100644
--- a/qemu-io-cmds.c
+++ b/qemu-io-cmds.c
@@ -2013,8 +2013,11 @@ static int reopen_f(BlockBackend *blk, int argc, char **argv)
     opts = qopts ? qemu_opts_to_qdict(qopts, NULL) : NULL;
     qemu_opts_reset(&reopen_opts);
 
+    bdrv_subtree_drained_begin(bs);
     brq = bdrv_reopen_queue(NULL, bs, opts, flags);
     bdrv_reopen_multiple(bdrv_get_aio_context(bs), brq, &local_err);
+    bdrv_subtree_drained_end(bs);
+
     if (local_err) {
         error_report_err(local_err);
     } else {
-- 
2.13.6


* Re: [Qemu-devel] [PATCH 16/19] block: Allow graph changes in subtree drained section
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 16/19] block: Allow graph changes in subtree drained section Kevin Wolf
@ 2017-12-20 10:51   ` Paolo Bonzini
  2017-12-20 11:18     ` Kevin Wolf
  0 siblings, 1 reply; 34+ messages in thread
From: Paolo Bonzini @ 2017-12-20 10:51 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: famz, qemu-devel

On 20/12/2017 11:34, Kevin Wolf wrote:
>      .inherit_options = bdrv_inherited_options,
>      .drained_begin   = bdrv_child_cb_drained_begin,
>      .drained_end     = bdrv_child_cb_drained_end,
> +    .attach          = bdrv_child_cb_attach,
> +    .detach          = bdrv_child_cb_detach,
>      .inactivate      = bdrv_child_cb_inactivate,
>  };
>  
> @@ -911,6 +933,8 @@ const BdrvChildRole child_format = {
>      .inherit_options = bdrv_inherited_fmt_options,
>      .drained_begin   = bdrv_child_cb_drained_begin,
>      .drained_end     = bdrv_child_cb_drained_end,
> +    .attach          = bdrv_child_cb_attach,
> +    .detach          = bdrv_child_cb_detach,
>      .inactivate      = bdrv_child_cb_inactivate,

Is there any case of a BdrvChildRole that doesn't want these callbacks?
Maybe the functions should be called after ->attach and before ->detach
(e.g. bdrv_{,un}apply_subtree_drain), rather than modifying the
BdrvChildRole implementations.

Then they can be put in block/io.c, and bdrv_do_drained_* can remain
static.  (I would also consider extracting block/drain.c, but it is
painful to do it now that you have this nice series---so let's do it after).

Paolo


* Re: [Qemu-devel] [PATCH 14/19] test-bdrv-drain: Test behaviour in coroutine context
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 14/19] test-bdrv-drain: Test behaviour in coroutine context Kevin Wolf
@ 2017-12-20 10:53   ` Paolo Bonzini
  0 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2017-12-20 10:53 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: famz, qemu-devel

On 20/12/2017 11:34, Kevin Wolf wrote:
> If bdrv_do_drained_begin/end() are called in coroutine context, they
> first use a BH to get out of the coroutine context. Call some existing
> tests again from a coroutine to cover this code path.
> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  tests/test-bdrv-drain.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 51 insertions(+)
> 
> diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
> index c00d96bb2f..a1e5693f33 100644
> --- a/tests/test-bdrv-drain.c
> +++ b/tests/test-bdrv-drain.c
> @@ -188,6 +188,30 @@ static void test_drv_cb_drain_subtree(void)
>      test_drv_cb_common(BDRV_SUBTREE_DRAIN, true);
>  }
>  
> +static coroutine_fn void test_drv_cb_coroutine_entry(void *opaque)
> +{
> +    bool *done = opaque;
> +
> +    // XXX bdrv_drain_all() doesn't work in coroutine context
> +    //test_drv_cb_drain_all();
> +    test_drv_cb_drain();
> +    test_drv_cb_drain_subtree();
> +
> +    *done = true;
> +}
> +
> +static void test_drv_cb_coroutine(void)
> +{
> +    Coroutine *co;
> +    bool done;
> +
> +    co = qemu_coroutine_create(test_drv_cb_coroutine_entry, &done);
> +    qemu_coroutine_enter(co);
> +    while (!done) {
> +        aio_poll(qemu_get_aio_context(), true);
> +    }
> +}
> +
>  static void test_quiesce_common(enum drain_type drain_type, bool recursive)
>  {
>      BlockBackend *blk;
> @@ -235,6 +259,30 @@ static void test_quiesce_drain_subtree(void)
>      test_quiesce_common(BDRV_SUBTREE_DRAIN, true);
>  }
>  
> +static coroutine_fn void test_quiesce_coroutine_entry(void *opaque)
> +{
> +    bool *done = opaque;
> +
> +    // XXX bdrv_drain_all() doesn't work in coroutine context
> +    //test_quiesce_drain_all();
> +    test_quiesce_drain();
> +    test_quiesce_drain_subtree();
> +
> +    *done = true;
> +}
> +
> +static void test_quiesce_coroutine(void)
> +{
> +    Coroutine *co;
> +    bool done;
> +
> +    co = qemu_coroutine_create(test_quiesce_coroutine_entry, &done);
> +    qemu_coroutine_enter(co);
> +    while (!done) {
> +        aio_poll(qemu_get_aio_context(), true);
> +    }
> +}
> +
>  static void test_nested(void)
>  {
>      BlockBackend *blk;
> @@ -421,11 +469,14 @@ int main(int argc, char **argv)
>      g_test_add_func("/bdrv-drain/driver-cb/drain", test_drv_cb_drain);
>      g_test_add_func("/bdrv-drain/driver-cb/drain_subtree",
>                      test_drv_cb_drain_subtree);
> +    g_test_add_func("/bdrv-drain/driver-cb/coroutine", test_drv_cb_coroutine);
> +
>  
>      g_test_add_func("/bdrv-drain/quiesce/drain_all", test_quiesce_drain_all);
>      g_test_add_func("/bdrv-drain/quiesce/drain", test_quiesce_drain);
>      g_test_add_func("/bdrv-drain/quiesce/drain_subtree",
>                      test_quiesce_drain_subtree);
> +    g_test_add_func("/bdrv-drain/quiesce/coroutine", test_quiesce_coroutine);

Maybe split the two sub-tests of test_{drv_cb,quiesce}_coroutine into
separate g_test_add_func?

Thanks,

Paolo

>      g_test_add_func("/bdrv-drain/nested", test_nested);
>  
> 


* Re: [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2
  2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
                   ` (18 preceding siblings ...)
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 19/19] block: Keep nodes drained between reopen_queue/multiple Kevin Wolf
@ 2017-12-20 10:54 ` Paolo Bonzini
  19 siblings, 0 replies; 34+ messages in thread
From: Paolo Bonzini @ 2017-12-20 10:54 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: famz, qemu-devel

On 20/12/2017 11:33, Kevin Wolf wrote:
> This is the second part of my work to fix drain and hopefully to prevent
> it from attracting bugs as much as it did in the past. There is
> definitely at least a third part coming after this, see below.
> 
> In this series, the following improvements are made:
> * Fix several bugs and inconsistencies
> * Create lots of unit tests for the drain functions
> * Introduce bdrv_subtree_drained_begin/end() to drain a whole subtree
>   rather than only a single node
> * Use this to make bdrv_reopen() safe (graph changes in callbacks
>   called during its internal bdrv_drain_all() frequently broke it)
> 
> Planned for part three: Make graph modifications in callbacks safe by
> avoiding BDRV_POLL_WHILE() calls while we're recursing through the
> graph. We can recurse before BDRV_POLL_WHILE(), after it or while
> evaluating its condition, but never call any callbacks while we're
> iterating child or parent lists.

This is very nice, thanks for it!  I only had two comments, one of them
pretty minor.

Paolo

> Fam Zheng (1):
>   block: Remove unused bdrv_requests_pending
> 
> Kevin Wolf (18):
>   block: Assert drain_all is only called from main AioContext
>   block: Make bdrv_drain() driver callbacks non-recursive
>   test-bdrv-drain: Test callback for bdrv_drain
>   test-bdrv-drain: Test bs->quiesce_counter
>   blockjob: Pause job on draining any job BDS
>   test-bdrv-drain: Test drain vs. block jobs
>   block: Don't block_job_pause_all() in bdrv_drain_all()
>   block: Nested drain_end must still call callbacks
>   test-bdrv-drain: Test nested drain sections
>   block: Don't notify parents in drain call chain
>   block: Add bdrv_subtree_drained_begin/end()
>   test-bdrv-drain: Tests for bdrv_subtree_drain
>   test-bdrv-drain: Test behaviour in coroutine context
>   test-bdrv-drain: Recursive draining with multiple parents
>   block: Allow graph changes in subtree drained section
>   test-bdrv-drain: Test graph changes in drained section
>   commit: Simplify reopen of base
>   block: Keep nodes drained between reopen_queue/multiple
> 
>  include/block/block.h     |  20 +-
>  include/block/block_int.h |   3 +-
>  block.c                   |  74 +++++--
>  block/commit.c            |   8 +-
>  block/io.c                | 128 ++++++++----
>  block/replication.c       |   6 +
>  blockjob.c                |  22 +-
>  qemu-io-cmds.c            |   3 +
>  tests/test-bdrv-drain.c   | 520 +++++++++++++++++++++++++++++++++++++++++++++-
>  9 files changed, 696 insertions(+), 88 deletions(-)
> 


* Re: [Qemu-devel] [PATCH 16/19] block: Allow graph changes in subtree drained section
  2017-12-20 10:51   ` Paolo Bonzini
@ 2017-12-20 11:18     ` Kevin Wolf
  2017-12-20 11:31       ` [Qemu-devel] [Qemu-block] " Paolo Bonzini
  0 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 11:18 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-block, famz, qemu-devel

Am 20.12.2017 um 11:51 hat Paolo Bonzini geschrieben:
> On 20/12/2017 11:34, Kevin Wolf wrote:
> >      .inherit_options = bdrv_inherited_options,
> >      .drained_begin   = bdrv_child_cb_drained_begin,
> >      .drained_end     = bdrv_child_cb_drained_end,
> > +    .attach          = bdrv_child_cb_attach,
> > +    .detach          = bdrv_child_cb_detach,
> >      .inactivate      = bdrv_child_cb_inactivate,
> >  };
> >  
> > @@ -911,6 +933,8 @@ const BdrvChildRole child_format = {
> >      .inherit_options = bdrv_inherited_fmt_options,
> >      .drained_begin   = bdrv_child_cb_drained_begin,
> >      .drained_end     = bdrv_child_cb_drained_end,
> > +    .attach          = bdrv_child_cb_attach,
> > +    .detach          = bdrv_child_cb_detach,
> >      .inactivate      = bdrv_child_cb_inactivate,
> 
> Is there any case of a BdrvChildRole that doesn't want these callbacks?
> Maybe the functions should be called after ->attach and before ->detach
> (e.g. bdrv_{,un}apply_subtree_drain), rather than modifying the
> BdrvChildRole implementations.

At first I intended to implement it directly in
bdrv_replace_child_noperm(), but the thing is that you need the
bs->recursive_quiesce_counter of the parent BDS - but not all parents of
a BdrvChild are even a BDS. It could also be a BB root child or a block
job child. This is why we only have a void *opaque rather than a BDS
pointer for the parent.

The other option would be an additional BdrvChildRole callback like
.get_recursive_quiesce_counter, but compared to that, I like some code
in .attach/.detach better.

> Then they can be put in block/io.c, and bdrv_do_drained_* can remain
> static.  (I would also consider extracting block/drain.c, but it is
> painful to do it now that you have this nice series---so let's do it after).

I can keep that in mind for part 3 (or 4, whatever it may become).

Kevin


* Re: [Qemu-devel] [Qemu-block] [PATCH 16/19] block: Allow graph changes in subtree drained section
  2017-12-20 11:18     ` Kevin Wolf
@ 2017-12-20 11:31       ` Paolo Bonzini
  2017-12-20 13:04         ` Kevin Wolf
  0 siblings, 1 reply; 34+ messages in thread
From: Paolo Bonzini @ 2017-12-20 11:31 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: famz, qemu-devel, qemu-block

On 20/12/2017 12:18, Kevin Wolf wrote:
> Am 20.12.2017 um 11:51 hat Paolo Bonzini geschrieben:
>> On 20/12/2017 11:34, Kevin Wolf wrote:
>>>      .inherit_options = bdrv_inherited_options,
>>>      .drained_begin   = bdrv_child_cb_drained_begin,
>>>      .drained_end     = bdrv_child_cb_drained_end,
>>> +    .attach          = bdrv_child_cb_attach,
>>> +    .detach          = bdrv_child_cb_detach,
>>>      .inactivate      = bdrv_child_cb_inactivate,
>>>  };
>>>  
>>> @@ -911,6 +933,8 @@ const BdrvChildRole child_format = {
>>>      .inherit_options = bdrv_inherited_fmt_options,
>>>      .drained_begin   = bdrv_child_cb_drained_begin,
>>>      .drained_end     = bdrv_child_cb_drained_end,
>>> +    .attach          = bdrv_child_cb_attach,
>>> +    .detach          = bdrv_child_cb_detach,
>>>      .inactivate      = bdrv_child_cb_inactivate,
>>
>> Is there any case of a BdrvChildRole that doesn't want these callbacks?
>> Maybe the functions should be called after ->attach and before ->detach
>> (e.g. bdrv_{,un}apply_subtree_drain), rather than modifying the
>> BdrvChildRole implementations.
> 
> At first I intended to implement it directly in
> bdrv_replace_child_noperm(), but the thing is that you need the
> bs->recursive_quiesce_counter of the parent BDS - but not all parents of
> a BdrvChild are even a BDS. It could also be a BB root child or a block
> job child. This is why we only have a void *opaque rather than a BDS
> pointer for the parent.
> 
> The other option would be an additional BdrvChildRole callback like
> .get_recursive_quiesce_counter, but compared to that, I like some code
> in .attach/.detach better.

I see.  What about keeping the callbacks, but exporting

void bdrv_apply_subtree_drain(BlockDriverState *child,
			      BlockDriverState *new_parent);
void bdrv_unapply_subtree_drain(BlockDriverState *child,
			        BlockDriverState *old_parent);

instead of bdrv_do_drained_{begin,end}?

Thanks,

Paolo

>> Then they can be put in block/io.c, and bdrv_do_drained_* can remain
>> static.  (I would also consider extracting block/drain.c, but it is
>> painful to do it now that you have this nice series---so let's do it after).
> 
> I can keep that in mind for part 3 (or 4, whatever it may become).
> 
> Kevin
> 


* Re: [Qemu-devel] [PATCH 02/19] block: Assert drain_all is only called from main AioContext
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 02/19] block: Assert drain_all is only called from main AioContext Kevin Wolf
@ 2017-12-20 12:14   ` Fam Zheng
  0 siblings, 0 replies; 34+ messages in thread
From: Fam Zheng @ 2017-12-20 12:14 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, pbonzini, qemu-devel

On Wed, 12/20 11:33, Kevin Wolf wrote:
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  block/io.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/block/io.c b/block/io.c
> index cf780c3cb0..b94740b8ff 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -330,6 +330,12 @@ void bdrv_drain_all_begin(void)
>      BdrvNextIterator it;
>      GSList *aio_ctxs = NULL, *ctx;
>  
> +    /* BDRV_POLL_WHILE() for a node can only be called from its own I/O thread
> +     * or the main loop AioContext. We potentially use BDRV_POLL_WHILE() on
> +     * nodes in several different AioContexts, so make sure we're in the main
> +     * context. */
> +    assert(qemu_get_current_aio_context() == qemu_get_aio_context());
> +
>      block_job_pause_all();
>  
>      for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
> -- 
> 2.13.6
> 

Reviewed-by: Fam Zheng <famz@redhat.com>


* Re: [Qemu-devel] [Qemu-block] [PATCH 16/19] block: Allow graph changes in subtree drained section
  2017-12-20 11:31       ` [Qemu-devel] [Qemu-block] " Paolo Bonzini
@ 2017-12-20 13:04         ` Kevin Wolf
  0 siblings, 0 replies; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 13:04 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: famz, qemu-devel, qemu-block

Am 20.12.2017 um 12:31 hat Paolo Bonzini geschrieben:
> On 20/12/2017 12:18, Kevin Wolf wrote:
> > Am 20.12.2017 um 11:51 hat Paolo Bonzini geschrieben:
> >> On 20/12/2017 11:34, Kevin Wolf wrote:
> >>>      .inherit_options = bdrv_inherited_options,
> >>>      .drained_begin   = bdrv_child_cb_drained_begin,
> >>>      .drained_end     = bdrv_child_cb_drained_end,
> >>> +    .attach          = bdrv_child_cb_attach,
> >>> +    .detach          = bdrv_child_cb_detach,
> >>>      .inactivate      = bdrv_child_cb_inactivate,
> >>>  };
> >>>  
> >>> @@ -911,6 +933,8 @@ const BdrvChildRole child_format = {
> >>>      .inherit_options = bdrv_inherited_fmt_options,
> >>>      .drained_begin   = bdrv_child_cb_drained_begin,
> >>>      .drained_end     = bdrv_child_cb_drained_end,
> >>> +    .attach          = bdrv_child_cb_attach,
> >>> +    .detach          = bdrv_child_cb_detach,
> >>>      .inactivate      = bdrv_child_cb_inactivate,
> >>
> >> Is there any case of a BdrvChildRole that doesn't want these callbacks?
> >> Maybe the functions should be called after ->attach and before ->detach
> >> (e.g. bdrv_{,un}apply_subtree_drain), rather than modifying the
> >> BdrvChildRole implementations.
> > 
> > At first I intended to implement it directly in
> > bdrv_replace_child_noperm(), but the thing is that you need the
> > bs->recursive_quiesce_counter of the parent BDS - but not all parents of
> > a BdrvChild are even a BDS. It could also be a BB root child or a block
> > job child. This is why we only have a void *opaque rather than a BDS
> > pointer for the parent.
> > 
> > The other option would be an additional BdrvChildRole callback like
> > .get_recursive_quiesce_counter, but compared to that, I like some code
> > in .attach/.detach better.
> 
> I see.  What about keeping the callbacks, but exporting
> 
> void bdrv_apply_subtree_drain(BlockDriverState *child,
> 			      BlockDriverState *new_parent);
> void bdrv_unapply_subtree_drain(BlockDriverState *child,
> 			        BlockDriverState *old_parent);
> 
> instead of bdrv_do_drained_{begin,end}?

Sure, that can be done. (BdrvChild *child, though.)

Kevin
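The interface Paolo and Kevin converge on here can be sketched in a few lines of C. This is a hypothetical toy model with simplified stand-in types (Node instead of BlockDriverState/BdrvChild), not the real QEMU implementation: the point is only that a child attached under a parent holding N active subtree drains must itself be drained N times, and detaching undoes that.

```c
/*
 * Toy sketch of the agreed interface -- simplified stand-ins, not the
 * real QEMU types or code. Node ~ BlockDriverState/BdrvChild,
 * apply_subtree_drain ~ the proposed bdrv_apply_subtree_drain().
 */
#include <assert.h>
#include <stddef.h>

typedef struct Node Node;
struct Node {
    int quiesce_counter;
    int recursive_quiesce_counter;  /* subtree drains active on this node */
    Node *parent;
};

static void node_drain(Node *n, int delta)  /* begin (+1) / end (-1) */
{
    n->quiesce_counter += delta;
}

/* ~ bdrv_apply_subtree_drain(): replay the parent's active subtree
 * drains onto the newly attached child */
static void apply_subtree_drain(Node *child, Node *new_parent)
{
    for (int i = 0; i < new_parent->recursive_quiesce_counter; i++) {
        node_drain(child, +1);
    }
}

/* ~ bdrv_unapply_subtree_drain() */
static void unapply_subtree_drain(Node *child, Node *old_parent)
{
    for (int i = 0; i < old_parent->recursive_quiesce_counter; i++) {
        node_drain(child, -1);
    }
}

/* The .attach/.detach BdrvChildRole callbacks can then stay as thin
 * wrappers around the two exported helpers. */
static void child_cb_attach(Node *child, Node *parent)
{
    child->parent = parent;
    apply_subtree_drain(child, parent);
}

static void child_cb_detach(Node *child)
{
    Node *old_parent = child->parent;
    child->parent = NULL;
    unapply_subtree_drain(child, old_parent);
}
```

With this split, bdrv_do_drained_begin/end can remain static in block/io.c, which is what Paolo was after.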


* Re: [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive
  2017-12-20 10:33 ` [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive Kevin Wolf
@ 2017-12-20 13:16   ` Fam Zheng
  2017-12-20 13:24     ` Kevin Wolf
  0 siblings, 1 reply; 34+ messages in thread
From: Fam Zheng @ 2017-12-20 13:16 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, pbonzini, qemu-devel

On Wed, 12/20 11:33, Kevin Wolf wrote:
> bdrv_drain_begin() doesn't increase bs->quiesce_counter recursively,

Pretty trivial but:

s/bdrv_drain_begin/bdrv_drained_begin/

> which means that the child nodes are not actually drained. To keep this
> consistent, we also shouldn't call the block driver callbacks.
> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  block/io.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/block/io.c b/block/io.c
> index b94740b8ff..91a52e2d82 100644
> --- a/block/io.c
> +++ b/block/io.c
> @@ -158,7 +158,7 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
>  }
>  
>  /* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
> -static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
> +static void bdrv_drain_invoke(BlockDriverState *bs, bool begin, bool recursive)
>  {
>      BdrvChild *child, *tmp;
>      BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
> @@ -172,8 +172,10 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
>      bdrv_coroutine_enter(bs, data.co);
>      BDRV_POLL_WHILE(bs, !data.done);
>  
> -    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> -        bdrv_drain_invoke(child->bs, begin);
> +    if (recursive) {
> +        QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> +            bdrv_drain_invoke(child->bs, begin, true);
> +        }
>      }
>  }
>  
> @@ -265,7 +267,7 @@ void bdrv_drained_begin(BlockDriverState *bs)
>          bdrv_parent_drained_begin(bs);
>      }
>  
> -    bdrv_drain_invoke(bs, true);
> +    bdrv_drain_invoke(bs, true, false);

I'm not sure I understand this patch. If we call bdrv_drained_begin on a quorum
BDS that has a QED child, we do want to call bdrv_qed_co_drain_begin() on the
child, don't we? I think after this patch, we don't do that any more?

The same question arises when I look at throttle_co_drain_begin()..

Fam


* Re: [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive
  2017-12-20 13:16   ` Fam Zheng
@ 2017-12-20 13:24     ` Kevin Wolf
  2017-12-20 13:31       ` Fam Zheng
  0 siblings, 1 reply; 34+ messages in thread
From: Kevin Wolf @ 2017-12-20 13:24 UTC (permalink / raw)
  To: Fam Zheng; +Cc: qemu-block, pbonzini, qemu-devel

Am 20.12.2017 um 14:16 hat Fam Zheng geschrieben:
> On Wed, 12/20 11:33, Kevin Wolf wrote:
> > bdrv_drain_begin() doesn't increase bs->quiesce_counter recursively,
> 
> Pretty trivial but:
> 
> s/bdrv_drain_begin/bdrv_drained_begin/

Yes, thanks.

> > which means that the child nodes are not actually drained. To keep this
> > consistent, we also shouldn't call the block driver callbacks.
> > 
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > ---
> >  block/io.c | 16 +++++++++-------
> >  1 file changed, 9 insertions(+), 7 deletions(-)
> > 
> > diff --git a/block/io.c b/block/io.c
> > index b94740b8ff..91a52e2d82 100644
> > --- a/block/io.c
> > +++ b/block/io.c
> > @@ -158,7 +158,7 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
> >  }
> >  
> >  /* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
> > -static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
> > +static void bdrv_drain_invoke(BlockDriverState *bs, bool begin, bool recursive)
> >  {
> >      BdrvChild *child, *tmp;
> >      BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
> > @@ -172,8 +172,10 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
> >      bdrv_coroutine_enter(bs, data.co);
> >      BDRV_POLL_WHILE(bs, !data.done);
> >  
> > -    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> > -        bdrv_drain_invoke(child->bs, begin);
> > +    if (recursive) {
> > +        QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> > +            bdrv_drain_invoke(child->bs, begin, true);
> > +        }
> >      }
> >  }
> >  
> > @@ -265,7 +267,7 @@ void bdrv_drained_begin(BlockDriverState *bs)
> >          bdrv_parent_drained_begin(bs);
> >      }
> >  
> > -    bdrv_drain_invoke(bs, true);
> > +    bdrv_drain_invoke(bs, true, false);
> 
> I'm not sure I understand this patch. If we call bdrv_drained_begin on a quorum
> BDS that has a QED child, we do want to call bdrv_qed_co_drain_begin() on the
> child, don't we? I think after this patch, we don't do that any more?
> 
> The same question arises when I look at throttle_co_drain_begin()..

We don't increase bs->quiesce_counter for child nodes and don't notify
their parents, so they aren't really drained. Calling the BlockDriver
callbacks is the only thing we did recursively. We should do one thing
consistently, and for bdrv_drained_begin/end() that is draining a single
node without the children.

Later in the series, I introduce a bdrv_subtree_drained_begin/end()
which doesn't only call the callbacks recursively, but also increases
bs->quiesce_counter etc. so that the child nodes are actually drained.

There are use cases for both functions.

Kevin
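The distinction Kevin describes can be modeled in a few lines of C. This is a toy sketch with made-up simplified types, not QEMU code: a single-node drain bumps only that node's quiesce_counter and runs only that node's driver callback, while a subtree drain quiesces the children too.

```c
/*
 * Toy model of the semantics discussed above -- simplified stand-ins,
 * not QEMU code: Node ~ BlockDriverState, drained_begin() ~
 * bdrv_drained_begin(), subtree_drained_begin() ~
 * bdrv_subtree_drained_begin().
 */
#include <assert.h>

#define MAX_CHILDREN 4

typedef struct Node Node;
struct Node {
    int quiesce_counter;
    int driver_cb_calls;            /* times the driver callback ran */
    Node *children[MAX_CHILDREN];
    int nchildren;
};

/* ~ bdrv_drain_invoke(): run the driver callback, optionally recurse */
static void drain_invoke(Node *n, int recursive)
{
    n->driver_cb_calls++;
    if (recursive) {
        for (int i = 0; i < n->nchildren; i++) {
            drain_invoke(n->children[i], 1);
        }
    }
}

/* Single-node drain: only this node is quiesced, so only this node's
 * driver callback may run -- the consistency fix of PATCH 03. */
static void drained_begin(Node *n)
{
    n->quiesce_counter++;
    drain_invoke(n, 0);
}

/* Subtree drain: the children are quiesced too, so their driver
 * callbacks do run. */
static void subtree_drained_begin(Node *n)
{
    n->quiesce_counter++;
    drain_invoke(n, 0);
    for (int i = 0; i < n->nchildren; i++) {
        subtree_drained_begin(n->children[i]);
    }
}
```

In Fam's quorum-over-QED example, drained_begin() on the quorum node would leave the QED child's callback untouched; subtree_drained_begin() is the variant that reaches it.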


* Re: [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive
  2017-12-20 13:24     ` Kevin Wolf
@ 2017-12-20 13:31       ` Fam Zheng
  0 siblings, 0 replies; 34+ messages in thread
From: Fam Zheng @ 2017-12-20 13:31 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, pbonzini, qemu-devel

On Wed, 12/20 14:24, Kevin Wolf wrote:
> Am 20.12.2017 um 14:16 hat Fam Zheng geschrieben:
> > On Wed, 12/20 11:33, Kevin Wolf wrote:
> > > bdrv_drain_begin() doesn't increase bs->quiesce_counter recursively,
> > 
> > Pretty trivial but:
> > 
> > s/bdrv_drain_begin/bdrv_drained_begin/
> 
> Yes, thanks.
> 
> > > which means that the child nodes are not actually drained. To keep this
> > > consistent, we also shouldn't call the block driver callbacks.
> > > 
> > > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > > ---
> > >  block/io.c | 16 +++++++++-------
> > >  1 file changed, 9 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/block/io.c b/block/io.c
> > > index b94740b8ff..91a52e2d82 100644
> > > --- a/block/io.c
> > > +++ b/block/io.c
> > > @@ -158,7 +158,7 @@ static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
> > >  }
> > >  
> > >  /* Recursively call BlockDriver.bdrv_co_drain_begin/end callbacks */
> > > -static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
> > > +static void bdrv_drain_invoke(BlockDriverState *bs, bool begin, bool recursive)
> > >  {
> > >      BdrvChild *child, *tmp;
> > >      BdrvCoDrainData data = { .bs = bs, .done = false, .begin = begin};
> > > @@ -172,8 +172,10 @@ static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
> > >      bdrv_coroutine_enter(bs, data.co);
> > >      BDRV_POLL_WHILE(bs, !data.done);
> > >  
> > > -    QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> > > -        bdrv_drain_invoke(child->bs, begin);
> > > +    if (recursive) {
> > > +        QLIST_FOREACH_SAFE(child, &bs->children, next, tmp) {
> > > +            bdrv_drain_invoke(child->bs, begin, true);
> > > +        }
> > >      }
> > >  }
> > >  
> > > @@ -265,7 +267,7 @@ void bdrv_drained_begin(BlockDriverState *bs)
> > >          bdrv_parent_drained_begin(bs);
> > >      }
> > >  
> > > -    bdrv_drain_invoke(bs, true);
> > > +    bdrv_drain_invoke(bs, true, false);
> > 
> > I'm not sure I understand this patch. If we call bdrv_drained_begin on a quorum
> > BDS that has a QED child, we do want to call bdrv_qed_co_drain_begin() on the
> > child, don't we? I think after this patch, we don't do that any more?
> > 
> > The same question arises when I look at throttle_co_drain_begin()..
> 
> We don't increase bs->quiesce_counter for child nodes and don't notify
> their parents, so they aren't really drained. Calling the BlockDriver
> callbacks is the only thing we did recursively. We should do one thing
> consistently, and for bdrv_drained_begin/end() that is draining a single
> node without the children.
> 
> Later in the series, I introduce a bdrv_subtree_drained_begin/end()
> which doesn't only call the callbacks recursively, but also increases
> bs->quiesce_counter etc. so that the child nodes are actually drained.
> 
> There are use cases for both functions.
> 

OK, thanks.

Reviewed-by: Fam Zheng <famz@redhat.com>


* Re: [Qemu-devel] [PATCH 18/19] commit: Simplify reopen of base
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 18/19] commit: Simplify reopen of base Kevin Wolf
@ 2017-12-20 13:32   ` Fam Zheng
  0 siblings, 0 replies; 34+ messages in thread
From: Fam Zheng @ 2017-12-20 13:32 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, pbonzini, qemu-devel

On Wed, 12/20 11:34, Kevin Wolf wrote:
> Since commit bde70715, base is the only node that is reopened in
> commit_start(). This means that the code, which still involves an
> explicit BlockReopenQueue, can now be simplified by using bdrv_reopen().
> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  block/commit.c | 8 +-------
>  1 file changed, 1 insertion(+), 7 deletions(-)
> 
> diff --git a/block/commit.c b/block/commit.c
> index c5327551ce..bb6c904704 100644
> --- a/block/commit.c
> +++ b/block/commit.c
> @@ -277,7 +277,6 @@ void commit_start(const char *job_id, BlockDriverState *bs,
>                    const char *filter_node_name, Error **errp)
>  {
>      CommitBlockJob *s;
> -    BlockReopenQueue *reopen_queue = NULL;
>      int orig_base_flags;
>      BlockDriverState *iter;
>      BlockDriverState *commit_top_bs = NULL;
> @@ -299,12 +298,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
>      /* convert base to r/w, if necessary */
>      orig_base_flags = bdrv_get_flags(base);
>      if (!(orig_base_flags & BDRV_O_RDWR)) {
> -        reopen_queue = bdrv_reopen_queue(reopen_queue, base, NULL,
> -                                         orig_base_flags | BDRV_O_RDWR);
> -    }
> -
> -    if (reopen_queue) {
> -        bdrv_reopen_multiple(bdrv_get_aio_context(bs), reopen_queue, &local_err);
> +        bdrv_reopen(base, orig_base_flags | BDRV_O_RDWR, &local_err);
>          if (local_err != NULL) {
>              error_propagate(errp, local_err);
>              goto fail;
> -- 
> 2.13.6
> 

Reviewed-by: Fam Zheng <famz@redhat.com>


* Re: [Qemu-devel] [PATCH 19/19] block: Keep nodes drained between reopen_queue/multiple
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 19/19] block: Keep nodes drained between reopen_queue/multiple Kevin Wolf
@ 2017-12-20 13:34   ` Fam Zheng
  0 siblings, 0 replies; 34+ messages in thread
From: Fam Zheng @ 2017-12-20 13:34 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-block, pbonzini, qemu-devel

On Wed, 12/20 11:34, Kevin Wolf wrote:
> The bdrv_reopen*() implementation doesn't like it if the graph is
> changed between queuing nodes for reopen and actually reopening them
> (one of the reasons is that queuing can be recursive).
> 
> So instead of draining the device only in bdrv_reopen_multiple(),
> require that callers already drained all affected nodes, and assert this
> in bdrv_reopen_queue().
> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>  block.c             | 23 ++++++++++++++++-------
>  block/replication.c |  6 ++++++
>  qemu-io-cmds.c      |  3 +++
>  3 files changed, 25 insertions(+), 7 deletions(-)
> 
> diff --git a/block.c b/block.c
> index 37f8639fe9..cfd39bea10 100644
> --- a/block.c
> +++ b/block.c
> @@ -2774,6 +2774,7 @@ BlockDriverState *bdrv_open(const char *filename, const char *reference,
>   * returns a pointer to bs_queue, which is either the newly allocated
>   * bs_queue, or the existing bs_queue being used.
>   *
> + * bs must be drained between bdrv_reopen_queue() and bdrv_reopen_multiple().
>   */
>  static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
>                                                   BlockDriverState *bs,
> @@ -2789,6 +2790,11 @@ static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
>      BdrvChild *child;
>      QDict *old_options, *explicit_options;
>  
> +    /* Make sure that the caller remembered to use a drained section. This is
> +     * important to avoid graph changes between the recursive queuing here and
> +     * bdrv_reopen_multiple(). */
> +    assert(bs->quiesce_counter > 0);
> +
>      if (bs_queue == NULL) {
>          bs_queue = g_new0(BlockReopenQueue, 1);
>          QSIMPLEQ_INIT(bs_queue);
> @@ -2913,6 +2919,8 @@ BlockReopenQueue *bdrv_reopen_queue(BlockReopenQueue *bs_queue,
>   * If all devices prepare successfully, then the changes are committed
>   * to all devices.
>   *
> + * All affected nodes must be drained between bdrv_reopen_queue() and
> + * bdrv_reopen_multiple().
>   */
>  int bdrv_reopen_multiple(AioContext *ctx, BlockReopenQueue *bs_queue, Error **errp)
>  {
> @@ -2922,11 +2930,8 @@ int bdrv_reopen_multiple(AioContext *ctx, BlockReopenQueue *bs_queue, Error **er
>  
>      assert(bs_queue != NULL);
>  
> -    aio_context_release(ctx);
> -    bdrv_drain_all_begin();
> -    aio_context_acquire(ctx);
> -
>      QSIMPLEQ_FOREACH(bs_entry, bs_queue, entry) {
> +        assert(bs_entry->state.bs->quiesce_counter > 0);
>          if (bdrv_reopen_prepare(&bs_entry->state, bs_queue, &local_err)) {
>              error_propagate(errp, local_err);
>              goto cleanup;
> @@ -2955,8 +2960,6 @@ cleanup:
>      }
>      g_free(bs_queue);
>  
> -    bdrv_drain_all_end();
> -
>      return ret;
>  }
>  
> @@ -2966,12 +2969,18 @@ int bdrv_reopen(BlockDriverState *bs, int bdrv_flags, Error **errp)
>  {
>      int ret = -1;
>      Error *local_err = NULL;
> -    BlockReopenQueue *queue = bdrv_reopen_queue(NULL, bs, NULL, bdrv_flags);
> +    BlockReopenQueue *queue;
>  
> +    bdrv_subtree_drained_begin(bs);
> +
> +    queue = bdrv_reopen_queue(NULL, bs, NULL, bdrv_flags);
>      ret = bdrv_reopen_multiple(bdrv_get_aio_context(bs), queue, &local_err);
>      if (local_err != NULL) {
>          error_propagate(errp, local_err);
>      }
> +
> +    bdrv_subtree_drained_end(bs);
> +
>      return ret;
>  }
>  
> diff --git a/block/replication.c b/block/replication.c
> index e41e293d2b..b1ea3caa4b 100644
> --- a/block/replication.c
> +++ b/block/replication.c
> @@ -394,6 +394,9 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
>          new_secondary_flags = s->orig_secondary_flags;
>      }
>  
> +    bdrv_subtree_drained_begin(s->hidden_disk->bs);
> +    bdrv_subtree_drained_begin(s->secondary_disk->bs);
> +
>      if (orig_hidden_flags != new_hidden_flags) {
>          reopen_queue = bdrv_reopen_queue(reopen_queue, s->hidden_disk->bs, NULL,
>                                           new_hidden_flags);
> @@ -409,6 +412,9 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
>                               reopen_queue, &local_err);
>          error_propagate(errp, local_err);
>      }
> +
> +    bdrv_subtree_drained_end(s->hidden_disk->bs);
> +    bdrv_subtree_drained_end(s->secondary_disk->bs);
>  }
>  
>  static void backup_job_cleanup(BlockDriverState *bs)
> diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
> index de8e3de726..a6a70fc3dc 100644
> --- a/qemu-io-cmds.c
> +++ b/qemu-io-cmds.c
> @@ -2013,8 +2013,11 @@ static int reopen_f(BlockBackend *blk, int argc, char **argv)
>      opts = qopts ? qemu_opts_to_qdict(qopts, NULL) : NULL;
>      qemu_opts_reset(&reopen_opts);
>  
> +    bdrv_subtree_drained_begin(bs);
>      brq = bdrv_reopen_queue(NULL, bs, opts, flags);
>      bdrv_reopen_multiple(bdrv_get_aio_context(bs), brq, &local_err);
> +    bdrv_subtree_drained_end(bs);
> +
>      if (local_err) {
>          error_report_err(local_err);
>      } else {
> -- 
> 2.13.6
> 

Reviewed-by: Fam Zheng <famz@redhat.com>
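The contract this patch introduces -- the caller drains, and queuing asserts it -- can be sketched as a toy C model. The types and names here are simplified stand-ins, not the real QEMU functions:

```c
/*
 * Toy model of the "caller must drain" contract -- not QEMU code:
 * reopen_queue() no longer drains internally; it asserts that the
 * caller already opened a drained section spanning queue + reopen.
 */
#include <assert.h>

typedef struct Node {
    int quiesce_counter;
} Node;

static void subtree_drained_begin(Node *n) { n->quiesce_counter++; }
static void subtree_drained_end(Node *n)   { n->quiesce_counter--; }

/* ~ bdrv_reopen_queue_child(): queuing is only legal under drain, so
 * the graph cannot change before the queued reopen actually runs. */
static int reopen_queue(Node *n)
{
    assert(n->quiesce_counter > 0);
    return 0;                          /* queued successfully */
}

/* ~ bdrv_reopen(): the convenience wrapper drains on the caller's
 * behalf, matching the hunk in block.c above. */
static int reopen(Node *n)
{
    int ret;

    subtree_drained_begin(n);
    ret = reopen_queue(n);             /* + bdrv_reopen_multiple() here */
    subtree_drained_end(n);
    return ret;
}
```

Callers like block/replication.c and qemu-io's reopen_f follow the same begin/queue/multiple/end shape directly, as the diff shows.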


* Re: [Qemu-devel] [PATCH 08/19] block: Don't block_job_pause_all() in bdrv_drain_all()
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 08/19] block: Don't block_job_pause_all() in bdrv_drain_all() Kevin Wolf
@ 2017-12-21 23:24   ` Eric Blake
  0 siblings, 0 replies; 34+ messages in thread
From: Eric Blake @ 2017-12-21 23:24 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: pbonzini, famz, qemu-devel

On 12/20/2017 04:34 AM, Kevin Wolf wrote:
> Block jobs are already paused using the BdrvChildRole drain callbacks,
> so we don't need an additionall block_job_pause_all() call.

s/additionall/additional/

> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>   block/io.c              |  4 ----
>   tests/test-bdrv-drain.c | 10 ++++------
>   2 files changed, 4 insertions(+), 10 deletions(-)
> 
-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org


* Re: [Qemu-devel] [PATCH 11/19] block: Don't notify parents in drain call chain
  2017-12-20 10:34 ` [Qemu-devel] [PATCH 11/19] block: Don't notify parents in drain call chain Kevin Wolf
@ 2017-12-21 23:26   ` Eric Blake
  0 siblings, 0 replies; 34+ messages in thread
From: Eric Blake @ 2017-12-21 23:26 UTC (permalink / raw)
  To: Kevin Wolf, qemu-block; +Cc: pbonzini, famz, qemu-devel

On 12/20/2017 04:34 AM, Kevin Wolf wrote:
> This is in preparation for subtree drains, i.e. drained sections that
> affect not only a single node, but recursively all child nodes, too.
> 
> Calling the parent callbacks for drain is pointless when we just came
> from that parent node recursively and leads to multiple increases of
> bs->quiesce_counter in a single drain call. Don't do it.
> 
> In order for this to work correctly, the parent callback must be called
> for every bdrv_drain_begin/end() call, not only for the outermost one:
> 
> If we have a node N with two parents A and B, recursive draining of A
> should cause the quiesce_counter of B to increased because its child N

either 'to be increased' or 'to increase'

> is drained independently of B. If now B is recursively drained, too, A
> must increase its quiesce_counter because N is drained independently of
> A only now, even if N is going from quiesce_counter 1 to 2.
> 
> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> ---
>   include/block/block.h |  4 ++--
>   block.c               | 13 +++++++++----
>   block/io.c            | 47 ++++++++++++++++++++++++++++++++++-------------
>   3 files changed, 45 insertions(+), 19 deletions(-)
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
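The two-parent example in the quoted commit message can be worked through with a toy C model. Everything here is a simplified stand-in (Node instead of BlockDriverState, direct counter bumps instead of the real parent callbacks), not QEMU code: draining notifies every parent except the one the recursion came from, and does so on every begin, not only the outermost one.

```c
/*
 * Toy model of the A/B/N example -- simplified stand-ins, not QEMU
 * code. A and B are parents of the shared child N.
 */
#include <assert.h>
#include <stddef.h>

#define MAXN 4

typedef struct Node Node;
struct Node {
    int quiesce_counter;
    Node *children[MAXN]; int nchildren;
    Node *parents[MAXN];  int nparents;
};

static void add_child(Node *parent, Node *child)
{
    parent->children[parent->nchildren++] = child;
    child->parents[child->nparents++] = parent;
}

/* Recursive drain of n, entered from parent 'from' (NULL at the top). */
static void subtree_drain(Node *n, Node *from)
{
    n->quiesce_counter++;
    for (int i = 0; i < n->nparents; i++) {
        if (n->parents[i] != from) {
            /* parent notification: any *other* parent must quiesce,
             * because its child is now drained independently of it */
            n->parents[i]->quiesce_counter++;
        }
    }
    for (int i = 0; i < n->nchildren; i++) {
        subtree_drain(n->children[i], n);
    }
}
```

Draining A's subtree quiesces N and notifies B; draining B's subtree afterwards must still notify A even though N merely goes from counter 1 to 2 -- which is why the notification happens on every begin.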


end of thread, other threads:[~2017-12-21 23:26 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-20 10:33 [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Kevin Wolf
2017-12-20 10:33 ` [Qemu-devel] [PATCH 01/19] block: Remove unused bdrv_requests_pending Kevin Wolf
2017-12-20 10:33 ` [Qemu-devel] [PATCH 02/19] block: Assert drain_all is only called from main AioContext Kevin Wolf
2017-12-20 12:14   ` Fam Zheng
2017-12-20 10:33 ` [Qemu-devel] [PATCH 03/19] block: Make bdrv_drain() driver callbacks non-recursive Kevin Wolf
2017-12-20 13:16   ` Fam Zheng
2017-12-20 13:24     ` Kevin Wolf
2017-12-20 13:31       ` Fam Zheng
2017-12-20 10:33 ` [Qemu-devel] [PATCH 04/19] test-bdrv-drain: Test callback for bdrv_drain Kevin Wolf
2017-12-20 10:33 ` [Qemu-devel] [PATCH 05/19] test-bdrv-drain: Test bs->quiesce_counter Kevin Wolf
2017-12-20 10:33 ` [Qemu-devel] [PATCH 06/19] blockjob: Pause job on draining any job BDS Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 07/19] test-bdrv-drain: Test drain vs. block jobs Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 08/19] block: Don't block_job_pause_all() in bdrv_drain_all() Kevin Wolf
2017-12-21 23:24   ` Eric Blake
2017-12-20 10:34 ` [Qemu-devel] [PATCH 09/19] block: Nested drain_end must still call callbacks Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 10/19] test-bdrv-drain: Test nested drain sections Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 11/19] block: Don't notify parents in drain call chain Kevin Wolf
2017-12-21 23:26   ` Eric Blake
2017-12-20 10:34 ` [Qemu-devel] [PATCH 12/19] block: Add bdrv_subtree_drained_begin/end() Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 13/19] test-bdrv-drain: Tests for bdrv_subtree_drain Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 14/19] test-bdrv-drain: Test behaviour in coroutine context Kevin Wolf
2017-12-20 10:53   ` Paolo Bonzini
2017-12-20 10:34 ` [Qemu-devel] [PATCH 15/19] test-bdrv-drain: Recursive draining with multiple parents Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 16/19] block: Allow graph changes in subtree drained section Kevin Wolf
2017-12-20 10:51   ` Paolo Bonzini
2017-12-20 11:18     ` Kevin Wolf
2017-12-20 11:31       ` [Qemu-devel] [Qemu-block] " Paolo Bonzini
2017-12-20 13:04         ` Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 17/19] test-bdrv-drain: Test graph changes in " Kevin Wolf
2017-12-20 10:34 ` [Qemu-devel] [PATCH 18/19] commit: Simplify reopen of base Kevin Wolf
2017-12-20 13:32   ` Fam Zheng
2017-12-20 10:34 ` [Qemu-devel] [PATCH 19/19] block: Keep nodes drained between reopen_queue/multiple Kevin Wolf
2017-12-20 13:34   ` Fam Zheng
2017-12-20 10:54 ` [Qemu-devel] [PATCH 00/19] Drain fixes and cleanups, part 2 Paolo Bonzini
