From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, mreitz@redhat.com, pbonzini@redhat.com,
	famz@redhat.com, stefanha@redhat.com, eblake@redhat.com,
	qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH v2 02/20] block: Use bdrv_do_drained_begin/end in bdrv_drain_all()
Date: Tue, 29 May 2018 19:21:38 +0200
Message-ID: <20180529172156.29311-3-kwolf@redhat.com>
In-Reply-To: <20180529172156.29311-1-kwolf@redhat.com>

bdrv_do_drained_begin/end() already implement everything that
bdrv_drain_all_begin/end() need and currently still do manually:
disabling external events, calling parent drain callbacks, and calling
block driver callbacks.

They also do two more things:

The first is incrementing bs->quiesce_counter. bdrv_drain_all() already
stood out in the test case by behaving differently from the other drain
variants. Adding this is not only safe, but in fact a bug fix.

The second is calling bdrv_drain_recurse(). We already do that later in
the same function in a loop, so doing an early first iteration doesn't
hurt.
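
Put differently, per node the begin side now covers all of the steps
above. A minimal, flattened sketch (the function name is made up for
illustration; the real bdrv_do_drained_begin() additionally takes the
recursive and parent arguments visible in the diff below and covers a
few more cases):

    static void drain_one_node_sketch(BlockDriverState *bs)
    {
        bs->quiesce_counter++;                  /* new: the bug fix above */
        aio_disable_external(bdrv_get_aio_context(bs));
        bdrv_parent_drained_begin(bs, NULL);    /* parent drain callbacks */
        bdrv_drain_invoke(bs, true, true);      /* block driver callbacks */
        bdrv_drain_recurse(bs);                 /* early first iteration  */
    }

With that, each loop body in bdrv_drain_all_begin/end() shrinks to a
single bdrv_do_drained_begin/end() call, as the diff shows.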

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/io.c              | 10 ++--------
 tests/test-bdrv-drain.c | 14 ++++----------
 2 files changed, 6 insertions(+), 18 deletions(-)

diff --git a/block/io.c b/block/io.c
index 1e4e2f40ea..73579b31cf 100644
--- a/block/io.c
+++ b/block/io.c
@@ -413,11 +413,8 @@ void bdrv_drain_all_begin(void)
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
-        /* Stop things in parent-to-child order */
         aio_context_acquire(aio_context);
-        aio_disable_external(aio_context);
-        bdrv_parent_drained_begin(bs, NULL);
-        bdrv_drain_invoke(bs, true, true);
+        bdrv_do_drained_begin(bs, true, NULL);
         aio_context_release(aio_context);
 
         if (!g_slist_find(aio_ctxs, aio_context)) {
@@ -458,11 +455,8 @@ void bdrv_drain_all_end(void)
     for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
         AioContext *aio_context = bdrv_get_aio_context(bs);
 
-        /* Re-enable things in child-to-parent order */
         aio_context_acquire(aio_context);
-        bdrv_drain_invoke(bs, false, true);
-        bdrv_parent_drained_end(bs, NULL);
-        aio_enable_external(aio_context);
+        bdrv_do_drained_end(bs, true, NULL);
         aio_context_release(aio_context);
     }
 }
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 1682e2b72d..4c3e93de0b 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -276,8 +276,7 @@ static void test_quiesce_common(enum drain_type drain_type, bool recursive)
 
 static void test_quiesce_drain_all(void)
 {
-    // XXX drain_all doesn't quiesce
-    //test_quiesce_common(BDRV_DRAIN_ALL, true);
+    test_quiesce_common(BDRV_DRAIN_ALL, true);
 }
 
 static void test_quiesce_drain(void)
@@ -319,12 +318,7 @@ static void test_nested(void)
 
     for (outer = 0; outer < DRAIN_TYPE_MAX; outer++) {
         for (inner = 0; inner < DRAIN_TYPE_MAX; inner++) {
-            /* XXX bdrv_drain_all() doesn't increase the quiesce_counter */
-            int bs_quiesce      = (outer != BDRV_DRAIN_ALL) +
-                                  (inner != BDRV_DRAIN_ALL);
-            int backing_quiesce = (outer == BDRV_SUBTREE_DRAIN) +
-                                  (inner == BDRV_SUBTREE_DRAIN);
-            int backing_cb_cnt  = (outer != BDRV_DRAIN) +
+            int backing_quiesce = (outer != BDRV_DRAIN) +
                                   (inner != BDRV_DRAIN);
 
             g_assert_cmpint(bs->quiesce_counter, ==, 0);
@@ -335,10 +329,10 @@ static void test_nested(void)
             do_drain_begin(outer, bs);
             do_drain_begin(inner, bs);
 
-            g_assert_cmpint(bs->quiesce_counter, ==, bs_quiesce);
+            g_assert_cmpint(bs->quiesce_counter, ==, 2);
             g_assert_cmpint(backing->quiesce_counter, ==, backing_quiesce);
             g_assert_cmpint(s->drain_count, ==, 2);
-            g_assert_cmpint(backing_s->drain_count, ==, backing_cb_cnt);
+            g_assert_cmpint(backing_s->drain_count, ==, backing_quiesce);
 
             do_drain_end(inner, bs);
             do_drain_end(outer, bs);
-- 
2.13.6

Thread overview: 24+ messages
2018-05-29 17:21 [Qemu-devel] [PATCH v2 00/20] Drain fixes and cleanups, part 3 Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 01/20] test-bdrv-drain: bdrv_drain() works with cross-AioContext events Kevin Wolf
2018-05-29 17:21 ` Kevin Wolf [this message]
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 03/20] block: Remove 'recursive' parameter from bdrv_drain_invoke() Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 04/20] block: Don't manually poll in bdrv_drain_all() Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 05/20] tests/test-bdrv-drain: bdrv_drain_all() works in coroutines now Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 06/20] block: Avoid unnecessary aio_poll() in AIO_WAIT_WHILE() Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 07/20] block: Really pause block jobs on drain Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 08/20] block: Remove bdrv_drain_recurse() Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 09/20] test-bdrv-drain: Add test for node deletion Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 10/20] block: Drain recursively with a single BDRV_POLL_WHILE() Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 11/20] test-bdrv-drain: Test node deletion in subtree recursion Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 12/20] block: Don't poll in parent drain callbacks Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 13/20] test-bdrv-drain: Graph change through parent callback Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 14/20] block: Defer .bdrv_drain_begin callback to polling phase Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 15/20] test-bdrv-drain: Test that bdrv_drain_invoke() doesn't poll Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 16/20] block: Allow AIO_WAIT_WHILE with NULL ctx Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 17/20] block: Move bdrv_drain_all_begin() out of coroutine context Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 18/20] block: ignore_bds_parents parameter for drain functions Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 19/20] block: Allow graph changes in bdrv_drain_all_begin/end sections Kevin Wolf
2018-05-29 17:21 ` [Qemu-devel] [PATCH v2 20/20] test-bdrv-drain: Test graph changes in drain_all section Kevin Wolf
2018-05-29 17:45 ` [Qemu-devel] [PATCH v2 00/20] Drain fixes and cleanups, part 3 no-reply
2018-06-11 12:23 ` Kevin Wolf
2018-06-15 16:08   ` [Qemu-devel] [Qemu-block] " Kevin Wolf
