From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, famz@redhat.com, pbonzini@redhat.com,
qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH v2 19/19] block: Keep nodes drained between reopen_queue/multiple
Date: Thu, 21 Dec 2017 15:22:51 +0100
Message-ID: <20171221142251.18366-20-kwolf@redhat.com>
In-Reply-To: <20171221142251.18366-1-kwolf@redhat.com>

The bdrv_reopen*() implementation doesn't like it if the graph is
changed between queuing nodes for reopen and actually reopening them
(one of the reasons is that queuing can be recursive).

So instead of draining the device only in bdrv_reopen_multiple(),
require that callers have already drained all affected nodes, and assert this
in bdrv_reopen_queue().
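
The resulting caller pattern is the one the updated bdrv_reopen() below
follows; as a rough sketch (the wrapper function name is made up for
illustration, and the usual QEMU block headers are assumed):

    /* Hypothetical caller, modeled on the updated bdrv_reopen(): keep the
     * subtree drained from bdrv_reopen_queue() until bdrv_reopen_multiple()
     * has returned, so that the graph cannot change in between. */
    static int reopen_node_with_flags(BlockDriverState *bs, int bdrv_flags,
                                      Error **errp)
    {
        BlockReopenQueue *queue;
        Error *local_err = NULL;
        int ret;

        bdrv_subtree_drained_begin(bs);

        queue = bdrv_reopen_queue(NULL, bs, NULL, bdrv_flags);
        ret = bdrv_reopen_multiple(bdrv_get_aio_context(bs), queue, &local_err);
        if (local_err != NULL) {
            error_propagate(errp, local_err);
        }

        bdrv_subtree_drained_end(bs);

        return ret;
    }
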
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
---
block.c | 23 ++++++++++++++++-------
block/replication.c | 6 ++++++
qemu-io-cmds.c | 3 +++
3 files changed, 25 insertions(+), 7 deletions(-)
diff --git a/block.c b/block.c
index 56e446cf9d..6b4401760f 100644
--- a/block.c
+++ b/block.c
@@ -2766,6 +2766,7 @@ BlockDriverState *bdrv_open(const char *filename, const char *reference,
* returns a pointer to bs_queue, which is either the newly allocated
* bs_queue, or the existing bs_queue being used.
*
+ * bs must be drained between bdrv_reopen_queue() and bdrv_reopen_multiple().
*/
static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
BlockDriverState *bs,
@@ -2781,6 +2782,11 @@ static BlockReopenQueue *bdrv_reopen_queue_child(BlockReopenQueue *bs_queue,
BdrvChild *child;
QDict *old_options, *explicit_options;
+ /* Make sure that the caller remembered to use a drained section. This is
+ * important to avoid graph changes between the recursive queuing here and
+ * bdrv_reopen_multiple(). */
+ assert(bs->quiesce_counter > 0);
+
if (bs_queue == NULL) {
bs_queue = g_new0(BlockReopenQueue, 1);
QSIMPLEQ_INIT(bs_queue);
@@ -2905,6 +2911,8 @@ BlockReopenQueue *bdrv_reopen_queue(BlockReopenQueue *bs_queue,
* If all devices prepare successfully, then the changes are committed
* to all devices.
*
+ * All affected nodes must be drained between bdrv_reopen_queue() and
+ * bdrv_reopen_multiple().
*/
int bdrv_reopen_multiple(AioContext *ctx, BlockReopenQueue *bs_queue, Error **errp)
{
@@ -2914,11 +2922,8 @@ int bdrv_reopen_multiple(AioContext *ctx, BlockReopenQueue *bs_queue, Error **er
assert(bs_queue != NULL);
- aio_context_release(ctx);
- bdrv_drain_all_begin();
- aio_context_acquire(ctx);
-
QSIMPLEQ_FOREACH(bs_entry, bs_queue, entry) {
+ assert(bs_entry->state.bs->quiesce_counter > 0);
if (bdrv_reopen_prepare(&bs_entry->state, bs_queue, &local_err)) {
error_propagate(errp, local_err);
goto cleanup;
@@ -2947,8 +2952,6 @@ cleanup:
}
g_free(bs_queue);
- bdrv_drain_all_end();
-
return ret;
}
@@ -2958,12 +2961,18 @@ int bdrv_reopen(BlockDriverState *bs, int bdrv_flags, Error **errp)
{
int ret = -1;
Error *local_err = NULL;
- BlockReopenQueue *queue = bdrv_reopen_queue(NULL, bs, NULL, bdrv_flags);
+ BlockReopenQueue *queue;
+ bdrv_subtree_drained_begin(bs);
+
+ queue = bdrv_reopen_queue(NULL, bs, NULL, bdrv_flags);
ret = bdrv_reopen_multiple(bdrv_get_aio_context(bs), queue, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
}
+
+ bdrv_subtree_drained_end(bs);
+
return ret;
}
diff --git a/block/replication.c b/block/replication.c
index e41e293d2b..b1ea3caa4b 100644
--- a/block/replication.c
+++ b/block/replication.c
@@ -394,6 +394,9 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
new_secondary_flags = s->orig_secondary_flags;
}
+ bdrv_subtree_drained_begin(s->hidden_disk->bs);
+ bdrv_subtree_drained_begin(s->secondary_disk->bs);
+
if (orig_hidden_flags != new_hidden_flags) {
reopen_queue = bdrv_reopen_queue(reopen_queue, s->hidden_disk->bs, NULL,
new_hidden_flags);
@@ -409,6 +412,9 @@ static void reopen_backing_file(BlockDriverState *bs, bool writable,
reopen_queue, &local_err);
error_propagate(errp, local_err);
}
+
+ bdrv_subtree_drained_end(s->hidden_disk->bs);
+ bdrv_subtree_drained_end(s->secondary_disk->bs);
}
static void backup_job_cleanup(BlockDriverState *bs)
diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
index de8e3de726..a6a70fc3dc 100644
--- a/qemu-io-cmds.c
+++ b/qemu-io-cmds.c
@@ -2013,8 +2013,11 @@ static int reopen_f(BlockBackend *blk, int argc, char **argv)
opts = qopts ? qemu_opts_to_qdict(qopts, NULL) : NULL;
qemu_opts_reset(&reopen_opts);
+ bdrv_subtree_drained_begin(bs);
brq = bdrv_reopen_queue(NULL, bs, opts, flags);
bdrv_reopen_multiple(bdrv_get_aio_context(bs), brq, &local_err);
+ bdrv_subtree_drained_end(bs);
+
if (local_err) {
error_report_err(local_err);
} else {
--
2.13.6