From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, mreitz@redhat.com, quintela@redhat.com,
dgilbert@redhat.com, stefanha@redhat.com, famz@redhat.com,
qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH 2/4] migration: Inactivate images after .save_live_complete_precopy()
Date: Tue, 23 May 2017 16:01:02 +0200
Message-ID: <1495548064-10926-3-git-send-email-kwolf@redhat.com>
In-Reply-To: <1495548064-10926-1-git-send-email-kwolf@redhat.com>
Block migration may still access the image during its
.save_live_complete_precopy() implementation, so we should only
inactivate the image afterwards.
Another reason for the change is that inactivating an image fails when
there is still a non-device BlockBackend using it, which includes the
BBs used by block migration. We want to give block migration a chance to
release the BBs before trying to inactivate the image (this will be done
in another patch).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
migration/migration.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 0304c01..846ba09 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1787,17 +1787,19 @@ static void migration_completion(MigrationState *s, int current_active_state,
if (!ret) {
ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
+ if (ret >= 0) {
+ qemu_file_set_rate_limit(s->to_dst_file, INT64_MAX);
+ qemu_savevm_state_complete_precopy(s->to_dst_file, false);
+ }
/*
* Don't mark the image with BDRV_O_INACTIVE flag if
* we will go into COLO stage later.
*/
if (ret >= 0 && !migrate_colo_enabled()) {
ret = bdrv_inactivate_all();
- }
- if (ret >= 0) {
- qemu_file_set_rate_limit(s->to_dst_file, INT64_MAX);
- qemu_savevm_state_complete_precopy(s->to_dst_file, false);
- s->block_inactive = true;
+ if (ret >= 0) {
+ s->block_inactive = true;
+ }
}
}
qemu_mutex_unlock_iothread();
--
1.8.3.1
Thread overview: 14+ messages
2017-05-23 14:01 [Qemu-devel] [PATCH 0/4] Block migration (migrate -b) fixes Kevin Wolf
2017-05-23 14:01 ` [Qemu-devel] [PATCH 1/4] block: Fix anonymous BBs in blk_root_inactivate() Kevin Wolf
2017-05-23 15:30   ` Eric Blake
2017-05-23 16:36   ` Juan Quintela
2017-05-23 14:01 ` Kevin Wolf [this message]
2017-05-23 15:36   ` [Qemu-devel] [PATCH 2/4] migration: Inactivate images after .save_live_complete_precopy() Eric Blake
2017-05-24 11:34   ` Juan Quintela
2017-05-23 14:01 ` [Qemu-devel] [PATCH 3/4] migration/block: Clean up BBs in block_save_complete() Kevin Wolf
2017-05-23 15:41   ` Eric Blake
2017-05-23 14:01 ` [Qemu-devel] [PATCH 4/4] qemu-iotests: Block migration test Kevin Wolf
2017-05-23 15:46   ` Eric Blake
2017-05-23 16:18     ` Kevin Wolf
2017-05-23 16:29       ` Eric Blake
2017-05-24  6:23 ` [Qemu-devel] [PATCH 0/4] Block migration (migrate -b) fixes Fam Zheng