* [Qemu-devel] [PATCH v2 01/13] migration: assert colo instead of check
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:23 ` Juan Quintela
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 02/13] migration: qemu_savevm_state_cleanup() in cleanup Peter Xu
` (11 subsequent siblings)
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
If we are still "active" when reaching here, it means we must be in COLO
state. After a quick offlist discussion, we decided to use the safer
error_report().
Eventually I want to use a "switch" here rather than lots of complicated
"if" clauses.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/migration/migration.c b/migration/migration.c
index 4de3b551fe..5a12738447 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2309,7 +2309,15 @@ static void *migration_thread(void *opaque)
}
runstate_set(RUN_STATE_POSTMIGRATE);
} else {
- if (s->state == MIGRATION_STATUS_ACTIVE && enable_colo) {
+ if (s->state == MIGRATION_STATUS_ACTIVE) {
+ /*
+ * We should really assert here, but since it's during
+ * migration, let's try to reduce the usage of assertions.
+ */
+ if (!enable_colo) {
+ error_report("%s: critical error: calling COLO code without "
+ "COLO enabled", __func__);
+ }
migrate_start_colo_process(s);
qemu_savevm_state_cleanup();
/*
--
2.14.3
^ permalink raw reply related [flat|nested] 23+ messages in thread
* Re: [Qemu-devel] [PATCH v2 01/13] migration: assert colo instead of check
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 01/13] migration: assert colo instead of check Peter Xu
@ 2018-01-03 12:23 ` Juan Quintela
0 siblings, 0 replies; 23+ messages in thread
From: Juan Quintela @ 2018-01-03 12:23 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Laurent Vivier, Dr . David Alan Gilbert
Peter Xu <peterx@redhat.com> wrote:
> If we are still "active" when reaching here, it means we must be in COLO
> state. After a quick offlist discussion, we decided to use the safer
> error_report().
>
> Eventually I want to use a "switch" here rather than lots of complicated
> "if" clauses.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
> ---
> migration/migration.c | 10 +++++++++-
> 1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index 4de3b551fe..5a12738447 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -2309,7 +2309,15 @@ static void *migration_thread(void *opaque)
> }
> runstate_set(RUN_STATE_POSTMIGRATE);
> } else {
> - if (s->state == MIGRATION_STATUS_ACTIVE && enable_colo) {
> + if (s->state == MIGRATION_STATUS_ACTIVE) {
> + /*
> + * We should really assert here, but since it's during
> + * migration, let's try to reduce the usage of assertions.
> + */
> + if (!enable_colo) {
> + error_report("%s: critical error: calling COLO code without "
> + "COLO enabled", __func__);
I would put in the error message something like:
"%s: critical error: state ACTIVE without COLO being enabled. That is
forbidden/impossible".
And then you don't need the previous comment?
Later, Juan.
> + }
> migrate_start_colo_process(s);
> qemu_savevm_state_cleanup();
> /*
* [Qemu-devel] [PATCH v2 02/13] migration: qemu_savevm_state_cleanup() in cleanup
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 01/13] migration: assert colo instead of check Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 03/13] migration: remove "enable_colo" var Peter Xu
` (10 subsequent siblings)
12 siblings, 0 replies; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
Move all existing callers into migrate_fd_cleanup(). This simplifies
migration_thread() a bit.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 10 ++--------
1 file changed, 2 insertions(+), 8 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 5a12738447..2c2140006c 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1077,6 +1077,8 @@ static void migrate_fd_cleanup(void *opaque)
qemu_bh_delete(s->cleanup_bh);
s->cleanup_bh = NULL;
+ qemu_savevm_state_cleanup();
+
if (s->to_dst_file) {
Error *local_err = NULL;
@@ -2290,13 +2292,6 @@ static void *migration_thread(void *opaque)
end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
qemu_mutex_lock_iothread();
- /*
- * The resource has been allocated by migration will be reused in COLO
- * process, so don't release them.
- */
- if (!enable_colo) {
- qemu_savevm_state_cleanup();
- }
if (s->state == MIGRATION_STATUS_COMPLETED) {
uint64_t transferred_bytes = qemu_ftell(s->to_dst_file);
s->total_time = end_time - s->total_time;
@@ -2319,7 +2314,6 @@ static void *migration_thread(void *opaque)
"COLO enabled", __func__);
}
migrate_start_colo_process(s);
- qemu_savevm_state_cleanup();
/*
* Fixme: we will run VM in COLO no matter its old running state.
* After exited COLO, we will keep running.
--
2.14.3
* [Qemu-devel] [PATCH v2 03/13] migration: remove "enable_colo" var
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 01/13] migration: assert colo instead of check Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 02/13] migration: qemu_savevm_state_cleanup() in cleanup Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:24 ` Juan Quintela
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 04/13] migration: split use of MigrationState.total_time Peter Xu
` (9 subsequent siblings)
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
It's only used once; clean it up a bit.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 2c2140006c..9ba96ae301 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2177,7 +2177,6 @@ static void *migration_thread(void *opaque)
bool entered_postcopy = false;
/* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
enum MigrationStatus current_active_state = MIGRATION_STATUS_ACTIVE;
- bool enable_colo = migrate_colo_enabled();
rcu_register_thread();
@@ -2309,7 +2308,7 @@ static void *migration_thread(void *opaque)
* We should really assert here, but since it's during
* migration, let's try to reduce the usage of assertions.
*/
- if (!enable_colo) {
+ if (!migrate_colo_enabled()) {
error_report("%s: critical error: calling COLO code without "
"COLO enabled", __func__);
}
--
2.14.3
* [Qemu-devel] [PATCH v2 04/13] migration: split use of MigrationState.total_time
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (2 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 03/13] migration: remove "enable_colo" var Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:25 ` Juan Quintela
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 05/13] migration: move vm_old_running into global state Peter Xu
` (8 subsequent siblings)
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
It was used either to:
1. store the initial timestamp of the migration start, or
2. store the total time used by the last migration
Let's provide a separate field for each of them. Mixed use of the two is
slightly misleading.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 7 ++++---
migration/migration.h | 3 +++
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 9ba96ae301..343368c089 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -613,7 +613,7 @@ MigrationInfo *qmp_query_migrate(Error **errp)
info->has_status = true;
info->has_total_time = true;
info->total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME)
- - s->total_time;
+ - s->start_time;
info->has_expected_downtime = true;
info->expected_downtime = s->expected_downtime;
info->has_setup_time = true;
@@ -1270,7 +1270,8 @@ MigrationState *migrate_init(void)
migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
- s->total_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+ s->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+ s->total_time = 0;
return s;
}
@@ -2293,7 +2294,7 @@ static void *migration_thread(void *opaque)
qemu_mutex_lock_iothread();
if (s->state == MIGRATION_STATUS_COMPLETED) {
uint64_t transferred_bytes = qemu_ftell(s->to_dst_file);
- s->total_time = end_time - s->total_time;
+ s->total_time = end_time - s->start_time;
if (!entered_postcopy) {
s->downtime = end_time - start_time;
}
diff --git a/migration/migration.h b/migration/migration.h
index 663415fe48..233ad68705 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -103,6 +103,9 @@ struct MigrationState
} rp_state;
double mbps;
+ /* Timestamp when recent migration starts (ms) */
+ int64_t start_time;
+ /* Total time used by latest migration (ms) */
int64_t total_time;
int64_t downtime;
int64_t expected_downtime;
--
2.14.3
* [Qemu-devel] [PATCH v2 05/13] migration: move vm_old_running into global state
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (3 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 04/13] migration: split use of MigrationState.total_time Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 06/13] migration: introduce downtime_start Peter Xu
` (7 subsequent siblings)
12 siblings, 0 replies; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
Previously it was passed around. Let's move it into MigrationState just
like many other migration-state variables, renaming it to
vm_was_running.
One thing to mention is that for postcopy we don't need this knowledge
at all, since postcopy can't resume a VM even if it fails (we can see
that in the old code too: when we try to resume, we also check the
"entered_postcopy" variable). So we further do the following:
- in postcopy_start(), don't update vm_old_running, since it's useless
  there
- in migration_thread(), don't check entered_postcopy when resuming,
  since it's only used for precopy
This is also noted in the comment on the variable definition.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 17 +++++++----------
migration/migration.h | 6 ++++++
2 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 343368c089..ca0c600178 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1272,6 +1272,7 @@ MigrationState *migrate_init(void)
s->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
s->total_time = 0;
+ s->vm_was_running = false;
return s;
}
@@ -1846,7 +1847,7 @@ static int await_return_path_close_on_source(MigrationState *ms)
* Switch from normal iteration to postcopy
* Returns non-0 on error
*/
-static int postcopy_start(MigrationState *ms, bool *old_vm_running)
+static int postcopy_start(MigrationState *ms)
{
int ret;
QIOChannelBuffer *bioc;
@@ -1864,7 +1865,6 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
trace_postcopy_start_set_run();
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
- *old_vm_running = runstate_is_running();
global_state_store();
ret = vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
if (ret < 0) {
@@ -2055,11 +2055,9 @@ static int migration_maybe_pause(MigrationState *s,
*
* @s: Current migration state
* @current_active_state: The migration state we expect to be in
- * @*old_vm_running: Pointer to old_vm_running flag
* @*start_time: Pointer to time to update
*/
static void migration_completion(MigrationState *s, int current_active_state,
- bool *old_vm_running,
int64_t *start_time)
{
int ret;
@@ -2068,7 +2066,7 @@ static void migration_completion(MigrationState *s, int current_active_state,
qemu_mutex_lock_iothread();
*start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
- *old_vm_running = runstate_is_running();
+ s->vm_was_running = runstate_is_running();
ret = global_state_store();
if (!ret) {
@@ -2174,7 +2172,6 @@ static void *migration_thread(void *opaque)
int64_t threshold_size = 0;
int64_t start_time = initial_time;
int64_t end_time;
- bool old_vm_running = false;
bool entered_postcopy = false;
/* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
enum MigrationStatus current_active_state = MIGRATION_STATUS_ACTIVE;
@@ -2233,7 +2230,7 @@ static void *migration_thread(void *opaque)
pend_nonpost <= threshold_size &&
atomic_read(&s->start_postcopy)) {
- if (!postcopy_start(s, &old_vm_running)) {
+ if (!postcopy_start(s)) {
current_active_state = MIGRATION_STATUS_POSTCOPY_ACTIVE;
entered_postcopy = true;
}
@@ -2245,7 +2242,7 @@ static void *migration_thread(void *opaque)
} else {
trace_migration_thread_low_pending(pending_size);
migration_completion(s, current_active_state,
- &old_vm_running, &start_time);
+ &start_time);
break;
}
}
@@ -2318,9 +2315,9 @@ static void *migration_thread(void *opaque)
* Fixme: we will run VM in COLO no matter its old running state.
* After exited COLO, we will keep running.
*/
- old_vm_running = true;
+ s->vm_was_running = true;
}
- if (old_vm_running && !entered_postcopy) {
+ if (s->vm_was_running) {
vm_start();
} else {
if (runstate_check(RUN_STATE_FINISH_MIGRATE)) {
diff --git a/migration/migration.h b/migration/migration.h
index 233ad68705..45053ae8c5 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -111,6 +111,12 @@ struct MigrationState
int64_t expected_downtime;
bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
int64_t setup_time;
+ /*
+ * Whether guest was running when we enter the completion stage.
+ * If migration is interrupted by any reason, we need to continue
+ * running the guest on source.
+ */
+ bool vm_was_running;
/* Flag set once the migration has been asked to enter postcopy */
bool start_postcopy;
--
2.14.3
* [Qemu-devel] [PATCH v2 06/13] migration: introduce downtime_start
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (4 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 05/13] migration: move vm_old_running into global state Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 07/13] migration: introduce migrate_calculate_complete Peter Xu
` (6 subsequent siblings)
12 siblings, 0 replies; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
Introduce MigrationState.downtime_start to replace the local variable
"start_time" in migration_thread to avoid passing things around.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 12 ++++--------
migration/migration.h | 2 ++
2 files changed, 6 insertions(+), 8 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index ca0c600178..bfe1a76254 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2055,16 +2055,14 @@ static int migration_maybe_pause(MigrationState *s,
*
* @s: Current migration state
* @current_active_state: The migration state we expect to be in
- * @*start_time: Pointer to time to update
*/
-static void migration_completion(MigrationState *s, int current_active_state,
- int64_t *start_time)
+static void migration_completion(MigrationState *s, int current_active_state)
{
int ret;
if (s->state == MIGRATION_STATUS_ACTIVE) {
qemu_mutex_lock_iothread();
- *start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+ s->downtime_start = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
s->vm_was_running = runstate_is_running();
ret = global_state_store();
@@ -2170,7 +2168,6 @@ static void *migration_thread(void *opaque)
* measured bandwidth
*/
int64_t threshold_size = 0;
- int64_t start_time = initial_time;
int64_t end_time;
bool entered_postcopy = false;
/* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
@@ -2241,8 +2238,7 @@ static void *migration_thread(void *opaque)
qemu_savevm_state_iterate(s->to_dst_file, entered_postcopy);
} else {
trace_migration_thread_low_pending(pending_size);
- migration_completion(s, current_active_state,
- &start_time);
+ migration_completion(s, current_active_state);
break;
}
}
@@ -2293,7 +2289,7 @@ static void *migration_thread(void *opaque)
uint64_t transferred_bytes = qemu_ftell(s->to_dst_file);
s->total_time = end_time - s->start_time;
if (!entered_postcopy) {
- s->downtime = end_time - start_time;
+ s->downtime = end_time - s->downtime_start;
}
if (s->total_time) {
s->mbps = (((double) transferred_bytes * 8.0) /
diff --git a/migration/migration.h b/migration/migration.h
index 45053ae8c5..867383f18d 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -107,6 +107,8 @@ struct MigrationState
int64_t start_time;
/* Total time used by latest migration (ms) */
int64_t total_time;
+ /* Timestamp when VM is down (ms) to migrate the last stuff */
+ int64_t downtime_start;
int64_t downtime;
int64_t expected_downtime;
bool enabled_capabilities[MIGRATION_CAPABILITY__MAX];
--
2.14.3
* [Qemu-devel] [PATCH v2 07/13] migration: introduce migrate_calculate_complete
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (5 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 06/13] migration: introduce downtime_start Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:26 ` Juan Quintela
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 08/13] migration: use switch at the end of migration Peter Xu
` (5 subsequent siblings)
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
Generalize the calculation done when migration completes into a
function to simplify migration_thread().
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 31 ++++++++++++++++++++-----------
1 file changed, 20 insertions(+), 11 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index bfe1a76254..72f0b586f7 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2151,6 +2151,25 @@ bool migrate_colo_enabled(void)
return s->enabled_capabilities[MIGRATION_CAPABILITY_X_COLO];
}
+static void migration_calculate_complete(MigrationState *s)
+{
+ uint64_t bytes = qemu_ftell(s->to_dst_file);
+ int64_t end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+
+ s->total_time = end_time - s->start_time;
+ if (!s->downtime) {
+ /*
+ * It's still not set, so we are precopy migration. For
+ * postcopy, downtime is calculated during postcopy_start().
+ */
+ s->downtime = end_time - s->downtime_start;
+ }
+
+ if (s->total_time) {
+ s->mbps = ((double) bytes * 8.0) / s->total_time / 1000;
+ }
+}
+
/*
* Master migration thread on the source VM.
* It drives the migration and pumps the data down the outgoing channel.
@@ -2168,7 +2187,6 @@ static void *migration_thread(void *opaque)
* measured bandwidth
*/
int64_t threshold_size = 0;
- int64_t end_time;
bool entered_postcopy = false;
/* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
enum MigrationStatus current_active_state = MIGRATION_STATUS_ACTIVE;
@@ -2282,19 +2300,10 @@ static void *migration_thread(void *opaque)
trace_migration_thread_after_loop();
/* If we enabled cpu throttling for auto-converge, turn it off. */
cpu_throttle_stop();
- end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
qemu_mutex_lock_iothread();
if (s->state == MIGRATION_STATUS_COMPLETED) {
- uint64_t transferred_bytes = qemu_ftell(s->to_dst_file);
- s->total_time = end_time - s->start_time;
- if (!entered_postcopy) {
- s->downtime = end_time - s->downtime_start;
- }
- if (s->total_time) {
- s->mbps = (((double) transferred_bytes * 8.0) /
- ((double) s->total_time)) / 1000;
- }
+ migration_calculate_complete(s);
runstate_set(RUN_STATE_POSTMIGRATE);
} else {
if (s->state == MIGRATION_STATUS_ACTIVE) {
--
2.14.3
* [Qemu-devel] [PATCH v2 08/13] migration: use switch at the end of migration
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (6 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 07/13] migration: introduce migrate_calculate_complete Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:27 ` Juan Quintela
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 09/13] migration: cleanup stats update into function Peter Xu
` (4 subsequent siblings)
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
Convert the old "if" clauses into a "switch" that explicitly lists the
possible migration states. The old nested "if"s did not make it clear
what we do in each state.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 44 +++++++++++++++++++++++++++-----------------
1 file changed, 27 insertions(+), 17 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 72f0b586f7..4dd21f8636 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2302,26 +2302,30 @@ static void *migration_thread(void *opaque)
cpu_throttle_stop();
qemu_mutex_lock_iothread();
- if (s->state == MIGRATION_STATUS_COMPLETED) {
+ switch (s->state) {
+ case MIGRATION_STATUS_COMPLETED:
migration_calculate_complete(s);
runstate_set(RUN_STATE_POSTMIGRATE);
- } else {
- if (s->state == MIGRATION_STATUS_ACTIVE) {
- /*
- * We should really assert here, but since it's during
- * migration, let's try to reduce the usage of assertions.
- */
- if (!migrate_colo_enabled()) {
- error_report("%s: critical error: calling COLO code without "
- "COLO enabled", __func__);
- }
- migrate_start_colo_process(s);
- /*
- * Fixme: we will run VM in COLO no matter its old running state.
- * After exited COLO, we will keep running.
- */
- s->vm_was_running = true;
+ break;
+
+ case MIGRATION_STATUS_ACTIVE:
+ /*
+ * We should really assert here, but since it's during
+ * migration, let's try to reduce the usage of assertions.
+ */
+ if (!migrate_colo_enabled()) {
+ error_report("%s: critical error: calling COLO code without "
+ "COLO enabled", __func__);
}
+ migrate_start_colo_process(s);
+ /*
+ * Fixme: we will run VM in COLO no matter its old running state.
+ * After exited COLO, we will keep running.
+ */
+ s->vm_was_running = true;
+ /* Fallthrough */
+ case MIGRATION_STATUS_FAILED:
+ case MIGRATION_STATUS_CANCELLED:
if (s->vm_was_running) {
vm_start();
} else {
@@ -2329,6 +2333,12 @@ static void *migration_thread(void *opaque)
runstate_set(RUN_STATE_POSTMIGRATE);
}
}
+ break;
+
+ default:
+ /* Should not reach here, but if so, forgive the VM. */
+ error_report("%s: Unknown ending state %d", __func__, s->state);
+ break;
}
qemu_bh_schedule(s->cleanup_bh);
qemu_mutex_unlock_iothread();
--
2.14.3
* [Qemu-devel] [PATCH v2 09/13] migration: cleanup stats update into function
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (7 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 08/13] migration: use switch at the end of migration Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:28 ` Juan Quintela
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 10/13] migration: major cleanup for migrate iterations Peter Xu
` (3 subsequent siblings)
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
We have quite a few lines in migration_thread() that calculate
statistics for the migration iterations. Isolate them into a single
function to improve readability.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 86 ++++++++++++++++++++++++++++++---------------------
migration/migration.h | 11 +++++++
2 files changed, 61 insertions(+), 36 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 4dd21f8636..de872801e6 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1273,6 +1273,8 @@ MigrationState *migrate_init(void)
s->start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
s->total_time = 0;
s->vm_was_running = false;
+ s->iteration_initial_bytes = 0;
+ s->threshold_size = 0;
return s;
}
@@ -2170,6 +2172,43 @@ static void migration_calculate_complete(MigrationState *s)
}
}
+static void migration_update_counters(MigrationState *s,
+ int64_t current_time)
+{
+ uint64_t transferred, time_spent;
+ int64_t threshold_size;
+ double bandwidth;
+
+ if (current_time < s->iteration_start_time + BUFFER_DELAY) {
+ return;
+ }
+
+ transferred = qemu_ftell(s->to_dst_file) - s->iteration_initial_bytes;
+ time_spent = current_time - s->iteration_start_time;
+ bandwidth = (double)transferred / time_spent;
+ threshold_size = bandwidth * s->parameters.downtime_limit;
+
+ s->mbps = (((double) transferred * 8.0) /
+ ((double) time_spent / 1000.0)) / 1000.0 / 1000.0;
+
+ /*
+ * if we haven't sent anything, we don't want to
+ * recalculate. 10000 is a small enough number for our purposes
+ */
+ if (ram_counters.dirty_pages_rate && transferred > 10000) {
+ s->expected_downtime = ram_counters.dirty_pages_rate *
+ qemu_target_page_size() / bandwidth;
+ }
+
+ qemu_file_reset_rate_limit(s->to_dst_file);
+
+ s->iteration_start_time = current_time;
+ s->iteration_initial_bytes = qemu_ftell(s->to_dst_file);
+
+ trace_migrate_transferred(transferred, time_spent,
+ bandwidth, threshold_size);
+}
+
/*
* Master migration thread on the source VM.
* It drives the migration and pumps the data down the outgoing channel.
@@ -2177,22 +2216,15 @@ static void migration_calculate_complete(MigrationState *s)
static void *migration_thread(void *opaque)
{
MigrationState *s = opaque;
- /* Used by the bandwidth calcs, updated later */
- int64_t initial_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
int64_t setup_start = qemu_clock_get_ms(QEMU_CLOCK_HOST);
- int64_t initial_bytes = 0;
- /*
- * The final stage happens when the remaining data is smaller than
- * this threshold; it's calculated from the requested downtime and
- * measured bandwidth
- */
- int64_t threshold_size = 0;
bool entered_postcopy = false;
/* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
enum MigrationStatus current_active_state = MIGRATION_STATUS_ACTIVE;
rcu_register_thread();
+ s->iteration_start_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
+
qemu_savevm_state_header(s->to_dst_file);
/*
@@ -2232,17 +2264,17 @@ static void *migration_thread(void *opaque)
if (!qemu_file_rate_limit(s->to_dst_file)) {
uint64_t pend_post, pend_nonpost;
- qemu_savevm_state_pending(s->to_dst_file, threshold_size,
+ qemu_savevm_state_pending(s->to_dst_file, s->threshold_size,
&pend_nonpost, &pend_post);
pending_size = pend_nonpost + pend_post;
- trace_migrate_pending(pending_size, threshold_size,
+ trace_migrate_pending(pending_size, s->threshold_size,
pend_post, pend_nonpost);
- if (pending_size && pending_size >= threshold_size) {
+ if (pending_size && pending_size >= s->threshold_size) {
/* Still a significant amount to transfer */
if (migrate_postcopy() &&
s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
- pend_nonpost <= threshold_size &&
+ pend_nonpost <= s->threshold_size &&
atomic_read(&s->start_postcopy)) {
if (!postcopy_start(s)) {
@@ -2267,33 +2299,15 @@ static void *migration_thread(void *opaque)
trace_migration_thread_file_err();
break;
}
+
current_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
- if (current_time >= initial_time + BUFFER_DELAY) {
- uint64_t transferred_bytes = qemu_ftell(s->to_dst_file) -
- initial_bytes;
- uint64_t time_spent = current_time - initial_time;
- double bandwidth = (double)transferred_bytes / time_spent;
- threshold_size = bandwidth * s->parameters.downtime_limit;
-
- s->mbps = (((double) transferred_bytes * 8.0) /
- ((double) time_spent / 1000.0)) / 1000.0 / 1000.0;
-
- trace_migrate_transferred(transferred_bytes, time_spent,
- bandwidth, threshold_size);
- /* if we haven't sent anything, we don't want to recalculate
- 10000 is a small enough number for our purposes */
- if (ram_counters.dirty_pages_rate && transferred_bytes > 10000) {
- s->expected_downtime = ram_counters.dirty_pages_rate *
- qemu_target_page_size() / bandwidth;
- }
- qemu_file_reset_rate_limit(s->to_dst_file);
- initial_time = current_time;
- initial_bytes = qemu_ftell(s->to_dst_file);
- }
+ migration_update_counters(s, current_time);
+
if (qemu_file_rate_limit(s->to_dst_file)) {
/* usleep expects microseconds */
- g_usleep((initial_time + BUFFER_DELAY - current_time)*1000);
+ g_usleep((s->iteration_start_time + BUFFER_DELAY -
+ current_time) * 1000);
}
}
diff --git a/migration/migration.h b/migration/migration.h
index 867383f18d..786d971ce2 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -90,6 +90,17 @@ struct MigrationState
QEMUBH *cleanup_bh;
QEMUFile *to_dst_file;
+ /* bytes already sent at the beginning of the current iteration */
+ uint64_t iteration_initial_bytes;
+ /* time at the start of current iteration */
+ int64_t iteration_start_time;
+ /*
+ * The final stage happens when the remaining data is smaller than
+ * this threshold; it's calculated from the requested downtime and
+ * measured bandwidth
+ */
+ int64_t threshold_size;
+
/* params from 'migrate-set-parameters' */
MigrationParameters parameters;
--
2.14.3
* [Qemu-devel] [PATCH v2 10/13] migration: major cleanup for migrate iterations
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (8 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 09/13] migration: cleanup stats update into function Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 11/13] migration: put the finish part into a new function Peter Xu
` (2 subsequent siblings)
12 siblings, 0 replies; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
The major work of migration iterations is to move RAM/block/... data
via qemu_savevm_state_iterate(). Generalize that part into a single
function.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 90 +++++++++++++++++++++++++++++++--------------------
1 file changed, 55 insertions(+), 35 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index de872801e6..2b42d6c9a1 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2056,11 +2056,11 @@ static int migration_maybe_pause(MigrationState *s,
* The caller 'breaks' the loop when this returns.
*
* @s: Current migration state
- * @current_active_state: The migration state we expect to be in
*/
-static void migration_completion(MigrationState *s, int current_active_state)
+static void migration_completion(MigrationState *s)
{
int ret;
+ int current_active_state = s->state;
if (s->state == MIGRATION_STATUS_ACTIVE) {
qemu_mutex_lock_iothread();
@@ -2209,6 +2209,51 @@ static void migration_update_counters(MigrationState *s,
bandwidth, threshold_size);
}
+/* Migration thread iteration status */
+typedef enum {
+ MIG_ITERATE_RESUME, /* Resume current iteration */
+ MIG_ITERATE_SKIP, /* Skip current iteration */
+ MIG_ITERATE_BREAK, /* Break the loop */
+} MigIterateState;
+
+/*
+ * Return MIG_ITERATE_SKIP to restart the loop immediately,
+ * MIG_ITERATE_BREAK to leave the loop, or MIG_ITERATE_RESUME to
+ * continue with rate limiting and the next iteration.
+ */
+static MigIterateState migration_iteration_run(MigrationState *s)
+{
+ uint64_t pending_size, pend_post, pend_nonpost;
+ bool in_postcopy = s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE;
+
+ qemu_savevm_state_pending(s->to_dst_file, s->threshold_size,
+ &pend_nonpost, &pend_post);
+ pending_size = pend_nonpost + pend_post;
+
+ trace_migrate_pending(pending_size, s->threshold_size,
+ pend_post, pend_nonpost);
+
+ if (pending_size && pending_size >= s->threshold_size) {
+ /* Still a significant amount to transfer */
+ if (migrate_postcopy() && !in_postcopy &&
+ pend_nonpost <= s->threshold_size &&
+ atomic_read(&s->start_postcopy)) {
+ if (postcopy_start(s)) {
+ error_report("%s: postcopy failed to start", __func__);
+ }
+ return MIG_ITERATE_SKIP;
+ }
+ /* Just another iteration step */
+ qemu_savevm_state_iterate(s->to_dst_file,
+ s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
+ } else {
+ trace_migration_thread_low_pending(pending_size);
+ migration_completion(s);
+ return MIG_ITERATE_BREAK;
+ }
+
+ return MIG_ITERATE_RESUME;
+}
+
/*
* Master migration thread on the source VM.
* It drives the migration and pumps the data down the outgoing channel.
@@ -2217,9 +2262,6 @@ static void *migration_thread(void *opaque)
{
MigrationState *s = opaque;
int64_t setup_start = qemu_clock_get_ms(QEMU_CLOCK_HOST);
- bool entered_postcopy = false;
- /* The active state we expect to be in; ACTIVE or POSTCOPY_ACTIVE */
- enum MigrationStatus current_active_state = MIGRATION_STATUS_ACTIVE;
rcu_register_thread();
@@ -2259,43 +2301,21 @@ static void *migration_thread(void *opaque)
while (s->state == MIGRATION_STATUS_ACTIVE ||
s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE) {
int64_t current_time;
- uint64_t pending_size;
if (!qemu_file_rate_limit(s->to_dst_file)) {
- uint64_t pend_post, pend_nonpost;
-
- qemu_savevm_state_pending(s->to_dst_file, s->threshold_size,
- &pend_nonpost, &pend_post);
- pending_size = pend_nonpost + pend_post;
- trace_migrate_pending(pending_size, s->threshold_size,
- pend_post, pend_nonpost);
- if (pending_size && pending_size >= s->threshold_size) {
- /* Still a significant amount to transfer */
-
- if (migrate_postcopy() &&
- s->state != MIGRATION_STATUS_POSTCOPY_ACTIVE &&
- pend_nonpost <= s->threshold_size &&
- atomic_read(&s->start_postcopy)) {
-
- if (!postcopy_start(s)) {
- current_active_state = MIGRATION_STATUS_POSTCOPY_ACTIVE;
- entered_postcopy = true;
- }
-
- continue;
- }
- /* Just another iteration step */
- qemu_savevm_state_iterate(s->to_dst_file, entered_postcopy);
- } else {
- trace_migration_thread_low_pending(pending_size);
- migration_completion(s, current_active_state);
+ MigIterateState iter_state = migration_iteration_run(s);
+ if (iter_state == MIG_ITERATE_SKIP) {
+ continue;
+ } else if (iter_state == MIG_ITERATE_BREAK) {
break;
}
}
if (qemu_file_get_error(s->to_dst_file)) {
- migrate_set_state(&s->state, current_active_state,
- MIGRATION_STATUS_FAILED);
+ if (migration_is_setup_or_active(s->state)) {
+ migrate_set_state(&s->state, s->state,
+ MIGRATION_STATUS_FAILED);
+ }
trace_migration_thread_file_err();
break;
}
--
2.14.3
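[Editorial note: the three-way iteration result above can be sketched as a toy driver loop; the mapping of each state onto continue/break/fallthrough mirrors what migration_thread() now does. All names below are illustrative, not QEMU code.]

```c
#include <assert.h>

typedef enum {
    ITERATE_RESUME,  /* fall through to rate limiting */
    ITERATE_SKIP,    /* restart the loop immediately */
    ITERATE_BREAK,   /* leave the loop */
} IterateState;

/* Toy "iteration": each call transfers one unit of pending data and
 * requests completion once nothing is left. */
static IterateState iteration_run(int *pending)
{
    if (*pending > 0) {
        (*pending)--;
        return ITERATE_RESUME;
    }
    return ITERATE_BREAK;
}

/* Returns the number of loop turns taken before completion. */
static int run_loop(int pending)
{
    int turns = 0;
    for (;;) {
        IterateState s = iteration_run(&pending);
        turns++;
        if (s == ITERATE_SKIP) {
            continue;
        } else if (s == ITERATE_BREAK) {
            break;
        }
        /* ITERATE_RESUME: the rate-limit sleep would go here */
    }
    return turns;
}
```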
* [Qemu-devel] [PATCH v2 11/13] migration: put the finish part into a new function
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (9 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 10/13] migration: major cleanup for migrate iterations Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 12/13] migration: remove some block_cleanup_parameters() Peter Xu
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 13/13] migration: remove notify in fd_error Peter Xu
12 siblings, 0 replies; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
This patch only moves the last part of migration_thread() into a new
function, migration_iteration_finish(), to make migration_thread()
much shorter. With the previous work removing some local variables,
this is now fairly easy to do.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 94 +++++++++++++++++++++++++++------------------------
1 file changed, 49 insertions(+), 45 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 2b42d6c9a1..16eb24c8b3 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -2254,6 +2254,54 @@ static MigIterateState migration_iteration_run(MigrationState *s)
return MIG_ITERATE_RESUME;
}
+static void migration_iteration_finish(MigrationState *s)
+{
+ /* If we enabled cpu throttling for auto-converge, turn it off. */
+ cpu_throttle_stop();
+
+ qemu_mutex_lock_iothread();
+ switch (s->state) {
+ case MIGRATION_STATUS_COMPLETED:
+ migration_calculate_complete(s);
+ runstate_set(RUN_STATE_POSTMIGRATE);
+ break;
+
+ case MIGRATION_STATUS_ACTIVE:
+ /*
+ * We should really assert here, but since it's during
+ * migration, let's try to reduce the usage of assertions.
+ */
+ if (!migrate_colo_enabled()) {
+ error_report("%s: critical error: calling COLO code without "
+ "COLO enabled", __func__);
+ }
+ migrate_start_colo_process(s);
+ /*
+ * FIXME: we will run the VM in COLO no matter what its old running
+ * state was. After exiting COLO, we will keep running.
+ */
+ s->vm_was_running = true;
+ /* Fallthrough */
+ case MIGRATION_STATUS_FAILED:
+ case MIGRATION_STATUS_CANCELLED:
+ if (s->vm_was_running) {
+ vm_start();
+ } else {
+ if (runstate_check(RUN_STATE_FINISH_MIGRATE)) {
+ runstate_set(RUN_STATE_POSTMIGRATE);
+ }
+ }
+ break;
+
+ default:
+ /* Should not reach here, but if so, forgive the VM. */
+ error_report("%s: Unknown ending state %d", __func__, s->state);
+ break;
+ }
+ qemu_bh_schedule(s->cleanup_bh);
+ qemu_mutex_unlock_iothread();
+}
+
/*
* Master migration thread on the source VM.
* It drives the migration and pumps the data down the outgoing channel.
@@ -2332,51 +2380,7 @@ static void *migration_thread(void *opaque)
}
trace_migration_thread_after_loop();
- /* If we enabled cpu throttling for auto-converge, turn it off. */
- cpu_throttle_stop();
-
- qemu_mutex_lock_iothread();
- switch (s->state) {
- case MIGRATION_STATUS_COMPLETED:
- migration_calculate_complete(s);
- runstate_set(RUN_STATE_POSTMIGRATE);
- break;
-
- case MIGRATION_STATUS_ACTIVE:
- /*
- * We should really assert here, but since it's during
- * migration, let's try to reduce the usage of assertions.
- */
- if (!migrate_colo_enabled()) {
- error_report("%s: critical error: calling COLO code without "
- "COLO enabled", __func__);
- }
- migrate_start_colo_process(s);
- /*
- * Fixme: we will run VM in COLO no matter its old running state.
- * After exited COLO, we will keep running.
- */
- s->vm_was_running = true;
- /* Fallthrough */
- case MIGRATION_STATUS_FAILED:
- case MIGRATION_STATUS_CANCELLED:
- if (s->vm_was_running) {
- vm_start();
- } else {
- if (runstate_check(RUN_STATE_FINISH_MIGRATE)) {
- runstate_set(RUN_STATE_POSTMIGRATE);
- }
- }
- break;
-
- default:
- /* Should not reach here, but if so, forgive the VM. */
- error_report("%s: Unknown ending state %d", __func__, s->state);
- break;
- }
- qemu_bh_schedule(s->cleanup_bh);
- qemu_mutex_unlock_iothread();
-
+ migration_iteration_finish(s);
rcu_unregister_thread();
return NULL;
}
--
2.14.3
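[Editorial note: the fallthrough in migration_iteration_finish() is the subtle part of this move, so here it is reduced to a pure function: the ACTIVE (COLO) case marks the VM as having been running and then shares the restart path with FAILED/CANCELLED. Names and return codes below are illustrative, not QEMU code.]

```c
#include <assert.h>

typedef enum { ST_COMPLETED, ST_ACTIVE, ST_FAILED, ST_CANCELLED } EndState;

/* Returns 0 for "go to POSTMIGRATE", 1 for "vm_start()",
 * 2 for "stay stopped", -1 for an unknown ending state. */
static int finish_decision(EndState st, int vm_was_running)
{
    switch (st) {
    case ST_COMPLETED:
        return 0;               /* -> RUN_STATE_POSTMIGRATE */
    case ST_ACTIVE:
        vm_was_running = 1;     /* COLO keeps the VM running */
        /* fall through */
    case ST_FAILED:
    case ST_CANCELLED:
        return vm_was_running ? 1 : 2;
    default:
        return -1;              /* should not be reached */
    }
}
```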
* [Qemu-devel] [PATCH v2 12/13] migration: remove some block_cleanup_parameters()
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (10 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 11/13] migration: put the finish part into a new function Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:29 ` Juan Quintela
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 13/13] migration: remove notify in fd_error Peter Xu
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
Keeping the one in migrate_fd_cancel() would be enough. Remove the
other two.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/migration/migration.c b/migration/migration.c
index 16eb24c8b3..fbb41b8887 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1130,7 +1130,6 @@ void migrate_fd_error(MigrationState *s, const Error *error)
MIGRATION_STATUS_FAILED);
migrate_set_error(s, error);
notifier_list_notify(&migration_state_notifiers, s);
- block_cleanup_parameters(s);
}
static void migrate_fd_cancel(MigrationState *s)
@@ -1176,7 +1175,6 @@ static void migrate_fd_cancel(MigrationState *s)
s->block_inactive = false;
}
}
- block_cleanup_parameters(s);
}
void add_migration_state_change_notifier(Notifier *notify)
--
2.14.3
* [Qemu-devel] [PATCH v2 13/13] migration: remove notify in fd_error
2018-01-03 12:20 [Qemu-devel] [PATCH v2 00/13] migration: cleanup migration_thread() Peter Xu
` (11 preceding siblings ...)
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 12/13] migration: remove some block_cleanup_parameters() Peter Xu
@ 2018-01-03 12:20 ` Peter Xu
2018-01-03 12:31 ` Juan Quintela
12 siblings, 1 reply; 23+ messages in thread
From: Peter Xu @ 2018-01-03 12:20 UTC (permalink / raw)
To: qemu-devel; +Cc: Laurent Vivier, Juan Quintela, Dr . David Alan Gilbert, peterx
It should be called in migrate_fd_cleanup too.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
migration/migration.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/migration/migration.c b/migration/migration.c
index fbb41b8887..c671f0a662 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -1129,7 +1129,6 @@ void migrate_fd_error(MigrationState *s, const Error *error)
migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
MIGRATION_STATUS_FAILED);
migrate_set_error(s, error);
- notifier_list_notify(&migration_state_notifiers, s);
}
static void migrate_fd_cancel(MigrationState *s)
--
2.14.3
* Re: [Qemu-devel] [PATCH v2 13/13] migration: remove notify in fd_error
2018-01-03 12:20 ` [Qemu-devel] [PATCH v2 13/13] migration: remove notify in fd_error Peter Xu
@ 2018-01-03 12:31 ` Juan Quintela
2018-01-04 2:18 ` Peter Xu
0 siblings, 1 reply; 23+ messages in thread
From: Juan Quintela @ 2018-01-03 12:31 UTC (permalink / raw)
To: Peter Xu; +Cc: qemu-devel, Laurent Vivier, Dr . David Alan Gilbert
Peter Xu <peterx@redhat.com> wrote:
> It should be called in migrate_fd_cleanup too.
It is *already* called in migrate_fd_cleanup.
I think we should add a comment stating that we _always_ end up calling
migrate_fd_cleanup, independently of how the migration ends.
> Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
I can also fix the comment when pulling if you agree with the change.
Later, Juan.
* Re: [Qemu-devel] [PATCH v2 13/13] migration: remove notify in fd_error
2018-01-03 12:31 ` Juan Quintela
@ 2018-01-04 2:18 ` Peter Xu
0 siblings, 0 replies; 23+ messages in thread
From: Peter Xu @ 2018-01-04 2:18 UTC (permalink / raw)
To: Juan Quintela; +Cc: qemu-devel, Laurent Vivier, Dr . David Alan Gilbert
On Wed, Jan 03, 2018 at 01:31:01PM +0100, Juan Quintela wrote:
> Peter Xu <peterx@redhat.com> wrote:
> > It should be called in migrate_fd_cleanup too.
>
> It is *already* called in migrate_fd_cleanup.
>
> I think we should add a comment stating that we _always_ end up calling
> migrate_fd_cleanup, independently of how the migration ends.
>
>
> > Signed-off-by: Peter Xu <peterx@redhat.com>
>
> Reviewed-by: Juan Quintela <quintela@redhat.com>
>
> I can also fix the comment when pulling if you agree with the change.
Yes. Please modify according to your suggestions (including the other
patch comment). Thanks for that!
--
Peter Xu