qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH 1/1] Postcopy+spice: Pass spice migration data earlier
@ 2016-02-22 17:17 Dr. David Alan Gilbert (git)
  2016-02-23  8:26 ` Amit Shah
From: Dr. David Alan Gilbert (git) @ 2016-02-22 17:17 UTC
  To: qemu-devel, quintela, amit.shah, kraxel, jdenemar

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Spice hooks the migration status changes to figure out when to
transmit information to the new spice server; but the migration
status in postcopy doesn't quite fit - the destination starts
running before the end of the source migration.

It's not a case of hanging off the migration status change to
postcopy-active either, since that happens before we stop the
guest CPU.

Fix it by sending a notify just after sending the device state,
and adding a flag that can be tested by the notify receiver.

Symptom:
   spice handover doesn't work with the error:
   red_worker.c:11540:display_channel_wait_for_migrate_data: timeout

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
---
 include/migration/migration.h |  4 ++++
 migration/migration.c         | 14 ++++++++++++++
 ui/spice-core.c               |  3 ++-
 3 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/include/migration/migration.h b/include/migration/migration.h
index 74684ad..97622e4 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -159,6 +159,8 @@ struct MigrationState
 
     /* Flag set once the migration has been asked to enter postcopy */
     bool start_postcopy;
+    /* Flag set after postcopy has sent the device state */
+    bool postcopy_after_devices;
 
     /* Flag set once the migration thread is running (and needs joining) */
     bool migration_thread_running;
@@ -212,6 +214,8 @@ bool migration_has_finished(MigrationState *);
 bool migration_has_failed(MigrationState *);
 /* True if outgoing migration has entered postcopy phase */
 bool migration_in_postcopy(MigrationState *);
+/* ...and after the device transmission */
+bool migration_in_postcopy_after_devices(MigrationState *);
 MigrationState *migrate_get_current(void);
 
 void migrate_compress_threads_create(void);
diff --git a/migration/migration.c b/migration/migration.c
index a64cfcd..fc5e50b 100644
--- a/migration/migration.c
+++ b/migration/migration.c
@@ -905,6 +905,11 @@ bool migration_in_postcopy(MigrationState *s)
     return (s->state == MIGRATION_STATUS_POSTCOPY_ACTIVE);
 }
 
+bool migration_in_postcopy_after_devices(MigrationState *s)
+{
+    return migration_in_postcopy(s) && s->postcopy_after_devices;
+}
+
 MigrationState *migrate_init(const MigrationParams *params)
 {
     MigrationState *s = migrate_get_current();
@@ -930,6 +935,7 @@ MigrationState *migrate_init(const MigrationParams *params)
     s->setup_time = 0;
     s->dirty_sync_count = 0;
     s->start_postcopy = false;
+    s->postcopy_after_devices = false;
     s->migration_thread_running = false;
     s->last_req_rb = NULL;
 
@@ -1489,6 +1495,14 @@ static int postcopy_start(MigrationState *ms, bool *old_vm_running)
         goto fail_closefb;
     }
     qemu_fclose(fb);
+
+    /* Send a notify to give a chance for anything that needs to happen
+     * at the transition to postcopy and after the device state; in particular
+     * spice needs to trigger a transition now
+     */
+    ms->postcopy_after_devices = true;
+    notifier_list_notify(&migration_state_notifiers, ms);
+
     ms->downtime =  qemu_clock_get_ms(QEMU_CLOCK_REALTIME) - time_at_stop;
 
     qemu_mutex_unlock_iothread();
diff --git a/ui/spice-core.c b/ui/spice-core.c
index 4dbd99a..11e72d5 100644
--- a/ui/spice-core.c
+++ b/ui/spice-core.c
@@ -568,7 +568,8 @@ static void migration_state_notifier(Notifier *notifier, void *data)
 
     if (migration_in_setup(s)) {
         spice_server_migrate_start(spice_server);
-    } else if (migration_has_finished(s)) {
+    } else if (migration_has_finished(s) ||
+               migration_in_postcopy_after_devices(s)) {
         spice_server_migrate_end(spice_server, true);
         spice_have_target_host = false;
     } else if (migration_has_failed(s)) {
-- 
2.5.0
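
The mechanism the patch builds on is QEMU's migration state notifier list: a subsystem registers a Notifier, and its callback receives the MigrationState whenever notifier_list_notify() fires, exactly as ui/spice-core.c does above. As a rough illustration of how another receiver could use the new helper, here is a minimal sketch; the example_* names are invented for this illustration, the migration_* helpers are the ones shown in the patch, and add_migration_state_change_notifier() is assumed to be the existing registration hook that spice already uses.

#include "qemu/notify.h"
#include "migration/migration.h"

/* Hypothetical receiver for illustration only; not part of the patch. */
static Notifier example_migration_notifier;

static void example_migration_state_cb(Notifier *notifier, void *data)
{
    MigrationState *s = data;  /* notifier_list_notify() passes the MigrationState */

    if (migration_in_postcopy_after_devices(s)) {
        /*
         * New with this patch: the device state has been sent and the
         * source CPUs are stopped, but s->state is still POSTCOPY_ACTIVE
         * rather than COMPLETED, so hand over to the destination now,
         * as spice does.
         */
    } else if (migration_has_finished(s)) {
        /* Ordinary (precopy) completion. */
    } else if (migration_has_failed(s)) {
        /* Clean up / roll back. */
    }
}

static void example_register(void)
{
    example_migration_notifier.notify = example_migration_state_cb;
    add_migration_state_change_notifier(&example_migration_notifier);
}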


* Re: [Qemu-devel] [PATCH 1/1] Postcopy+spice: Pass spice migration data earlier
  2016-02-22 17:17 [Qemu-devel] [PATCH 1/1] Postcopy+spice: Pass spice migration data earlier Dr. David Alan Gilbert (git)
@ 2016-02-23  8:26 ` Amit Shah
From: Amit Shah @ 2016-02-23  8:26 UTC
  To: Dr. David Alan Gilbert (git); +Cc: jdenemar, kraxel, qemu-devel, quintela

On (Mon) 22 Feb 2016 [17:17:32], Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
> 
> Spice hooks the migration status changes to figure out when to
> transmit information to the new spice server; but the migration
> status in postcopy doesn't quite fit - the destination starts
> running before the end of the source migration.
> 
> It's not a case of hanging off the migration status change to
> postcopy-active either, since that happens before we stop the
> guest CPU.
> 
> Fix it by sending a notify just after sending the device state,
> and adding a flag that can be tested by the notify receiver.
> 
> Symptom:
>    spice handover doesn't work with the error:
>    red_worker.c:11540:display_channel_wait_for_migrate_data: timeout
> 
> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

Reviewed-by: Amit Shah <amit.shah@redhat.com>


		Amit

