qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
@ 2012-06-05  5:49 Yonit Halperin
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 1/5] notifiers: add support for async notifier handlers Yonit Halperin
                   ` (5 more replies)
  0 siblings, 6 replies; 28+ messages in thread
From: Yonit Halperin @ 2012-06-05  5:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonit Halperin, aliguori, alevy, kraxel

Hi,

I'm sending this patch series again, this time with an additional patch
for setting a migrate_end notifier completion callback for the spice migration
interface. I've also added more detailed commit messages.

This patch series introduces async handlers for notifiers and integrates them
with migration state change notifications.

An asynchronous migration completion notifier is essential for allowing spice to cleanly
complete the src server's connection to the client and transfer the connection to the target.
Currently, as soon as the migration completes, the src qemu can be closed by
management, and spice cannot complete the spice-connection migration.
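
To illustrate the intended usage, here is a minimal sketch (names are
hypothetical; the API itself is introduced in patches 1 and 4) of a
subsystem registering an async migration-end notifier and deferring its
completion until its own work is done:

    static AsyncNotifier end_notifier;
    static NotifiedCompletionFunc *pending_cb;
    static void *pending_opaque;

    /* invoked by notifier_list_notify(); completion is deferred */
    static void my_end_notify(AsyncNotifier *n, void *data,
                              NotifiedCompletionFunc *complete_cb,
                              void *cb_data)
    {
        pending_cb = complete_cb;
        pending_opaque = cb_data;
        /* kick off async work here, e.g. flushing a client connection */
    }

    /* invoked by the subsystem once its async work has finished */
    static void my_work_done(void)
    {
        pending_cb(&end_notifier, pending_opaque);
    }

    /* registration, e.g. at init time */
    end_notifier.notify_async = my_end_notify;
    migration_add_end_notifier(&end_notifier);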

In order to support spice seamless migration, in addition to these patches, I plan to add:
(1) a notifier for switching from the live phase of the migration to the non-live phase,
    before completing savevm.
    Spice will use this notification to "finalize" the connection to the client: send
    and receive all in-flight data.
(2) vmstates for the spice data that needs to be migrated, e.g., usb/agent/smartcard
    buffers that were sent from the client and haven't been written to the device yet.
    We would also want to migrate data that will allow us to continue the new spice
    connection from the same point the old one stopped, without requiring special
    treatment on the client side.

Regards,
Yonit.

Yonit Halperin (5):
  notifiers: add support for async notifier handlers
  migration: moving migration start code to a separate routine
  migration: moving migration completion code to a separate routine
  migration: replace migration state change notifier with async
    notifiers
  spice: make the spice "migration end" handler async

 input.c         |    2 +-
 migration.c     |  154 ++++++++++++++++++++++++++++++++++++++++---------------
 migration.h     |   11 +++-
 notify.c        |   78 ++++++++++++++++++++++++++--
 notify.h        |   55 ++++++++++++++++++--
 qemu-timer.c    |    2 +-
 ui/spice-core.c |   58 +++++++++++++++------
 vl.c            |    2 +-
 8 files changed, 290 insertions(+), 72 deletions(-)

-- 
1.7.7.6


* [Qemu-devel] [RFC PATCH 1/5] notifiers: add support for async notifier handlers
  2012-06-05  5:49 [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Yonit Halperin
@ 2012-06-05  5:49 ` Yonit Halperin
  2012-06-05  8:36   ` Gerd Hoffmann
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 2/5] migration: moving migration start code to a separate routine Yonit Halperin
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Yonit Halperin @ 2012-06-05  5:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonit Halperin, aliguori, alevy, kraxel

This patch defines two subtypes of notifiers, sync and async. Both of
them can be added to a notifier list.
The patch also adds an optional complete_cb to the notifier list. complete_cb is
called when all the async notifiers have completed.
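
For illustration, the list-owner side then looks roughly like this
(sketch only; all_done is a hypothetical name):

    static void all_done(void *opaque)
    {
        /* runs once every async notifier has called its complete_cb */
    }

    list->complete_cb = all_done;
    list->complete_opaque = s;
    notifier_list_notify(list, data);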

Signed-off-by: Yonit Halperin <yhalperi@redhat.com>
---
 input.c      |    2 +-
 migration.c  |    2 +-
 notify.c     |   78 +++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 notify.h     |   55 +++++++++++++++++++++++++++++++++++++---
 qemu-timer.c |    2 +-
 vl.c         |    2 +-
 6 files changed, 128 insertions(+), 13 deletions(-)

diff --git a/input.c b/input.c
index 6968b31..06f6f9f 100644
--- a/input.c
+++ b/input.c
@@ -274,5 +274,5 @@ void qemu_add_mouse_mode_change_notifier(Notifier *notify)
 
 void qemu_remove_mouse_mode_change_notifier(Notifier *notify)
 {
-    notifier_remove(notify);
+    notifier_remove(&notify->base);
 }
diff --git a/migration.c b/migration.c
index 3f485d3..acaf293 100644
--- a/migration.c
+++ b/migration.c
@@ -320,7 +320,7 @@ void add_migration_state_change_notifier(Notifier *notify)
 
 void remove_migration_state_change_notifier(Notifier *notify)
 {
-    notifier_remove(notify);
+    notifier_remove(&notify->base);
 }
 
 bool migration_is_active(MigrationState *s)
diff --git a/notify.c b/notify.c
index 12282a6..c67e50e 100644
--- a/notify.c
+++ b/notify.c
@@ -19,23 +19,93 @@
 void notifier_list_init(NotifierList *list)
 {
     QLIST_INIT(&list->notifiers);
+    QLIST_INIT(&list->wait_notifiers);
 }
 
 void notifier_list_add(NotifierList *list, Notifier *notifier)
 {
-    QLIST_INSERT_HEAD(&list->notifiers, notifier, node);
+    notifier->base.type = NOTIFIER_TYPE_SYNC;
+    QLIST_INSERT_HEAD(&list->notifiers, &notifier->base, node);
 }
 
-void notifier_remove(Notifier *notifier)
+void notifier_list_add_async(NotifierList *list, AsyncNotifier *notifier)
+{
+    notifier->base.type = NOTIFIER_TYPE_ASYNC;
+    QLIST_INSERT_HEAD(&list->notifiers, &notifier->base, node);
+}
+
+void notifier_remove(BaseNotifier *notifier)
 {
     QLIST_REMOVE(notifier, node);
 }
 
+static void notified_complete_cb(AsyncNotifier *notifier, void *opaque)
+{
+    NotifierList *list = opaque;
+
+    QLIST_REMOVE(notifier, wait_node);
+
+    if (QLIST_EMPTY(&list->wait_notifiers) && !list->during_notify) {
+        if (list->complete_cb) {
+            list->complete_cb(list->complete_opaque);
+        }
+    }
+}
+
 void notifier_list_notify(NotifierList *list, void *data)
 {
-    Notifier *notifier, *next;
+    BaseNotifier *notifier, *next;
+    bool async = false;
+
+    if (notifier_list_async_waiting(list)) {
+        AsyncNotifier *wait_notifier, *wait_next;
+
+        fprintf(stderr, "%s: previous notify hasn't completed\n", __func__);
+        QLIST_FOREACH_SAFE(wait_notifier, &list->wait_notifiers,
+                           wait_node, wait_next) {
+            QLIST_REMOVE(wait_notifier, wait_node);
+        }
+    }
+
+    list->during_notify = true;
 
     QLIST_FOREACH_SAFE(notifier, &list->notifiers, node, next) {
-        notifier->notify(notifier, data);
+        switch (notifier->type) {
+        case NOTIFIER_TYPE_SYNC:
+            {
+                Notifier *sync_notifier;
+
+                sync_notifier = container_of(notifier, Notifier, base);
+                sync_notifier->notify(sync_notifier, data);
+                break;
+            }
+        case NOTIFIER_TYPE_ASYNC:
+            {
+                AsyncNotifier *async_notifier;
+
+                async = true;
+                async_notifier = container_of(notifier, AsyncNotifier, base);
+                QLIST_INSERT_HEAD(&list->wait_notifiers,
+                                  async_notifier,
+                                  wait_node);
+                async_notifier->notify_async(async_notifier, data,
+                                             notified_complete_cb, list);
+                break;
+            }
+        default:
+            fprintf(stderr, "%s: invalid notifier type %d\n", __func__,
+                    notifier->type);
+            break;
+        }
     }
+
+    list->during_notify = false;
+    if ((!async || !notifier_list_async_waiting(list)) && list->complete_cb) {
+        list->complete_cb(list->complete_opaque);
+    }
+}
+
+bool notifier_list_async_waiting(NotifierList *list)
+{
+    return !QLIST_EMPTY(&list->wait_notifiers);
 }
diff --git a/notify.h b/notify.h
index 03cf26c..8660920 100644
--- a/notify.h
+++ b/notify.h
@@ -16,28 +16,73 @@
 
 #include "qemu-queue.h"
 
+typedef enum NotifierType {
+    NOTIFIER_TYPE_NONE,
+    NOTIFIER_TYPE_SYNC,
+    NOTIFIER_TYPE_ASYNC,
+} NotifierType;
+
+typedef struct BaseNotifier BaseNotifier;
+
+struct BaseNotifier {
+    QLIST_ENTRY(BaseNotifier) node;
+    NotifierType type;
+};
 typedef struct Notifier Notifier;
 
 struct Notifier
 {
+    BaseNotifier base;
     void (*notify)(Notifier *notifier, void *data);
-    QLIST_ENTRY(Notifier) node;
 };
 
+typedef struct AsyncNotifier AsyncNotifier;
+typedef void (NotifiedCompletionFunc)(AsyncNotifier *notifier, void *opaque);
+
+struct AsyncNotifier {
+    BaseNotifier base;
+    void (*notify_async)(AsyncNotifier *notifier, void *data,
+                         NotifiedCompletionFunc *complete_cb, void *cb_data);
+    QLIST_ENTRY(AsyncNotifier) wait_node;
+};
+
+typedef void (NotifyListCompletion)(void *opaque);
+
 typedef struct NotifierList
 {
-    QLIST_HEAD(, Notifier) notifiers;
+    QLIST_HEAD(, BaseNotifier) notifiers;
+
+    NotifyListCompletion *complete_cb;
+    void *complete_opaque;
+
+    QLIST_HEAD(, AsyncNotifier) wait_notifiers;
+    bool during_notify;
 } NotifierList;
 
-#define NOTIFIER_LIST_INITIALIZER(head) \
-    { QLIST_HEAD_INITIALIZER((head).notifiers) }
+#define NOTIFIER_LIST_INITIALIZER(head)                      \
+    { QLIST_HEAD_INITIALIZER((head).notifiers),              \
+      NULL,                                                  \
+      NULL,                                                  \
+      QLIST_HEAD_INITIALIZER((head).wait_notifiers)          \
+    }
 
+#define ASYNC_NOTIFIER_LIST_INITIALIZER(head, cb, cb_data)   \
+    { QLIST_HEAD_INITIALIZER((head).notifiers),              \
+      cb,                                                    \
+      cb_data,                                               \
+      QLIST_HEAD_INITIALIZER((head).wait_notifiers)          \
+    }
 void notifier_list_init(NotifierList *list);
 
 void notifier_list_add(NotifierList *list, Notifier *notifier);
+void notifier_list_add_async(NotifierList *list, AsyncNotifier *notifier);
 
-void notifier_remove(Notifier *notifier);
+void notifier_remove(BaseNotifier *notifier);
 
 void notifier_list_notify(NotifierList *list, void *data);
 
+/* returns true when there are async notifiers that still haven't
+ * called complete_cb for the last notification */
+bool notifier_list_async_waiting(NotifierList *list);
+
 #endif
diff --git a/qemu-timer.c b/qemu-timer.c
index de98977..2b2f84a 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -430,7 +430,7 @@ void qemu_register_clock_reset_notifier(QEMUClock *clock, Notifier *notifier)
 
 void qemu_unregister_clock_reset_notifier(QEMUClock *clock, Notifier *notifier)
 {
-    notifier_remove(notifier);
+    notifier_remove(&notifier->base);
 }
 
 void init_clocks(void)
diff --git a/vl.c b/vl.c
index 23ab3a3..646b16b 100644
--- a/vl.c
+++ b/vl.c
@@ -2183,7 +2183,7 @@ void qemu_add_exit_notifier(Notifier *notify)
 
 void qemu_remove_exit_notifier(Notifier *notify)
 {
-    notifier_remove(notify);
+    notifier_remove(&notify->base);
 }
 
 static void qemu_run_exit_notifiers(void)
-- 
1.7.7.6


* [Qemu-devel] [RFC PATCH 2/5] migration: moving migration start code to a separate routine
  2012-06-05  5:49 [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Yonit Halperin
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 1/5] notifiers: add support for async notifier handlers Yonit Halperin
@ 2012-06-05  5:49 ` Yonit Halperin
  2012-06-05  8:44   ` Gerd Hoffmann
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 3/5] migration: moving migration completion " Yonit Halperin
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Yonit Halperin @ 2012-06-05  5:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonit Halperin, aliguori, alevy, kraxel

Preparation for asynchronous migration state change notifiers.
In a following patch the migrate_start routine will be used as
the completion callback of the "migration start" notifiers list.

Signed-off-by: Yonit Halperin <yhalperi@redhat.com>
---
 migration.c |   73 +++++++++++++++++++++++++++++++++++++++++++++-------------
 migration.h |    2 +
 2 files changed, 58 insertions(+), 17 deletions(-)

diff --git a/migration.c b/migration.c
index acaf293..91c807d 100644
--- a/migration.c
+++ b/migration.c
@@ -41,6 +41,14 @@ enum {
     MIG_STATE_COMPLETED,
 };
 
+enum {
+    MIGRATION_PROTOCOL_ERROR,
+    MIGRATION_PROTOCOL_TCP,
+    MIGRATION_PROTOCOL_EXEC,
+    MIGRATION_PROTOCOL_UNIX,
+    MIGRATION_PROTOCOL_FD,
+};
+
 #define MAX_THROTTLE  (32 << 20)      /* Migration speed throttling */
 
 static NotifierList migration_state_notifiers =
@@ -361,13 +369,16 @@ void migrate_fd_connect(MigrationState *s)
     migrate_fd_put_ready(s);
 }
 
-static MigrationState *migrate_init(int blk, int inc)
+static MigrationState *migrate_init(int protocol, const char *protocol_param,
+                                    int blk, int inc)
 {
     MigrationState *s = migrate_get_current();
     int64_t bandwidth_limit = s->bandwidth_limit;
 
     memset(s, 0, sizeof(*s));
     s->bandwidth_limit = bandwidth_limit;
+    s->protocol = protocol;
+    s->protocol_param = g_strdup(protocol_param);
     s->blk = blk;
     s->shared = inc;
 
@@ -389,13 +400,50 @@ void migrate_del_blocker(Error *reason)
     migration_blockers = g_slist_remove(migration_blockers, reason);
 }
 
+static void migrate_start(MigrationState *s, Error **errp)
+{
+    int ret;
+
+    switch (s->protocol) {
+    case MIGRATION_PROTOCOL_TCP:
+        ret = tcp_start_outgoing_migration(s, s->protocol_param, errp);
+        break;
+#if !defined(WIN32)
+    case MIGRATION_PROTOCOL_EXEC:
+        ret = exec_start_outgoing_migration(s, s->protocol_param);
+        break;
+    case MIGRATION_PROTOCOL_UNIX:
+        ret = unix_start_outgoing_migration(s, s->protocol_param);
+        break;
+    case MIGRATION_PROTOCOL_FD:
+        ret = fd_start_outgoing_migration(s, s->protocol_param);
+        break;
+#endif
+    default:
+        ret = -EPROTONOSUPPORT;
+    }
+
+    g_free(s->protocol_param);
+    s->protocol_param = NULL;
+
+    if (ret < 0) {
+        if (!error_is_set(errp)) {
+            DPRINTF("migration failed: %s\n", strerror(-ret));
+            /* FIXME: we should return meaningful errors */
+            error_set(errp, QERR_UNDEFINED_ERROR);
+        }
+        return;
+    }
+    notifier_list_notify(&migration_state_notifiers, s);
+}
+
 void qmp_migrate(const char *uri, bool has_blk, bool blk,
                  bool has_inc, bool inc, bool has_detach, bool detach,
                  Error **errp)
 {
     MigrationState *s = migrate_get_current();
     const char *p;
-    int ret;
+    int migrate_protocol;
 
     if (s->state == MIG_STATE_ACTIVE) {
         error_set(errp, QERR_MIGRATION_ACTIVE);
@@ -411,33 +459,24 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         return;
     }
 
-    s = migrate_init(blk, inc);
-
     if (strstart(uri, "tcp:", &p)) {
-        ret = tcp_start_outgoing_migration(s, p, errp);
+        migrate_protocol = MIGRATION_PROTOCOL_TCP;
 #if !defined(WIN32)
     } else if (strstart(uri, "exec:", &p)) {
-        ret = exec_start_outgoing_migration(s, p);
+        migrate_protocol = MIGRATION_PROTOCOL_EXEC;
     } else if (strstart(uri, "unix:", &p)) {
-        ret = unix_start_outgoing_migration(s, p);
+        migrate_protocol = MIGRATION_PROTOCOL_UNIX;
     } else if (strstart(uri, "fd:", &p)) {
-        ret = fd_start_outgoing_migration(s, p);
+        migrate_protocol = MIGRATION_PROTOCOL_FD;
 #endif
     } else {
         error_set(errp, QERR_INVALID_PARAMETER_VALUE, "uri", "a valid migration protocol");
         return;
     }
+    s = migrate_init(migrate_protocol, p, blk, inc);
 
-    if (ret < 0) {
-        if (!error_is_set(errp)) {
-            DPRINTF("migration failed: %s\n", strerror(-ret));
-            /* FIXME: we should return meaningful errors */
-            error_set(errp, QERR_UNDEFINED_ERROR);
-        }
-        return;
-    }
+    migrate_start(s, errp);
 
-    notifier_list_notify(&migration_state_notifiers, s);
 }
 
 void qmp_migrate_cancel(Error **errp)
diff --git a/migration.h b/migration.h
index 2e9ca2e..5ad67d7 100644
--- a/migration.h
+++ b/migration.h
@@ -33,6 +33,8 @@ struct MigrationState
     void *opaque;
     int blk;
     int shared;
+    int protocol;
+    char *protocol_param;
 };
 
 void process_incoming_migration(QEMUFile *f);
-- 
1.7.7.6


* [Qemu-devel] [RFC PATCH 3/5] migration: moving migration completion code to a separate routine
  2012-06-05  5:49 [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Yonit Halperin
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 1/5] notifiers: add support for async notifier handlers Yonit Halperin
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 2/5] migration: moving migration start code to a separate routine Yonit Halperin
@ 2012-06-05  5:49 ` Yonit Halperin
  2012-06-05  8:46   ` Gerd Hoffmann
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 4/5] migration: replace migration state change notifier with async notifiers Yonit Halperin
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 28+ messages in thread
From: Yonit Halperin @ 2012-06-05  5:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonit Halperin, aliguori, alevy, kraxel

Preparation for asynchronous migration state change notifiers.
In the following patch the migrate_end routine will be used as
the completion callback of the "migration end" notifiers list.

Signed-off-by: Yonit Halperin <yhalperi@redhat.com>
---
 migration.c |   31 ++++++++++++++++---------------
 migration.h |    1 +
 2 files changed, 17 insertions(+), 15 deletions(-)

diff --git a/migration.c b/migration.c
index 91c807d..c86611d 100644
--- a/migration.c
+++ b/migration.c
@@ -187,24 +187,32 @@ static int migrate_fd_cleanup(MigrationState *s)
     return ret;
 }
 
+static void migrate_end(MigrationState *s, int end_state)
+{
+    s->state = end_state;
+    if (s->state == MIG_STATE_COMPLETED) {
+        runstate_set(RUN_STATE_POSTMIGRATE);
+    } else if (s->state == MIG_STATE_ERROR && s->start_vm_in_error) {
+        vm_start();
+    }
+    notifier_list_notify(&migration_state_notifiers, s);
+}
+
 void migrate_fd_error(MigrationState *s)
 {
     DPRINTF("setting error state\n");
-    s->state = MIG_STATE_ERROR;
-    notifier_list_notify(&migration_state_notifiers, s);
     migrate_fd_cleanup(s);
+    migrate_end(s, MIG_STATE_ERROR);
 }
 
 static void migrate_fd_completed(MigrationState *s)
 {
     DPRINTF("setting completed state\n");
     if (migrate_fd_cleanup(s) < 0) {
-        s->state = MIG_STATE_ERROR;
+        migrate_end(s, MIG_STATE_ERROR);
     } else {
-        s->state = MIG_STATE_COMPLETED;
-        runstate_set(RUN_STATE_POSTMIGRATE);
+        migrate_end(s, MIG_STATE_COMPLETED);
     }
-    notifier_list_notify(&migration_state_notifiers, s);
 }
 
 static void migrate_fd_put_notify(void *opaque)
@@ -257,7 +265,7 @@ static void migrate_fd_put_ready(void *opaque)
     if (ret < 0) {
         migrate_fd_error(s);
     } else if (ret == 1) {
-        int old_vm_running = runstate_is_running();
+        s->start_vm_in_error = runstate_is_running();
 
         DPRINTF("done iterating\n");
         qemu_system_wakeup_request(QEMU_WAKEUP_REASON_OTHER);
@@ -268,11 +276,6 @@ static void migrate_fd_put_ready(void *opaque)
         } else {
             migrate_fd_completed(s);
         }
-        if (s->state != MIG_STATE_COMPLETED) {
-            if (old_vm_running) {
-                vm_start();
-            }
-        }
     }
 }
 
@@ -283,11 +286,9 @@ static void migrate_fd_cancel(MigrationState *s)
 
     DPRINTF("cancelling migration\n");
 
-    s->state = MIG_STATE_CANCELLED;
-    notifier_list_notify(&migration_state_notifiers, s);
     qemu_savevm_state_cancel(s->file);
-
     migrate_fd_cleanup(s);
+    migrate_end(s, MIG_STATE_CANCELLED);
 }
 
 static void migrate_fd_wait_for_unfreeze(void *opaque)
diff --git a/migration.h b/migration.h
index 5ad67d7..6a0f49f 100644
--- a/migration.h
+++ b/migration.h
@@ -35,6 +35,7 @@ struct MigrationState
     int shared;
     int protocol;
     char *protocol_param;
+    bool start_vm_in_error;
 };
 
 void process_incoming_migration(QEMUFile *f);
-- 
1.7.7.6


* [Qemu-devel] [RFC PATCH 4/5] migration: replace migration state change notifier with async notifiers
  2012-06-05  5:49 [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Yonit Halperin
                   ` (2 preceding siblings ...)
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 3/5] migration: moving migration completion " Yonit Halperin
@ 2012-06-05  5:49 ` Yonit Halperin
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 5/5] spice: make the spice "migration end" handler async Yonit Halperin
  2012-06-05 11:59 ` [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Anthony Liguori
  5 siblings, 0 replies; 28+ messages in thread
From: Yonit Halperin @ 2012-06-05  5:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonit Halperin, aliguori, alevy, kraxel

The patch replaces the existing state change notifier list with two
explicit notifier lists for migration start and migration end. Unlike
the previous notifier list, the new notifications take place before
the actual status change, and they also allow async handlers. Hence, it
is possible to delay the migration progress until the async handlers
have completed.

Note that this patch still leaves the registered notifier handlers synchronous, i.e.,
they call the notifier completion callback immediately.

Signed-off-by: Yonit Halperin <yhalperi@redhat.com>
---
 migration.c     |   84 +++++++++++++++++++++++++++++++++++++-----------------
 migration.h     |    8 ++++-
 ui/spice-core.c |   31 ++++++++++++++------
 3 files changed, 84 insertions(+), 39 deletions(-)

diff --git a/migration.c b/migration.c
index c86611d..869a8ab 100644
--- a/migration.c
+++ b/migration.c
@@ -51,8 +51,15 @@ enum {
 
 #define MAX_THROTTLE  (32 << 20)      /* Migration speed throttling */
 
-static NotifierList migration_state_notifiers =
-    NOTIFIER_LIST_INITIALIZER(migration_state_notifiers);
+static void migrate_start(void *opaque);
+static void migrate_end(void *opaque);
+
+static NotifierList migration_start_notifiers =
+    ASYNC_NOTIFIER_LIST_INITIALIZER(migration_start_notifiers,
+                                    migrate_start, NULL);
+static NotifierList migration_end_notifiers =
+    ASYNC_NOTIFIER_LIST_INITIALIZER(migration_end_notifiers,
+                                    migrate_end, NULL);
 
 /* When we add fault tolerance, we could have several
    migrations at once.  For now we don't need to add
@@ -187,31 +194,44 @@ static int migrate_fd_cleanup(MigrationState *s)
     return ret;
 }
 
-static void migrate_end(MigrationState *s, int end_state)
+static void migrate_notify_end(MigrationState *s, int end_state)
+{
+    bool migrate_success = (end_state == MIG_STATE_COMPLETED);
+
+    if (!s->end_was_notified) {
+        s->end_state = end_state;
+        migration_end_notifiers.complete_cb = migrate_end;
+        migration_end_notifiers.complete_opaque = s;
+        s->end_was_notified = true;
+        notifier_list_notify(&migration_end_notifiers, &migrate_success);
+    }
+}
+
+static void migrate_end(void *opaque)
 {
-    s->state = end_state;
+    MigrationState *s = opaque;
+
+    s->state = s->end_state;
     if (s->state == MIG_STATE_COMPLETED) {
         runstate_set(RUN_STATE_POSTMIGRATE);
     } else if (s->state == MIG_STATE_ERROR && s->start_vm_in_error) {
         vm_start();
     }
-    notifier_list_notify(&migration_state_notifiers, s);
 }
 
 void migrate_fd_error(MigrationState *s)
 {
-    DPRINTF("setting error state\n");
     migrate_fd_cleanup(s);
-    migrate_end(s, MIG_STATE_ERROR);
+    migrate_notify_end(s, MIG_STATE_ERROR);
 }
 
 static void migrate_fd_completed(MigrationState *s)
 {
     DPRINTF("setting completed state\n");
     if (migrate_fd_cleanup(s) < 0) {
-        migrate_end(s, MIG_STATE_ERROR);
+        migrate_notify_end(s, MIG_STATE_ERROR);
     } else {
-        migrate_end(s, MIG_STATE_COMPLETED);
+        migrate_notify_end(s, MIG_STATE_COMPLETED);
     }
 }
 
@@ -281,14 +301,16 @@ static void migrate_fd_put_ready(void *opaque)
 
 static void migrate_fd_cancel(MigrationState *s)
 {
-    if (s->state != MIG_STATE_ACTIVE)
+    if (s->state != MIG_STATE_ACTIVE ||
+        notifier_list_async_waiting(&migration_end_notifiers)) {
         return;
+    }
 
     DPRINTF("cancelling migration\n");
 
     qemu_savevm_state_cancel(s->file);
     migrate_fd_cleanup(s);
-    migrate_end(s, MIG_STATE_CANCELLED);
+    migrate_notify_end(s, MIG_STATE_CANCELLED);
 }
 
 static void migrate_fd_wait_for_unfreeze(void *opaque)
@@ -322,14 +344,19 @@ static int migrate_fd_close(void *opaque)
     return s->close(s);
 }
 
-void add_migration_state_change_notifier(Notifier *notify)
+void migration_add_start_notifier(AsyncNotifier *notify)
+{
+    notifier_list_add_async(&migration_start_notifiers, notify);
+}
+
+void migration_add_end_notifier(AsyncNotifier *notify)
 {
-    notifier_list_add(&migration_state_notifiers, notify);
+    notifier_list_add_async(&migration_end_notifiers, notify);
 }
 
-void remove_migration_state_change_notifier(Notifier *notify)
+void migration_remove_state_notifier(AsyncNotifier *notifier)
 {
-    notifier_remove(&notify->base);
+    notifier_remove(&notifier->base);
 }
 
 bool migration_is_active(MigrationState *s)
@@ -401,13 +428,15 @@ void migrate_del_blocker(Error *reason)
     migration_blockers = g_slist_remove(migration_blockers, reason);
 }
 
-static void migrate_start(MigrationState *s, Error **errp)
+static void migrate_start(void *opaque)
 {
+    MigrationState *s = opaque;
     int ret;
+    Error *err = NULL;
 
     switch (s->protocol) {
     case MIGRATION_PROTOCOL_TCP:
-        ret = tcp_start_outgoing_migration(s, s->protocol_param, errp);
+        ret = tcp_start_outgoing_migration(s, s->protocol_param, &err);
         break;
 #if !defined(WIN32)
     case MIGRATION_PROTOCOL_EXEC:
@@ -425,17 +454,18 @@ static void migrate_start(MigrationState *s, Error **errp)
     }
 
     g_free(s->protocol_param);
-    s->protocol_param = NULL;
-
     if (ret < 0) {
-        if (!error_is_set(errp)) {
-            DPRINTF("migration failed: %s\n", strerror(-ret));
-            /* FIXME: we should return meaningful errors */
-            error_set(errp, QERR_UNDEFINED_ERROR);
+        DPRINTF("migration failed: %s\n", strerror(-ret));
+        if (error_is_set(&err)) {
+            fprintf(stderr, "migrate: %s\n", error_get_pretty(err));
         }
+        /* if the migration state is not ACTIVE, another migration can start
+         * before all the registered async notifiers have completed. In that
+         * case notifier_list_notify(), for the migration start notification,
+         * will handle not waiting for the previous notification to complete */
+        migrate_notify_end(s, MIG_STATE_ERROR);
         return;
     }
-    notifier_list_notify(&migration_state_notifiers, s);
 }
 
 void qmp_migrate(const char *uri, bool has_blk, bool blk,
@@ -475,9 +505,9 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
         return;
     }
     s = migrate_init(migrate_protocol, p, blk, inc);
-
-    migrate_start(s, errp);
-
+    migration_start_notifiers.complete_cb = migrate_start;
+    migration_start_notifiers.complete_opaque = s;
+    notifier_list_notify(&migration_start_notifiers, NULL);
 }
 
 void qmp_migrate_cancel(Error **errp)
diff --git a/migration.h b/migration.h
index 6a0f49f..eeed6ec 100644
--- a/migration.h
+++ b/migration.h
@@ -36,6 +36,8 @@ struct MigrationState
     int protocol;
     char *protocol_param;
     bool start_vm_in_error;
+    int end_state;
+    bool end_was_notified;
 };
 
 void process_incoming_migration(QEMUFile *f);
@@ -69,8 +71,10 @@ void migrate_fd_error(MigrationState *s);
 
 void migrate_fd_connect(MigrationState *s);
 
-void add_migration_state_change_notifier(Notifier *notify);
-void remove_migration_state_change_notifier(Notifier *notify);
+void migration_add_start_notifier(AsyncNotifier *notify);
+/* the notification data also indicates whether the migration was successful */
+void migration_add_end_notifier(AsyncNotifier *notify);
+void migration_remove_state_notifier(AsyncNotifier *notifier);
 bool migration_is_active(MigrationState *);
 bool migration_has_finished(MigrationState *);
 bool migration_has_failed(MigrationState *);
diff --git a/ui/spice-core.c b/ui/spice-core.c
index 4fc48f8..d85c212 100644
--- a/ui/spice-core.c
+++ b/ui/spice-core.c
@@ -41,7 +41,8 @@
 /* core bits */
 
 static SpiceServer *spice_server;
-static Notifier migration_state;
+static AsyncNotifier migrate_start_notifier;
+static AsyncNotifier migrate_end_notifier;
 static const char *auth = "spice";
 static char *auth_passwd;
 static time_t auth_expires = TIME_MAX;
@@ -476,23 +477,31 @@ SpiceInfo *qmp_query_spice(Error **errp)
     return info;
 }
 
-static void migration_state_notifier(Notifier *notifier, void *data)
+static void migrate_start_notify_func(AsyncNotifier *notifier, void *data,
+                                      NotifiedCompletionFunc *complete_cb,
+                                      void *cb_data)
 {
-    MigrationState *s = data;
-
-    if (migration_is_active(s)) {
 #ifdef SPICE_INTERFACE_MIGRATION
-        spice_server_migrate_start(spice_server);
+    spice_server_migrate_start(spice_server);
 #endif
-    } else if (migration_has_finished(s)) {
+    complete_cb(notifier, cb_data);
+}
+
+static void migrate_end_notify_func(AsyncNotifier *notifier, void *data,
+                                    NotifiedCompletionFunc *complete_cb,
+                                    void *cb_data)
+{
+    bool success_end = *(bool *)data;
+    if (success_end) {
 #ifndef SPICE_INTERFACE_MIGRATION
         spice_server_migrate_switch(spice_server);
 #else
         spice_server_migrate_end(spice_server, true);
-    } else if (migration_has_failed(s)) {
+    } else {
         spice_server_migrate_end(spice_server, false);
 #endif
     }
+    complete_cb(notifier, cb_data);
 }
 
 int qemu_spice_migrate_info(const char *hostname, int port, int tls_port,
@@ -707,8 +716,10 @@ void qemu_spice_init(void)
     };
     using_spice = 1;
 
-    migration_state.notify = migration_state_notifier;
-    add_migration_state_change_notifier(&migration_state);
+    migrate_start_notifier.notify_async = migrate_start_notify_func;
+    migration_add_start_notifier(&migrate_start_notifier);
+    migrate_end_notifier.notify_async = migrate_end_notify_func;
+    migration_add_end_notifier(&migrate_end_notifier);
 #ifdef SPICE_INTERFACE_MIGRATION
     spice_migrate.sin.base.sif = &migrate_interface.base;
     spice_migrate.connect_complete.cb = NULL;
-- 
1.7.7.6


* [Qemu-devel] [RFC PATCH 5/5] spice: make the spice "migration end" handler async
  2012-06-05  5:49 [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Yonit Halperin
                   ` (3 preceding siblings ...)
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 4/5] migration: replace migration state change notifier with async notifiers Yonit Halperin
@ 2012-06-05  5:49 ` Yonit Halperin
  2012-06-05 11:59 ` [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Anthony Liguori
  5 siblings, 0 replies; 28+ messages in thread
From: Yonit Halperin @ 2012-06-05  5:49 UTC (permalink / raw)
  To: qemu-devel; +Cc: Yonit Halperin, aliguori, alevy, kraxel

Instead of immediately calling the notifier completion callback,
the notification handler assigns a completion callback
to spice's migration interface. Spice should call this
callback when it completes handling the migration state change.

Signed-off-by: Yonit Halperin <yhalperi@redhat.com>
---
 ui/spice-core.c |   29 ++++++++++++++++++++++-------
 1 files changed, 22 insertions(+), 7 deletions(-)

diff --git a/ui/spice-core.c b/ui/spice-core.c
index d85c212..053f06f 100644
--- a/ui/spice-core.c
+++ b/ui/spice-core.c
@@ -282,9 +282,14 @@ typedef struct SpiceMigration {
         MonitorCompletion *cb;
         void *opaque;
     } connect_complete;
+    struct {
+        NotifiedCompletionFunc *cb;
+        void *opaque;
+    } end_complete;
 } SpiceMigration;
 
 static void migrate_connect_complete_cb(SpiceMigrateInstance *sin);
+static void migrate_end_complete_cb(SpiceMigrateInstance *sin);
 
 static const SpiceMigrateInterface migrate_interface = {
     .base.type = SPICE_INTERFACE_MIGRATION,
@@ -292,7 +297,7 @@ static const SpiceMigrateInterface migrate_interface = {
     .base.major_version = SPICE_INTERFACE_MIGRATION_MAJOR,
     .base.minor_version = SPICE_INTERFACE_MIGRATION_MINOR,
     .migrate_connect_complete = migrate_connect_complete_cb,
-    .migrate_end_complete = NULL,
+    .migrate_end_complete = migrate_end_complete_cb,
 };
 
 static SpiceMigration spice_migrate;
@@ -305,6 +310,15 @@ static void migrate_connect_complete_cb(SpiceMigrateInstance *sin)
     }
     sm->connect_complete.cb = NULL;
 }
+
+static void migrate_end_complete_cb(SpiceMigrateInstance *sin)
+{
+    SpiceMigration *sm = container_of(sin, SpiceMigration, sin);
+    if (sm->end_complete.cb) {
+        sm->end_complete.cb(&migrate_end_notifier, sm->end_complete.opaque);
+    }
+    sm->end_complete.cb = NULL;
+}
 #endif
 
 /* config string parsing */
@@ -492,16 +506,17 @@ static void migrate_end_notify_func(AsyncNotifier *notifier, void *data,
                                     void *cb_data)
 {
     bool success_end = *(bool *)data;
+
+#ifdef SPICE_INTERFACE_MIGRATION
+    spice_migrate.end_complete.cb = complete_cb;
+    spice_migrate.end_complete.opaque = cb_data;
+    spice_server_migrate_end(spice_server, success_end);
+#else
     if (success_end) {
-#ifndef SPICE_INTERFACE_MIGRATION
         spice_server_migrate_switch(spice_server);
-#else
-        spice_server_migrate_end(spice_server, true);
-    } else {
-        spice_server_migrate_end(spice_server, false);
-#endif
     }
     complete_cb(notifier, cb_data);
+#endif
 }
 
 int qemu_spice_migrate_info(const char *hostname, int port, int tls_port,
-- 
1.7.7.6


* Re: [Qemu-devel] [RFC PATCH 1/5] notifiers: add support for async notifier handlers
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 1/5] notifiers: add support for async notifier handlers Yonit Halperin
@ 2012-06-05  8:36   ` Gerd Hoffmann
  0 siblings, 0 replies; 28+ messages in thread
From: Gerd Hoffmann @ 2012-06-05  8:36 UTC (permalink / raw)
  To: Yonit Halperin; +Cc: aliguori, alevy, qemu-devel

  Hi,

> +static void notified_complete_cb(AsyncNotifier *notifier, void *opaque)
> +{

There is no need to implement this as callback (unlike the notifier
_list_ completion callback).  Just have a public notifier_complete()
function which async notifiers are supposed to call when done.
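
Something like this, as an untested sketch (assumes AsyncNotifier gains a
back pointer to its list, set by notifier_list_notify):

    void notifier_complete(AsyncNotifier *notifier)
    {
        NotifierList *list = notifier->list;

        QLIST_REMOVE(notifier, wait_node);
        if (QLIST_EMPTY(&list->wait_notifiers) && !list->during_notify &&
            list->complete_cb) {
            list->complete_cb(list->complete_opaque);
        }
    }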

>  void notifier_list_notify(NotifierList *list, void *data)
>  {
> -    Notifier *notifier, *next;
> +    BaseNotifier *notifier, *next;
> +    bool async = false;
> +
> +    if (notifier_list_async_waiting(list)) {

assert(!notifier_list_async_waiting(list)) ?

Silently removing entries from the wait_notifier list here is asking for
trouble.  We should have a notifier_list_cancel() function instead which
also calls Notifier->cancel() for all pending async notifiers
(implementing that can wait until we have an actual need for it).
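
Rough sketch of what I have in mind (assumes a cancel() callback is added
to AsyncNotifier):

    void notifier_list_cancel(NotifierList *list)
    {
        AsyncNotifier *n, *next;

        QLIST_FOREACH_SAFE(n, &list->wait_notifiers, wait_node, next) {
            QLIST_REMOVE(n, wait_node);
            if (n->cancel) {
                n->cancel(n);
            }
        }
    }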

> +struct BaseNotifier {

>  struct Notifier
>  {
> +    BaseNotifier base;
>      void (*notify)(Notifier *notifier, void *data);

> +struct AsyncNotifier {
> +    BaseNotifier base;
> +    void (*notify_async)(AsyncNotifier *notifier, void *data,
> +                         NotifiedCompletionFunc *complete_cb, void *cb_data);

I don't see a need for three types here, especially as there will be no
difference between the notify() and notify_async() prototypes once the
notifier completion callback is gone.  I'd suggest just extending Notifier.
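
I.e. something like this (sketch):

    struct Notifier {
        QLIST_ENTRY(Notifier) node;
        void (*notify)(Notifier *notifier, void *data);
        bool async;                       /* completes via notifier_complete() */
        QLIST_ENTRY(Notifier) wait_node;  /* used while completion is pending */
    };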

cheers,
  Gerd


* Re: [Qemu-devel] [RFC PATCH 2/5] migration: moving migration start code to a separate routine
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 2/5] migration: moving migration start code to a separate routine Yonit Halperin
@ 2012-06-05  8:44   ` Gerd Hoffmann
  0 siblings, 0 replies; 28+ messages in thread
From: Gerd Hoffmann @ 2012-06-05  8:44 UTC (permalink / raw)
  To: Yonit Halperin; +Cc: aliguori, alevy, qemu-devel

  Hi,

> -static MigrationState *migrate_init(int blk, int inc)
> +static MigrationState *migrate_init(int protocol, const char *protocol_param,
> +                                    int blk, int inc)

> +    s->protocol = protocol;
> +    s->protocol_param = g_strdup(protocol_param);

Hmm, I think I would store the complete uri here ...

>      if (strstart(uri, "tcp:", &p)) {
> -        ret = tcp_start_outgoing_migration(s, p, errp);
> +        migrate_protocol = MIGRATION_PROTOCOL_TCP;
>  #if !defined(WIN32)
>      } else if (strstart(uri, "exec:", &p)) {
> -        ret = exec_start_outgoing_migration(s, p);
> +        migrate_protocol = MIGRATION_PROTOCOL_EXEC;
>      } else if (strstart(uri, "unix:", &p)) {
> -        ret = unix_start_outgoing_migration(s, p);
> +        migrate_protocol = MIGRATION_PROTOCOL_UNIX;
>      } else if (strstart(uri, "fd:", &p)) {
> -        ret = fd_start_outgoing_migration(s, p);
> +        migrate_protocol = MIGRATION_PROTOCOL_FD;
>  #endif

... then just move that uri parsing code block to migrate_start().
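
I.e. roughly (sketch, assuming MigrationState grows a uri field):

    static void migrate_start(void *opaque)
    {
        MigrationState *s = opaque;
        const char *p;
        Error *err = NULL;
        int ret;

        if (strstart(s->uri, "tcp:", &p)) {
            ret = tcp_start_outgoing_migration(s, p, &err);
        } else if (strstart(s->uri, "exec:", &p)) {
            ret = exec_start_outgoing_migration(s, p);
        } /* ... unix:, fd: ... */
    }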

cheers,
  Gerd


* Re: [Qemu-devel] [RFC PATCH 3/5] migration: moving migration completion code to a separate routine
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 3/5] migration: moving migration completion " Yonit Halperin
@ 2012-06-05  8:46   ` Gerd Hoffmann
  0 siblings, 0 replies; 28+ messages in thread
From: Gerd Hoffmann @ 2012-06-05  8:46 UTC (permalink / raw)
  To: Yonit Halperin; +Cc: aliguori, alevy, qemu-devel

> +    bool start_vm_in_error;

start_vm_on_error?

cheers,
  Gerd


* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-05  5:49 [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Yonit Halperin
                   ` (4 preceding siblings ...)
  2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 5/5] spice: make the spice "migration end" handler async Yonit Halperin
@ 2012-06-05 11:59 ` Anthony Liguori
  2012-06-05 13:15   ` Gerd Hoffmann
  5 siblings, 1 reply; 28+ messages in thread
From: Anthony Liguori @ 2012-06-05 11:59 UTC (permalink / raw)
  To: Yonit Halperin; +Cc: aliguori, alevy, qemu-devel, kraxel

On 06/05/2012 01:49 PM, Yonit Halperin wrote:
> Hi,
>
> I'm sending this patch series again, this time with an additional patch
> for setting a migrate_end notifier completion callback for the spice migration
> interface. I've also added more detailed commit messages.
>
> This patch series introduces async handlers for notifiers and integrates them
> with migration state change notifications.
>
> An asynchronous migration completion notifier is essential for allowing spice to cleanly
> complete the src server's connection to the client and transfer the connection to the target.
> Currently, as soon as the migration completes, the src qemu can be closed by
> management, and spice cannot complete the spice-connection migration.
>
> In order to support spice seamless migration, in addition to these patches, I plan to add:
> (1) a notifier for switching from the live phase of the migration to the non-live phase,
>      before completing savevm.
>      Spice will use this notification to "finalize" the connection to the client: send
>      and receive all in-flight data.

Absolutely not.  This is hideously ugly and affects a bunch of code.

Spice is *not* getting a hook in migration where it gets to add arbitrary 
amounts of downtime to the migration traffic.  That's a terrible idea.

I'd like to be more constructive in my response, but you aren't explaining the 
problem well enough for me to offer an alternative solution.  You need to find 
another way to solve this problem.

Regards,

Anthony Liguori


* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-05 11:59 ` [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Anthony Liguori
@ 2012-06-05 13:15   ` Gerd Hoffmann
  2012-06-05 13:38     ` Eric Blake
  2012-06-06  9:10     ` Yonit Halperin
  0 siblings, 2 replies; 28+ messages in thread
From: Gerd Hoffmann @ 2012-06-05 13:15 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: alevy, aliguori, Yonit Halperin, qemu-devel

  Hi,

> Absolutely not.  This is hideously ugly and affects a bunch of code.
> 
> Spice is *not* getting a hook in migration where it gets to add
> arbitrary amounts of downtime to the migration traffic.  That's a
> terrible idea.
> 
> I'd like to be more constructive in my response, but you aren't
> explaining the problem well enough for me to offer an alternative
> solution.  You need to find another way to solve this problem.

Very short version:  The requirement is simply to not kill qemu on the
source side until the source spice-server has finished session handover
to the target spice-server.

Long version:  spice-client connects automatically to the target
machine, so the user ideally doesn't notice that his virtual machine was
just migrated over to another host.

Today this happens via "switch-host", which is a simple message asking
the spice client to connect to the new host.

We want to move to a "seamless migration" model where we don't start over
from scratch, but hand over the session from the source to the target.
Advantage is that various state cached in spice-client will stay valid
and doesn't need to be retransmitted.  It also requires a handshake
between spice-servers on source and target.  libvirt killing qemu on the
source host before the handshake is done isn't exactly helpful.

[ Side note: In theory this issue exists even today: in case the data
  pipe to the client is full spice-server will queue up the switch-host
  message and qemu might be killed before it is sent out.  In practice
  it doesn't happen though because it goes through the low-traffic main
  channel so the socket buffers usually have enough space. ]

So, the big question is how to tackle the issue?

Option (1): Wait until spice-server is done before signaling completion
to libvirt.  This is what this patch series implements.

Advantage is that it is completely transparent for libvirt; that's why I
like it.

Disadvantage is that it indeed adds a small delay for the spice-server
handshake.  The target qemu doesn't process main loop events while the
incoming migration is running, and because of that the spice-server
handshake doesn't run in parallel with the final stage of vm migration,
which it could in theory.

BTW: There will be no "arbitrary amounts of downtime".  Seamless spice
client migration is pretty pointless if it doesn't finish within a
fraction of a second, so we can go with a very short timeout there.

Option (2): Add a new QMP event which is emitted when spice-server is
done, then make libvirt wait for it before killing qemu.

Obvious disadvantage is that it requires libvirt changes.
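
The qemu side of option (2) would be small, something like this (the
event name is made up, it would have to be defined first):

    /* emitted from the spice migrate-end completion path */
    monitor_protocol_event(QEVENT_SPICE_MIGRATE_COMPLETED, NULL);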

Option (3): Your suggestion?

thanks,
  Gerd


* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-05 13:15   ` Gerd Hoffmann
@ 2012-06-05 13:38     ` Eric Blake
  2012-06-05 21:37       ` Anthony Liguori
  2012-06-06  9:10     ` Yonit Halperin
  1 sibling, 1 reply; 28+ messages in thread
From: Eric Blake @ 2012-06-05 13:38 UTC (permalink / raw)
  To: Gerd Hoffmann
  Cc: Yonit Halperin, aliguori, alevy, qemu-devel, Anthony Liguori


On 06/05/2012 07:15 AM, Gerd Hoffmann wrote:
>   Hi,
> 
>> Absolutely not.  This is hideously ugly and affects a bunch of code.
>>
>> Spice is *not* getting a hook in migration where it gets to add
>> arbitrary amounts of downtime to the migration traffic.  That's a
>> terrible idea.
>>

> So, the big question is how to tackle the issue?
> 
> Option (1): Wait until spice-server is done before signaling completion
> to libvirt.  This is what this patch series implements.
> 
> Advantage is that it is completely transparent for libvirt; that's why I
> like it.
> 
> Disadvantage is that it indeed adds a small delay for the spice-server
> handshake.  The target qemu doesn't process main loop events while the
> incoming migration is running, and because of that the spice-server
> handshake doesn't run in parallel with the final stage of vm migration,
> which it could in theory.
> 
> BTW: There will be no "arbitrary amounts of downtime".  Seamless spice
> client migration is pretty pointless if it doesn't finish within a
> fraction of a second, so we can go with a very short timeout there.
> 
> Option (2): Add a new QMP event which is emitted when spice-server is
> done, then make libvirt wait for it before killing qemu.
> 
> Obvious disadvantage is that it requires libvirt changes.

But there was recently a proposal for a new monitor command that will
let libvirt query which events a given qemu supports, and therefore
libvirt can at least know in advance whether to expect and wait for the
event, or to fall back to some other option.  Just because libvirt would
require a change doesn't necessarily rule out this option.

> 
> Option (3): Your suggestion?
> 
> thanks,
>   Gerd
> 
> 
> 

-- 
Eric Blake   eblake@redhat.com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org




* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-05 13:38     ` Eric Blake
@ 2012-06-05 21:37       ` Anthony Liguori
  0 siblings, 0 replies; 28+ messages in thread
From: Anthony Liguori @ 2012-06-05 21:37 UTC (permalink / raw)
  To: Eric Blake; +Cc: Yonit Halperin, aliguori, alevy, Gerd Hoffmann, qemu-devel

On 06/05/2012 09:38 PM, Eric Blake wrote:
> On 06/05/2012 07:15 AM, Gerd Hoffmann wrote:
>>    Hi,
>>
>>> Absolutely not.  This is hideously ugly and affects a bunch of code.
>>>
>>> Spice is *not* getting a hook in migration where it gets to add
>>> arbitrary amounts of downtime to the migration traffic.  That's a
>>> terrible idea.
>>>
>
>> So, the big question is how to tackle the issue?
>>
>> Option (1): Wait until spice-server is done before signaling completion
>> to libvirt.  This is what this patch series implements.
>>
>> Advantage is that it is completely transparent for libvirt; that's why I
>> like it.
>>
>> Disadvantage is that it indeed adds a small delay for the spice-server
>> handshake.  The target qemu doesn't process main loop events while the
>> incoming migration is running, and because of that the spice-server
>> handshake doesn't run in parallel with the final stage of vm migration,
>> which it could in theory.
>>
>> BTW: There will be no "arbitrary amounts of downtime".  Seamless spice
>> client migration is pretty pointless if it doesn't finish within a
>> fraction of a second, so we can go with a very short timeout there.
>>
>> Option (2): Add a new QMP event which is emitted when spice-server is
>> done, then make libvirt wait for it before killing qemu.
>>
>> Obvious disadvantage is that it requires libvirt changes.
>
> But there was recently a proposal for a new monitor command that will
> let libvirt query which events a given qemu supports, and therefore
> libvirt can at least know in advance whether to expect and wait for the
> event, or to fall back to some other option.  Just because libvirt would
> require a change doesn't necessarily rule out this option.

Right, this approach sounds much, much better to me too because it doesn't 
affect migration downtime.

Regards,

Anthony Liguori

>
>>
>> Option (3): Your suggestion?
>>
>> thanks,
>>    Gerd
>>
>>
>>
>


* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-05 13:15   ` Gerd Hoffmann
  2012-06-05 13:38     ` Eric Blake
@ 2012-06-06  9:10     ` Yonit Halperin
  2012-06-06  9:22       ` Anthony Liguori
  1 sibling, 1 reply; 28+ messages in thread
From: Yonit Halperin @ 2012-06-06  9:10 UTC (permalink / raw)
  To: Gerd Hoffmann, Anthony Liguori; +Cc: aliguori, alevy, qemu-devel

Hi,

I would like to add some more points to Gerd's explanation:
On 06/05/2012 04:15 PM, Gerd Hoffmann wrote:
>    Hi,
>
>> Absolutely not.  This is hideously ugly and affects a bunch of code.
>>
>> Spice is *not* getting a hook in migration where it gets to add
>> arbitrary amounts of downtime to the migration traffic.  That's a
>> terrible idea.
>>
>> I'd like to be more constructive in my response, but you aren't
>> explaining the problem well enough for me to offer an alternative
>> solution.  You need to find another way to solve this problem.
Actually, this is not the first time we have raised these issues with you.
For example:
http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01805.html (the
first part of the above discussion is not directly related to the
current one). I'll try to explain in more detail:

As Gerd mentioned, migrating the spice connection smoothly requires the
src server to keep running and to send/receive data to/from the client
after migration has already completed, until the client has completely
transferred to the target. The suggested patch series only delays the
migration state change from ACTIVE to COMPLETED/ERROR/CANCELED until
spice signals it has completed its part in the migration.
As I see it, if a spice connection exists, its migration should be
treated as an inseparable part of the whole migration process, and thus
the migration state shouldn't change from ACTIVE until spice has
completed its part. Hence, I don't think we should have a qmp event for
signaling libvirt about spice migration.


The second challenge we are facing, which I addressed in the "plans"
part of the cover-letter, and to which I think you (Anthony) actually
replied, is how to tackle migrating spice data from the src server to
the target server. Such data can be usb/smartcard packets that were
sent from a device connected on the client to the server and that
haven't reached the device, or partial data that has been read from a
guest character device and hasn't been sent to the client. Other data
can be internal server-client state data we would wish to keep on the
server in order to avoid establishing the connection to the target from
scratch, and possibly also suffering from slower responsiveness at the
start.

In the cover-letter I suggested transferring spice migration data via
the vmstate infrastructure. The other alternative, which we also
discussed in the link above, is to transfer the data via the client.
The latter also requires holding the src process alive after migration
completion, in order to manage to complete transferring the data from
the src to the client.

The vmstate option has the advantages of faster data transfer (src->dst,
instead of src->client->dst) and of employing an already existing,
reliable mechanism for data migration. The disadvantage is that in order
to have an updated vmstate we need to communicate with the spice client
and get all in-flight data before saving the vmstate. So, we can either
busy-wait on the relevant fds during the pre_save of the vmstates, or
have an async pre_save, so that the main loop will stay active (but I
think that can be risky once the non-live phase has started), or have an
async notifier for the change from the live to the non-live phase (spice
will be able to update the vmstates during this notification handler).
Of course, we would in any case use a timeout in order to prevent too
long a delay.
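
For illustration, such a vmstate could look roughly like this (a sketch
only, all names made up):

    typedef struct SpiceAgentMigData {
        uint32_t len;
        uint8_t buf[1024];   /* in-flight client data not yet written */
    } SpiceAgentMigData;

    static const VMStateDescription vmstate_spice_agent = {
        .name = "spice/agent",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField[]) {
            VMSTATE_UINT32(len, SpiceAgentMigData),
            VMSTATE_BUFFER(buf, SpiceAgentMigData),
            VMSTATE_END_OF_LIST()
        }
    };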

To summarize, since we can still use the client to transfer data from
the src to the target (instead of using vmstate), the major requirement
of spice is to keep the src running after migration has completed.

Yonit.

>
> Very short version:  The requirement is simply to not kill qemu on the
> source side until the source spice-server has finished session handover
> to the target spice-server.
>
> Long version:  spice-client connects automatically to the target
> machine, so the user ideally doesn't notice that his virtual machine was
> just migrated over to another host.
>
> Today this happens via "switch-host", which is a simple message asking
> the spice client to connect to the new host.
>
> We want to move to a "seamless migration" model where we don't start over
> from scratch, but hand over the session from the source to the target.
> Advantage is that various state cached in spice-client will stay valid
> and doesn't need to be retransmitted.  It also requires a handshake
> between spice-servers on source and target.  libvirt killing qemu on the
> source host before the handshake is done isn't exactly helpful.
>
> [ Side note: In theory this issue exists even today: in case the data
>    pipe to the client is full spice-server will queue up the switch-host
>    message and qemu might be killed before it is sent out.  In practice
>    it doesn't happen though because it goes through the low-traffic main
>    channel so the socket buffers usually have enough space. ]
>
> So, the big question is how to tackle the issue?
>
> Option (1): Wait until spice-server is done before signaling completion
> to libvirt.  This is what this patch series implements.
>
> Advantage is that it is completely transparent for libvirt; that's why I
> like it.
>
> Disadvantage is that it indeed adds a small delay for the spice-server
> handshake.  The target qemu doesn't process main loop events while the
> incoming migration is running, and because of that the spice-server
> handshake doesn't run in parallel with the final stage of vm migration,
> which it could in theory.
>
> BTW: There will be no "arbitrary amounts of downtime".  Seamless spice
> client migration is pretty pointless if it doesn't finish within a
> fraction of a second, so we can go with a very short timeout there.
>
> Option (2): Add a new QMP event which is emitted when spice-server is
> done, then make libvirt wait for it before killing qemu.
>
> Obvious disadvantage is that it requires libvirt changes.
>
> Option (3): Your suggestion?
>
> thanks,
>    Gerd
>


* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06  9:10     ` Yonit Halperin
@ 2012-06-06  9:22       ` Anthony Liguori
  2012-06-06 10:54         ` Alon Levy
  2012-06-06 12:01         ` Yonit Halperin
  0 siblings, 2 replies; 28+ messages in thread
From: Anthony Liguori @ 2012-06-06  9:22 UTC (permalink / raw)
  To: Yonit Halperin; +Cc: aliguori, alevy, Gerd Hoffmann, qemu-devel

On 06/06/2012 05:10 PM, Yonit Halperin wrote:
> Hi,
>
> I would like to add some more points to Gerd's explanation:
> On 06/05/2012 04:15 PM, Gerd Hoffmann wrote:
>> Hi,
>>
>>> Absolutely not. This is hideously ugly and affects a bunch of code.
>>>
>>> Spice is *not* getting a hook in migration where it gets to add
>>> arbitrary amounts of downtime to the migration traffic. That's a
>>> terrible idea.
>>>
>>> I'd like to be more constructive in my response, but you aren't
>>> explaining the problem well enough for me to offer an alternative
>>> solution. You need to find another way to solve this problem.
> Actually, this is not the first time we have raised these issues with you. For
> example: http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01805.html (the
> first part of the above discussion is not directly related to the current one).
> I'll try to explain in more detail:
>
> As Gerd mentioned, migrating the spice connection smoothly requires the src
> server to keep running and to send/receive data to/from the client after migration
> has already completed, until the client has completely transferred to the target. The
> suggested patch series only delays the migration state change from ACTIVE to
> COMPLETED/ERROR/CANCELED until spice signals it has completed its part in the
> migration.
> As I see it, if a spice connection exists, its migration should be treated as
> an inseparable part of the whole migration process, and thus the migration
> state shouldn't change from ACTIVE until spice has completed its part. Hence, I
> don't think we should have a qmp event for signaling libvirt about spice migration.

Spice client migration has nothing to do with guest migration.  Trying to abuse 
QEMU to support it is not acceptable.

> The second challenge we are facing, which I addressed in the "plans" part of the
> cover-letter, and to which I think you (Anthony) actually replied, is how to
> tackle migrating spice data from the src server to the target server. Such data
> can be usb/smartcard packets that were sent from a device connected on the
> client to the server and haven't reached the device yet, or partial data that
> has been read from a guest character device and hasn't been sent to the client.
> Other data can be internal server-client state that we would wish to keep on the
> server in order to avoid establishing the connection to the target from scratch,
> and possibly also to avoid slower responsiveness at start.
> In the cover-letter I suggested transferring spice migration data via the vmstate
> infrastructure. The other alternative, which we also discussed in the link above,
> is to transfer the data via the client. The latter also requires keeping the src
> process alive after migration completion, in order to complete
> transferring the data from the src to the client.

<-->

> To summarize, since we can still use the client to transfer data from the src to
> the target (instead of using vmstate), the major requirement of spice is to
> keep the src running after migration has completed.

So send a QMP event and call it a day.
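
For the record, emitting such an event is cheap with the existing monitor
infrastructure.  A minimal sketch (the event name is made up; a real patch
would roughly add it to the MonitorEvent enum and its name string in
monitor.c):

    /* called from the spice code once the session handover is finished */
    monitor_protocol_event(QEVENT_SPICE_MIGRATE_DONE, NULL);

libvirt then just listens for the event like it does for any other.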

Regards,

Anthony Liguori

>
> Yonit.
>
>> <-->

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06  9:22       ` Anthony Liguori
@ 2012-06-06 10:54         ` Alon Levy
  2012-06-06 11:05           ` Anthony Liguori
  2012-06-06 12:01         ` Yonit Halperin
  1 sibling, 1 reply; 28+ messages in thread
From: Alon Levy @ 2012-06-06 10:54 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: aliguori, Yonit Halperin, Gerd Hoffmann, qemu-devel

On Wed, Jun 06, 2012 at 05:22:21PM +0800, Anthony Liguori wrote:
> On 06/06/2012 05:10 PM, Yonit Halperin wrote:
> >Hi,
> >
> >I would like to add some more points to Gerd's explanation:
> >On 06/05/2012 04:15 PM, Gerd Hoffmann wrote:
> >>Hi,
> >>
> >>>Absolutely not. This is hideously ugly and affects a bunch of code.
> >>>
> >>>Spice is *not* getting a hook in migration where it gets to add
> >>>arbitrary amounts of downtime to the migration traffic. That's a
> >>>terrible idea.
> >>>
> >>>I'd like to be more constructive in my response, but you aren't
> >>>explaining the problem well enough for me to offer an alternative
> >>>solution. You need to find another way to solve this problem.
> >Actually, this is not the first time we have raised these issues with you. For
> >example: http://lists.gnu.org/archive/html/qemu-devel/2012-03/msg01805.html (The
> >first part of the above discussion is not directly related to the current one).
> >I'll try to explain in more details:
> >
> >As Gerd mentioned, migrating the spice connection smoothly requires the src
> >server to keep running and send/receive data to/from the client, after migration
> >has already completed, till the client completely transfers to the target. The
> >suggested patch series only delays the migration state change from ACTIVE to
> >COMPLETED/ERROR/CANCELED, till spice signals it has completed its part in
> >migration.
> >As I see it, if a spice connection exists, its migration should be treated as
> >an inseparable part of the whole migration process, and thus, the migration
> >state shouldn't change from ACTIVE, till spice has completed its part. Hence, I
> >don't think we should have a qmp event for signaling libvirt about spice migration.
> 
> Spice client migration has nothing to do with guest migration.  Trying to

I don't understand this POV. If it were a VNC connection instead of a
Spice one, would it make a difference? If there is an active VNC client
then it is there as a result of a user choosing to use it, so it should
be treated as part of the user experience and not as something external.
The experience that results from ignoring this and treating the remote
console as an unrelated part is bound to be suboptimal.

> abuse QEMU to support it is not acceptable.
> 
> <-->

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 10:54         ` Alon Levy
@ 2012-06-06 11:05           ` Anthony Liguori
  2012-06-06 11:27             ` Alon Levy
  0 siblings, 1 reply; 28+ messages in thread
From: Anthony Liguori @ 2012-06-06 11:05 UTC (permalink / raw)
  To: Yonit Halperin, Gerd Hoffmann, aliguori, qemu-devel

On 06/06/2012 06:54 PM, Alon Levy wrote:
> On Wed, Jun 06, 2012 at 05:22:21PM +0800, Anthony Liguori wrote:
>> On 06/06/2012 05:10 PM, Yonit Halperin wrote:
>> Spice client migration has nothing to do with guest migration.  Trying to
>
> I don't understand this POV. If it were a VNC connection instead of a
> Spice one would it make a difference?

Of course, I would say yes if it were VNC.  Because the only possible way I could
disagree with something Spice related is because I'm biased against it.

Give me the benefit of the doubt at least.  More importantly, try to stop and 
think about what I'm saying before you assume the anti-Spice brigade is coming 
in to rain on your parade.

> If there is an active VNC client
> then it is there as a result of a user choosing to use it, so it should
> be treated as part of the user experience and not as something external.
> The experience from ignoring this and choosing to treat the remote
> console as an unrelated part is bound to be suboptimal.

Guest migration affects correctness!

If the Spice client is slow (even due to network lag) in responding to your 
flush message, you will disrupt the guest and potentially drop network 
connections and/or cause lockup detectors to trigger.

Migrating the Spice client is a UI feature.  It has absolutely no effect on the
workloads that are running in the guest.

Impacting migration *correctness* in order to support a UI feature is 
unacceptable especially when there are ways to achieve the same results without 
having any impact on correctness.

We have had a simple rule with migration in QEMU.  Nothing gets to impact 
downtime with migration.  No device gets magic hooks or anything like that.  Go 
read the TPM threads if you want to see another example of this.

Regards,

Anthony Liguori

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 11:05           ` Anthony Liguori
@ 2012-06-06 11:27             ` Alon Levy
  2012-06-06 11:49               ` Anthony Liguori
  0 siblings, 1 reply; 28+ messages in thread
From: Alon Levy @ 2012-06-06 11:27 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: aliguori, Yonit Halperin, Gerd Hoffmann, qemu-devel

On Wed, Jun 06, 2012 at 07:05:44PM +0800, Anthony Liguori wrote:
> On 06/06/2012 06:54 PM, Alon Levy wrote:
> >On Wed, Jun 06, 2012 at 05:22:21PM +0800, Anthony Liguori wrote:
> >>On 06/06/2012 05:10 PM, Yonit Halperin wrote:
> >>Spice client migration has nothing to do with guest migration.  Trying to
> >
> >I don't understand this POV. If it were a VNC connection instead of a
> >Spice one would it make a difference?
> 
> Of course, I would say yes if it was VNC.  Because the only possibly way I
> could disagree with something Spice related is because I'm biased against
> it.
> 
> Give me the benefit of the doubt at least.  More importantly, try to stop
> and think about what I'm saying before you assume the anti-Spice brigade is
> coming in to rain on your parade.

I stand corrected.

> 
> >If there is an active VNC client
> >then it is there as a result of a user choosing to use it, so it should
> >be treated as part of the user experience and not as something external.
> >The experience from ignoring this and choosing to treat the remote
> >console as an unrelated part is bound to be suboptimal.
> 
> Guest migration affects correctness!
> 
> If the Spice client is slow (even due to network lag) in responding to your
> flush message, you will disrupt the guest and potentially drop network
> connections and/or cause lockup detectors to trigger.
> 

OK, you think any timeout here would be too large.

> Migrating the Spice client is a UI feature.  It has absolutely no effect on
> the workloads that are running in the guest.
> 
> Impacting migration *correctness* in order to support a UI feature is
> unacceptable especially when there are ways to achieve the same results
> without having any impact on correctness.
> 
> We have had a simple rule with migration in QEMU.  Nothing gets to impact
> downtime with migration.  No device gets magic hooks or anything like that.
> Go read the TPM threads if you want to see another example of this.

OK.

> 
> Regards,
> 
> Anthony Liguori
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 11:27             ` Alon Levy
@ 2012-06-06 11:49               ` Anthony Liguori
  0 siblings, 0 replies; 28+ messages in thread
From: Anthony Liguori @ 2012-06-06 11:49 UTC (permalink / raw)
  To: Yonit Halperin, Gerd Hoffmann, aliguori, qemu-devel

On 06/06/2012 07:27 PM, Alon Levy wrote:
>>> If there is an active VNC client
>>> then it is there as a result of a user choosing to use it, so it should
>>> be treated as part of the user experience and not as something external.
>>> The experience from ignoring this and choosing to treat the remote
>>> console as an unrelated part is bound to be suboptimal.
>>
>> Guest migration affects correctness!
>>
>> If the Spice client is slow (even due to network lag) in responding to your
>> flush message, you will disrupt the guest and potentially drop network
>> connections and/or cause lockup detectors to trigger.
>>
>
> OK, you think any timeout here would be too large.

What would its value be?

Migration is convergent and our downtime estimate is just that--an estimate. 
It's literally always a crap-shoot as to whether the actual migration will 
complete fast enough.

What do you propose the timeout to be?  1us?  Can you even do a round trip to a 
client in 1us?  50us?  I still question whether a round trip is feasible in that 
time period and you've blown away the default 30us time anyway.

Even 1us would be too much though.

Regards,

Anthony Liguori

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06  9:22       ` Anthony Liguori
  2012-06-06 10:54         ` Alon Levy
@ 2012-06-06 12:01         ` Yonit Halperin
  2012-06-06 12:08           ` Anthony Liguori
  1 sibling, 1 reply; 28+ messages in thread
From: Yonit Halperin @ 2012-06-06 12:01 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: aliguori, alevy, Gerd Hoffmann, qemu-devel

On 06/06/2012 12:22 PM, Anthony Liguori wrote:
> On 06/06/2012 05:10 PM, Yonit Halperin wrote:
>> <-->
>
>> To summarize, since we can still use the client to transfer data from
>> the src to
>> the target (instead of using vmstate), the major requirement of spice
>> is to
>> keep the src running after migration has completed.
>
> So send a QMP event and call it a day.
>
Using a QMP event makes spice seamless migration dependent on the
libvirt version. Delaying the status change to "migration completed"
(1) doesn't affect qemu migration time, since the migration has already
completed, and (2) will allow spice to migrate seamlessly, no matter
which libvirt version is used.

Yonit.
> Regards,
>
> Anthony Liguori
>
>> <-->

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 12:01         ` Yonit Halperin
@ 2012-06-06 12:08           ` Anthony Liguori
  2012-06-06 12:15             ` Alon Levy
  0 siblings, 1 reply; 28+ messages in thread
From: Anthony Liguori @ 2012-06-06 12:08 UTC (permalink / raw)
  To: Yonit Halperin; +Cc: aliguori, alevy, Gerd Hoffmann, qemu-devel

On 06/06/2012 08:01 PM, Yonit Halperin wrote:
> On 06/06/2012 12:22 PM, Anthony Liguori wrote:
>>
>> So send a QMP event and call it a day.
>>
> Using a QMP event makes spice seamless migration dependent on the libvirt
> version.

That is not an acceptable justification.

> Delaying the status change to "migration completed" (1) doesn't affect
> qemu migration time, since the migration has already completed, and (2) will allow
> spice to migrate seamlessly, no matter which libvirt version is used.

(1) libvirt starts the destination with -S and resumes it manually IIUC.  It
waits for the migration completed event to do this.

Seriously, just add the event.  Async notifiers are not an option.

Regards,

Anthony Liguori

> <-->

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 12:08           ` Anthony Liguori
@ 2012-06-06 12:15             ` Alon Levy
  2012-06-06 12:17               ` Anthony Liguori
  0 siblings, 1 reply; 28+ messages in thread
From: Alon Levy @ 2012-06-06 12:15 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: aliguori, Yonit Halperin, Gerd Hoffmann, qemu-devel

On Wed, Jun 06, 2012 at 08:08:40PM +0800, Anthony Liguori wrote:
> On 06/06/2012 08:01 PM, Yonit Halperin wrote:
> >On 06/06/2012 12:22 PM, Anthony Liguori wrote:
> >>
> >>So send a QMP event and call it a day.
> >>
> >Using a QMP event makes spice seamless migration dependent on the libvirt
> >version.
> 
> That is not an acceptable justification.

To let spice know that libvirt doesn't support the new event would
require libvirt capabilities advertisement to qemu. Is that acceptable?

> 
> >Delaying the status change to "migration completed" (1) doesn't affect
> >qemu migration time, since the migration has already completed, and (2) will allow
> >spice to migrate seamlessly, no matter which libvirt version is used.
> 
> (1) libvirt starts the destination with -S and starts it manually IIUC.  It
> waits for the migration completed event to do this.
> 
> Seriously, just add the event.  Async notifiers are not an option.
> 
> Regards,
> 
> Anthony Liguori
> 
> <-->

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 12:15             ` Alon Levy
@ 2012-06-06 12:17               ` Anthony Liguori
  2012-06-06 12:30                 ` Alon Levy
  0 siblings, 1 reply; 28+ messages in thread
From: Anthony Liguori @ 2012-06-06 12:17 UTC (permalink / raw)
  To: Yonit Halperin, aliguori, Gerd Hoffmann, qemu-devel

On 06/06/2012 08:15 PM, Alon Levy wrote:
> On Wed, Jun 06, 2012 at 08:08:40PM +0800, Anthony Liguori wrote:
>> On 06/06/2012 08:01 PM, Yonit Halperin wrote:
>>> On 06/06/2012 12:22 PM, Anthony Liguori wrote:
>>>>
>>>> So send a QMP event and call it a day.
>>>>
>>> Using a QMP event makes spice seamless migration dependent on the libvirt
>>> version.
>>
>> That is not an acceptable justification.
>
> To let spice know that libvirt doesn't support the new event would
> require libvirt capabilities advertisement to qemu. Is that acceptable?

I literally have danpb's event introspection patches from Luiz's PULL request
being tested on my system right now, ready to be pushed.

So this is already a solved problem.

Regards,

Anthony Liguori

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 12:17               ` Anthony Liguori
@ 2012-06-06 12:30                 ` Alon Levy
  2012-06-06 12:34                   ` Anthony Liguori
  2012-06-06 13:03                   ` Gerd Hoffmann
  0 siblings, 2 replies; 28+ messages in thread
From: Alon Levy @ 2012-06-06 12:30 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: aliguori, Yonit Halperin, Gerd Hoffmann, qemu-devel

On Wed, Jun 06, 2012 at 08:17:29PM +0800, Anthony Liguori wrote:
> On 06/06/2012 08:15 PM, Alon Levy wrote:
> >On Wed, Jun 06, 2012 at 08:08:40PM +0800, Anthony Liguori wrote:
> >>On 06/06/2012 08:01 PM, Yonit Halperin wrote:
> >>>On 06/06/2012 12:22 PM, Anthony Liguori wrote:
> >>>>
> >>>>So send a QMP event and call it a day.
> >>>>
> >>>Using a QMP event makes spice seamless migration dependent on the libvirt
> >>>version.
> >>
> >>That is not an acceptable justification.
> >
> >To let spice know that libvirt doesn't support the new event would
> >require libvirt capabilities advertisement to qemu. Is that acceptable?
> 
> I literally have danpb's event introspection patches from Luiz's PULL
> request testing on my system right now to be pushed.
this?: [PATCH 29/29] Add 'query-events' command to QMP to query async events

This is about libvirt getting qemu's event list. I am talking about qemu
getting libvirt to say "we support event SPICE_MIGRATE_DONE", i.e. yet
another capability negotiation, during the QMP handshake phase.
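
To make the direction concrete: if that patch lands in the obvious shape,
query-events gives libvirt an exchange like this (guessed, not tested):

    -> { "execute": "query-events" }
    <- { "return": [ { "name": "SPICE_MIGRATE_DONE" },
                     { "name": "SHUTDOWN" },
                     ... ] }

It answers "what can qemu emit", not "what will libvirt act on".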

> 
> So this is already a solved problem.
> 
> Regards,
> 
> Anthony Liguori
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 12:30                 ` Alon Levy
@ 2012-06-06 12:34                   ` Anthony Liguori
  2012-06-06 13:03                   ` Gerd Hoffmann
  1 sibling, 0 replies; 28+ messages in thread
From: Anthony Liguori @ 2012-06-06 12:34 UTC (permalink / raw)
  To: Yonit Halperin, aliguori, Gerd Hoffmann, qemu-devel

On 06/06/2012 08:30 PM, Alon Levy wrote:
> On Wed, Jun 06, 2012 at 08:17:29PM +0800, Anthony Liguori wrote:
>> On 06/06/2012 08:15 PM, Alon Levy wrote:
>>> On Wed, Jun 06, 2012 at 08:08:40PM +0800, Anthony Liguori wrote:
>>>> On 06/06/2012 08:01 PM, Yonit Halperin wrote:
>>>>> On 06/06/2012 12:22 PM, Anthony Liguori wrote:
>>>>>>
>>>>>> So send a QMP event and call it a day.
>>>>>>
>>>>> Using a QMP event makes spice seamless migration dependent on the libvirt
>>>>> version.
>>>>
>>>> That is not an acceptable justification.
>>>
>>> To let spice know that libvirt doesn't support the new event would
>>> require libvirt capabilities advertisement to qemu. Is that acceptable?
>>
>> I literally have danpb's event introspection patches from Luiz's PULL
>> request testing on my system right now to be pushed.
> this?: [PATCH 29/29] Add 'query-events' command to QMP to query async events
>
> This is about libvirt getting qemu's event list. I am talking about qemu
> getting libvirt to say "we support event SPICE_MIGRATE_DONE". i.e. Yet
> another capability negotiation, during the handshake QMP phase.

qemu -spice foo,seamless-migration=on

Doesn't seem that hard to me...
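
Roughly, on the qemu side (sketch only; spice_server_set_seamless_migration()
is a placeholder for whatever setter spice-server ends up exposing):

    /* in qemu_spice_init(), ui/spice-core.c */
    int seamless_migration;

    seamless_migration = qemu_opt_get_bool(opts, "seamless-migration", 0);
    /* tell spice-server whether the new handover protocol may be used */
    spice_server_set_seamless_migration(spice_server, seamless_migration);

If libvirt doesn't know about the event it simply doesn't pass the option,
and spice falls back to switch-host.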

Regards,

Anthony Liguori

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 12:30                 ` Alon Levy
  2012-06-06 12:34                   ` Anthony Liguori
@ 2012-06-06 13:03                   ` Gerd Hoffmann
  2012-06-06 14:52                     ` Alon Levy
  1 sibling, 1 reply; 28+ messages in thread
From: Gerd Hoffmann @ 2012-06-06 13:03 UTC (permalink / raw)
  To: Anthony Liguori, Yonit Halperin, aliguori, qemu-devel

  Hi,

>> I literally have danpb's event introspection patches from Luiz's PULL
>> request testing on my system right now to be pushed.

Good.

> this?: [PATCH 29/29] Add 'query-events' command to QMP to query async events
> 
> This is about libvirt getting qemu's event list. I am talking about qemu
> getting libvirt to say "we support event SPICE_MIGRATE_DONE".

Why do you think we need this?  The other way around (libvirt detecting
whether qemu supports the SPICE_MIGRATE_DONE event) should be good enough, no?

> i.e. Yet
> another capability negotiation, during the handshake QMP phase.

I'm sure libvirt will use query-events for other reasons anyway, so
there is no extra overhead for this.

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 13:03                   ` Gerd Hoffmann
@ 2012-06-06 14:52                     ` Alon Levy
  2012-06-06 15:00                       ` Gerd Hoffmann
  0 siblings, 1 reply; 28+ messages in thread
From: Alon Levy @ 2012-06-06 14:52 UTC (permalink / raw)
  To: Gerd Hoffmann; +Cc: aliguori, Yonit Halperin, qemu-devel, Anthony Liguori

On Wed, Jun 06, 2012 at 03:03:37PM +0200, Gerd Hoffmann wrote:
>   Hi,
> 
> >> I literally have danpb's event introspection patches from Luiz's PULL
> >> request testing on my system right now to be pushed.
> 
> Good.
> 
> > this?: [PATCH 29/29] Add 'query-events' command to QMP to query async events
> > 
> > This is about libvirt getting qemu's event list. I am talking about qemu
> > getting libvirt to say "we support event SPICE_MIGRATE_DONE".
> 
> Why do you think we need this?  The other way around (libvirt detecting
> whether qemu supports the SPICE_MIGRATE_DONE event) should be good enough, no?
> 
> > i.e. Yet
> > another capability negotiation, during the handshake QMP phase.
> 
> I'm sure libvirt will use query-events for other reasons anyway, so
> there is no extra overhead for this.

What Anthony suggested, using a command line switch, would work fine for
the problem I am talking about. The problem is how we know that
libvirt will support our new event. Libvirt using query-events doesn't
help - unless you suggest we intercept it and record "libvirt is aware
of our new event, so it probably supports it", but that's obviously wrong.

If libvirt doesn't support this event, we want to fall back to
semi-seamless migration. If it does, we want to do seamless migration by
waiting for the source to complete guest migration, sending the spice
client a notification that the target is alive, and then sending this
event; libvirt will then close the source vm. Migration downtime is
unaffected, since we only delay closing the vm until after it is "dead",
i.e. stopped and migrated.
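
In code terms the source side would look roughly like this (sketch; apart
from monitor_protocol_event() the names are placeholders, and the event
itself would still need to be added to the MonitorEvent enum):

    /* once guest migration has completed on the source */
    static void spice_migration_finish(void)
    {
        if (!seamless_migration) {
            /* semi-seamless fallback: just redirect the client */
            spice_server_migrate_switch(spice_server);
            return;
        }
        /* start the handover; the client learns the target is alive */
        spice_server_migrate_end(spice_server);
    }

    /* called back by spice-server once the handover is acked */
    static void migrate_end_complete(SpiceMigrateInstance *sin)
    {
        /* only now may libvirt kill the source vm */
        monitor_protocol_event(QEVENT_SPICE_MIGRATE_DONE, NULL);
    }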

> 
> cheers,
>   Gerd
> 
> 

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers
  2012-06-06 14:52                     ` Alon Levy
@ 2012-06-06 15:00                       ` Gerd Hoffmann
  0 siblings, 0 replies; 28+ messages in thread
From: Gerd Hoffmann @ 2012-06-06 15:00 UTC (permalink / raw)
  To: Anthony Liguori, Yonit Halperin, aliguori, qemu-devel

  Hi,

> If libvirt doesn't support this event we want to fall back to
> semi-seamless migration,

Ah, ok.  Yes, a new -spice option will work here.

cheers,
  Gerd

^ permalink raw reply	[flat|nested] 28+ messages in thread

end of thread, other threads:[~2012-06-06 15:00 UTC | newest]

Thread overview: 28+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-06-05  5:49 [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Yonit Halperin
2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 1/5] notifiers: add support for async notifiers handlers Yonit Halperin
2012-06-05  8:36   ` Gerd Hoffmann
2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 2/5] migration: moving migration start code to a separated routine Yonit Halperin
2012-06-05  8:44   ` Gerd Hoffmann
2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 3/5] migration: moving migration completion " Yonit Halperin
2012-06-05  8:46   ` Gerd Hoffmann
2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 4/5] migration: replace migration state change notifier with async notifiers Yonit Halperin
2012-06-05  5:49 ` [Qemu-devel] [RFC PATCH 5/5] spice: turn spice "migration end" handler to be async Yonit Halperin
2012-06-05 11:59 ` [Qemu-devel] [RFC PATCH 0/5] asynchronous migration state change handlers Anthony Liguori
2012-06-05 13:15   ` Gerd Hoffmann
2012-06-05 13:38     ` Eric Blake
2012-06-05 21:37       ` Anthony Liguori
2012-06-06  9:10     ` Yonit Halperin
2012-06-06  9:22       ` Anthony Liguori
2012-06-06 10:54         ` Alon Levy
2012-06-06 11:05           ` Anthony Liguori
2012-06-06 11:27             ` Alon Levy
2012-06-06 11:49               ` Anthony Liguori
2012-06-06 12:01         ` Yonit Halperin
2012-06-06 12:08           ` Anthony Liguori
2012-06-06 12:15             ` Alon Levy
2012-06-06 12:17               ` Anthony Liguori
2012-06-06 12:30                 ` Alon Levy
2012-06-06 12:34                   ` Anthony Liguori
2012-06-06 13:03                   ` Gerd Hoffmann
2012-06-06 14:52                     ` Alon Levy
2012-06-06 15:00                       ` Gerd Hoffmann

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).