* [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
@ 2013-07-17  9:35 Stefan Hajnoczi
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 1/3] dataplane: sync virtio.c and vring.c virtqueue state Stefan Hajnoczi
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-07-17  9:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi, Juan Quintela

These patches add live migration support to -device virtio-blk-pci,x-data-plane=on.

Patch 1 has already been posted and merged into the block tree.  I have
included it for convenience.

Patches 2 & 3 implement a switch from dataplane mode back to regular virtio-blk
mode when migration starts.  This way live migration works.

If migration is cancelled or the guest accesses the virtio-blk device after
completion, dataplane starts again.

Since this approach is so small, it's more palatable for QEMU 1.6 than trying
to make vring.c log dirty memory.  It makes dataplane usable in situations
where live migration is a requirement.
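
The heart of the approach is a migration state notifier in virtio-blk that
tears dataplane down when migration becomes active and recreates it once
migration has finished or failed.  Condensed sketch (NULL checks omitted;
see patch 3 below for the actual code):

    static void virtio_blk_migration_state_changed(Notifier *notifier, void *data)
    {
        VirtIOBlock *s = container_of(notifier, VirtIOBlock,
                                      migration_state_notifier);
        MigrationState *mig = data;

        if (migration_is_active(mig)) {
            /* switch back to regular virtio-blk for the duration of migration */
            virtio_blk_data_plane_destroy(s->dataplane);
            s->dataplane = NULL;
        } else if (migration_has_finished(mig) || migration_has_failed(mig)) {
            /* dataplane may start again once the guest kicks the virtqueue */
            bdrv_drain_all();
            virtio_blk_data_plane_create(VIRTIO_DEVICE(s), &s->blk, &s->dataplane);
        }
    }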

Stefan Hajnoczi (3):
  dataplane: sync virtio.c and vring.c virtqueue state
  migration: notify migration state before starting thread
  dataplane: enable virtio-blk x-data-plane=on live migration

 hw/block/dataplane/virtio-blk.c     | 19 +++++++++----------
 hw/block/virtio-blk.c               | 32 ++++++++++++++++++++++++++++++++
 hw/virtio/dataplane/vring.c         |  8 +++++---
 include/hw/virtio/dataplane/vring.h |  2 +-
 include/hw/virtio/virtio-blk.h      |  1 +
 migration.c                         |  4 +++-
 6 files changed, 51 insertions(+), 15 deletions(-)

-- 
1.8.1.4


* [Qemu-devel] [PATCH 1/3] dataplane: sync virtio.c and vring.c virtqueue state
  2013-07-17  9:35 [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on Stefan Hajnoczi
@ 2013-07-17  9:35 ` Stefan Hajnoczi
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 2/3] migration: notify migration state before starting thread Stefan Hajnoczi
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 3/3] dataplane: enable virtio-blk x-data-plane=on live migration Stefan Hajnoczi
  2 siblings, 0 replies; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-07-17  9:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi, Juan Quintela

Load the virtio.c state into vring.c when we start dataplane mode and
vice versa when stopping dataplane mode.  This patch makes it possible
to start and stop dataplane any time while the guest is running.

This is very useful since it will allow us to go back to the QEMU main
loop for bdrv_drain_all() and live migration.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c     | 2 +-
 hw/virtio/dataplane/vring.c         | 8 +++++---
 include/hw/virtio/dataplane/vring.h | 2 +-
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 0356665..2faed43 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -537,7 +537,7 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
     /* Clean up guest notifier (irq) */
     k->set_guest_notifiers(qbus->parent, 1, false);
 
-    vring_teardown(&s->vring);
+    vring_teardown(&s->vring, s->vdev, 0);
     s->started = false;
     s->stopping = false;
 }
diff --git a/hw/virtio/dataplane/vring.c b/hw/virtio/dataplane/vring.c
index e0d6e83..82cc151 100644
--- a/hw/virtio/dataplane/vring.c
+++ b/hw/virtio/dataplane/vring.c
@@ -39,8 +39,8 @@ bool vring_setup(Vring *vring, VirtIODevice *vdev, int n)
 
     vring_init(&vring->vr, virtio_queue_get_num(vdev, n), vring_ptr, 4096);
 
-    vring->last_avail_idx = 0;
-    vring->last_used_idx = 0;
+    vring->last_avail_idx = virtio_queue_get_last_avail_idx(vdev, n);
+    vring->last_used_idx = vring->vr.used->idx;
     vring->signalled_used = 0;
     vring->signalled_used_valid = false;
 
@@ -49,8 +49,10 @@ bool vring_setup(Vring *vring, VirtIODevice *vdev, int n)
     return true;
 }
 
-void vring_teardown(Vring *vring)
+void vring_teardown(Vring *vring, VirtIODevice *vdev, int n)
 {
+    virtio_queue_set_last_avail_idx(vdev, n, vring->last_avail_idx);
+
     hostmem_finalize(&vring->hostmem);
 }
 
diff --git a/include/hw/virtio/dataplane/vring.h b/include/hw/virtio/dataplane/vring.h
index 9380cb5..c0b69ff 100644
--- a/include/hw/virtio/dataplane/vring.h
+++ b/include/hw/virtio/dataplane/vring.h
@@ -50,7 +50,7 @@ static inline void vring_set_broken(Vring *vring)
 }
 
 bool vring_setup(Vring *vring, VirtIODevice *vdev, int n);
-void vring_teardown(Vring *vring);
+void vring_teardown(Vring *vring, VirtIODevice *vdev, int n);
 void vring_disable_notification(VirtIODevice *vdev, Vring *vring);
 bool vring_enable_notification(VirtIODevice *vdev, Vring *vring);
 bool vring_should_notify(VirtIODevice *vdev, Vring *vring);
-- 
1.8.1.4


* [Qemu-devel] [PATCH 2/3] migration: notify migration state before starting thread
  2013-07-17  9:35 [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on Stefan Hajnoczi
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 1/3] dataplane: sync virtio.c and vring.c virtqueue state Stefan Hajnoczi
@ 2013-07-17  9:35 ` Stefan Hajnoczi
  2013-07-17 10:22   ` Paolo Bonzini
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 3/3] dataplane: enable virtio-blk x-data-plane=on live migration Stefan Hajnoczi
  2 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-07-17  9:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi, Juan Quintela

The migration thread runs outside the QEMU global mutex when possible.
Therefore we must notify listeners of the migration state change *before*
starting the migration thread.

This allows registered listeners to act before live migration iterations
begin, so they can get into a state that allows for live migration.  By
the time the migration thread starts, everything will be ready.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 migration.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/migration.c b/migration.c
index 9f5a423..b4daf13 100644
--- a/migration.c
+++ b/migration.c
@@ -625,7 +625,9 @@ void migrate_fd_connect(MigrationState *s)
     qemu_file_set_rate_limit(s->file,
                              s->bandwidth_limit / XFER_LIMIT_RATIO);
 
+    /* Notify before starting migration thread */
+    notifier_list_notify(&migration_state_notifiers, s);
+
     qemu_thread_create(&s->thread, migration_thread, s,
                        QEMU_THREAD_JOINABLE);
-    notifier_list_notify(&migration_state_notifiers, s);
 }
-- 
1.8.1.4


* [Qemu-devel] [PATCH 3/3] dataplane: enable virtio-blk x-data-plane=on live migration
  2013-07-17  9:35 [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on Stefan Hajnoczi
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 1/3] dataplane: sync virtio.c and vring.c virtqueue state Stefan Hajnoczi
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 2/3] migration: notify migration state before starting thread Stefan Hajnoczi
@ 2013-07-17  9:35 ` Stefan Hajnoczi
  2013-07-17 10:26   ` Paolo Bonzini
  2 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-07-17  9:35 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Stefan Hajnoczi, Juan Quintela

Although the dataplane thread does not cooperate with dirty memory
logging yet, it's fairly easy to temporarily disable dataplane during
live migration.  This way virtio-blk can live migrate when
x-data-plane=on.

The dataplane thread will restart after migration is cancelled or when
the guest resumes virtio-blk operation after migration completes.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 17 ++++++++---------
 hw/block/virtio-blk.c           | 32 ++++++++++++++++++++++++++++++++
 include/hw/virtio/virtio-blk.h  |  1 +
 3 files changed, 41 insertions(+), 9 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 2faed43..411becc 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -18,7 +18,6 @@
 #include "qemu/error-report.h"
 #include "hw/virtio/dataplane/vring.h"
 #include "ioq.h"
-#include "migration/migration.h"
 #include "block/block.h"
 #include "hw/virtio/virtio-blk.h"
 #include "virtio-blk.h"
@@ -69,8 +68,6 @@ struct VirtIOBlockDataPlane {
                                              queue */
 
     unsigned int num_reqs;
-
-    Error *migration_blocker;
 };
 
 /* Raise an interrupt to signal guest, if necessary */
@@ -418,6 +415,14 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *blk,
         return false;
     }
 
+    /* If dataplane is (re-)enabled while the guest is running there could be
+     * block jobs that can conflict.
+     */
+    if (bdrv_in_use(blk->conf.bs)) {
+        error_report("cannot start dataplane thread while device is in use");
+        return false;
+    }
+
     fd = raw_get_aio_fd(blk->conf.bs);
     if (fd < 0) {
         error_report("drive is incompatible with x-data-plane, "
@@ -433,10 +438,6 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *blk,
     /* Prevent block operations that conflict with data plane thread */
     bdrv_set_in_use(blk->conf.bs, 1);
 
-    error_setg(&s->migration_blocker,
-            "x-data-plane does not support migration");
-    migrate_add_blocker(s->migration_blocker);
-
     *dataplane = s;
     return true;
 }
@@ -448,8 +449,6 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
     }
 
     virtio_blk_data_plane_stop(s);
-    migrate_del_blocker(s->migration_blocker);
-    error_free(s->migration_blocker);
     bdrv_set_in_use(s->blk->conf.bs, 0);
     g_free(s);
 }
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index cf12469..cca0c77 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -19,6 +19,7 @@
 #include "hw/virtio/virtio-blk.h"
 #ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
 # include "dataplane/virtio-blk.h"
+# include "migration/migration.h"
 #endif
 #include "block/scsi.h"
 #ifdef __linux__
@@ -628,6 +629,34 @@ void virtio_blk_set_conf(DeviceState *dev, VirtIOBlkConf *blk)
     memcpy(&(s->blk), blk, sizeof(struct VirtIOBlkConf));
 }
 
+#ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
+/* Disable dataplane thread during live migration since it does not
+ * update the dirty memory bitmap yet.
+ */
+static void virtio_blk_migration_state_changed(Notifier *notifier, void *data)
+{
+    VirtIOBlock *s = container_of(notifier, VirtIOBlock,
+                                  migration_state_notifier);
+    MigrationState *mig = data;
+
+    if (migration_is_active(mig)) {
+        if (!s->dataplane) {
+            return;
+        }
+        virtio_blk_data_plane_destroy(s->dataplane);
+        s->dataplane = NULL;
+    } else if (migration_has_finished(mig) ||
+               migration_has_failed(mig)) {
+        if (s->dataplane) {
+            return;
+        }
+        bdrv_drain_all(); /* complete in-flight non-dataplane requests */
+        virtio_blk_data_plane_create(VIRTIO_DEVICE(s), &s->blk,
+                                     &s->dataplane);
+    }
+}
+#endif /* CONFIG_VIRTIO_BLK_DATA_PLANE */
+
 static int virtio_blk_device_init(VirtIODevice *vdev)
 {
     DeviceState *qdev = DEVICE(vdev);
@@ -664,6 +693,8 @@ static int virtio_blk_device_init(VirtIODevice *vdev)
         virtio_cleanup(vdev);
         return -1;
     }
+    s->migration_state_notifier.notify = virtio_blk_migration_state_changed;
+    add_migration_state_change_notifier(&s->migration_state_notifier);
 #endif
 
     s->change = qemu_add_vm_change_state_handler(virtio_blk_dma_restart_cb, s);
@@ -683,6 +714,7 @@ static int virtio_blk_device_exit(DeviceState *dev)
     VirtIODevice *vdev = VIRTIO_DEVICE(dev);
     VirtIOBlock *s = VIRTIO_BLK(dev);
 #ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
+    remove_migration_state_change_notifier(&s->migration_state_notifier);
     virtio_blk_data_plane_destroy(s->dataplane);
     s->dataplane = NULL;
 #endif
diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
index fc71853..b87cf49 100644
--- a/include/hw/virtio/virtio-blk.h
+++ b/include/hw/virtio/virtio-blk.h
@@ -125,6 +125,7 @@ typedef struct VirtIOBlock {
     unsigned short sector_mask;
     VMChangeStateEntry *change;
 #ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
+    Notifier migration_state_notifier;
     struct VirtIOBlockDataPlane *dataplane;
 #endif
 } VirtIOBlock;
-- 
1.8.1.4


* Re: [Qemu-devel] [PATCH 2/3] migration: notify migration state before starting thread
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 2/3] migration: notify migration state before starting thread Stefan Hajnoczi
@ 2013-07-17 10:22   ` Paolo Bonzini
  0 siblings, 0 replies; 14+ messages in thread
From: Paolo Bonzini @ 2013-07-17 10:22 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, qemu-devel, Juan Quintela

On 17/07/2013 11:35, Stefan Hajnoczi wrote:
> The migration thread runs outside the QEMU global mutex when possible.
> Therefore we must notify listeners of the migration state change *before*
> starting the migration thread.
> 
> This allows registered listeners to act before live migration iterations
> begin, so they can get into a state that allows for live migration.  By
> the time the migration thread starts, everything will be ready.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  migration.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/migration.c b/migration.c
> index 9f5a423..b4daf13 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -625,7 +625,9 @@ void migrate_fd_connect(MigrationState *s)
>      qemu_file_set_rate_limit(s->file,
>                               s->bandwidth_limit / XFER_LIMIT_RATIO);
>  
> +    /* Notify before starting migration thread */
> +    notifier_list_notify(&migration_state_notifiers, s);
> +
>      qemu_thread_create(&s->thread, migration_thread, s,
>                         QEMU_THREAD_JOINABLE);
> -    notifier_list_notify(&migration_state_notifiers, s);
>  }
> 

Acked-by: Paolo Bonzini <pbonzini@redhat.com>


* Re: [Qemu-devel] [PATCH 3/3] dataplane: enable virtio-blk x-data-plane=on live migration
  2013-07-17  9:35 ` [Qemu-devel] [PATCH 3/3] dataplane: enable virtio-blk x-data-plane=on live migration Stefan Hajnoczi
@ 2013-07-17 10:26   ` Paolo Bonzini
  2013-07-23 13:39     ` Stefan Hajnoczi
  0 siblings, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2013-07-17 10:26 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Kevin Wolf, qemu-devel, Juan Quintela

On 17/07/2013 11:35, Stefan Hajnoczi wrote:
> Although the dataplane thread does not cooperate with dirty memory
> logging yet, it's fairly easy to temporarily disable dataplane during
> live migration.  This way virtio-blk can live migrate when
> x-data-plane=on.
> 
> The dataplane thread will restart after migration is cancelled or when
> the guest resumes virtio-blk operation after migration completes.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  hw/block/dataplane/virtio-blk.c | 17 ++++++++---------
>  hw/block/virtio-blk.c           | 32 ++++++++++++++++++++++++++++++++
>  include/hw/virtio/virtio-blk.h  |  1 +
>  3 files changed, 41 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
> index 2faed43..411becc 100644
> --- a/hw/block/dataplane/virtio-blk.c
> +++ b/hw/block/dataplane/virtio-blk.c
> @@ -18,7 +18,6 @@
>  #include "qemu/error-report.h"
>  #include "hw/virtio/dataplane/vring.h"
>  #include "ioq.h"
> -#include "migration/migration.h"
>  #include "block/block.h"
>  #include "hw/virtio/virtio-blk.h"
>  #include "virtio-blk.h"
> @@ -69,8 +68,6 @@ struct VirtIOBlockDataPlane {
>                                               queue */
>  
>      unsigned int num_reqs;
> -
> -    Error *migration_blocker;
>  };
>  
>  /* Raise an interrupt to signal guest, if necessary */
> @@ -418,6 +415,14 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *blk,
>          return false;
>      }
>  
> +    /* If dataplane is (re-)enabled while the guest is running there could be
> +     * block jobs that can conflict.
> +     */
> +    if (bdrv_in_use(blk->conf.bs)) {
> +        error_report("cannot start dataplane thread while device is in use");
> +        return false;
> +    }
> +
>      fd = raw_get_aio_fd(blk->conf.bs);
>      if (fd < 0) {
>          error_report("drive is incompatible with x-data-plane, "
> @@ -433,10 +438,6 @@ bool virtio_blk_data_plane_create(VirtIODevice *vdev, VirtIOBlkConf *blk,
>      /* Prevent block operations that conflict with data plane thread */
>      bdrv_set_in_use(blk->conf.bs, 1);
>  
> -    error_setg(&s->migration_blocker,
> -            "x-data-plane does not support migration");
> -    migrate_add_blocker(s->migration_blocker);
> -
>      *dataplane = s;
>      return true;
>  }
> @@ -448,8 +449,6 @@ void virtio_blk_data_plane_destroy(VirtIOBlockDataPlane *s)
>      }
>  
>      virtio_blk_data_plane_stop(s);
> -    migrate_del_blocker(s->migration_blocker);
> -    error_free(s->migration_blocker);
>      bdrv_set_in_use(s->blk->conf.bs, 0);
>      g_free(s);
>  }
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index cf12469..cca0c77 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -19,6 +19,7 @@
>  #include "hw/virtio/virtio-blk.h"
>  #ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
>  # include "dataplane/virtio-blk.h"
> +# include "migration/migration.h"
>  #endif
>  #include "block/scsi.h"
>  #ifdef __linux__
> @@ -628,6 +629,34 @@ void virtio_blk_set_conf(DeviceState *dev, VirtIOBlkConf *blk)
>      memcpy(&(s->blk), blk, sizeof(struct VirtIOBlkConf));
>  }
>  
> +#ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
> +/* Disable dataplane thread during live migration since it does not
> + * update the dirty memory bitmap yet.
> + */
> +static void virtio_blk_migration_state_changed(Notifier *notifier, void *data)
> +{
> +    VirtIOBlock *s = container_of(notifier, VirtIOBlock,
> +                                  migration_state_notifier);
> +    MigrationState *mig = data;
> +
> +    if (migration_is_active(mig)) {
> +        if (!s->dataplane) {
> +            return;
> +        }
> +        virtio_blk_data_plane_destroy(s->dataplane);
> +        s->dataplane = NULL;
> +    } else if (migration_has_finished(mig) ||
> +               migration_has_failed(mig)) {
> +        if (s->dataplane) {
> +            return;
> +        }
> +        bdrv_drain_all(); /* complete in-flight non-dataplane requests */
> +        virtio_blk_data_plane_create(VIRTIO_DEVICE(s), &s->blk,
> +                                     &s->dataplane);
> +    }
> +}

Perhaps you can call bdrv_set_in_use here (set it to 1 after
destruction, and to 0 before creation), so that you do not need the
check in virtio_blk_data_plane_create?
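
Something along these lines, perhaps (untested sketch, reusing only helpers
that already appear in this patch):

    if (migration_is_active(mig)) {
        if (!s->dataplane) {
            return;
        }
        virtio_blk_data_plane_destroy(s->dataplane);
        s->dataplane = NULL;
        bdrv_set_in_use(s->blk.conf.bs, 1);  /* keep conflicting users away */
    } else if (migration_has_finished(mig) ||
               migration_has_failed(mig)) {
        if (s->dataplane) {
            return;
        }
        bdrv_set_in_use(s->blk.conf.bs, 0);  /* let data_plane_create() claim it again */
        bdrv_drain_all(); /* complete in-flight non-dataplane requests */
        virtio_blk_data_plane_create(VIRTIO_DEVICE(s), &s->blk,
                                     &s->dataplane);
    }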

> +#endif /* CONFIG_VIRTIO_BLK_DATA_PLANE */
> +
>  static int virtio_blk_device_init(VirtIODevice *vdev)
>  {
>      DeviceState *qdev = DEVICE(vdev);
> @@ -664,6 +693,8 @@ static int virtio_blk_device_init(VirtIODevice *vdev)
>          virtio_cleanup(vdev);
>          return -1;
>      }
> +    s->migration_state_notifier.notify = virtio_blk_migration_state_changed;
> +    add_migration_state_change_notifier(&s->migration_state_notifier);
>  #endif
>  
>      s->change = qemu_add_vm_change_state_handler(virtio_blk_dma_restart_cb, s);
> @@ -683,6 +714,7 @@ static int virtio_blk_device_exit(DeviceState *dev)
>      VirtIODevice *vdev = VIRTIO_DEVICE(dev);
>      VirtIOBlock *s = VIRTIO_BLK(dev);
>  #ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
> +    remove_migration_state_change_notifier(&s->migration_state_notifier);
>      virtio_blk_data_plane_destroy(s->dataplane);
>      s->dataplane = NULL;
>  #endif
> diff --git a/include/hw/virtio/virtio-blk.h b/include/hw/virtio/virtio-blk.h
> index fc71853..b87cf49 100644
> --- a/include/hw/virtio/virtio-blk.h
> +++ b/include/hw/virtio/virtio-blk.h
> @@ -125,6 +125,7 @@ typedef struct VirtIOBlock {
>      unsigned short sector_mask;
>      VMChangeStateEntry *change;
>  #ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
> +    Notifier migration_state_notifier;
>      struct VirtIOBlockDataPlane *dataplane;
>  #endif
>  } VirtIOBlock;
> 

Only a stopgap measure, but it's self-contained and easy to revert,
which makes it a brilliant solution.  Just one nit above to make it even
more self-contained.

Paolo


* Re: [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
@ 2013-07-19  5:33 yinyin
  2013-07-23  8:51 ` Stefan Hajnoczi
  0 siblings, 1 reply; 14+ messages in thread
From: yinyin @ 2013-07-19  5:33 UTC (permalink / raw)
  To: stefanha; +Cc: qemu-devel

Hi, Stefan:
	I used systemtap to test this patch; the migration succeeds. But I found that dataplane starts again after migration starts: virtio_blk_handle_output starts dataplane.


virtio_blk_data_plane_stop pid:29037 tid:29037
 0x6680fe : virtio_blk_data_plane_stop+0x0/0x232 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x667da2 : virtio_blk_data_plane_destroy+0x33/0x70 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x66a2e4 : virtio_blk_migration_state_changed+0x7c/0x12d [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x7740f4 : notifier_list_notify+0x59/0x79 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a5b16 : migrate_fd_connect+0xc6/0x104 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a3dba : tcp_wait_for_connect+0x6e/0x84 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x76c755 : wait_for_connect+0x170/0x19a [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a2e2c : qemu_iohandler_poll+0xec/0x188 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a3834 : main_loop_wait+0x92/0xc9 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x630083 : main_loop+0x5d/0x82 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x636954 : main+0x3666/0x369a [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x37cf01ecdd [/lib64/libc-2.12.so+0x1ecdd/0x393000]
virtio_blk_data_plane_start pid:29037 tid:29037
 0x667ddf : virtio_blk_data_plane_start+0x0/0x31f [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x669802 : virtio_blk_handle_output+0x9c/0x118 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x6ac6ff : virtio_queue_notify_vq+0x92/0xa8 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x6ae128 : virtio_queue_host_notifier_read+0x50/0x66 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x6ae1bb : virtio_queue_set_host_notifier_fd_handler+0x7d/0x93 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x59cdf1 : virtio_pci_set_host_notifier_internal+0x132/0x157 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x59e98c : virtio_pci_set_host_notifier+0x73/0x8e [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x6682a8 : virtio_blk_data_plane_stop+0x1aa/0x232 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x667da2 : virtio_blk_data_plane_destroy+0x33/0x70 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x66a2e4 : virtio_blk_migration_state_changed+0x7c/0x12d [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x7740f4 : notifier_list_notify+0x59/0x79 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a5b16 : migrate_fd_connect+0xc6/0x104 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a3dba : tcp_wait_for_connect+0x6e/0x84 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x76c755 : wait_for_connect+0x170/0x19a [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a2e2c : qemu_iohandler_poll+0xec/0x188 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a3834 : main_loop_wait+0x92/0xc9 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x630083 : main_loop+0x5d/0x82 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x636954 : main+0x3666/0x369a [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x37cf01ecdd [/lib64/libc-2.12.so+0x1ecdd/0x393000]


systemtap scripts:
probe process("/root/dataplane/qemu/x86_64-softmmu/qemu-kvm").function("virtio_blk_data_plane_start")
{
  printf("virtio_blk_data_plane_start pid:%d tid:%d\n",pid(),tid())
  print_ubacktrace();
}


probe process("/root/dataplane/qemu/x86_64-softmmu/qemu-kvm").function("virtio_blk_data_plane_stop")
{
  printf("virtio_blk_data_plane_stop pid:%d tid:%d\n",pid(),tid())
  print_ubacktrace();
}
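
(For reference, these can be run with something like the command below; the
script file name is only an example, and the qemu-kvm binary needs debug info
so that print_ubacktrace can resolve symbols.)

stap -v -d /root/dataplane/qemu/x86_64-softmmu/qemu-kvm --ldd dataplane.stp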


Yin Yin
yin.yin@cs2c.com.cn


* Re: [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
  2013-07-19  5:33 [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on yinyin
@ 2013-07-23  8:51 ` Stefan Hajnoczi
  2013-07-23  9:19   ` yinyin
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-07-23  8:51 UTC (permalink / raw)
  To: yinyin; +Cc: qemu-devel

On Fri, Jul 19, 2013 at 01:33:12PM +0800, yinyin wrote:
> 	I used systemtap to test this patch; the migration succeeds. But I found that dataplane starts again after migration starts: virtio_blk_handle_output starts dataplane.

Hi Yin Yin,
Thank you for testing the patch.  It is not clear to me whether you
encountered a problem or not.

It is expected that the destination will start the dataplane thread.
Was there a crash or some reason why you posted these traces?

Stefan


* Re: [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
  2013-07-23  8:51 ` Stefan Hajnoczi
@ 2013-07-23  9:19   ` yinyin
  2013-07-23  9:30     ` Andreas Färber
  2013-07-23 13:20     ` Stefan Hajnoczi
  0 siblings, 2 replies; 14+ messages in thread
From: yinyin @ 2013-07-23  9:19 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: qemu-devel

Hi, Stefan:
	during the migration, the source, not the destination, starts dataplane again....
	I think the migration process with dataplane is as follows:
1. migration starts
2. the migration source stops dataplane
3. migration runs ...
4. migration completes, the destination starts dataplane.

Once migration has started, the source dataplane should already be stopped and should not start again unless migration is cancelled or aborted.
But the trace shows that, in step 3 above, the source dataplane is started again by virtio_blk_handle_output. I'm afraid some inconsistency could happen there. Is that right?

No crash was found; I just used this trace to understand the flow of dataplane migration.

Yin Yin
yin.yin@cs2c.com.cn

 
On 2013-7-23, at 4:51 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:

> On Fri, Jul 19, 2013 at 01:33:12PM +0800, yinyin wrote:
>> 	I used systemtap to test this patch; the migration succeeds. But I found that dataplane starts again after migration starts: virtio_blk_handle_output starts dataplane.
> 
> Hi Yin Yin,
> Thank you for testing the patch.  It is not clear to me whether you
> encountered a problem or not.
> 
> It is expected that the destination will start the dataplane thread.
> Was there a crash or some reason why you posted these traces?
> 
> Stefan
> 


* Re: [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
  2013-07-23  9:19   ` yinyin
@ 2013-07-23  9:30     ` Andreas Färber
  2013-07-23  9:43       ` yinyin
  2013-07-23 13:20     ` Stefan Hajnoczi
  1 sibling, 1 reply; 14+ messages in thread
From: Andreas Färber @ 2013-07-23  9:30 UTC (permalink / raw)
  To: yinyin; +Cc: qemu-devel, Stefan Hajnoczi

Hi,

On 23.07.2013 11:19, yinyin wrote:
> Hi, Stefan:
> 	during the migration, the source, not the destination, starts dataplane again....
> 	I think the migration process with dataplane is as follows:
> 1. migration starts
> 2. the migration source stops dataplane
> 3. migration runs ...
> 4. migration completes, the destination starts dataplane.

I can't speak for the dataplane, but in general the source guest is
expected to continue working during live migration (that's the "live"
part) until it has been fully transferred to the destination.

HTH,
Andreas

> Once migration has started, the source dataplane should already be stopped and should not start again unless migration is cancelled or aborted.
> But the trace shows that, in step 3 above, the source dataplane is started again by virtio_blk_handle_output. I'm afraid some inconsistency could happen there. Is that right?
> 
> No crash was found; I just used this trace to understand the flow of dataplane migration.
> 
> Yin Yin
> yin.yin@cs2c.com.cn
> 
>  
> On 2013-7-23, at 4:51 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
> 
>> On Fri, Jul 19, 2013 at 01:33:12PM +0800, yinyin wrote:
>>> 	I used systemtap to test this patch; the migration succeeds. But I found that dataplane starts again after migration starts: virtio_blk_handle_output starts dataplane.
>>
>> Hi Yin Yin,
>> Thank you for testing the patch.  It is not clear to me whether you
>> encountered a problem or not.
>>
>> It is expected that the destination will start the dataplane thread.
>> Was there a crash or some reason why you posted these traces?
>>
>> Stefan
>>
> 
> 


-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
  2013-07-23  9:30     ` Andreas Färber
@ 2013-07-23  9:43       ` yinyin
  2013-07-23  9:59         ` Andreas Färber
  0 siblings, 1 reply; 14+ messages in thread
From: yinyin @ 2013-07-23  9:43 UTC (permalink / raw)
  To: Andreas Färber; +Cc: qemu-devel, Stefan Hajnoczi

Hi,
On 2013-7-23, at 5:30 PM, Andreas Färber <afaerber@suse.de> wrote:

> Hi,
> 
> On 23.07.2013 11:19, yinyin wrote:
>> Hi, Stefan:
>> 	during the migration, the source, not the destination, starts dataplane again....
>> 	I think the migration process with dataplane is as follows:
>> 1. migration starts
>> 2. the migration source stops dataplane
>> 3. migration runs ...
>> 4. migration completes, the destination starts dataplane.
> 
> I can't speak for the dataplane, but in general the source guest is
> expected to continue working during live migration (that's the "live"
> part) until it has been fully transferred to the destination.

when dataplane is stopped, the source guest can continue working, but it does not use the dataplane thread.

> 
> HTH,
> Andreas
> 
>> Once migration has started, the source dataplane should already be stopped and should not start again unless migration is cancelled or aborted.
>> But the trace shows that, in step 3 above, the source dataplane is started again by virtio_blk_handle_output. I'm afraid some inconsistency could happen there. Is that right?
>> 
>> No crash was found; I just used this trace to understand the flow of dataplane migration.
>> 
>> Yin Yin
>> yin.yin@cs2c.com.cn
>> 
>> 
>> On 2013-7-23, at 4:51 PM, Stefan Hajnoczi <stefanha@redhat.com> wrote:
>> 
>>> On Fri, Jul 19, 2013 at 01:33:12PM +0800, yinyin wrote:
>>>> 	I used systemtap to test this patch; the migration succeeds. But I found that dataplane starts again after migration starts: virtio_blk_handle_output starts dataplane.
>>> 
>>> Hi Yin Yin,
>>> Thank you for testing the patch.  It is not clear to me whether you
>>> encountered a problem or not.
>>> 
>>> It is expected that the destination will start the dataplane thread.
>>> Was there a crash or some reason why you posted these traces?
>>> 
>>> Stefan
>>> 
>> 
>> 
> 
> 
> -- 
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
> 


* Re: [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
  2013-07-23  9:43       ` yinyin
@ 2013-07-23  9:59         ` Andreas Färber
  0 siblings, 0 replies; 14+ messages in thread
From: Andreas Färber @ 2013-07-23  9:59 UTC (permalink / raw)
  To: yinyin; +Cc: qemu-devel, Stefan Hajnoczi

On 23.07.2013 11:43, yinyin wrote:
> On 2013-7-23, at 5:30 PM, Andreas Färber <afaerber@suse.de> wrote:
>> On 23.07.2013 11:19, yinyin wrote:
>>> 	during the migration, the source, not the destination, starts dataplane again....
>>> 	I think the migration process with dataplane is as follows:
>>> 1. migration starts
>>> 2. the migration source stops dataplane
>>> 3. migration runs ...
>>> 4. migration completes, the destination starts dataplane.
>>
>> I can't speak for the dataplane, but in general the source guest is
>> expected to continue working during live migration (that's the "live"
>> part) until it has been fully transferred to the destination.
> 
> when dataplane is stopped, the source guest can continue working, but it does not use the dataplane thread.

So how would it do I/O then?

Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] [PATCH 0/3] dataplane: virtio-blk live migration with x-data-plane=on
  2013-07-23  9:19   ` yinyin
  2013-07-23  9:30     ` Andreas Färber
@ 2013-07-23 13:20     ` Stefan Hajnoczi
  1 sibling, 0 replies; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-07-23 13:20 UTC (permalink / raw)
  To: yinyin; +Cc: qemu-devel, Stefan Hajnoczi

On Tue, Jul 23, 2013 at 11:19 AM, yinyin <yin.yin@cs2c.com.cn> wrote:
>         during the migration, the source, not the destination, starts dataplane again....

Thanks for explaining.  The backtrace you posted is harmless.  The
code is written to work like this.

I have annotated it explaining what is going on:

virtio_blk_data_plane_start pid:29037 tid:29037
 0x37cf01ecdd [/lib64/libc-2.12.so+0x1ecdd/0x393000]
 0x636954 : main+0x3666/0x369a [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x630083 : main_loop+0x5d/0x82 [/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a3834 : main_loop_wait+0x92/0xc9
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a2e2c : qemu_iohandler_poll+0xec/0x188
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x76c755 : wait_for_connect+0x170/0x19a
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a3dba : tcp_wait_for_connect+0x6e/0x84
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x5a5b16 : migrate_fd_connect+0xc6/0x104
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]

Live migration is started on the source.

 0x7740f4 : notifier_list_notify+0x59/0x79
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]

Before starting migration we notify listeners that migration is starting.

 0x66a2e4 : virtio_blk_migration_state_changed+0x7c/0x12d
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x667da2 : virtio_blk_data_plane_destroy+0x33/0x70
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]

Dataplane is a listener; it wants to know when migration begins.  It will stop
dataplane and switch to regular virtio-blk operation during live migration
iterations (while RAM is being transferred but the VM is still running on the
source).

 0x6682a8 : virtio_blk_data_plane_stop+0x1aa/0x232
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]

We are stopping dataplane right here.

 0x59e98c : virtio_pci_set_host_notifier+0x73/0x8e
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x59cdf1 : virtio_pci_set_host_notifier_internal+0x132/0x157
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x6ae1bb : virtio_queue_set_host_notifier_fd_handler+0x7d/0x93
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x6ae128 : virtio_queue_host_notifier_read+0x50/0x66
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]
 0x6ac6ff : virtio_queue_notify_vq+0x92/0xa8
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]

As part of stopping dataplane we first flush any pending virtqueue kicks.

 0x669802 : virtio_blk_handle_output+0x9c/0x118
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]

There was a pending virtqueue kick from the guest, so we will complete it before
stopping dataplane.

 0x667ddf : virtio_blk_data_plane_start+0x0/0x31f
[/root/dataplane/qemu/x86_64-softmmu/qemu-kvm]

The dataplane thread was not running, so it is temporarily started to process
these final requests.  Once they are finished, it will stop again.

Stefan


* Re: [Qemu-devel] [PATCH 3/3] dataplane: enable virtio-blk x-data-plane=on live migration
  2013-07-17 10:26   ` Paolo Bonzini
@ 2013-07-23 13:39     ` Stefan Hajnoczi
  0 siblings, 0 replies; 14+ messages in thread
From: Stefan Hajnoczi @ 2013-07-23 13:39 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: Kevin Wolf, qemu-devel, Stefan Hajnoczi, Juan Quintela

On Wed, Jul 17, 2013 at 12:26:54PM +0200, Paolo Bonzini wrote:
> On 17/07/2013 11:35, Stefan Hajnoczi wrote:
> > @@ -628,6 +629,34 @@ void virtio_blk_set_conf(DeviceState *dev, VirtIOBlkConf *blk)
> >      memcpy(&(s->blk), blk, sizeof(struct VirtIOBlkConf));
> >  }
> >  
> > +#ifdef CONFIG_VIRTIO_BLK_DATA_PLANE
> > +/* Disable dataplane thread during live migration since it does not
> > + * update the dirty memory bitmap yet.
> > + */
> > +static void virtio_blk_migration_state_changed(Notifier *notifier, void *data)
> > +{
> > +    VirtIOBlock *s = container_of(notifier, VirtIOBlock,
> > +                                  migration_state_notifier);
> > +    MigrationState *mig = data;
> > +
> > +    if (migration_is_active(mig)) {
> > +        if (!s->dataplane) {
> > +            return;
> > +        }
> > +        virtio_blk_data_plane_destroy(s->dataplane);
> > +        s->dataplane = NULL;
> > +    } else if (migration_has_finished(mig) ||
> > +               migration_has_failed(mig)) {
> > +        if (s->dataplane) {
> > +            return;
> > +        }
> > +        bdrv_drain_all(); /* complete in-flight non-dataplane requests */
> > +        virtio_blk_data_plane_create(VIRTIO_DEVICE(s), &s->blk,
> > +                                     &s->dataplane);
> > +    }
> > +}
> 
> Perhaps you can call bdrv_set_in_use here (set it to 1 after
> destruction, and to 0 before creation), so that you do not need the
> check in virtio_blk_data_plane_create?

The bdrv_in_use() check fixes a bug that was present before this series.
Therefore I split it into a separate commit in v3 and CCed
qemu-stable@nongnu.org.

Stefan

