* [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel
@ 2011-11-09 19:16 Juan Quintela
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 1/2] Reopen files after migration Juan Quintela
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Juan Quintela @ 2011-11-09 19:16 UTC (permalink / raw)
  To: qemu-devel

Hi

Some formats (like qcow2) need to "reread" their metadata after
migration (note that, after discussing it with Kevin, it looks like
even raw needs this re-read, because images can be resized nowadays).

Note that this is different from the consistency issues that we used
to have and fixed on NFS with close() + open().  In that discussion it
was agreed that we only support migration with cache=none, or with a
coherent clustered filesystem.

The second patch is a big NO-NO.  It only works for some devices, and
Christoph NACKed it.  It is only included to show what needs to be done.

About the first one: now that we rely on cache=none or a coherent
clustered filesystem, the operation that we need is "discard all
metadata and re-read it from disk".  I tried to split qcow2_open() into
two functions (one that opens the file, and another that "reads" the
metadata) and failed, but Kevin said it shouldn't be difficult.  Kevin?
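
To make the idea concrete, here is a minimal, compilable sketch of that
split (all names are invented for illustration; this is not the real
qcow2 code):

#include <stdio.h>

/* Stand-in for the qcow2 driver state; the real one caches the header,
 * L1 table and refcount table that get parsed at open time. */
struct qcow2_state {
    int fd;              /* stays open across migration */
    int metadata_valid;  /* whether cached tables can be trusted */
};

/* Everything that parses on-disk metadata lives here, so it can be
 * re-run after migration without reopening the file. */
static int qcow2_read_metadata(struct qcow2_state *s)
{
    /* re-read header, L1 table, refcount table, ... from s->fd */
    s->metadata_valid = 1;
    return 0;
}

static int qcow2_do_open(struct qcow2_state *s, int fd)
{
    s->fd = fd;                     /* step 1: open the file only */
    return qcow2_read_metadata(s);  /* step 2: load the metadata */
}

/* On the migration target: drop all cached metadata and re-read it. */
static int qcow2_invalidate_metadata(struct qcow2_state *s)
{
    s->metadata_valid = 0;
    return qcow2_read_metadata(s);
}

int main(void)
{
    struct qcow2_state s;
    qcow2_do_open(&s, 0);
    qcow2_invalidate_metadata(&s);
    printf("metadata valid: %d\n", s.metadata_valid);
    return 0;
}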

Related to the previous point, it appears that libvirt would also
prefer that qemu not reopen the file and instead re-read the data,
because for some reason (that I have forgotten) they grant qemu
explicit permission on the file and then revoke it.  Dan, do you
remember?

This series is sent as an RFC because we need to arrive at a solution
for qcow2.  In RHEL5/6 we have the equivalent of patch 1 integrated to
fix this issue.

Comments?

Later, Juan.


[comment for v1 of the RFC]

This patch set creates infrastructure to invalidate buffers on the
migration target machine.  The best way to see the problem is:

# create a new qcow2 image
qemu-img create -f qcow2 foo.img 10G
# start the destination host
qemu .... path=foo.img.... -incoming tcp:...
# start the source host and do an installation
qemu .... path=foo.img....
# migrate after lots of disk writes

The destination will have "read" the beginning blocks of the file
(where the headers are).  There are two bugs here:
a- we need to re-read the image after migration, to pick up the new
   values (reopening fixes it)
b- we need to be sure that we read the new blocks that are on the
   server, not the ones buffered locally since the start of the run.
   NFS: a flush on the source and close + open on the target
   invalidates the cache.
   Block devices: on Linux, BLKFLSBUF invalidates all the buffers for
   the device.  This fixes iSCSI & Fibre Channel.

I tested iSCSI & NFS.  The NFS patch has been in RHEL5 kvm forever (I
just forgot to send it upstream).  Our NFS and cluster gurus told us
that this is enough for Linux to ensure consistency.

Once there, I fixed a couple of minor bugs (the first 3 patches):
- migration should exit with error code 1, like everything else.
- memory leak on drive_uninit.
- fix cleanup on error in drive_init()

Juan Quintela (2):
  Reopen files after migration
  drive_open: Add invalidate option for block devices

 block.h           |    2 ++
 block/raw-posix.c |   24 ++++++++++++++++++++++++
 blockdev.c        |   43 ++++++++++++++++++++++++++++++++++++++-----
 blockdev.h        |    6 ++++++
 migration.c       |    6 ++++++
 5 files changed, 76 insertions(+), 5 deletions(-)

-- 
1.7.7


* [Qemu-devel] [PATCH 1/2] Reopen files after migration
  2011-11-09 19:16 [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Juan Quintela
@ 2011-11-09 19:16 ` Juan Quintela
  2011-11-09 20:00   ` Anthony Liguori
                     ` (2 more replies)
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 2/2] drive_open: Add invalidate option for block devices Juan Quintela
                   ` (2 subsequent siblings)
  3 siblings, 3 replies; 13+ messages in thread
From: Juan Quintela @ 2011-11-09 19:16 UTC (permalink / raw)
  To: qemu-devel

We need to invalidate the Read Cache on the destination, otherwise we
have corruption.  Easy way to reproduce it is:

- create a qcow2 image
- start qemu on destination of migration (qemu .... -incoming tcp:...)
- start qemu on source of migration and do an install.
- migrate at the end of install (when a lot of disk IO has happened).

Destination of migration has a local copy of the L1/L2 tables that existed
at the beginning, before the install started.  We have disk corruption at
this point.  The solution (for NFS) is to just re-open the file.  Operations
have to happen in this order:

- source of migration: flush()
- destination: close(file);
- destination: open(file)

it is not necessary that the source of migration close the file.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 blockdev.c  |   43 ++++++++++++++++++++++++++++++++++++++-----
 blockdev.h  |    6 ++++++
 migration.c |    6 ++++++
 3 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 0827bf7..a10de7a 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -182,6 +182,7 @@ static void drive_uninit(DriveInfo *dinfo)
     qemu_opts_del(dinfo->opts);
     bdrv_delete(dinfo->bdrv);
     g_free(dinfo->id);
+    g_free(dinfo->file);
     QTAILQ_REMOVE(&drives, dinfo, next);
     g_free(dinfo);
 }
@@ -216,6 +217,37 @@ static int parse_block_error_action(const char *buf, int is_read)
     }
 }

+static int drive_open(DriveInfo *dinfo)
+{
+    int res = bdrv_open(dinfo->bdrv, dinfo->file,
+                        dinfo->bdrv_flags, dinfo->drv);
+
+    if (res < 0) {
+        fprintf(stderr, "qemu: could not open disk image %s: %s\n",
+                        dinfo->file, strerror(-res));
+    }
+    return res;
+}
+
+int drives_reinit(void)
+{
+    DriveInfo *dinfo;
+
+    QTAILQ_FOREACH(dinfo, &drives, next) {
+        if (dinfo->opened && !bdrv_is_read_only(dinfo->bdrv)) {
+            int res;
+            bdrv_close(dinfo->bdrv);
+            res = drive_open(dinfo);
+            if (res) {
+                fprintf(stderr, "qemu: re-open of %s failed with error %d\n",
+                        dinfo->file, res);
+                return res;
+            }
+        }
+    }
+    return 0;
+}
+
 DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
 {
     const char *buf;
@@ -236,7 +268,6 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
     const char *devaddr;
     DriveInfo *dinfo;
     int snapshot = 0;
-    int ret;

     translation = BIOS_ATA_TRANSLATION_AUTO;
     media = MEDIA_DISK;
@@ -514,10 +545,12 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)

     bdrv_flags |= ro ? 0 : BDRV_O_RDWR;

-    ret = bdrv_open(dinfo->bdrv, file, bdrv_flags, drv);
-    if (ret < 0) {
-        error_report("could not open disk image %s: %s",
-                     file, strerror(-ret));
+    dinfo->file = g_strdup(file);
+    dinfo->bdrv_flags = bdrv_flags;
+    dinfo->drv = drv;
+    dinfo->opened = 1;
+
+    if (drive_open(dinfo) < 0) {
         goto err;
     }

diff --git a/blockdev.h b/blockdev.h
index 3587786..733eb72 100644
--- a/blockdev.h
+++ b/blockdev.h
@@ -38,6 +38,10 @@ struct DriveInfo {
     char serial[BLOCK_SERIAL_STRLEN + 1];
     QTAILQ_ENTRY(DriveInfo) next;
     int refcount;
+    int opened;
+    int bdrv_flags;
+    char *file;
+    BlockDriver *drv;
 };

 DriveInfo *drive_get(BlockInterfaceType type, int bus, int unit);
@@ -53,6 +57,8 @@ QemuOpts *drive_add(BlockInterfaceType type, int index, const char *file,
                     const char *optstr);
 DriveInfo *drive_init(QemuOpts *arg, int default_to_scsi);

+extern int drives_reinit(void);
+
 /* device-hotplug */

 DriveInfo *add_init_drive(const char *opts);
diff --git a/migration.c b/migration.c
index 4b17566..764b233 100644
--- a/migration.c
+++ b/migration.c
@@ -17,6 +17,7 @@
 #include "buffered_file.h"
 #include "sysemu.h"
 #include "block.h"
+#include "blockdev.h"
 #include "qemu_socket.h"
 #include "block-migration.h"
 #include "qmp-commands.h"
@@ -89,6 +90,11 @@ void process_incoming_migration(QEMUFile *f)
     qemu_announce_self();
     DPRINTF("successfully loaded vm state\n");

+    if (drives_reinit() != 0) {
+        fprintf(stderr, "reopening of drives failed\n");
+        exit(1);
+    }
+
     if (autostart) {
         vm_start();
     } else {
-- 
1.7.7


* [Qemu-devel] [PATCH 2/2] drive_open: Add invalidate option for block devices
  2011-11-09 19:16 [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Juan Quintela
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 1/2] Reopen files after migration Juan Quintela
@ 2011-11-09 19:16 ` Juan Quintela
  2011-11-10 11:33   ` Kevin Wolf
  2011-11-10 10:34 ` [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Stefan Hajnoczi
  2011-11-23 15:46 ` Juan Quintela
  3 siblings, 1 reply; 13+ messages in thread
From: Juan Quintela @ 2011-11-09 19:16 UTC (permalink / raw)
  To: qemu-devel

Linux allows invalidating the buffer cache of block devices.  This is
needed for the incoming migration side.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
 block.h           |    2 ++
 block/raw-posix.c |   24 ++++++++++++++++++++++++
 blockdev.c        |    8 ++++----
 3 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/block.h b/block.h
index 38cd748..517b446 100644
--- a/block.h
+++ b/block.h
@@ -61,6 +61,8 @@ typedef struct BlockDevOps {
 #define BDRV_O_NATIVE_AIO  0x0080 /* use native AIO instead of the thread pool */
 #define BDRV_O_NO_BACKING  0x0100 /* don't open the backing file */
 #define BDRV_O_NO_FLUSH    0x0200 /* disable flushing on this disk */
+#define BDRV_O_INVALIDATE  0x0400 /* invalidate buffer cache for this device.
+                                     re-read things from server */

 #define BDRV_O_CACHE_MASK  (BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH)

diff --git a/block/raw-posix.c b/block/raw-posix.c
index a3de373..84303a0 100644
--- a/block/raw-posix.c
+++ b/block/raw-posix.c
@@ -52,6 +52,7 @@
 #include <sys/param.h>
 #include <linux/cdrom.h>
 #include <linux/fd.h>
+#include <linux/fs.h>
 #endif
 #if defined (__FreeBSD__) || defined(__FreeBSD_kernel__)
 #include <sys/disk.h>
@@ -218,6 +219,29 @@ static int raw_open_common(BlockDriverState *bs, const char *filename,
     s->fd = fd;
     s->aligned_buf = NULL;

+#ifdef __linux__
+    if ((bdrv_flags & BDRV_O_INVALIDATE)) {
+        struct stat buf;
+        int res;
+
+        res = fstat(fd, &buf);
+
+        if (res < 0) {
+            return -errno;
+        }
+
+        if (S_ISBLK(buf.st_mode)) {
+            printf("we are in a block device: %s\n", filename);
+            res = ioctl(fd, BLKFLSBUF, 0);
+            if (res < 0) {
+                fprintf(stderr, "qemu: buffer invalidation of %s"
+                        " failed with error %d\n", filename, errno);
+                return -errno;
+            }
+        }
+    }
+#endif /* __linux__ */
+
     if ((bdrv_flags & BDRV_O_NOCACHE)) {
         /*
          * Allocate a buffer for read/modify/write cycles.  Chose the size
diff --git a/blockdev.c b/blockdev.c
index a10de7a..ea02ee7 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -217,10 +217,10 @@ static int parse_block_error_action(const char *buf, int is_read)
     }
 }

-static int drive_open(DriveInfo *dinfo)
+static int drive_open(DriveInfo *dinfo, int extra_flags)
 {
     int res = bdrv_open(dinfo->bdrv, dinfo->file,
-                        dinfo->bdrv_flags, dinfo->drv);
+                        dinfo->bdrv_flags | extra_flags, dinfo->drv);

     if (res < 0) {
         fprintf(stderr, "qemu: could not open disk image %s: %s\n",
@@ -237,7 +237,7 @@ int drives_reinit(void)
         if (dinfo->opened && !bdrv_is_read_only(dinfo->bdrv)) {
             int res;
             bdrv_close(dinfo->bdrv);
-            res = drive_open(dinfo);
+            res = drive_open(dinfo, BDRV_O_INVALIDATE);
             if (res) {
                 fprintf(stderr, "qemu: re-open of %s failed with error %d\n",
                         dinfo->file, res);
@@ -550,7 +550,7 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
     dinfo->drv = drv;
     dinfo->opened = 1;

-    if (drive_open(dinfo) < 0) {
+    if (drive_open(dinfo, 0) < 0) {
         goto err;
     }

-- 
1.7.7


* Re: [Qemu-devel] [PATCH 1/2] Reopen files after migration
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 1/2] Reopen files after migration Juan Quintela
@ 2011-11-09 20:00   ` Anthony Liguori
  2011-11-09 21:10     ` Juan Quintela
  2011-11-09 23:30   ` Lucas Meneghel Rodrigues
  2011-11-23 23:32   ` Anthony Liguori
  2 siblings, 1 reply; 13+ messages in thread
From: Anthony Liguori @ 2011-11-09 20:00 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

On 11/09/2011 01:16 PM, Juan Quintela wrote:
> We need to invalidate the Read Cache on the destination, otherwise we
> have corruption.  Easy way to reproduce it is:
>
> - create a qcow2 image
> - start qemu on destination of migration (qemu .... -incoming tcp:...)
> - start qemu on source of migration and do an install.
> - migrate at the end of install (when a lot of disk IO has happened).
>
> Destination of migration has a local copy of the L1/L2 tables that existed
> at the beginning, before the install started.  We have disk corruption at
> this point.  The solution (for NFS) is to just re-open the file.  Operations
> have to happen in this order:
>
> - source of migration: flush()
> - destination: close(file);
> - destination: open(file)
>
> it is not necessary that the source of migration close the file.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>

Couple thoughts:

1) Pretty sure this would break -snapshot.  I do test migration with -snapshot 
so please don't break it.

2) I don't think this is going to work very well with encrypted drives.

Perhaps we could do something like:

http://mid.gmane.org/1284213896-12705-2-git-send-email-aliguori@us.ibm.com

And do reopen as a default implementation.  That way we don't have to do reopen 
for formats that don't need it (raw) or can flush caches without reopening the 
file (qed).
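
Something along these lines, perhaps (all names invented; this is not
the API from that link):

/* Hypothetical per-format hook with a generic fallback: raw would set
 * a no-op, qed would drop its caches in place, and everything else
 * would get the default reopen. */
typedef struct BlockDriverSketch {
    int (*invalidate_cache)(void *opaque);  /* NULL => default reopen */
} BlockDriverSketch;

static int default_invalidate_cache(void *opaque)
{
    /* close and re-open the underlying file, as patch 1 does today */
    return 0;
}

static int bdrv_invalidate_cache_sketch(BlockDriverSketch *drv,
                                        void *opaque)
{
    if (drv->invalidate_cache) {
        return drv->invalidate_cache(opaque);
    }
    return default_invalidate_cache(opaque);
}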

It doesn't fix NFS close-to-open, but I think the right way to do that is to 
defer the open, not to reopen.

Regards,

Anthony Liguori

> ---
>   blockdev.c  |   43 ++++++++++++++++++++++++++++++++++++++-----
>   blockdev.h  |    6 ++++++
>   migration.c |    6 ++++++
>   3 files changed, 50 insertions(+), 5 deletions(-)
>
> diff --git a/blockdev.c b/blockdev.c
> index 0827bf7..a10de7a 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -182,6 +182,7 @@ static void drive_uninit(DriveInfo *dinfo)
>       qemu_opts_del(dinfo->opts);
>       bdrv_delete(dinfo->bdrv);
>       g_free(dinfo->id);
> +    g_free(dinfo->file);
>       QTAILQ_REMOVE(&drives, dinfo, next);
>       g_free(dinfo);
>   }
> @@ -216,6 +217,37 @@ static int parse_block_error_action(const char *buf, int is_read)
>       }
>   }
>
> +static int drive_open(DriveInfo *dinfo)
> +{
> +    int res = bdrv_open(dinfo->bdrv, dinfo->file,
> +                        dinfo->bdrv_flags, dinfo->drv);
> +
> +    if (res < 0) {
> +        fprintf(stderr, "qemu: could not open disk image %s: %s\n",
> +                        dinfo->file, strerror(-res));
> +    }
> +    return res;
> +}
> +
> +int drives_reinit(void)
> +{
> +    DriveInfo *dinfo;
> +
> +    QTAILQ_FOREACH(dinfo, &drives, next) {
> +        if (dinfo->opened && !bdrv_is_read_only(dinfo->bdrv)) {
> +            int res;
> +            bdrv_close(dinfo->bdrv);
> +            res = drive_open(dinfo);
> +            if (res) {
> +                fprintf(stderr, "qemu: re-open of %s failed with error %d\n",
> +                        dinfo->file, res);
> +                return res;
> +            }
> +        }
> +    }
> +    return 0;
> +}
> +
>   DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>   {
>       const char *buf;
> @@ -236,7 +268,6 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>       const char *devaddr;
>       DriveInfo *dinfo;
>       int snapshot = 0;
> -    int ret;
>
>       translation = BIOS_ATA_TRANSLATION_AUTO;
>       media = MEDIA_DISK;
> @@ -514,10 +545,12 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>
>       bdrv_flags |= ro ? 0 : BDRV_O_RDWR;
>
> -    ret = bdrv_open(dinfo->bdrv, file, bdrv_flags, drv);
> -    if (ret < 0) {
> -        error_report("could not open disk image %s: %s",
> -                     file, strerror(-ret));
> +    dinfo->file = g_strdup(file);
> +    dinfo->bdrv_flags = bdrv_flags;
> +    dinfo->drv = drv;
> +    dinfo->opened = 1;
> +
> +    if (drive_open(dinfo) < 0) {
>           goto err;
>       }
>
> diff --git a/blockdev.h b/blockdev.h
> index 3587786..733eb72 100644
> --- a/blockdev.h
> +++ b/blockdev.h
> @@ -38,6 +38,10 @@ struct DriveInfo {
>       char serial[BLOCK_SERIAL_STRLEN + 1];
>       QTAILQ_ENTRY(DriveInfo) next;
>       int refcount;
> +    int opened;
> +    int bdrv_flags;
> +    char *file;
> +    BlockDriver *drv;
>   };
>
>   DriveInfo *drive_get(BlockInterfaceType type, int bus, int unit);
> @@ -53,6 +57,8 @@ QemuOpts *drive_add(BlockInterfaceType type, int index, const char *file,
>                       const char *optstr);
>   DriveInfo *drive_init(QemuOpts *arg, int default_to_scsi);
>
> +extern int drives_reinit(void);
> +
>   /* device-hotplug */
>
>   DriveInfo *add_init_drive(const char *opts);
> diff --git a/migration.c b/migration.c
> index 4b17566..764b233 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -17,6 +17,7 @@
>   #include "buffered_file.h"
>   #include "sysemu.h"
>   #include "block.h"
> +#include "blockdev.h"
>   #include "qemu_socket.h"
>   #include "block-migration.h"
>   #include "qmp-commands.h"
> @@ -89,6 +90,11 @@ void process_incoming_migration(QEMUFile *f)
>       qemu_announce_self();
>       DPRINTF("successfully loaded vm state\n");
>
> +    if (drives_reinit() != 0) {
> +        fprintf(stderr, "reopening of drives failed\n");
> +        exit(1);
> +    }
> +
>       if (autostart) {
>           vm_start();
>       } else {


* Re: [Qemu-devel] [PATCH 1/2] Reopen files after migration
  2011-11-09 20:00   ` Anthony Liguori
@ 2011-11-09 21:10     ` Juan Quintela
  2011-11-09 21:16       ` Anthony Liguori
  0 siblings, 1 reply; 13+ messages in thread
From: Juan Quintela @ 2011-11-09 21:10 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: qemu-devel

Anthony Liguori <anthony@codemonkey.ws> wrote:
> On 11/09/2011 01:16 PM, Juan Quintela wrote:
>> We need to invalidate the Read Cache on the destination, otherwise we
>> have corruption.  Easy way to reproduce it is:
>>
>> - create a qcow2 image
>> - start qemu on destination of migration (qemu .... -incoming tcp:...)
>> - start qemu on source of migration and do an install.
>> - migrate at the end of install (when a lot of disk IO has happened).
>>
>> Destination of migration has a local copy of the L1/L2 tables that existed
>> at the beginning, before the install started.  We have disk corruption at
>> this point.  The solution (for NFS) is to just re-open the file.  Operations
>> have to happen in this order:
>>
>> - source of migration: flush()
>> - destination: close(file);
>> - destination: open(file)
>>
>> it is not necessary that the source of migration close the file.
>>
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> Couple thoughts:
>
> 1) Pretty sure this would break -snapshot.  I do test migration with
> -snapshot so please don't break it.

Can you give me one example?  I don't know how to use -snapshot with migration.

> 2) I don't think this is going to work very well with encrypted drives.

To be honest, no clue.

> Perhaps we could do something like:
>
> http://mid.gmane.org/1284213896-12705-2-git-send-email-aliguori@us.ibm.com

That is something like I wanted to know.

> And do reopen as a default implementation.  That way we don't have to
> do reopen for formats that don't need it (raw)

Kevin told me that now that we allow online resize, we should also
update that for raw, but I haven't tested to be sure one way or another.

> or can flush caches without reopening the file (qed).

qcow2 could be told to flush its caches; it is just that the code is
not there.  It shouldn't be _that_ difficult.  But I am no longer able
to understand the block_open <-> block_file_open relationship.

> It doesn't fix NFS close-to-open, but I think the right way to do that
> is to defer the open, not to reopen.

Fully agree here; that would be another way to fix it.  Note that in
my other answer I showed that Markus already has problems with ide +
cmos, so I think that we should have:

- initialization done before we open files/block/<whatever you call it>
- open files/block/...
- late initialization that uses that (almost nothing needs to be here
  and should be easy to audit).

About NFS, iSCSI and FC, my understanding is that if you use anything
other than cache=none you are playing with fire, and will get burned
sooner or later (it took quite a bit for Christoph to make me
understand that, but now I fully agree with him).

Later, Juan.


* Re: [Qemu-devel] [PATCH 1/2] Reopen files after migration
  2011-11-09 21:10     ` Juan Quintela
@ 2011-11-09 21:16       ` Anthony Liguori
  2011-11-10 11:30         ` Kevin Wolf
  0 siblings, 1 reply; 13+ messages in thread
From: Anthony Liguori @ 2011-11-09 21:16 UTC (permalink / raw)
  To: quintela; +Cc: qemu-devel

On 11/09/2011 03:10 PM, Juan Quintela wrote:
> Anthony Liguori <anthony@codemonkey.ws> wrote:
>> On 11/09/2011 01:16 PM, Juan Quintela wrote:
>>> We need to invalidate the Read Cache on the destination, otherwise we
>>> have corruption.  Easy way to reproduce it is:
>>>
>>> - create a qcow2 image
>>> - start qemu on destination of migration (qemu .... -incoming tcp:...)
>>> - start qemu on source of migration and do an install.
>>> - migrate at the end of install (when a lot of disk IO has happened).
>>>
>>> Destination of migration has a local copy of the L1/L2 tables that existed
>>> at the beginning, before the install started.  We have disk corruption at
>>> this point.  The solution (for NFS) is to just re-open the file.  Operations
>>> have to happen in this order:
>>>
>>> - source of migration: flush()
>>> - destination: close(file);
>>> - destination: open(file)
>>>
>>> it is not necessary that the source of migration close the file.
>>>
>>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>>
>> Couple thoughts:
>>
>> 1) Pretty sure this would break -snapshot.  I do test migration with
>> -snapshot so please don't break it.
>
> Can you give me one example?  I don't know how to use -snapshot with migration.

This is totally unsafe but has always worked for me.  On the same box:

$ qemu -hda foo.img -snapshot

$ qemu -hda foo.img -snapshot -incoming tcp:localhost:1025

This is not the *only* way I test migration but it's very convenient for sniff 
testing.  The problem with your patch is that it assumes that once you've opened 
a file, the name still exists.  But that is not universally true.  It needs to 
degrade in a useful way.

I think just deferring open is probably the best strategy.

>
>> 2) I don't think this is going to work very well with encrypted drives.
>
>> To be honest, no clue.

Deferring open addresses this in a nice way, I think.

>> Perhaps we could do something like:
>>
>> http://mid.gmane.org/1284213896-12705-2-git-send-email-aliguori@us.ibm.com
>
> That is something like I wanted to know.
>
>> And do reopen as a default implementation.  That way we don't have to
>> do reopen for formats that don't need it (raw)
>
>> Kevin told me that now that we allow online resize, we should also
>> update that for raw, but I haven't tested to be sure one way or another.
>
>> or can flush caches without reopening the file (qed).
>
>> qcow2 could be told to flush its caches; it is just that the code is
>> not there.  It shouldn't be _that_ difficult.  But I am no longer able
>> to understand the block_open <-> block_file_open relationship.
>
>> It doesn't fix NFS close-to-open, but I think the right way to do that
>> is to defer the open, not to reopen.
>
> Fully agree here; that would be another way to fix it.  Note that in
> my other answer I showed that Markus already has problems with ide +
> cmos, so I think that we should have:

I've posted patches that delay the geometry guess until the device model is 
initialized.  That avoids this particular problem.

Regards,

Anthony Liguori

>
> - initialization done before we open files/block/<whatever you call it>
> - open files/block/...
> - late initialization that uses that (almost nothing needs to be here
>    and should be easy to audit).
>
> About NFS, iSCSI and FC, my understanding is that if you use anything
> other than cache=none you are playing with fire, and will get burned
> sooner or later (it took quite a bit for Christoph to make me
> understand that, but now I fully agree with him).
>
> Later, Juan.


* Re: [Qemu-devel] [PATCH 1/2] Reopen files after migration
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 1/2] Reopen files after migration Juan Quintela
  2011-11-09 20:00   ` Anthony Liguori
@ 2011-11-09 23:30   ` Lucas Meneghel Rodrigues
  2011-11-23 23:32   ` Anthony Liguori
  2 siblings, 0 replies; 13+ messages in thread
From: Lucas Meneghel Rodrigues @ 2011-11-09 23:30 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

On 11/09/2011 05:16 PM, Juan Quintela wrote:
> We need to invalidate the Read Cache on the destination, otherwise we
> have corruption.  Easy way to reproduce it is:
>
> - create a qcow2 image
> - start qemu on destination of migration (qemu .... -incoming tcp:...)
> - start qemu on source of migration and do an install.
> - migrate at the end of install (when a lot of disk IO has happened).
>
> Destination of migration has a local copy of the L1/L2 tables that existed
> at the beginning, before the install started.  We have disk corruption at
> this point.  The solution (for NFS) is to just re-open the file.  Operations
> have to happen in this order:
>
> - source of migration: flush()
> - destination: close(file);
> - destination: open(file)

I've run two test jobs that run the autotest stress test on a VM that
is going through ping-pong background migration, using three migration
protocols (tcp, unix and exec):

* qemu-kvm.git
* qemu-kvm.git patched with this patch

With your patch all tests PASS; with the unpatched tree, all of them
FAIL.  So your solution improves the situation quite dramatically.

> it is not necessary that the source of migration close the file.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>   blockdev.c  |   43 ++++++++++++++++++++++++++++++++++++++-----
>   blockdev.h  |    6 ++++++
>   migration.c |    6 ++++++
>   3 files changed, 50 insertions(+), 5 deletions(-)
>
> diff --git a/blockdev.c b/blockdev.c
> index 0827bf7..a10de7a 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -182,6 +182,7 @@ static void drive_uninit(DriveInfo *dinfo)
>       qemu_opts_del(dinfo->opts);
>       bdrv_delete(dinfo->bdrv);
>       g_free(dinfo->id);
> +    g_free(dinfo->file);
>       QTAILQ_REMOVE(&drives, dinfo, next);
>       g_free(dinfo);
>   }
> @@ -216,6 +217,37 @@ static int parse_block_error_action(const char *buf, int is_read)
>       }
>   }
>
> +static int drive_open(DriveInfo *dinfo)
> +{
> +    int res = bdrv_open(dinfo->bdrv, dinfo->file,
> +                        dinfo->bdrv_flags, dinfo->drv);
> +
> +    if (res < 0) {
> +        fprintf(stderr, "qemu: could not open disk image %s: %s\n",
> +                        dinfo->file, strerror(-res));
> +    }
> +    return res;
> +}
> +
> +int drives_reinit(void)
> +{
> +    DriveInfo *dinfo;
> +
> +    QTAILQ_FOREACH(dinfo, &drives, next) {
> +        if (dinfo->opened && !bdrv_is_read_only(dinfo->bdrv)) {
> +            int res;
> +            bdrv_close(dinfo->bdrv);
> +            res = drive_open(dinfo);
> +            if (res) {
> +                fprintf(stderr, "qemu: re-open of %s failed with error %d\n",
> +                        dinfo->file, res);
> +                return res;
> +            }
> +        }
> +    }
> +    return 0;
> +}
> +
>   DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>   {
>       const char *buf;
> @@ -236,7 +268,6 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>       const char *devaddr;
>       DriveInfo *dinfo;
>       int snapshot = 0;
> -    int ret;
>
>       translation = BIOS_ATA_TRANSLATION_AUTO;
>       media = MEDIA_DISK;
> @@ -514,10 +545,12 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>
>       bdrv_flags |= ro ? 0 : BDRV_O_RDWR;
>
> -    ret = bdrv_open(dinfo->bdrv, file, bdrv_flags, drv);
> -    if (ret < 0) {
> -        error_report("could not open disk image %s: %s",
> -                     file, strerror(-ret));
> +    dinfo->file = g_strdup(file);
> +    dinfo->bdrv_flags = bdrv_flags;
> +    dinfo->drv = drv;
> +    dinfo->opened = 1;
> +
> +    if (drive_open(dinfo) < 0) {
>           goto err;
>       }
>
> diff --git a/blockdev.h b/blockdev.h
> index 3587786..733eb72 100644
> --- a/blockdev.h
> +++ b/blockdev.h
> @@ -38,6 +38,10 @@ struct DriveInfo {
>       char serial[BLOCK_SERIAL_STRLEN + 1];
>       QTAILQ_ENTRY(DriveInfo) next;
>       int refcount;
> +    int opened;
> +    int bdrv_flags;
> +    char *file;
> +    BlockDriver *drv;
>   };
>
>   DriveInfo *drive_get(BlockInterfaceType type, int bus, int unit);
> @@ -53,6 +57,8 @@ QemuOpts *drive_add(BlockInterfaceType type, int index, const char *file,
>                       const char *optstr);
>   DriveInfo *drive_init(QemuOpts *arg, int default_to_scsi);
>
> +extern int drives_reinit(void);
> +
>   /* device-hotplug */
>
>   DriveInfo *add_init_drive(const char *opts);
> diff --git a/migration.c b/migration.c
> index 4b17566..764b233 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -17,6 +17,7 @@
>   #include "buffered_file.h"
>   #include "sysemu.h"
>   #include "block.h"
> +#include "blockdev.h"
>   #include "qemu_socket.h"
>   #include "block-migration.h"
>   #include "qmp-commands.h"
> @@ -89,6 +90,11 @@ void process_incoming_migration(QEMUFile *f)
>       qemu_announce_self();
>       DPRINTF("successfully loaded vm state\n");
>
> +    if (drives_reinit() != 0) {
> +        fprintf(stderr, "reopening of drives failed\n");
> +        exit(1);
> +    }
> +
>       if (autostart) {
>           vm_start();
>       } else {


* Re: [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel
  2011-11-09 19:16 [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Juan Quintela
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 1/2] Reopen files after migration Juan Quintela
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 2/2] drive_open: Add invalidate option for block devices Juan Quintela
@ 2011-11-10 10:34 ` Stefan Hajnoczi
  2011-11-23 15:46 ` Juan Quintela
  3 siblings, 0 replies; 13+ messages in thread
From: Stefan Hajnoczi @ 2011-11-10 10:34 UTC (permalink / raw)
  To: Juan Quintela; +Cc: Josh Durgin, qemu-devel

On Wed, Nov 9, 2011 at 7:16 PM, Juan Quintela <quintela@redhat.com> wrote:
> This series is sent as an RFC because we need to arrive at a solution
> for qcow2.  In RHEL5/6 we have the equivalent of patch 1 integrated to
> fix this issue.

We need to solve this for all block drivers, not just qcow2.

Josh: Have you ever tested live migration with rbd?  Just want to
check if there are any issues to be aware of.  I asked on #ceph and it
seems multiple initiators to a RADOS block device are allowed - this
means Ceph doesn't impose an additional requirement beyond what we're
trying to solve for file systems here.

Let me share the situation with QED.  Opening the image file causes
the L1 table to be loaded and reading the first cluster will also
cache an L2 table.  Furthermore, QED has a dirty bit in the file
header to detect unclean shutdown.  The dirty bit will be triggered if
both source and destination machines have the image open for
read-write access simultaneously.  Because of this we either need to
delay opening the image on the destination or we need to open the
image in read-only mode on the destination and then reopen in the
actual mode once the destination has flushed.

I think the delayed open solution is cleanest but probably requires a
final state during live migration where the source says, "I've
transferred everything now, please prepare to take over".  At that
point the destination can open all block devices and if there is an
error can still fail migration.  Unfortunately doing this at the
critical point during live migration means that there is a
latency/timeout situation in case the destination has trouble opening
block devices.  We'd want to abort migration and continue running on
the source but a large timeout means long down-time during failed
migration.
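
As a rough sketch of that final handshake (the phase names are
invented; this is only the shape of the "prepare to take over" step,
not the real migration protocol):

enum mig_phase { MIG_TRANSFER, MIG_PREPARE_TAKEOVER, MIG_COMPLETE };

static int open_all_drives(void)
{
    /* would bdrv_open() every configured drive; 0 on success */
    return 0;
}

static int incoming_handle_phase(enum mig_phase phase)
{
    switch (phase) {
    case MIG_TRANSFER:
        /* receive RAM and device state; block devices stay closed */
        return 0;
    case MIG_PREPARE_TAKEOVER:
        /* open block devices only now; on failure the source can
         * still resume, but the open has to finish inside the
         * downtime window -- hence the latency/timeout worry */
        return open_all_drives();
    case MIG_COMPLETE:
        return 0;
    }
    return -1;
}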

Thoughts?

Stefan


* Re: [Qemu-devel] [PATCH 1/2] Reopen files after migration
  2011-11-09 21:16       ` Anthony Liguori
@ 2011-11-10 11:30         ` Kevin Wolf
  0 siblings, 0 replies; 13+ messages in thread
From: Kevin Wolf @ 2011-11-10 11:30 UTC (permalink / raw)
  To: Anthony Liguori; +Cc: qemu-devel, quintela

On 09.11.2011 22:16, Anthony Liguori wrote:
> On 11/09/2011 03:10 PM, Juan Quintela wrote:
>> Anthony Liguori <anthony@codemonkey.ws> wrote:
>>> On 11/09/2011 01:16 PM, Juan Quintela wrote:
>>>> We need to invalidate the Read Cache on the destination, otherwise we
>>>> have corruption.  Easy way to reproduce it is:
>>>>
>>>> - create a qcow2 image
>>>> - start qemu on destination of migration (qemu .... -incoming tcp:...)
>>>> - start qemu on source of migration and do an install.
>>>> - migrate at the end of install (when a lot of disk IO has happened).
>>>>
>>>> Destination of migration has a local copy of the L1/L2 tables that existed
>>>> at the beginning, before the install started.  We have disk corruption at
>>>> this point.  The solution (for NFS) is to just re-open the file.  Operations
>>>> have to happen in this order:
>>>>
>>>> - source of migration: flush()
>>>> - destination: close(file);
>>>> - destination: open(file)
>>>>
>>>> it is not necessary that the source of migration close the file.
>>>>
>>>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>>>
>>> Couple thoughts:
>>>
>>> 1) Pretty sure this would break -snapshot.  I do test migration with
>>> -snapshot so please don't break it.
>>
>> Can you give me one example?  I don't know how to use -snapshot with migration.
> 
> This is totally unsafe but has always worked for me.  On the same box:
> 
> $ qemu -hda foo.img -snapshot
> 
> $ qemu -hda foo.img -snapshot -incoming tcp:localhost:1025

It's always amazing to see how people depend on insane things like this. :-)

> This is not the *only* way I test migration but it's very convenient for sniff 
> testing.  The problem with your patch is that it assumes that once you've opened 
> a file, the name still exists.  But that is not universally true.  It needs to 
> degrade in a useful way.
> 
> I think just deferring open is probably the best strategy.

One of the problems with deferring open was that we want to detect
simple errors (typo in the file name, or whatever) before doing the real
live migration. The proposed solution was a read-only open/close
sequence at the start, but I believe this would break your use of
-snapshot as well.
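
i.e. something like this sketch (an invented helper, not code from
this series):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Early sanity check: open the image read-only once at startup to
 * catch typos and permission problems, then close it again and defer
 * the real open until the migration hand-over. */
static int probe_image(const char *filename)
{
    int fd = open(filename, O_RDONLY);
    if (fd < 0) {
        perror(filename);
        return -1;
    }
    close(fd);
    return 0;
}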

Kevin


* Re: [Qemu-devel] [PATCH 2/2] drive_open: Add invalidate option for block devices
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 2/2] drive_open: Add invalidate option for block devices Juan Quintela
@ 2011-11-10 11:33   ` Kevin Wolf
  2011-11-10 16:45     ` Juan Quintela
  0 siblings, 1 reply; 13+ messages in thread
From: Kevin Wolf @ 2011-11-10 11:33 UTC (permalink / raw)
  To: Juan Quintela; +Cc: qemu-devel

On 09.11.2011 20:16, Juan Quintela wrote:
> Linux allows invalidating the buffer cache of block devices.  This is
> needed for the incoming migration side.
> 
> Signed-off-by: Juan Quintela <quintela@redhat.com>

I think Christoph said that this ioctl kills ramdisks? Or was that
something different?

Kevin


* Re: [Qemu-devel] [PATCH 2/2] drive_open: Add invalidate option for block devices
  2011-11-10 11:33   ` Kevin Wolf
@ 2011-11-10 16:45     ` Juan Quintela
  0 siblings, 0 replies; 13+ messages in thread
From: Juan Quintela @ 2011-11-10 16:45 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: qemu-devel

Kevin Wolf <kwolf@redhat.com> wrote:
> On 09.11.2011 20:16, Juan Quintela wrote:
>> Linux allows invalidating the buffer cache of block devices.  This is
>> needed for the incoming migration side.
>> 
>> Signed-off-by: Juan Quintela <quintela@redhat.com>
>
> I think Christoph said that this ioctl kills ramdisks? Or was that
> something different?

In patch 0/2 I said that I was not proposing this: it "kind of" fixed
the problem for iSCSI and Linux, but it is not "reliable".

Patch 2/2 was meant to start discussion and to show the problem we had
to fix, not to be a "solution".

As said during the discussion:
- clustered filesystems: they are good
- non-coherent shared storage (NFS, iSCSI, ...): needs cache=none,
  anything else is insane
- formats (qcow2): need a reopen, or at least a metadata reload.

And the discussion is about how to go on from here.

Later, Juan.


* Re: [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel
  2011-11-09 19:16 [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Juan Quintela
                   ` (2 preceding siblings ...)
  2011-11-10 10:34 ` [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Stefan Hajnoczi
@ 2011-11-23 15:46 ` Juan Quintela
  3 siblings, 0 replies; 13+ messages in thread
From: Juan Quintela @ 2011-11-23 15:46 UTC (permalink / raw)
  To: qemu-devel
  Cc: Blue Swirl, Andrzej Zaborowski, Michael Walle, Richard Henderson,
	Edgar E. Iglesias

Juan Quintela <quintela@redhat.com> wrote:

<snip>

Hi

I did a stupid thing and sent the wrong directory of patches; sorry
for any inconvenience.

/me writes 100 times: I have to re-read the whole command line before
sending patches

/me writes another 100 times

Later, Juan.


* Re: [Qemu-devel] [PATCH 1/2] Reopen files after migration
  2011-11-09 19:16 ` [Qemu-devel] [PATCH 1/2] Reopen files after migration Juan Quintela
  2011-11-09 20:00   ` Anthony Liguori
  2011-11-09 23:30   ` Lucas Meneghel Rodrigues
@ 2011-11-23 23:32   ` Anthony Liguori
  2 siblings, 0 replies; 13+ messages in thread
From: Anthony Liguori @ 2011-11-23 23:32 UTC (permalink / raw)
  To: Juan Quintela
  Cc: Andrzej Zaborowski, qemu-devel, Blue Swirl, Michael Walle,
	Edgar E. Iglesias, Richard Henderson

On 11/23/2011 09:34 AM, Juan Quintela wrote:
> We need to invalidate the Read Cache on the destination, otherwise we
> have corruption.  Easy way to reproduce it is:
>
> - create a qcow2 image
> - start qemu on destination of migration (qemu .... -incoming tcp:...)
> - start qemu on source of migration and do an install.
> - migrate at the end of install (when a lot of disk IO has happened).

Have you actually tried this on master?  It should work because of:

commit 06d9260ffa9dfa0e96e015501e43480ab66f96f6
Author: Anthony Liguori <aliguori@us.ibm.com>
Date:   Mon Nov 14 15:09:46 2011 -0600

     qcow2: implement bdrv_invalidate_cache (v2)

> Destination of migration has a local copy of the L1/L2 tables that existed
> at the beginning, before the install started.  We have disk corruption at
> this point.  The solution (for NFS) is to just re-open the file.  Operations
> have to happen in this order:
>
> - source of migration: flush()
> - destination: close(file);
> - destination: open(file)

You cannot reliably coordinate this with this series.  You never
actually close the file on the source, so I can't see how it would even
work.

I thought we had a long discussion on this and all agreed that opening O_DIRECT 
and fcntl()'ing it away was the best solution here?
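
That is, something like this sketch (on Linux, F_SETFL can toggle
O_DIRECT on an already-open descriptor):

#define _GNU_SOURCE  /* for O_DIRECT */
#include <fcntl.h>

/* Open the image with O_DIRECT on the incoming side so nothing gets
 * cached, then drop the flag once migration has completed and the
 * user did not ask for cache=none. */
static int drop_o_direct(int fd)
{
    int flags = fcntl(fd, F_GETFL);
    if (flags < 0) {
        return -1;
    }
    return fcntl(fd, F_SETFL, flags & ~O_DIRECT);
}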

Regards,

Anthony Liguori

>
> it is not necessary that the source of migration close the file.
>
> Signed-off-by: Juan Quintela <quintela@redhat.com>
> ---
>   blockdev.c  |   43 ++++++++++++++++++++++++++++++++++++++-----
>   blockdev.h  |    6 ++++++
>   migration.c |    6 ++++++
>   3 files changed, 50 insertions(+), 5 deletions(-)
>
> diff --git a/blockdev.c b/blockdev.c
> index 0827bf7..a10de7a 100644
> --- a/blockdev.c
> +++ b/blockdev.c
> @@ -182,6 +182,7 @@ static void drive_uninit(DriveInfo *dinfo)
>       qemu_opts_del(dinfo->opts);
>       bdrv_delete(dinfo->bdrv);
>       g_free(dinfo->id);
> +    g_free(dinfo->file);
>       QTAILQ_REMOVE(&drives, dinfo, next);
>       g_free(dinfo);
>   }
> @@ -216,6 +217,37 @@ static int parse_block_error_action(const char *buf, int is_read)
>       }
>   }
>
> +static int drive_open(DriveInfo *dinfo)
> +{
> +    int res = bdrv_open(dinfo->bdrv, dinfo->file,
> +                        dinfo->bdrv_flags, dinfo->drv);
> +
> +    if (res < 0) {
> +        fprintf(stderr, "qemu: could not open disk image %s: %s\n",
> +                        dinfo->file, strerror(-res));
> +    }
> +    return res;
> +}
> +
> +int drives_reinit(void)
> +{
> +    DriveInfo *dinfo;
> +
> +    QTAILQ_FOREACH(dinfo, &drives, next) {
> +        if (dinfo->opened && !bdrv_is_read_only(dinfo->bdrv)) {
> +            int res;
> +            bdrv_close(dinfo->bdrv);
> +            res = drive_open(dinfo);
> +            if (res) {
> +                fprintf(stderr, "qemu: re-open of %s failed with error %d\n",
> +                        dinfo->file, res);
> +                return res;
> +            }
> +        }
> +    }
> +    return 0;
> +}
> +
>   DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>   {
>       const char *buf;
> @@ -236,7 +268,6 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>       const char *devaddr;
>       DriveInfo *dinfo;
>       int snapshot = 0;
> -    int ret;
>
>       translation = BIOS_ATA_TRANSLATION_AUTO;
>       media = MEDIA_DISK;
> @@ -514,10 +545,12 @@ DriveInfo *drive_init(QemuOpts *opts, int default_to_scsi)
>
>       bdrv_flags |= ro ? 0 : BDRV_O_RDWR;
>
> -    ret = bdrv_open(dinfo->bdrv, file, bdrv_flags, drv);
> -    if (ret < 0) {
> -        error_report("could not open disk image %s: %s",
> -                     file, strerror(-ret));
> +    dinfo->file = g_strdup(file);
> +    dinfo->bdrv_flags = bdrv_flags;
> +    dinfo->drv = drv;
> +    dinfo->opened = 1;
> +
> +    if (drive_open(dinfo) < 0) {
>           goto err;
>       }
>
> diff --git a/blockdev.h b/blockdev.h
> index 3587786..733eb72 100644
> --- a/blockdev.h
> +++ b/blockdev.h
> @@ -38,6 +38,10 @@ struct DriveInfo {
>       char serial[BLOCK_SERIAL_STRLEN + 1];
>       QTAILQ_ENTRY(DriveInfo) next;
>       int refcount;
> +    int opened;
> +    int bdrv_flags;
> +    char *file;
> +    BlockDriver *drv;
>   };
>
>   DriveInfo *drive_get(BlockInterfaceType type, int bus, int unit);
> @@ -53,6 +57,8 @@ QemuOpts *drive_add(BlockInterfaceType type, int index, const char *file,
>                       const char *optstr);
>   DriveInfo *drive_init(QemuOpts *arg, int default_to_scsi);
>
> +extern int drives_reinit(void);
> +
>   /* device-hotplug */
>
>   DriveInfo *add_init_drive(const char *opts);
> diff --git a/migration.c b/migration.c
> index 4b17566..764b233 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -17,6 +17,7 @@
>   #include "buffered_file.h"
>   #include "sysemu.h"
>   #include "block.h"
> +#include "blockdev.h"
>   #include "qemu_socket.h"
>   #include "block-migration.h"
>   #include "qmp-commands.h"
> @@ -89,6 +90,11 @@ void process_incoming_migration(QEMUFile *f)
>       qemu_announce_self();
>       DPRINTF("successfully loaded vm state\n");
>
> +    if (drives_reinit() != 0) {
> +        fprintf(stderr, "reopening of drives failed\n");
> +        exit(1);
> +    }
> +
>       if (autostart) {
>           vm_start();
>       } else {


Thread overview: 13+ messages
2011-11-09 19:16 [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Juan Quintela
2011-11-09 19:16 ` [Qemu-devel] [PATCH 1/2] Reopen files after migration Juan Quintela
2011-11-09 20:00   ` Anthony Liguori
2011-11-09 21:10     ` Juan Quintela
2011-11-09 21:16       ` Anthony Liguori
2011-11-10 11:30         ` Kevin Wolf
2011-11-09 23:30   ` Lucas Meneghel Rodrigues
2011-11-23 23:32   ` Anthony Liguori
2011-11-09 19:16 ` [Qemu-devel] [PATCH 2/2] drive_open: Add invalidate option for block devices Juan Quintela
2011-11-10 11:33   ` Kevin Wolf
2011-11-10 16:45     ` Juan Quintela
2011-11-10 10:34 ` [Qemu-devel] [RFC PATCH 0/2] Fix migration with NFS & iscsi/Fiber channel Stefan Hajnoczi
2011-11-23 15:46 ` Juan Quintela
