qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [PATCH v2] sheepdog: fix loadvm operation
@ 2013-04-24 16:47 Liu Yuan
  2013-04-25  4:12 ` Liu Yuan
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Liu Yuan @ 2013-04-24 16:47 UTC (permalink / raw)
  To: sheepdog; +Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel, MORITA Kazutaka

From: Liu Yuan <tailai.ly@taobao.com>

Currently the 'loadvm' operation works as follows:
1. switch to the snapshot
2. mark current working VDI as a snapshot
3. rely on sd_create_branch to create a new working VDI based on the snapshot

This does not work the same way as other formats such as QCOW2. For example:

qemu > savevm # get a live snapshot snap1
qemu > savevm # snap2
qemu > loadvm 1 # This will stealthily create snap3 of the working VDI

This will result in the following snapshot chain:

base <-- snap1 <-- snap2 <-- snap3
          ^
          |
      working VDI

snap3 is created unnecessarily and might annoy users.

This patch discards the unnecessary 'snap3' creation and implements the
rollback (loadvm) operation to the specified snapshot by:
1. switch to the snapshot
2. delete working VDI
3. rely on sd_create_branch to create a new working VDI based on the snapshot

The snapshot chain for the above example will be:

base <-- snap1 <-- snap2
          ^
          |
      working VDI

Cc: qemu-devel@nongnu.org
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
---
v2:
 - use do_req() because sd_delete isn't run in coroutine context
 - don't break the old behavior when we boot up from a snapshot, using
   s->reverted to indicate whether the working VDI was deleted successfully
 - fix a subtle case where sd_create_branch() hasn't been called yet when
   another 'loadvm' is executed

 block/sheepdog.c |   52 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 51 insertions(+), 1 deletion(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 2fe0783..43829a7 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -36,6 +36,7 @@
 #define SD_OP_GET_VDI_INFO   0x14
 #define SD_OP_READ_VDIS      0x15
 #define SD_OP_FLUSH_VDI      0x16
+#define SD_OP_DEL_VDI        0x17
 
 #define SD_FLAG_CMD_WRITE    0x01
 #define SD_FLAG_CMD_COW      0x02
@@ -301,6 +302,7 @@ typedef struct BDRVSheepdogState {
     bool is_snapshot;
     uint32_t cache_flags;
     bool discard_supported;
+    bool reverted;
 
     char *host_spec;
     bool is_unix;
@@ -1553,7 +1555,9 @@ static int sd_create_branch(BDRVSheepdogState *s)
 
     buf = g_malloc(SD_INODE_SIZE);
 
-    ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid, 1);
+    /* If reverted, create a working VDI; otherwise create a snapshot */
+    ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid,
+                       !s->reverted);
     if (ret) {
         goto out;
     }
@@ -1578,6 +1582,7 @@ static int sd_create_branch(BDRVSheepdogState *s)
     memcpy(&s->inode, buf, sizeof(s->inode));
 
     s->is_snapshot = false;
+    s->reverted = false;
     ret = 0;
     dprintf("%" PRIx32 " was newly created.\n", s->inode.vdi_id);
 
@@ -1869,6 +1874,41 @@ cleanup:
     return ret;
 }
 
+/* Delete the current working VDI by name */
+static int sd_delete(BDRVSheepdogState *s, char *name)
+{
+    unsigned int wlen = SD_MAX_VDI_LEN, rlen = 0;
+    SheepdogVdiReq hdr = {
+        .opcode = SD_OP_DEL_VDI,
+        .vdi_id = s->inode.vdi_id,
+        .data_length = wlen,
+        .flags = SD_FLAG_CMD_WRITE,
+    };
+    SheepdogVdiRsp *rsp = (SheepdogVdiRsp *)&hdr;
+    int fd, ret;
+
+    fd = connect_to_sdog(s);
+    if (fd < 0) {
+        return fd;
+    }
+
+    ret = do_req(fd, (SheepdogReq *)&hdr, name, &wlen, &rlen);
+    closesocket(fd);
+    if (ret || (rsp->result != SD_RES_SUCCESS &&
+                rsp->result != SD_RES_NO_VDI)) {
+        error_report("%s, %s", sd_strerror(rsp->result), name);
+        return -1;
+    }
+
+    return 0;
+}
+
+/*
+ * We implement rollback(loadvm) operation to the specified snapshot by
+ * 1) switch to the snapshot
+ * 2) delete working VDI
+ * 3) rely on sd_create_branch to create a new working VDI based on the snapshot
+ */
 static int sd_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
 {
     BDRVSheepdogState *s = bs->opaque;
@@ -1924,6 +1964,16 @@ static int sd_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
 
     s->is_snapshot = true;
 
+    /*
+     * Even if deletion fails, we will just create an extra snapshot based
+     * on the working VDI that was supposed to be deleted, so there is no
+     * need to bail out on failure.
+     */
+    ret = sd_delete(s, vdi);
+    if (!ret) {
+        s->reverted = true;
+    }
+
     g_free(buf);
     g_free(old_s);
 
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [Qemu-devel] [PATCH v2] sheepdog: fix loadvm operation
  2013-04-24 16:47 [Qemu-devel] [PATCH v2] sheepdog: fix loadvm operation Liu Yuan
@ 2013-04-25  4:12 ` Liu Yuan
  2013-04-25  8:11   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
  2013-04-25  8:42 ` [Qemu-devel] [PATCH v3] " Liu Yuan
  2013-04-25 12:49 ` [Qemu-devel] [PATCH v4] " Liu Yuan
  2 siblings, 1 reply; 15+ messages in thread
From: Liu Yuan @ 2013-04-25  4:12 UTC (permalink / raw)
  To: sheepdog; +Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel, MORITA Kazutaka

On 04/25/2013 12:47 AM, Liu Yuan wrote:
> - don't break the old behavior when we boot up from a snapshot, using
>   s->reverted to indicate whether the working VDI was deleted successfully

If we implement 'boot from snapshot' == loadvm snapshot, we don't need
s->reverted. What do you think, Kazutaka? With this idea, qemu -hda
sheepdog:test:id will be virtually the same as qemu -hda sheepdog:test
-loadvm id.

I don't understand the use case for snapshotting the current working VDI
and then restoring to the specified snapshot.

Thanks,
Yuan


* Re: [Qemu-devel] [sheepdog] [PATCH v2] sheepdog: fix loadvm operation
  2013-04-25  4:12 ` Liu Yuan
@ 2013-04-25  8:11   ` MORITA Kazutaka
  2013-04-25  8:27     ` Liu Yuan
  0 siblings, 1 reply; 15+ messages in thread
From: MORITA Kazutaka @ 2013-04-25  8:11 UTC (permalink / raw)
  To: Liu Yuan; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

At Thu, 25 Apr 2013 12:12:58 +0800,
Liu Yuan wrote:
> 
> On 04/25/2013 12:47 AM, Liu Yuan wrote:
> > - don't break the old behavior when we boot up from a snapshot, using
> >   s->reverted to indicate whether the working VDI was deleted successfully
> 
> If we implement 'boot from snapshot' == loadvm snapshot, we don't need
> s->reverted. What do you think, Kazutaka? With this idea, qemu -hda
> sheepdog:test:id will be virtually the same as qemu -hda sheepdog:test
> -loadvm id.
> 
> I don't understand the use case for snapshotting the current working VDI
> and then restoring to the specified snapshot.

Are you suggesting that booting from snapshot should discard the
current state like savevm?  If yes, it looks okay to me.

Thanks,

Kazutaka


* Re: [Qemu-devel] [sheepdog] [PATCH v2] sheepdog: fix loadvm operation
  2013-04-25  8:11   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
@ 2013-04-25  8:27     ` Liu Yuan
  0 siblings, 0 replies; 15+ messages in thread
From: Liu Yuan @ 2013-04-25  8:27 UTC (permalink / raw)
  To: MORITA Kazutaka; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

On 04/25/2013 04:11 PM, MORITA Kazutaka wrote:
> Are you suggesting that booting from snapshot should discard the
> current state like savevm?  If yes, it looks okay to me.

Yeah, this will make the code cleaner. I'll update in v3.

Thanks
Yuan


* [Qemu-devel] [PATCH v3] sheepdog: fix loadvm operation
  2013-04-24 16:47 [Qemu-devel] [PATCH v2] sheepdog: fix loadvm operation Liu Yuan
  2013-04-25  4:12 ` Liu Yuan
@ 2013-04-25  8:42 ` Liu Yuan
  2013-04-25  9:40   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
  2013-04-25 12:49 ` [Qemu-devel] [PATCH v4] " Liu Yuan
  2 siblings, 1 reply; 15+ messages in thread
From: Liu Yuan @ 2013-04-25  8:42 UTC (permalink / raw)
  To: sheepdog; +Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel, MORITA Kazutaka

From: Liu Yuan <tailai.ly@taobao.com>

Currently the 'loadvm' operation works as follows:
1. switch to the snapshot
2. mark current working VDI as a snapshot
3. rely on sd_create_branch to create a new working VDI based on the snapshot

This does not work the same way as other formats such as QCOW2. For example:

qemu > savevm # get a live snapshot snap1
qemu > savevm # snap2
qemu > loadvm 1 # This will stealthily create snap3 of the working VDI

This will result in the following snapshot chain:

base <-- snap1 <-- snap2 <-- snap3
          ^
          |
      working VDI

snap3 is created unnecessarily and might annoy users.

This patch discards the unnecessary 'snap3' creation and implements the
rollback (loadvm) operation to the specified snapshot by:
1. switch to the snapshot
2. delete working VDI
3. rely on sd_create_branch to create a new working VDI based on the snapshot

The snapshot chain for the above example will be:

base <-- snap1 <-- snap2
          ^
          |
      working VDI

As a spin-off, booting from a snapshot now behaves the same as 'loadvm',
discarding the current VM state.

Cc: qemu-devel@nongnu.org
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
---
v3:
 - let boot from snapshot behave like 'loadvm'

v2:
 - use do_req() because sd_delete isn't run in coroutine context
 - don't break the old behavior when we boot up from a snapshot, using
   s->reverted to indicate whether the working VDI was deleted successfully
 - fix a subtle case where sd_create_branch() hasn't been called yet when
   another 'loadvm' is executed

 block/sheepdog.c |   46 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 45 insertions(+), 1 deletion(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 9f30a87..019ccbe 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -36,6 +36,7 @@
 #define SD_OP_GET_VDI_INFO   0x14
 #define SD_OP_READ_VDIS      0x15
 #define SD_OP_FLUSH_VDI      0x16
+#define SD_OP_DEL_VDI        0x17
 
 #define SD_FLAG_CMD_WRITE    0x01
 #define SD_FLAG_CMD_COW      0x02
@@ -1569,6 +1570,35 @@ out:
     sd_finish_aiocb(acb);
 }
 
+/* Delete current working VDI on the snapshot chain */
+static bool sd_delete(BDRVSheepdogState *s)
+{
+    unsigned int wlen = SD_MAX_VDI_LEN, rlen = 0;
+    SheepdogVdiReq hdr = {
+        .opcode = SD_OP_DEL_VDI,
+        .vdi_id = s->inode.vdi_id,
+        .data_length = wlen,
+        .flags = SD_FLAG_CMD_WRITE,
+    };
+    SheepdogVdiRsp *rsp = (SheepdogVdiRsp *)&hdr;
+    int fd, ret;
+
+    fd = connect_to_sdog(s);
+    if (fd < 0) {
+        return false;
+    }
+
+    ret = do_req(fd, (SheepdogReq *)&hdr, s->name, &wlen, &rlen);
+    closesocket(fd);
+    if (ret || (rsp->result != SD_RES_SUCCESS &&
+                rsp->result != SD_RES_NO_VDI)) {
+        error_report("%s, %s", sd_strerror(rsp->result), s->name);
+        return false;
+    }
+
+    return true;
+}
+
 /*
  * Create a writable VDI from a snapshot
  */
@@ -1577,12 +1607,20 @@ static int sd_create_branch(BDRVSheepdogState *s)
     int ret, fd;
     uint32_t vid;
     char *buf;
+    bool deleted;
 
     dprintf("%" PRIx32 " is snapshot.\n", s->inode.vdi_id);
 
     buf = g_malloc(SD_INODE_SIZE);
 
-    ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid, 1);
+    /*
+     * Even if deletion fails, we will just create an extra snapshot based
+     * on the working VDI that was supposed to be deleted, so there is no
+     * need to bail out on failure.
+     */
+    deleted = sd_delete(s);
+    ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid,
+                       !deleted);
     if (ret) {
         goto out;
     }
@@ -1898,6 +1936,12 @@ cleanup:
     return ret;
 }
 
+/*
+ * We implement rollback(loadvm) operation to the specified snapshot by
+ * 1) switch to the snapshot
+ * 2) rely on sd_create_branch to delete working VDI and
+ * 3) create a new working VDI based on the specified snapshot
+ */
 static int sd_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
 {
     BDRVSheepdogState *s = bs->opaque;
-- 
1.7.9.5


* Re: [Qemu-devel] [sheepdog] [PATCH v3] sheepdog: fix loadvm operation
  2013-04-25  8:42 ` [Qemu-devel] [PATCH v3] " Liu Yuan
@ 2013-04-25  9:40   ` MORITA Kazutaka
  2013-04-25  9:44     ` Liu Yuan
  0 siblings, 1 reply; 15+ messages in thread
From: MORITA Kazutaka @ 2013-04-25  9:40 UTC (permalink / raw)
  To: Liu Yuan; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

At Thu, 25 Apr 2013 16:42:34 +0800,
Liu Yuan wrote:
> 
> +/* Delete current working VDI on the snapshot chain */
> +static bool sd_delete(BDRVSheepdogState *s)
> +{
> +    unsigned int wlen = SD_MAX_VDI_LEN, rlen = 0;
> +    SheepdogVdiReq hdr = {
> +        .opcode = SD_OP_DEL_VDI,
> +        .vdi_id = s->inode.vdi_id,
> +        .data_length = wlen,
> +        .flags = SD_FLAG_CMD_WRITE,
> +    };
> +    SheepdogVdiRsp *rsp = (SheepdogVdiRsp *)&hdr;
> +    int fd, ret;
> +
> +    fd = connect_to_sdog(s);
> +    if (fd < 0) {
> +        return false;
> +    }
> +
> +    ret = do_req(fd, (SheepdogReq *)&hdr, s->name, &wlen, &rlen);
> +    closesocket(fd);
> +    if (ret || (rsp->result != SD_RES_SUCCESS &&
> +                rsp->result != SD_RES_NO_VDI)) {
> +        error_report("%s, %s", sd_strerror(rsp->result), s->name);
> +        return false;
> +    }
> +
> +    return true;
> +}

Isn't it better to show an error message when the result code is
SD_RES_NO_VDI?

Thanks,

Kazutaka


* Re: [Qemu-devel] [sheepdog] [PATCH v3] sheepdog: fix loadvm operation
  2013-04-25  9:40   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
@ 2013-04-25  9:44     ` Liu Yuan
  2013-04-25 10:03       ` MORITA Kazutaka
  0 siblings, 1 reply; 15+ messages in thread
From: Liu Yuan @ 2013-04-25  9:44 UTC (permalink / raw)
  To: MORITA Kazutaka; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

On 04/25/2013 05:40 PM, MORITA Kazutaka wrote:
> Isn't it better to show an error message when the result code is
> SD_RES_NO_VDI?

This information isn't useful even for debugging; what is it for?

Thanks,
Yuan


* Re: [Qemu-devel] [sheepdog] [PATCH v3] sheepdog: fix loadvm operation
  2013-04-25  9:44     ` Liu Yuan
@ 2013-04-25 10:03       ` MORITA Kazutaka
  2013-04-25 12:32         ` Liu Yuan
  0 siblings, 1 reply; 15+ messages in thread
From: MORITA Kazutaka @ 2013-04-25 10:03 UTC (permalink / raw)
  To: Liu Yuan; +Cc: MORITA Kazutaka, Kevin Wolf, sheepdog, qemu-devel,
	Stefan Hajnoczi

At Thu, 25 Apr 2013 17:44:41 +0800,
Liu Yuan wrote:
> 
> On 04/25/2013 05:40 PM, MORITA Kazutaka wrote:
> > Isn't it better to show an error message when the result code is
> > SD_RES_NO_VDI?
> 
> This information isn't useful even for debugging; what is it for?

The block driver tries to delete the vdi, but the sheepdog servers
return "No such vdi" - I thought that something had gone wrong in this
case.

What's the scenario where the sheepdog servers return SD_RES_NO_VDI?
Can we ignore it safely?

Thanks,

Kazutaka


* Re: [Qemu-devel] [sheepdog] [PATCH v3] sheepdog: fix loadvm operation
  2013-04-25 10:03       ` MORITA Kazutaka
@ 2013-04-25 12:32         ` Liu Yuan
  0 siblings, 0 replies; 15+ messages in thread
From: Liu Yuan @ 2013-04-25 12:32 UTC (permalink / raw)
  To: MORITA Kazutaka; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

On 04/25/2013 06:03 PM, MORITA Kazutaka wrote:
> The block driver tries to delete the vdi, but the sheepdog servers
> return "No such vdi" - I thought that something goes wrong in this
> case.
> 
> What's the scenario where the sheepdog servers return SD_RES_NO_VDI?
> Can we ignore it safely?

V2 has this problem: if we loadvm twice within a short time, such that
sd_create_branch is not called in the window between them, the second
loadvm will get this NO_VDI error. With v3 we don't have this problem,
but it is still okay to print a message for this case. I'll update in v4.

Thanks,
Yuan


* [Qemu-devel] [PATCH v4] sheepdog: fix loadvm operation
  2013-04-24 16:47 [Qemu-devel] [PATCH v2] sheepdog: fix loadvm operation Liu Yuan
  2013-04-25  4:12 ` Liu Yuan
  2013-04-25  8:42 ` [Qemu-devel] [PATCH v3] " Liu Yuan
@ 2013-04-25 12:49 ` Liu Yuan
  2013-04-25 13:06   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
  2013-04-26 11:41   ` [Qemu-devel] " Stefan Hajnoczi
  2 siblings, 2 replies; 15+ messages in thread
From: Liu Yuan @ 2013-04-25 12:49 UTC (permalink / raw)
  To: sheepdog; +Cc: Kevin Wolf, Stefan Hajnoczi, qemu-devel, MORITA Kazutaka

From: Liu Yuan <tailai.ly@taobao.com>

Currently the 'loadvm' operation works as follows:
1. switch to the snapshot
2. mark current working VDI as a snapshot
3. rely on sd_create_branch to create a new working VDI based on the snapshot

This does not work the same way as other formats such as QCOW2. For example:

qemu > savevm # get a live snapshot snap1
qemu > savevm # snap2
qemu > loadvm 1 # This will stealthily create snap3 of the working VDI

This will result in the following snapshot chain:

base <-- snap1 <-- snap2 <-- snap3
          ^
          |
      working VDI

snap3 is created unnecessarily and might annoy users.

This patch discards the unnecessary 'snap3' creation and implements the
rollback (loadvm) operation to the specified snapshot by:
1. switch to the snapshot
2. delete working VDI
3. rely on sd_create_branch to create a new working VDI based on the snapshot

The snapshot chain for the above example will be:

base <-- snap1 <-- snap2
          ^
          |
      working VDI

Cc: qemu-devel@nongnu.org
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
---
v4:
 - print an error message when NO_VDI found

v3:
 - let boot from snapshot behave like 'loadvm'

v2:
 - use do_req() because sd_delete isn't run in coroutine context
 - don't break the old behavior when we boot up from a snapshot, using
   s->reverted to indicate whether the working VDI was deleted successfully
 - fix a subtle case where sd_create_branch() hasn't been called yet when
   another 'loadvm' is executed

 block/sheepdog.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 53 insertions(+), 1 deletion(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 9f30a87..366ec7d 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -36,6 +36,7 @@
 #define SD_OP_GET_VDI_INFO   0x14
 #define SD_OP_READ_VDIS      0x15
 #define SD_OP_FLUSH_VDI      0x16
+#define SD_OP_DEL_VDI        0x17
 
 #define SD_FLAG_CMD_WRITE    0x01
 #define SD_FLAG_CMD_COW      0x02
@@ -1569,6 +1570,43 @@ out:
     sd_finish_aiocb(acb);
 }
 
+/* Delete current working VDI on the snapshot chain */
+static bool sd_delete(BDRVSheepdogState *s)
+{
+    unsigned int wlen = SD_MAX_VDI_LEN, rlen = 0;
+    SheepdogVdiReq hdr = {
+        .opcode = SD_OP_DEL_VDI,
+        .vdi_id = s->inode.vdi_id,
+        .data_length = wlen,
+        .flags = SD_FLAG_CMD_WRITE,
+    };
+    SheepdogVdiRsp *rsp = (SheepdogVdiRsp *)&hdr;
+    int fd, ret;
+
+    fd = connect_to_sdog(s);
+    if (fd < 0) {
+        return false;
+    }
+
+    ret = do_req(fd, (SheepdogReq *)&hdr, s->name, &wlen, &rlen);
+    closesocket(fd);
+    if (ret) {
+        return false;
+    }
+    switch (rsp->result) {
+    case SD_RES_NO_VDI:
+        error_report("%s was already deleted", s->name);
+        /* fall through */
+    case SD_RES_SUCCESS:
+        break;
+    default:
+        error_report("%s, %s", sd_strerror(rsp->result), s->name);
+        return false;
+    }
+
+    return true;
+}
+
 /*
  * Create a writable VDI from a snapshot
  */
@@ -1577,12 +1615,20 @@ static int sd_create_branch(BDRVSheepdogState *s)
     int ret, fd;
     uint32_t vid;
     char *buf;
+    bool deleted;
 
     dprintf("%" PRIx32 " is snapshot.\n", s->inode.vdi_id);
 
     buf = g_malloc(SD_INODE_SIZE);
 
-    ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid, 1);
+    /*
+     * Even if deletion fails, we will just create an extra snapshot based
+     * on the working VDI that was supposed to be deleted, so there is no
+     * need to bail out on failure.
+     */
+    deleted = sd_delete(s);
+    ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid,
+                       !deleted);
     if (ret) {
         goto out;
     }
@@ -1898,6 +1944,12 @@ cleanup:
     return ret;
 }
 
+/*
+ * We implement rollback(loadvm) operation to the specified snapshot by
+ * 1) switch to the snapshot
+ * 2) rely on sd_create_branch to delete working VDI and
+ * 3) create a new working VDI based on the specified snapshot
+ */
 static int sd_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
 {
     BDRVSheepdogState *s = bs->opaque;
-- 
1.7.9.5


* Re: [Qemu-devel] [sheepdog] [PATCH v4] sheepdog: fix loadvm operation
  2013-04-25 12:49 ` [Qemu-devel] [PATCH v4] " Liu Yuan
@ 2013-04-25 13:06   ` MORITA Kazutaka
  2013-04-26  9:04     ` Liu Yuan
  2013-04-26 11:41   ` [Qemu-devel] " Stefan Hajnoczi
  1 sibling, 1 reply; 15+ messages in thread
From: MORITA Kazutaka @ 2013-04-25 13:06 UTC (permalink / raw)
  To: Liu Yuan; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

At Thu, 25 Apr 2013 20:49:39 +0800,
Liu Yuan wrote:
> [...]
> 
>  block/sheepdog.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 53 insertions(+), 1 deletion(-)

Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>


* Re: [Qemu-devel] [sheepdog] [PATCH v4] sheepdog: fix loadvm operation
  2013-04-25 13:06   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
@ 2013-04-26  9:04     ` Liu Yuan
  2013-04-26 11:39       ` Stefan Hajnoczi
  0 siblings, 1 reply; 15+ messages in thread
From: Liu Yuan @ 2013-04-26  9:04 UTC (permalink / raw)
  To: MORITA Kazutaka; +Cc: Kevin Wolf, sheepdog, qemu-devel, Stefan Hajnoczi

On 04/25/2013 09:06 PM, MORITA Kazutaka wrote:
> At Thu, 25 Apr 2013 20:49:39 +0800,
> Liu Yuan wrote:
>> [...]
>>
>>  block/sheepdog.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 53 insertions(+), 1 deletion(-)
> 
> Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>

Missed this one, Stefan?

Thanks,
Yuan


* Re: [Qemu-devel] [sheepdog] [PATCH v4] sheepdog: fix loadvm operation
  2013-04-26  9:04     ` Liu Yuan
@ 2013-04-26 11:39       ` Stefan Hajnoczi
  2013-04-26 11:48         ` Liu Yuan
  0 siblings, 1 reply; 15+ messages in thread
From: Stefan Hajnoczi @ 2013-04-26 11:39 UTC (permalink / raw)
  To: Liu Yuan; +Cc: MORITA Kazutaka, Kevin Wolf, sheepdog, qemu-devel

On Fri, Apr 26, 2013 at 05:04:06PM +0800, Liu Yuan wrote:
> On 04/25/2013 09:06 PM, MORITA Kazutaka wrote:
> > At Thu, 25 Apr 2013 20:49:39 +0800,
> > Liu Yuan wrote:
> >> [...]
> >>
> >>  block/sheepdog.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
> >>  1 file changed, 53 insertions(+), 1 deletion(-)
> > 
> > Reviewed-by: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
> 
> Missed this one, Stefan?

Please send patches as top-level email threads instead of replies so
that they get noticed.

Thanks,
Stefan


* Re: [Qemu-devel] [PATCH v4] sheepdog: fix loadvm operation
  2013-04-25 12:49 ` [Qemu-devel] [PATCH v4] " Liu Yuan
  2013-04-25 13:06   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
@ 2013-04-26 11:41   ` Stefan Hajnoczi
  1 sibling, 0 replies; 15+ messages in thread
From: Stefan Hajnoczi @ 2013-04-26 11:41 UTC (permalink / raw)
  To: Liu Yuan; +Cc: Kevin Wolf, sheepdog, qemu-devel, MORITA Kazutaka

On Thu, Apr 25, 2013 at 08:49:39PM +0800, Liu Yuan wrote:
> [...]
>  block/sheepdog.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 53 insertions(+), 1 deletion(-)

Thanks, applied to my block tree:
https://github.com/stefanha/qemu/commits/block

Stefan


* Re: [Qemu-devel] [sheepdog] [PATCH v4] sheepdog: fix loadvm operation
  2013-04-26 11:39       ` Stefan Hajnoczi
@ 2013-04-26 11:48         ` Liu Yuan
  0 siblings, 0 replies; 15+ messages in thread
From: Liu Yuan @ 2013-04-26 11:48 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: MORITA Kazutaka, Kevin Wolf, sheepdog, qemu-devel

On 04/26/2013 07:39 PM, Stefan Hajnoczi wrote:
> Please send patches as top-level email threads instead of replies so
> that they get noticed.

Okay, will do for future patches. Thanks for the reminder.

Yuan


end of thread, other threads:[~2013-04-26 11:48 UTC | newest]

Thread overview: 15+ messages
2013-04-24 16:47 [Qemu-devel] [PATCH v2] sheepdog: fix loadvm operation Liu Yuan
2013-04-25  4:12 ` Liu Yuan
2013-04-25  8:11   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
2013-04-25  8:27     ` Liu Yuan
2013-04-25  8:42 ` [Qemu-devel] [PATCH v3] " Liu Yuan
2013-04-25  9:40   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
2013-04-25  9:44     ` Liu Yuan
2013-04-25 10:03       ` MORITA Kazutaka
2013-04-25 12:32         ` Liu Yuan
2013-04-25 12:49 ` [Qemu-devel] [PATCH v4] " Liu Yuan
2013-04-25 13:06   ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
2013-04-26  9:04     ` Liu Yuan
2013-04-26 11:39       ` Stefan Hajnoczi
2013-04-26 11:48         ` Liu Yuan
2013-04-26 11:41   ` [Qemu-devel] " Stefan Hajnoczi
