From: Liu Yuan <namei.unix@gmail.com>
To: sheepdog@lists.wpkg.org
Cc: Kevin Wolf <kwolf@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
qemu-devel@nongnu.org,
MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Subject: [Qemu-devel] [PATCH v3] sheepdog: fix loadvm operation
Date: Thu, 25 Apr 2013 16:42:34 +0800
Message-ID: <1366879354-5120-1-git-send-email-namei.unix@gmail.com>
In-Reply-To: <1366822079-6582-1-git-send-email-namei.unix@gmail.com>
From: Liu Yuan <tailai.ly@taobao.com>
Currently the 'loadvm' operation works as follows:
1. switch to the snapshot
2. mark current working VDI as a snapshot
3. rely on sd_create_branch to create a new working VDI based on the snapshot
This does not work the same way as other formats such as QCOW2. For example:
qemu > savevm # get a live snapshot snap1
qemu > savevm # snap2
qemu > loadvm 1 # this stealthily creates snap3 of the working VDI
This results in the following snapshot chain:
base <-- snap1 <-- snap2 <-- snap3
^
|
working VDI
snap3 is created unnecessarily and may annoy users.
This patch discards the unnecessary 'snap3' creation and implements the
rollback (loadvm) operation to the specified snapshot by:
1. switch to the snapshot
2. delete working VDI
3. rely on sd_create_branch to create a new working VDI based on the snapshot
The snapshot chain for the above example will be:
base <-- snap1 <-- snap2
^
|
working VDI
As a spin-off, booting from a snapshot now behaves the same as 'loadvm',
discarding the current VM state.
Cc: qemu-devel@nongnu.org
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <tailai.ly@taobao.com>
---
v3:
- let boot from snapshot behave like 'loadvm'
v2:
- use do_req() because sd_delete() doesn't run in coroutine context
- don't break the old behavior if we boot up from the snapshot, using
  s->reverted to indicate whether we deleted the working VDI successfully
- fix a subtle case where sd_create_branch() hasn't been called yet while
  another 'loadvm' is executed
block/sheepdog.c | 46 +++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 45 insertions(+), 1 deletion(-)
diff --git a/block/sheepdog.c b/block/sheepdog.c
index 9f30a87..019ccbe 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -36,6 +36,7 @@
#define SD_OP_GET_VDI_INFO 0x14
#define SD_OP_READ_VDIS 0x15
#define SD_OP_FLUSH_VDI 0x16
+#define SD_OP_DEL_VDI 0x17
#define SD_FLAG_CMD_WRITE 0x01
#define SD_FLAG_CMD_COW 0x02
@@ -1569,6 +1570,35 @@ out:
sd_finish_aiocb(acb);
}
+/* Delete current working VDI on the snapshot chain */
+static bool sd_delete(BDRVSheepdogState *s)
+{
+ unsigned int wlen = SD_MAX_VDI_LEN, rlen = 0;
+ SheepdogVdiReq hdr = {
+ .opcode = SD_OP_DEL_VDI,
+ .vdi_id = s->inode.vdi_id,
+ .data_length = wlen,
+ .flags = SD_FLAG_CMD_WRITE,
+ };
+ SheepdogVdiRsp *rsp = (SheepdogVdiRsp *)&hdr;
+ int fd, ret;
+
+ fd = connect_to_sdog(s);
+ if (fd < 0) {
+ return false;
+ }
+
+ ret = do_req(fd, (SheepdogReq *)&hdr, s->name, &wlen, &rlen);
+ closesocket(fd);
+ if (ret || (rsp->result != SD_RES_SUCCESS &&
+ rsp->result != SD_RES_NO_VDI)) {
+ error_report("%s, %s", sd_strerror(rsp->result), s->name);
+ return false;
+ }
+
+ return true;
+}
+
/*
* Create a writable VDI from a snapshot
*/
@@ -1577,12 +1607,20 @@ static int sd_create_branch(BDRVSheepdogState *s)
int ret, fd;
uint32_t vid;
char *buf;
+ bool deleted;
dprintf("%" PRIx32 " is snapshot.\n", s->inode.vdi_id);
buf = g_malloc(SD_INODE_SIZE);
- ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid, 1);
+ /*
+ * Even if deletion fails, we will just create an extra snapshot based on
+ * the working VDI which was supposed to be deleted. So there is no need
+ * to bail out on failure.
+ */
+ deleted = sd_delete(s);
+ ret = do_sd_create(s, s->name, s->inode.vdi_size, s->inode.vdi_id, &vid,
+ !deleted);
if (ret) {
goto out;
}
@@ -1898,6 +1936,12 @@ cleanup:
return ret;
}
+/*
+ * We implement the rollback (loadvm) operation to the specified snapshot by:
+ * 1) switching to the snapshot
+ * 2) relying on sd_create_branch to delete the working VDI and
+ * 3) creating a new working VDI based on the specified snapshot
+ */
static int sd_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
{
BDRVSheepdogState *s = bs->opaque;
--
1.7.9.5
Thread overview: 15+ messages
2013-04-24 16:47 [Qemu-devel] [PATCH v2] sheepdog: fix loadvm operation Liu Yuan
2013-04-25 4:12 ` Liu Yuan
2013-04-25 8:11 ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
2013-04-25 8:27 ` Liu Yuan
2013-04-25 8:42 ` Liu Yuan [this message]
2013-04-25 9:40 ` [Qemu-devel] [sheepdog] [PATCH v3] " MORITA Kazutaka
2013-04-25 9:44 ` Liu Yuan
2013-04-25 10:03 ` MORITA Kazutaka
2013-04-25 12:32 ` Liu Yuan
2013-04-25 12:49 ` [Qemu-devel] [PATCH v4] " Liu Yuan
2013-04-25 13:06 ` [Qemu-devel] [sheepdog] " MORITA Kazutaka
2013-04-26 9:04 ` Liu Yuan
2013-04-26 11:39 ` Stefan Hajnoczi
2013-04-26 11:48 ` Liu Yuan
2013-04-26 11:41 ` [Qemu-devel] " Stefan Hajnoczi