From: Liu Yuan <namei.unix@gmail.com>
To: sheepdog@lists.wpkg.org
Cc: Kevin Wolf <kwolf@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
qemu-devel@nongnu.org,
MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Subject: [Qemu-devel] [PATCH v4 2/2] sheepdog: support 'qemu-img snapshot -a'
Date: Sat, 8 Jun 2013 01:54:26 +0800
Message-ID: <1370627666-6689-3-git-send-email-namei.unix@gmail.com>
In-Reply-To: <1370627666-6689-1-git-send-email-namei.unix@gmail.com>

Just calling sd_create_branch() in sd_snapshot_goto() to roll back the image
is good enough. With this patch, the 'loadvm' process for sheepdog is
modified as follows.

Suppose we have a snapshot chain A --> B --> C and we do 'loadvm A', so as
to get a new chain:
A --> B
|
V
C1
In the old code:

1 reload the inode of A (in sd_snapshot_goto)
2 read the vmstate via A's vdi_id (load_vmstate)
3 delete C and create C1, reload the inode of C1 (sd_create_branch, on the
  first write)

With this patch applied:

1 reload the inode of A, delete C, and create C1 (in sd_snapshot_goto)
2 read the vmstate via C1's parent, that is A's vdi_id (load_vmstate)

This fixes a possible bug in the old code: if QEMU exits between steps 2
and 3, the rollback is left incomplete.
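
For illustration, here is a minimal sketch of the vdi selection this patch
introduces. The struct and helper below are simplified stand-ins, not the
actual driver code; only the vdi_id and parent_vdi_id field names come from
the real inode in the patch:

#include <stdint.h>

/* Simplified stand-in for the driver's inode, reduced to the two fields
 * that matter here. */
struct sd_inode_view {
    uint32_t vdi_id;        /* current working vdi, e.g. C1 */
    uint32_t parent_vdi_id; /* snapshot it was branched from, e.g. A */
};

/* After sd_snapshot_goto() has branched C1 off A, the vmstate must be
 * read through the parent's id, while a save still targets the current
 * vdi -- this mirrors the vdi_id selection in do_load_save_vmstate. */
static uint32_t vmstate_vdi_id(const struct sd_inode_view *inode, int load)
{
    return load ? inode->parent_vdi_id : inode->vdi_id;
}
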
Cc: qemu-devel@nongnu.org
Cc: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Liu Yuan <namei.unix@gmail.com>
---
block/sheepdog.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/block/sheepdog.c b/block/sheepdog.c
index 94218ac..1b7c3f1 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -2071,14 +2071,11 @@ static int sd_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
         goto out;
     }
 
-    if (!s->inode.vm_state_size) {
-        error_report("Invalid snapshot");
-        ret = -ENOENT;
+    ret = sd_create_branch(s);
+    if (ret) {
         goto out;
     }
 
-    s->is_snapshot = true;
-
     g_free(old_s);
 
     return 0;
@@ -2196,8 +2193,9 @@ static int do_load_save_vmstate(BDRVSheepdogState *s, uint8_t *data,
     int fd, ret = 0, remaining = size;
     unsigned int data_len;
     uint64_t vmstate_oid;
-    uint32_t vdi_index;
     uint64_t offset;
+    uint32_t vdi_index;
+    uint32_t vdi_id = load ? s->inode.parent_vdi_id : s->inode.vdi_id;
 
     fd = connect_to_sdog(s);
     if (fd < 0) {
@@ -2210,7 +2208,7 @@ static int do_load_save_vmstate(BDRVSheepdogState *s, uint8_t *data,
         data_len = MIN(remaining, SD_DATA_OBJ_SIZE - offset);
 
-        vmstate_oid = vid_to_vmstate_oid(s->inode.vdi_id, vdi_index);
+        vmstate_oid = vid_to_vmstate_oid(vdi_id, vdi_index);
 
         create = (offset == 0);
         if (load) {
--
1.7.9.5