* [Qemu-devel] Combining "loadvm" and "-snapshot"
From: Stephen McCamant @ 2009-01-20 23:58 UTC
To: qemu-devel

Loading snapshots with "loadvm" and running with a scratch QCOW2 disk
image using "-snapshot" are both useful features, but they would be
even more valuable if they could be used together. For instance, I'd
like to have a disk image containing several snapshotted states, and to
be able to load any of them into a QEMU instance with "-snapshot" to
play with them without changing the base disk. But this doesn't seem
to be supported at the moment, since "loadvm" can only load from the
scratch image, not the backing image.

I looked into an apparently unapplied patch that Eddie Kohler sent
last year that enables related functionality: loading snapshots from a
disk image that's read-only
(http://www.mail-archive.com/qemu-devel@nongnu.org/msg15774.html).
That targets a similar use case, but it seems less useful because
after loading the snapshot the VM's disk is still read-only, so any
writes will fail with an I/O error. This provides the most protection
for the original image, but limits what you can do. (It would still be
useful alongside what I propose below.)

I've tried implementing an alternate approach that makes "loadvm" work
under "-snapshot" by finding and loading the VM state from the backing
image, if a snapshot with that name isn't found in the scratch image.
In some preliminary testing, this seems to do what I want; you can
also make new snapshots with savevm that live only in the scratch
image (and so go away at the end of the session). I also made "info
snapshots" print snapshots from both images. Some remaining questions:

* Snapshot IDs aren't unique between the two images, which looks weird
  in the output of "info snapshots" and means you have to load by tag.
* You still can't copy snapshots back to the backing image using
  "commit".

The patch below shows what I've been working on. I've modified
bdrv_snapshot_goto to take a new out parameter for the
BlockDriverState to load the VM state from (potentially different from
the one passed as an argument).
-- Stephen
diff --git a/block.c b/block.c
index 3250327..0fa7a28 100644
--- a/block.c
+++ b/block.c
@@ -1138,14 +1138,25 @@ int bdrv_snapshot_create(BlockDriverState *bs,
 }

 int bdrv_snapshot_goto(BlockDriverState *bs,
-                       const char *snapshot_id)
+                       const char *snapshot_id,
+                       BlockDriverState **vm_state_bs)
 {
     BlockDriver *drv = bs->drv;
+    int ret;
     if (!drv)
         return -ENOMEDIUM;
     if (!drv->bdrv_snapshot_goto)
         return -ENOTSUP;
-    return drv->bdrv_snapshot_goto(bs, snapshot_id);
+    ret = drv->bdrv_snapshot_goto(bs, snapshot_id);
+    if (ret == -ENOENT && bs->backing_hd && drv == bs->backing_hd->drv) {
+        /* Try loading a snapshot from the backing device */
+        if (vm_state_bs)
+            *vm_state_bs = bs->backing_hd;
+        return drv->bdrv_snapshot_goto(bs->backing_hd, snapshot_id);
+    }
+    if (vm_state_bs)
+        *vm_state_bs = bs;
+    return ret;
 }

 int bdrv_snapshot_delete(BlockDriverState *bs, const char *snapshot_id)
@@ -1166,7 +1177,28 @@ int bdrv_snapshot_list(BlockDriverState *bs,
         return -ENOMEDIUM;
     if (!drv->bdrv_snapshot_list)
         return -ENOTSUP;
-    return drv->bdrv_snapshot_list(bs, psn_info);
+    if (bs->backing_hd) {
+        /* Combine snapshot lists from backing and local devices */
+        BlockDriverState *back_bs = bs->backing_hd;
+        BlockDriver *back_drv = back_bs->drv;
+        QEMUSnapshotInfo *backing_snaps, *local_snaps;
+        int backing_snap_cnt =
+            back_drv->bdrv_snapshot_list(back_bs, &backing_snaps);
+        int local_snap_cnt = drv->bdrv_snapshot_list(bs, &local_snaps);
+        int snap_cnt = backing_snap_cnt + local_snap_cnt;
+        QEMUSnapshotInfo *ret_snaps =
+            qemu_malloc(snap_cnt * sizeof(QEMUSnapshotInfo));
+        memcpy(ret_snaps, backing_snaps,
+               backing_snap_cnt * sizeof(QEMUSnapshotInfo));
+        memcpy(ret_snaps + backing_snap_cnt, local_snaps,
+               local_snap_cnt * sizeof(QEMUSnapshotInfo));
+        qemu_free(backing_snaps);
+        qemu_free(local_snaps);
+        *psn_info = ret_snaps;
+        return snap_cnt;
+    } else {
+        return drv->bdrv_snapshot_list(bs, psn_info);
+    }
 }

 #define NB_SUFFIXES 4
diff --git a/block.h b/block.h
index c3314a1..b064d21 100644
--- a/block.h
+++ b/block.h
@@ -146,7 +146,8 @@ void bdrv_get_backing_filename(BlockDriverState *bs,
 int bdrv_snapshot_create(BlockDriverState *bs,
                          QEMUSnapshotInfo *sn_info);
 int bdrv_snapshot_goto(BlockDriverState *bs,
-                       const char *snapshot_id);
+                       const char *snapshot_id,
+                       BlockDriverState **vm_state_bs);
 int bdrv_snapshot_delete(BlockDriverState *bs, const char *snapshot_id);
 int bdrv_snapshot_list(BlockDriverState *bs,
                        QEMUSnapshotInfo **psn_info);
diff --git a/qemu-img.c b/qemu-img.c
index 555ab5f..161af2b 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -832,7 +832,7 @@ static void img_snapshot(int argc, char **argv)
         break;

     case SNAPSHOT_APPLY:
-        ret = bdrv_snapshot_goto(bs, snapshot_name);
+        ret = bdrv_snapshot_goto(bs, snapshot_name, NULL);
         if (ret)
             error("Could not apply snapshot '%s': %d (%s)",
                   snapshot_name, ret, strerror(-ret));
diff --git a/savevm.c b/savevm.c
index 729e849..0b265ea 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1116,7 +1116,7 @@ void do_savevm(const char *name)

 void do_loadvm(const char *name)
 {
-    BlockDriverState *bs, *bs1;
+    BlockDriverState *bs, *bs1, *vm_state_bs;
     BlockDriverInfo bdi1, *bdi = &bdi1;
     QEMUSnapshotInfo sn;
     QEMUFile *f;
@@ -1138,7 +1138,7 @@ void do_loadvm(const char *name)
     for(i = 0; i <= nb_drives; i++) {
         bs1 = drives_table[i].bdrv;
         if (bdrv_has_snapshot(bs1)) {
-            ret = bdrv_snapshot_goto(bs1, name);
+            ret = bdrv_snapshot_goto(bs1, name, &vm_state_bs);
             if (ret < 0) {
                 if (bs != bs1)
                     term_printf("Warning: ");
@@ -1163,19 +1163,19 @@
         }
     }

-    if (bdrv_get_info(bs, bdi) < 0 || bdi->vm_state_offset <= 0) {
+    if (bdrv_get_info(vm_state_bs, bdi) < 0 || bdi->vm_state_offset <= 0) {
         term_printf("Device %s does not support VM state snapshots\n",
                     bdrv_get_device_name(bs));
         return;
     }

     /* Don't even try to load empty VM states */
-    ret = bdrv_snapshot_find(bs, &sn, name);
+    ret = bdrv_snapshot_find(vm_state_bs, &sn, name);
     if ((ret >= 0) && (sn.vm_state_size == 0))
         goto the_end;

     /* restore the VM state */
-    f = qemu_fopen_bdrv(bs, bdi->vm_state_offset, 0);
+    f = qemu_fopen_bdrv(vm_state_bs, bdi->vm_state_offset, 0);
     if (!f) {
         term_printf("Could not open VM state file\n");
         goto the_end;

* Re: [Qemu-devel] Combining "loadvm" and "-snapshot"
From: Kevin Wolf @ 2009-01-21 9:12 UTC
To: qemu-devel

Stephen McCamant wrote:
> I've tried implementing an alternate approach that makes "loadvm" work
> under "-snapshot" by finding and loading the VM state from the backing
> image, if a snapshot with that name isn't found in the scratch image.
> In some preliminary testing, this seems to do what I want; you can
> also make new snapshots with savevm that live only in the scratch
> image (and so go away at the end of the session).

I think, this is the point where your approach will break. A snapshot in
the scratch image is still based on a snapshot in the backing image. So
on loading the scratch snapshot you would also need to load the snapshot
for the backing file. However, you don't know which one because qcow2
can't save a snapshot name for the backing file.

Kevin

* Re: [Qemu-devel] Combining "loadvm" and "-snapshot"
From: Stephen McCamant @ 2009-01-21 21:09 UTC
To: qemu-devel

>>>>> "KW" == Kevin Wolf <kwolf@suse.de> writes:
KW> Stephen McCamant wrote:
SMcC> I've tried implementing an alternate approach that makes
SMcC> "loadvm" work under "-snapshot" by finding and loading the VM
SMcC> state from the backing image, if a snapshot with that name isn't
SMcC> found in the scratch image. In some preliminary testing, this
SMcC> seems to do what I want; you can also make new snapshots with
SMcC> savevm that live only in the scratch image (and so go away at
SMcC> the end of the session).
KW> I think, this is the point where your approach will break. A
KW> snapshot in the scratch image is still based on a snapshot in the
KW> backing image. So on loading the scratch snapshot you would also
KW> need to load the snapshot for the backing file. However, you don't
KW> know which one because qcow2 can't save a snapshot name for the
KW> backing file.

Ah yes, I think I understand the problem you're referring to. For
instance, if you have two states snap1 and snap2 in the backing image,
and then you save a snapshot "snap1a" in the scratch image based on
snap1, then load snap2, then try to load snap1a, the disk will have an
inconsistent state that's a mixture of snap2 and the differences
between snap1 and snap1a. By playing around with file and directory
contents in snapshots (and unmounting and remounting to bypass the
buffer cache), I can make this show up as corrupted file contents.

I think you could work around this problem by loading the appropriate
backing disk snapshot first, but the need for that isn't intuitive, so
I can see it easily tripping users up. (I presume you can get into
similar trouble if you modify a backing image that a persistent
scratch image is pointing to, though in that case it's clearer what
you're doing wrong.)

As you suggest, the right fix would seem to be to have the scratch
disk snapshots remember what backing disk state they're based on,
though there's no field for that in the current QCOW2 format. It looks
like there is some space left for expansion at the end of the snapshot
structure, though that would be a bigger change. Or if you limit
yourself to the "-snapshot" case, and not persistent backed disk
images, it would be enough to keep the information in memory.
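
For the in-memory variant, the bookkeeping could look roughly like the
sketch below. Everything in it is hypothetical rather than taken from
QEMU or from the patch earlier in this thread: the structure, the
globals and both helpers are invented for illustration, and
do_savevm()/do_loadvm() would have to be taught to call them.

/* Sketch only: hypothetical additions to savevm.c.  For the -snapshot
 * case it is enough to remember, per scratch snapshot, which
 * backing-image snapshot it was taken on top of.  do_savevm() would
 * fill the table in, do_loadvm() would consult it. */

#define MAX_SCRATCH_SNAPSHOTS 16

typedef struct ScratchSnapshotDep {
    char tag[128];        /* tag of the snapshot in the scratch image */
    char backing_id[128]; /* backing snapshot it was taken on top of  */
} ScratchSnapshotDep;

static ScratchSnapshotDep scratch_deps[MAX_SCRATCH_SNAPSHOTS];
static int nb_scratch_deps;
static char current_backing_id[128]; /* last backing snapshot applied */

/* Called from do_savevm() when saving into the scratch image. */
static void record_scratch_dep(const char *tag)
{
    if (nb_scratch_deps < MAX_SCRATCH_SNAPSHOTS) {
        pstrcpy(scratch_deps[nb_scratch_deps].tag,
                sizeof(scratch_deps[nb_scratch_deps].tag), tag);
        pstrcpy(scratch_deps[nb_scratch_deps].backing_id,
                sizeof(scratch_deps[nb_scratch_deps].backing_id),
                current_backing_id);
        nb_scratch_deps++;
    }
}

/* Called from do_loadvm() before applying a scratch-only snapshot:
 * returns the backing snapshot that has to be applied first, or NULL
 * if nothing was recorded for this tag. */
static const char *required_backing_snapshot(const char *tag)
{
    int i;
    for (i = 0; i < nb_scratch_deps; i++) {
        if (strcmp(scratch_deps[i].tag, tag) == 0)
            return scratch_deps[i].backing_id;
    }
    return NULL;
}

The missing piece is updating current_backing_id whenever a backing
snapshot is applied; with the patch above, do_loadvm() can tell that
this happened because bdrv_snapshot_goto() hands back the backing
BlockDriverState through its new out parameter.
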
More generally, what I'd like to figure out would be some subset of
this functionality that can have a simple implementation and that's
relatively fool-proof for users, without foreclosing the possibility
of enabling more combinations in the future. Though it was nice to see
that savevm/loadvm in the scratch image mainly worked without
additional effort, I don't really have a need for it. In fact, I would
be satisfied even if the only allowed combination was a "-loadvm"
command-line option used at the same time as "-snapshot", and the
monitor commands were just as limited as they are now.
-- Stephen
* Re: [Qemu-devel] Combining "loadvm" and "-snapshot"
From: Kevin Wolf @ 2009-01-22 10:01 UTC
To: qemu-devel

Stephen McCamant wrote:
>>>>>> "KW" == Kevin Wolf <kwolf@suse.de> writes:
>
> KW> Stephen McCamant wrote:
>
> SMcC> I've tried implementing an alternate approach that makes
> SMcC> "loadvm" work under "-snapshot" by finding and loading the VM
> SMcC> state from the backing image, if a snapshot with that name isn't
> SMcC> found in the scratch image. In some preliminary testing, this
> SMcC> seems to do what I want; you can also make new snapshots with
> SMcC> savevm that live only in the scratch image (and so go away at
> SMcC> the end of the session).
>
> KW> I think, this is the point where your approach will break. A
> KW> snapshot in the scratch image is still based on a snapshot in the
> KW> backing image. So on loading the scratch snapshot you would also
> KW> need to load the snapshot for the backing file. However, you don't
> KW> know which one because qcow2 can't save a snapshot name for the
> KW> backing file.
>
> Ah yes, I think I understand the problem you're referring to. For
> instance, if you have two states snap1 and snap2 in the backing image,
> and then you save a snapshot "snap1a" in the scratch image based on
> snap1, then load snap2, then try to load snap1a, the disk will have an
> inconsistent state that's a mixture of snap2 and the differences
> between snap1 and snap1a. By playing around with file and directory
> contents in snapshots (and unmounting and remounting to bypass the
> buffer cache), I can make this show up as corrupted file contents.

Right, this is the scenario I meant.

> I think you could work around this problem by loading the appropriate
> backing disk snapshot first, but the need for that isn't intuitive, so
> I can see it easily tripping users up. (I presume you can get into
> similar trouble if you modify a backing image that a persistent
> scratch image is pointing to, though in that case it's clearer what
> you're doing wrong.)

We really should avoid even making it possible to get into such states.
It means that at least the scratch image will be horribly corrupted
(which is even worse if you don't limit this scenario to -snapshot). I
really don't want to imagine what happens if you then commit to the
backing file...
> As you suggest, the right fix would seem to be to have the scratch
> disk snapshots remember what backing disk state they're based on,
> though there's no field for that in the current QCOW2 format. It looks
> like there is some space left for expansion at the end of the snapshot
> structure, though that would be a bigger change. Or if you limit
> yourself to the "-snapshot" case, and not persistent backed disk
> images, it would be enough to keep the information in memory.

Yes, that should work in any case. Only being able to use it with
-snapshot is a major limitation, though.

But you're right that there seems to be this extra_data_size field in
the snapshot structure that I completely missed until now. I don't know
if it's respected everywhere, but I think this could be worth a try. If
it works, this would be a really nice feature.
Kevin
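
To make the extra-data idea concrete: each entry in the on-disk qcow2
snapshot table already ends with an extra_data_size field followed by
that many bytes of extra data, so a backing-snapshot reference could in
principle be carried there. In the sketch below the abridged entry
layout follows the qcow2 format, but the QCowSnapshotExtraBackingRef
blob and its use are purely hypothetical, defined neither by qcow2 nor
by the patch in this thread, and the whole idea only works if every
reader really does skip extra data it doesn't understand, which is
exactly the open question above.

/* Sketch of the idea only, not QEMU code.  Field widths follow the
 * qcow2 snapshot-table entry; on disk the values are big-endian and
 * the real in-memory struct is packed, both omitted here. */
#include <stdint.h>

struct QCowSnapshotEntryAbridged {
    uint64_t l1_table_offset;  /* offset of the snapshot's L1 table   */
    uint32_t l1_size;          /* number of L1 table entries          */
    uint16_t id_str_size;      /* length of the unique id string      */
    uint16_t name_size;        /* length of the snapshot name         */
    uint32_t date_sec;         /* time the snapshot was taken         */
    uint32_t date_nsec;
    uint64_t vm_clock_nsec;    /* guest run time at snapshot time     */
    uint32_t vm_state_size;    /* 0 if no VM state is stored          */
    uint32_t extra_data_size;  /* bytes of extra data that follow     */
    /* extra_data_size bytes of extra data, then id string, then name */
};

/* Hypothetical extra-data blob that savevm could append on an image
 * with a backing file: the id of the backing snapshot this snapshot
 * depends on.  loadvm would read it back and apply that backing
 * snapshot before applying this one. */
struct QCowSnapshotExtraBackingRef {
    char backing_snapshot_id[128]; /* '\0'-terminated id or tag */
};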