* [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-18 15:43 [Qemu-devel] " Juan Quintela
@ 2010-10-19 2:11 ` Chris Wright
2010-10-19 12:48 ` Dor Laor
0 siblings, 1 reply; 19+ messages in thread
From: Chris Wright @ 2010-10-19 2:11 UTC (permalink / raw)
To: Juan Quintela; +Cc: chrisw, Venkateswararao Jujjuri (JV), qemu-devel, kvm
* Juan Quintela (quintela@redhat.com) wrote:
>
> Please send in any agenda items you are interested in covering.
- 0.13.X -stable handoff
- 0.14 planning
- threadlet work
- virtfs proposals
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 2:11 ` [Qemu-devel] " Chris Wright
@ 2010-10-19 12:48 ` Dor Laor
2010-10-19 12:55 ` Avi Kivity
` (2 more replies)
0 siblings, 3 replies; 19+ messages in thread
From: Dor Laor @ 2010-10-19 12:48 UTC (permalink / raw)
To: Chris Wright
Cc: chrisw, kvm, Juan Quintela, qemu-devel, Ayal Baron,
Venkateswararao Jujjuri (JV)
On 10/19/2010 04:11 AM, Chris Wright wrote:
> * Juan Quintela (quintela@redhat.com) wrote:
>>
>> Please send in any agenda items you are interested in covering.
>
> - 0.13.X -stable handoff
> - 0.14 planning
> - threadlet work
> - virtfs proposals
>
- Live snapshots
- We were asked to add this feature for external qcow2
images. Would a simple approach of fsync + tracking each requested
backing file (it can be per vDisk) + re-opening the new image
be accepted?
- Integration with FS freeze for consistent guest app snapshots
Many apps do not sync their RAM state to disk correctly or frequently
enough. Physical-world backup software calls fs freeze on XFS and
VSS on Windows to make the backup consistent.
In order to integrate this with live snapshots we need a guest
agent to trigger the guest fs freeze.
We can either have qemu communicate with the agent directly through
virtio-serial, or have a mgmt daemon use virtio-serial to
communicate with the guest in addition to QMP messages about the
live snapshot state.
Preferences? The first solution complicates qemu while the second
complicates mgmt.
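For reference, on Linux the guest-side freeze such an agent would
trigger is just the FIFREEZE/FITHAW ioctl pair. A minimal sketch of
the agent's freeze step (error handling trimmed, needs CAP_SYS_ADMIN):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>                   /* FIFREEZE, FITHAW */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <mount point>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);   /* e.g. "/" */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, FIFREEZE, 0) < 0) { perror("FIFREEZE"); return 1; }
    /* ... host takes the snapshot while the fs is quiesced ... */
    if (ioctl(fd, FITHAW, 0) < 0) { perror("FITHAW"); return 1; }

    close(fd);
    return 0;
}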
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 12:48 ` Dor Laor
@ 2010-10-19 12:55 ` Avi Kivity
2010-10-19 12:58 ` Dor Laor
2010-10-19 13:22 ` Anthony Liguori
2010-10-19 13:28 ` Anthony Liguori
2 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2010-10-19 12:55 UTC (permalink / raw)
To: dlaor
Cc: chrisw, kvm, Juan Quintela, qemu-devel, Chris Wright, Ayal Baron,
Venkateswararao Jujjuri (JV)
On 10/19/2010 02:48 PM, Dor Laor wrote:
> - Integration with FS freeze for consistent guest app snapshots
> Many apps do not sync their RAM state to disk correctly or frequently
> enough. Physical-world backup software calls fs freeze on XFS and
> VSS on Windows to make the backup consistent.
> In order to integrate this with live snapshots we need a guest
> agent to trigger the guest fs freeze.
> We can either have qemu communicate with the agent directly through
> virtio-serial, or have a mgmt daemon use virtio-serial to
> communicate with the guest in addition to QMP messages about the
> live snapshot state.
> Preferences? The first solution complicates qemu while the second
> complicates mgmt.
Third option, make the freeze path management -> qemu -> virtio-blk ->
guest kernel -> file systems. The advantage is that it's easy to
associate file systems with a block device this way.
--
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 12:55 ` Avi Kivity
@ 2010-10-19 12:58 ` Dor Laor
2010-10-19 13:03 ` Avi Kivity
0 siblings, 1 reply; 19+ messages in thread
From: Dor Laor @ 2010-10-19 12:58 UTC (permalink / raw)
To: Avi Kivity
Cc: chrisw, kvm, Juan Quintela, qemu-devel, Chris Wright, Ayal Baron,
Venkateswararao Jujjuri (JV)
On 10/19/2010 02:55 PM, Avi Kivity wrote:
> Third option, make the freeze path management -> qemu -> virtio-blk ->
> guest kernel -> file systems. The advantage is that it's easy to
> associate file systems with a block device this way.
OTOH the userspace freeze path already exists, and now you'd create
another one. What about filesystems that span LVM over multiple
drives? IDE/SCSI?
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 12:58 ` Dor Laor
@ 2010-10-19 13:03 ` Avi Kivity
2010-10-19 13:18 ` Anthony Liguori
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2010-10-19 13:03 UTC (permalink / raw)
To: dlaor
Cc: chrisw, kvm, Juan Quintela, qemu-devel, Chris Wright, Ayal Baron,
Venkateswararao Jujjuri (JV)
On 10/19/2010 02:58 PM, Dor Laor wrote:
> On 10/19/2010 02:55 PM, Avi Kivity wrote:
>> Third option, make the freeze path management -> qemu -> virtio-blk ->
>> guest kernel -> file systems. The advantage is that it's easy to
>> associate file systems with a block device this way.
>
> OTOH the userspace freeze path already exists, and now you'd create
> another one.
I guess we would still have a userspace daemon; instead of talking to
virtio-serial it talks to virtio-blk. So:
management -> qemu -> virtio-blk -> guest driver -> kernel fs
resolver -> daemon -> apps
Yuck.
> What about filesystems that span LVM over multiple drives? IDE/SCSI?
Good points.
--
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 13:03 ` Avi Kivity
@ 2010-10-19 13:18 ` Anthony Liguori
0 siblings, 0 replies; 19+ messages in thread
From: Anthony Liguori @ 2010-10-19 13:18 UTC (permalink / raw)
To: Avi Kivity
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Ayal Baron, Michael D Roth, Venkateswararao Jujjuri (JV)
On 10/19/2010 08:03 AM, Avi Kivity wrote:
>> OTOH the userspace freeze path already exists, and now you'd create
>> another one.
>
> I guess we would still have a userspace daemon; instead of talking to
> virtio-serial it talks to virtio-blk. So:
>
> management -> qemu -> virtio-blk -> guest driver -> kernel fs
> resolver -> daemon -> apps
>
> Yuck.
Yeah, in Windows, I'm pretty sure the freeze API is a userspace
concept. Various apps can hook into it to serialize their state.
At the risk of stealing Mike's thunder, we've actually been working on a
simple guest agent exactly for this type of task. Mike's planning an
RFC for later this week, but for those who are interested the repo is at
http://repo.or.cz/w/qemu/mdroth.git
Regards,
Anthony Liguori
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 12:48 ` Dor Laor
2010-10-19 12:55 ` Avi Kivity
@ 2010-10-19 13:22 ` Anthony Liguori
2010-10-19 13:27 ` Avi Kivity
2010-10-19 13:28 ` Anthony Liguori
2 siblings, 1 reply; 19+ messages in thread
From: Anthony Liguori @ 2010-10-19 13:22 UTC (permalink / raw)
To: dlaor
Cc: chrisw, kvm, Juan Quintela, qemu-devel, Chris Wright, Ayal Baron,
Venkateswararao Jujjuri (JV)
On 10/19/2010 07:48 AM, Dor Laor wrote:
> - Live snapshots
> - We were asked to add this feature for external qcow2
> images. Would a simple approach of fsync + tracking each requested
> backing file (it can be per vDisk) + re-opening the new image
> be accepted?
I had assumed that this would involve:
qemu -hda windows.img
(qemu) snapshot ide0-disk0 snap0.img
1) create snap0.img internally by doing the equivalent of `qemu-img
create -f qcow2 -b windows.img snap0.img'
2) bdrv_flush('ide0-disk0')
3) bdrv_open(snap0.img)
4) bdrv_close(windows.img)
5) rename('windows.img', 'windows.img.tmp')
6) rename('snap0.img', 'windows.img')
7) rename('windows.img.tmp', 'snap0.img')
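Done by hand outside qemu, the file-level part of that sequence would
look roughly like the following standalone sketch (steps 2-4, the
flush and reopen, happen inside the running qemu process):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* step 1: create the new COW leaf backed by the current image */
    if (system("qemu-img create -f qcow2 -b windows.img snap0.img"))
        return 1;

    /* steps 5-7: swap names so snap0.img ends up naming the read-only
     * snapshot and windows.img the live leaf qemu keeps writing to */
    if (rename("windows.img", "windows.img.tmp") ||
        rename("snap0.img", "windows.img") ||
        rename("windows.img.tmp", "snap0.img")) {
        perror("rename");
        return 1;
    }
    return 0;
}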
Regards,
Anthony Liguori
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 13:22 ` Anthony Liguori
@ 2010-10-19 13:27 ` Avi Kivity
2010-10-19 13:33 ` Anthony Liguori
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2010-10-19 13:27 UTC (permalink / raw)
To: Anthony Liguori
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Ayal Baron, Venkateswararao Jujjuri (JV)
On 10/19/2010 03:22 PM, Anthony Liguori wrote:
>
> I had assumed that this would involve:
>
> qemu -hda windows.img
>
> (qemu) snapshot ide0-disk0 snap0.img
>
> 1) create snap0.img internally by doing the equivalent of `qemu-img
> create -f qcow2 -b windows.img snap0.img'
> 2) bdrv_flush('ide0-disk0')
> 3) bdrv_open(snap0.img)
> 4) bdrv_close(windows.img)
> 5) rename('windows.img', 'windows.img.tmp')
> 6) rename('snap0.img', 'windows.img')
> 7) rename('windows.img.tmp', 'snap0.img')
>
Looks reasonable.
It would be interesting to look at this as a use case for the threading
work. We should eventually be able to create a snapshot without
stalling vcpus (stalling I/O is of course allowed).
--
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 12:48 ` Dor Laor
2010-10-19 12:55 ` Avi Kivity
2010-10-19 13:22 ` Anthony Liguori
@ 2010-10-19 13:28 ` Anthony Liguori
2 siblings, 0 replies; 19+ messages in thread
From: Anthony Liguori @ 2010-10-19 13:28 UTC (permalink / raw)
To: dlaor
Cc: chrisw, kvm, Juan Quintela, qemu-devel, Chris Wright, Ayal Baron,
Alon Levy, Venkateswararao Jujjuri (JV)
On 10/19/2010 07:48 AM, Dor Laor wrote:
> On 10/19/2010 04:11 AM, Chris Wright wrote:
>> * Juan Quintela (quintela@redhat.com) wrote:
>>>
>>> Please send in any agenda items you are interested in covering.
- usb-ccid (aka external device modules)
We probably won't get to it on today's call, but we should try to queue
this topic up for discussion. We have a similar situation with vtpm (an
existing device model that wants to integrate with QEMU). My position
so far has been that we should avoid external device models, because of
the difficulty of integrating QEMU features with them. However, I'd
like to hear opinions from a wider audience.
Regards,
Anthony Liguori
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 13:27 ` Avi Kivity
@ 2010-10-19 13:33 ` Anthony Liguori
2010-10-19 13:38 ` Stefan Hajnoczi
0 siblings, 1 reply; 19+ messages in thread
From: Anthony Liguori @ 2010-10-19 13:33 UTC (permalink / raw)
To: Avi Kivity
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Ayal Baron, Venkateswararao Jujjuri (JV)
On 10/19/2010 08:27 AM, Avi Kivity wrote:
> Looks reasonable.
>
> It would be interesting to look at this as a use case for the threading
> work. We should eventually be able to create a snapshot without
> stalling vcpus (stalling I/O is of course allowed).
If we had another block-level command, like bdrv_aio_freeze(), that
queued all pending requests until the given callback completed, it would
be very easy to do this entirely asynchronously. For instance:
bdrv_aio_freeze(create_snapshot)
create_snapshot():
bdrv_aio_flush(done_flush)
done_flush():
bdrv_open(...)
bdrv_close(...)
...
Of course, closing a device while it's being frozen is probably a recipe
for disaster but you get the idea :-)
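To make the queuing semantics concrete, a toy model outside qemu
(hypothetical and single-threaded, just to show requests deferring
while frozen and replaying on thaw):

#include <stdbool.h>
#include <stdio.h>

#define MAX_PENDING 16

static bool frozen;
static int pending[MAX_PENDING];
static int npending;

static void do_io(int sector)
{
    printf("I/O to sector %d\n", sector);
}

static void submit(int sector)
{
    if (frozen && npending < MAX_PENDING)
        pending[npending++] = sector;   /* defer until thaw */
    else
        do_io(sector);
}

static void thaw(void)
{
    frozen = false;
    for (int i = 0; i < npending; i++)
        do_io(pending[i]);              /* replay deferred requests */
    npending = 0;
}

int main(void)
{
    submit(1);          /* runs immediately */
    frozen = true;      /* freeze: start queuing new requests */
    submit(2);          /* deferred */
    /* ... the snapshot would be taken here, nothing in flight ... */
    thaw();             /* replays request 2 */
    return 0;
}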
Regards,
Anthony Liguori
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 13:33 ` Anthony Liguori
@ 2010-10-19 13:38 ` Stefan Hajnoczi
2010-10-19 13:55 ` Avi Kivity
0 siblings, 1 reply; 19+ messages in thread
From: Stefan Hajnoczi @ 2010-10-19 13:38 UTC (permalink / raw)
To: Anthony Liguori
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Ayal Baron, Avi Kivity, Venkateswararao Jujjuri (JV)
On Tue, Oct 19, 2010 at 2:33 PM, Anthony Liguori <anthony@codemonkey.ws> wrote:
> If we had another block-level command, like bdrv_aio_freeze(), that queued
> all pending requests until the given callback completed, it would be very
> easy to do this entirely asynchronously. For instance:
>
> bdrv_aio_freeze(create_snapshot)
>
> create_snapshot():
> bdrv_aio_flush(done_flush)
>
> done_flush():
> bdrv_open(...)
> bdrv_close(...)
> ...
>
> Of course, closing a device while it's being frozen is probably a recipe for
> disaster but you get the idea :-)
bdrv_aio_freeze() or any mechanism to deal with pending requests in
the generic block code would be a good step for future "live" support
of other operations like truncate.
Stefan
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 13:38 ` Stefan Hajnoczi
@ 2010-10-19 13:55 ` Avi Kivity
0 siblings, 0 replies; 19+ messages in thread
From: Avi Kivity @ 2010-10-19 13:55 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Ayal Baron, Venkateswararao Jujjuri (JV)
On 10/19/2010 03:38 PM, Stefan Hajnoczi wrote:
> bdrv_aio_freeze() or any mechanism to deal with pending requests in
> the generic block code would be a good step for future "live" support
> of other operations like truncate.
+ logical disk grow, etc.
--
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
[not found] <314565543.45891287507100965.JavaMail.root@zmail07.collab.prod.int.phx2.redhat.com>
@ 2010-10-19 16:54 ` Ayal Baron
2010-10-19 17:09 ` Anthony Liguori
0 siblings, 1 reply; 19+ messages in thread
From: Ayal Baron @ 2010-10-19 16:54 UTC (permalink / raw)
To: Anthony Liguori
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Venkateswararao Jujjuri (JV)
----- "Anthony Liguori" <anthony@codemonkey.ws> wrote:
> I had assumed that this would involve:
>
> qemu -hda windows.img
>
> (qemu) snapshot ide0-disk0 snap0.img
>
> 1) create snap0.img internally by doing the equivalent of `qemu-img
> create -f qcow2 -b windows.img snap0.img'
> 2) bdrv_flush('ide0-disk0')
> 3) bdrv_open(snap0.img)
> 4) bdrv_close(windows.img)
> 5) rename('windows.img', 'windows.img.tmp')
> 6) rename('snap0.img', 'windows.img')
> 7) rename('windows.img.tmp', 'snap0.img')
All the rename logic assumes files; it needs to take devices (namely
LVs) into account as well.
Also, just to make sure, this should support multiple images
(concurrent snapshots of all of them or a subset).
Otherwise looks good.
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 16:54 ` Ayal Baron
@ 2010-10-19 17:09 ` Anthony Liguori
2010-10-20 9:18 ` Kevin Wolf
0 siblings, 1 reply; 19+ messages in thread
From: Anthony Liguori @ 2010-10-19 17:09 UTC (permalink / raw)
To: Ayal Baron
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Venkateswararao Jujjuri (JV)
On 10/19/2010 11:54 AM, Ayal Baron wrote:
> ----- "Anthony Liguori"<anthony@codemonkey.ws> wrote:
>
>
>> On 10/19/2010 07:48 AM, Dor Laor wrote:
>>
>>> On 10/19/2010 04:11 AM, Chris Wright wrote:
>>>
>>>> * Juan Quintela (quintela@redhat.com) wrote:
>>>>
>>>>> Please send in any agenda items you are interested in covering.
>>>>>
>>>> - 0.13.X -stable handoff
>>>> - 0.14 planning
>>>> - threadlet work
>>>> - virtfs proposals
>>>>
>>>>
>>> - Live snapshots
>>> - We were asked to add this feature for external qcow2
>>> images. Will simple approach of fsync + tracking each requested
>>> backing file (it can be per vDisk) and re-open the new image
>>>
>> would
>>
>>> be accepted?
>>>
>> I had assumed that this would involve:
>>
>> qemu -hda windows.img
>>
>> (qemu) snapshot ide0-disk0 snap0.img
>>
>> 1) create snap0.img internally by doing the equivalent of `qemu-img
>> create -f qcow2 -b windows.img snap0.img'
>> 2) bdrv_flush('ide0-disk0')
>> 3) bdrv_open(snap0.img)
>> 4) bdrv_close(windows.img)
>> 5) rename('windows.img', 'windows.img.tmp')
>> 6) rename('snap0.img', 'windows.img')
>> 7) rename('windows.img.tmp', 'snap0.img')
>>
> All the rename logic assumes files; it needs to take devices (namely
> LVs) into account as well.
>
Sure, just s/rename/lvrename/g.
The renaming step can be optional and a management tool can take care
of that. It's really just there for convenience, since the user
expectation is that when you give a snapshot a name, that name refers
to the snapshot, not to the new in-use image.
> Also, just to make sure, this should support multiple images
> (concurrent snapshots of all of them or a subset).
>
Yeah, concurrent is a little trickier. A simple solution is for a
management tool to just do a stop + multiple snapshots + cont. That's
equivalent to what we'd do without aio, which is probably how we'd do
the first implementation anyway.
But in the long term, I think the most elegant solution would be to
expose the freeze API via QMP and let a management tool freeze multiple
devices, then start taking snapshots, then unfreeze them when all
snapshots are complete.
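With made-up command names (none of these commands exist today), the
QMP flow for two disks might look like:

{ "execute": "blockdev-freeze", "arguments": { "device": "ide0-disk0" } }
{ "execute": "blockdev-freeze", "arguments": { "device": "ide0-disk1" } }
{ "execute": "blockdev-snapshot",
  "arguments": { "device": "ide0-disk0", "snapshot-file": "snap0.img" } }
{ "execute": "blockdev-snapshot",
  "arguments": { "device": "ide0-disk1", "snapshot-file": "snap1.img" } }
{ "execute": "blockdev-thaw", "arguments": { "device": "ide0-disk0" } }
{ "execute": "blockdev-thaw", "arguments": { "device": "ide0-disk1" } }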
Regards,
Anthony Liguori
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
[not found] <512838278.79671287521779658.JavaMail.root@zmail07.collab.prod.int.phx2.redhat.com>
@ 2010-10-19 20:57 ` Ayal Baron
2010-10-19 21:19 ` Anthony Liguori
0 siblings, 1 reply; 19+ messages in thread
From: Ayal Baron @ 2010-10-19 20:57 UTC (permalink / raw)
To: Anthony Liguori
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Venkateswararao Jujjuri (JV)
----- "Anthony Liguori" <anthony@codemonkey.ws> wrote:
> Sure, just s/rename/lvrename/g.
No can do. In our setup, LVM is running in a clustered env in a
single-writer/multiple-readers configuration. A VM may be running on a
reader, which is not allowed to lvrename (that would corrupt the entire
VG).
>
> The renaming step can be optional and a management tool can take care
> of that. It's really just there for convenience, since the user
> expectation is that when you give a snapshot a name, that name refers
> to the snapshot, not to the new in-use image.
So keeping it optional is good.
>
> > Also, just to make sure, this should support multiple images
> > (concurrent snapshots of all of them or a subset).
>
> Yeah, concurrent is a little trickier. A simple solution is for a
> management tool to just do a stop + multiple snapshots + cont. That's
> equivalent to what we'd do without aio, which is probably how we'd do
> the first implementation anyway.
>
> But in the long term, I think the most elegant solution would be to
> expose the freeze API via QMP and let a management tool freeze
> multiple devices, then start taking snapshots, then unfreeze them when
> all snapshots are complete.
qemu should call the freeze as part of the process (for all of the
relevant devices), then take the snapshots, then thaw.
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 20:57 ` [Qemu-devel] Re: KVM call agenda for Oct 19 Ayal Baron
@ 2010-10-19 21:19 ` Anthony Liguori
0 siblings, 0 replies; 19+ messages in thread
From: Anthony Liguori @ 2010-10-19 21:19 UTC (permalink / raw)
To: Ayal Baron
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Venkateswararao Jujjuri (JV)
On 10/19/2010 03:57 PM, Ayal Baron wrote:
> qemu should call the freeze as part of the process (for all of the
> relevant devices), then take the snapshots, then thaw.
>
Yeah, I'm not opposed to us providing simpler interfaces in addition
to, or in lieu of, lower-level interfaces.
Regards,
Anthony Liguori
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-19 17:09 ` Anthony Liguori
@ 2010-10-20 9:18 ` Kevin Wolf
2010-10-20 9:41 ` Ayal Baron
2010-10-20 13:05 ` Anthony Liguori
0 siblings, 2 replies; 19+ messages in thread
From: Kevin Wolf @ 2010-10-20 9:18 UTC (permalink / raw)
To: Anthony Liguori
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Ayal Baron, Venkateswararao Jujjuri (JV)
On 19.10.2010 19:09, Anthony Liguori wrote:
>> All the rename logic assumes files; it needs to take devices (namely
>> LVs) into account as well.
>
> Sure, just s/rename/lvrename/g.
That would mean that you need to have both the backing file and the new
COW image on LVs.
> The renaming step can be optional and a management tool can take care
> of that. It's really just there for convenience, since the user
> expectation is that when you give a snapshot a name, that name refers
> to the snapshot, not to the new in-use image.
I think that depends on the terminology you use.
If you call it doing a snapshot, then probably people expect that the
snapshot is a new file and they continue to work on the same file (and
they may not understand that removing the snapshot destroys the "main"
image).
If you call it something like creating a new branch, they will expect
that the old file stays as it is and they create something new on top of
that.
So maybe we shouldn't start doing renames (which we cannot do for
anything but files anyway, consider not only LVs, but also nbd or http
backends), but rather think of a good name for the operation.
Kevin
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-20 9:18 ` Kevin Wolf
@ 2010-10-20 9:41 ` Ayal Baron
2010-10-20 13:05 ` Anthony Liguori
1 sibling, 0 replies; 19+ messages in thread
From: Ayal Baron @ 2010-10-20 9:41 UTC (permalink / raw)
To: Kevin Wolf
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Venkateswararao Jujjuri (JV)
----- "Kevin Wolf" <kwolf@redhat.com> wrote:
> > Sure, just s/rename/lvrename/g.
>
> That would mean that you need to have both the backing file and the
> new COW image on LVs.
That is indeed the way we work (LVs all the way), and you are correct
that qemu should not assume this; but as Anthony said, the rename bit
should be optional (and we would opt to go without it), if it exists at
all.
* Re: [Qemu-devel] Re: KVM call agenda for Oct 19
2010-10-20 9:18 ` Kevin Wolf
2010-10-20 9:41 ` Ayal Baron
@ 2010-10-20 13:05 ` Anthony Liguori
1 sibling, 0 replies; 19+ messages in thread
From: Anthony Liguori @ 2010-10-20 13:05 UTC (permalink / raw)
To: Kevin Wolf
Cc: chrisw, kvm, Juan Quintela, dlaor, qemu-devel, Chris Wright,
Ayal Baron, Venkateswararao Jujjuri (JV)
On 10/20/2010 04:18 AM, Kevin Wolf wrote:
>> Sure, just s/rename/lvrename/g.
>>
> That would mean that you need to have both the backing file and the
> new COW image on LVs.
>
Yeah, I guess there are two options. You could force the user to create
the new leaf image, or you could make the command take a blockdev spec
excluding the backing_file and automatically insert the backing_file
attribute into the spec before creating the bs.
> I think that depends on the terminology you use.
>
> If you call it doing a snapshot, then probably people expect that the
> snapshot is a new file and they continue to work on the same file (and
> they may not understand that removing the snapshot destroys the "main"
> image).
>
> If you call it something like creating a new branch, they will expect
> that the old file stays as it is and they create something new on top of
> that.
>
> So maybe we shouldn't start doing renames (which we cannot do for
> anything but files anyway, consider not only LVs, but also nbd or http
> backends), but rather think of a good name for the operation.
>
Yeah, that's a reasonable point.
Regards,
Anthony Liguori