From: Anthony Liguori
Subject: Re: [Qemu-devel] Re: KVM call agenda for Oct 19
Date: Tue, 19 Oct 2010 12:09:36 -0500
Message-ID: <4CBDD0D0.6050101@codemonkey.ws>
In-Reply-To: <394180839.46171287507268868.JavaMail.root@zmail07.collab.prod.int.phx2.redhat.com>
To: Ayal Baron
Cc: Chris Wright, Juan Quintela, chrisw@redhat.com, "Venkateswararao Jujjuri (JV)", qemu-devel@nongnu.org, kvm@vger.kernel.org, dlaor@redhat.com

On 10/19/2010 11:54 AM, Ayal Baron wrote:
> ----- "Anthony Liguori" wrote:
>
>> On 10/19/2010 07:48 AM, Dor Laor wrote:
>>
>>> On 10/19/2010 04:11 AM, Chris Wright wrote:
>>>
>>>> * Juan Quintela (quintela@redhat.com) wrote:
>>>>
>>>>> Please send in any agenda items you are interested in covering.
>>>>>
>>>> - 0.13.X -stable handoff
>>>> - 0.14 planning
>>>> - threadlet work
>>>> - virtfs proposals
>>>>
>>> - Live snapshots
>>>   We were asked to add this feature for external qcow2 images.
>>>   Would a simple approach of fsync + tracking each requested
>>>   backing file (it can be per vDisk) and re-opening the new
>>>   image be accepted?
I had assumed that this would involve:

qemu -hda windows.img

(qemu) snapshot ide0-disk0 snap0.img

1) create snap0.img internally by doing the equivalent of
   `qemu-img create -f qcow2 -b windows.img snap0.img'
2) bdrv_flush('ide0-disk0')
3) bdrv_open(snap0.img)
4) bdrv_close(windows.img)
5) rename('windows.img', 'windows.img.tmp')
6) rename('snap0.img', 'windows.img')
7) rename('windows.img.tmp', 'snap0.img')

> All the rename logic assumes files; devices (namely LVs) need to be
> taken into account as well.

Sure, just s/rename/lvrename/g. The renaming step can be optional, and a
management tool can take care of it. It's really there for convenience:
the user expectation is that when you give a name for a snapshot, the
snapshot ends up under that name, not the new in-use image.

> Also, just to make sure, this should support multiple images
> (concurrent snapshot of all of them or a subset).

Yeah, concurrent is a little trickier. The simple solution is for a
management tool to just do a stop + multiple snapshots + cont. That's
equivalent to what we'd do if we didn't do it asynchronously, which is
probably how we'd do the first implementation.

But in the long term, I think the most elegant solution would be to
expose the freeze API via QMP and let a management tool freeze multiple
devices, then start taking snapshots, then unfreeze them all once every
snapshot is complete.

Regards,

Anthony Liguori

> Otherwise looks good.
>
>> Regards,
>>
>> Anthony Liguori
>>
>>> - Integration with FS freeze for consistent guest app snapshots
>>>   Many apps do not sync their RAM state to disk correctly or
>>>   frequently enough. Physical-world backup software calls fs freeze
>>>   on xfs, and VSS on Windows, to make the backup consistent.
>>>   In order to integrate this with live snapshots we need a guest
>>>   agent to trigger the guest fs freeze.
>>>   We can either have qemu communicate with the agent directly
>>>   through virtio-serial, or have a mgmt daemon use virtio-serial to
>>>   communicate with the guest in addition to QMP messages about the
>>>   live snapshot state.
>>>   Preferences? The first solution complicates qemu while the second
>>>   complicates mgmt.
>>> --
>>> To unsubscribe from this list: send the line "unsubscribe kvm" in
>>> the body of a message to majordomo@vger.kernel.org
>>> More majordomo info at http://vger.kernel.org/majordomo-info.html
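[Editor's note: to make the rename dance in steps 5-7 of Anthony's
procedure concrete, here is a minimal Python sketch. Plain throwaway
files stand in for the disk images, and swap_names is an illustrative
helper, not a QEMU function.]

```python
import os
import tempfile

def swap_names(active, snapshot):
    # Steps 5-7 of the procedure: three renames that leave the new
    # snapshot-backed file under the image's original name, and the
    # old base image under the snapshot name.
    tmp = active + ".tmp"
    os.rename(active, tmp)       # 5) rename('windows.img', 'windows.img.tmp')
    os.rename(snapshot, active)  # 6) rename('snap0.img', 'windows.img')
    os.rename(tmp, snapshot)     # 7) rename('windows.img.tmp', 'snap0.img')

# Throwaway files stand in for the images:
d = tempfile.mkdtemp()
base = os.path.join(d, "windows.img")
snap = os.path.join(d, "snap0.img")
with open(base, "w") as f:
    f.write("base")
with open(snap, "w") as f:
    f.write("overlay")
swap_names(base, snap)
```

After the swap, 'windows.img' holds the overlay's contents and
'snap0.img' the base's, matching the user expectation that the snapshot
name refers to the snapshot, not the new in-use image.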
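[Editor's note: the "freeze all, then snapshot, then unfreeze all"
ordering Anthony proposes for the concurrent case can be sketched
generically. snapshot_all and its callbacks are hypothetical helpers -
no such QMP freeze API existed at the time - so this only illustrates
the ordering constraint that makes the snapshot set consistent.]

```python
def snapshot_all(devices, freeze, take_snapshot, unfreeze):
    # Freeze every device before taking any snapshot; unfreeze only
    # once all snapshots are complete, so the snapshots are mutually
    # consistent. Devices are thawed even if a snapshot fails.
    frozen = []
    try:
        for dev in devices:
            freeze(dev)
            frozen.append(dev)
        return {dev: take_snapshot(dev) for dev in devices}
    finally:
        for dev in reversed(frozen):
            unfreeze(dev)

# Record the call order to show the interleaving:
calls = []
snapshot_all(
    ["ide0-disk0", "ide0-disk1"],
    freeze=lambda d: calls.append(("freeze", d)),
    take_snapshot=lambda d: calls.append(("snap", d)),
    unfreeze=lambda d: calls.append(("thaw", d)),
)
```

The stop + multiple snapshots + cont fallback is the same pattern with
a whole-VM stop standing in for the per-device freezes.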