Date: Tue, 19 Oct 2010 16:57:25 -0400 (EDT)
From: Ayal Baron
Message-ID: <1642827860.79861287521845369.JavaMail.root@zmail07.collab.prod.int.phx2.redhat.com>
In-Reply-To: <512838278.79671287521779658.JavaMail.root@zmail07.collab.prod.int.phx2.redhat.com>
Subject: Re: [Qemu-devel] Re: KVM call agenda for Oct 19
To: Anthony Liguori
Cc: chrisw@redhat.com, kvm@vger.kernel.org, Juan Quintela, dlaor@redhat.com, qemu-devel@nongnu.org, Chris Wright, "Venkateswararao Jujjuri (JV)"

----- "Anthony Liguori" wrote:

> On 10/19/2010 11:54 AM, Ayal Baron wrote:
> > ----- "Anthony Liguori" wrote:
> >> On 10/19/2010 07:48 AM, Dor Laor wrote:
> >>> On 10/19/2010 04:11 AM, Chris Wright wrote:
> >>>> * Juan Quintela (quintela@redhat.com) wrote:
> >>>>> Please send in any agenda items you are interested in covering.
> >>>>
> >>>> - 0.13.X -stable handoff
> >>>> - 0.14 planning
> >>>> - threadlet work
> >>>> - virtfs proposals
> >>>>
> >>> - Live snapshots
> >>>   - We were asked to add this feature for external qcow2 images.
> >>>     Will a simple approach of fsync + tracking each requested
> >>>     backing file (it can be per vDisk) and re-opening the new
> >>>     image be accepted?
> >>
> >> I had assumed that this would involve:
> >>
> >> qemu -hda windows.img
> >>
> >> (qemu) snapshot ide0-disk0 snap0.img
> >>
> >> 1) create snap0.img internally by doing the equivalent of
> >>    `qemu-img create -f qcow2 -b windows.img snap0.img'
> >> 2) bdrv_flush('ide0-disk0')
> >> 3) bdrv_open(snap0.img)
> >> 4) bdrv_close(windows.img)
> >> 5) rename('windows.img', 'windows.img.tmp')
> >> 6) rename('snap0.img', 'windows.img')
> >> 7) rename('windows.img.tmp', 'snap0.img')
> >
> > All the rename logic assumes files; we need to take devices
> > (namely LVs) into account as well.
>
> Sure, just s/rename/lvrename/g.

No can do. In our setup, LVM is running in a clustered environment in a
single-writer, multiple-readers configuration. The VM may be running on
a reader, which is not allowed to lvrename (it would corrupt the entire
VG).

> The renaming step can be optional and a management tool can take care
> of that. It's really just there for convenience since the user
> expectation is that when you give a name for a snapshot, the snapshot
> is reflected in that name, not that the new in-use image has that
> name.

So keeping it optional is good.

> > Also, just to make sure, this should support multiple images
> > (concurrent snapshot of all of them or a subset).
>
> Yeah, concurrent is a little trickier. A simple solution is for a
> management tool to just do a stop + multiple snapshots + cont. It's
> equivalent to what we'd do if we don't do it via AIO, which is
> probably how we'd do the first implementation.
>
> But in the long term, I think the most elegant solution would be to
> expose the freeze API via QMP and let a management tool freeze
> multiple devices, then start taking snapshots, then unfreeze them
> when all snapshots are complete.
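The rename dance in steps 5-7 above can be sketched as follows. This is
a minimal illustration of the name swap only (QEMU keeps the images open
across the renames, so only the user-visible names change), assuming
plain files on one filesystem; the helper name and demo contents are
made up for the example, and as noted below the whole approach breaks
for block devices such as LVs:

```python
import os
import tempfile

def swap_names(active, snapshot):
    """Steps 5-7: after the new overlay is opened, swap names so the
    snapshot name the user gave refers to the frozen old image.
    Assumes plain files; does not work for block devices (LVs)."""
    tmp = active + ".tmp"
    os.rename(active, tmp)       # 5) windows.img     -> windows.img.tmp
    os.rename(snapshot, active)  # 6) snap0.img       -> windows.img
    os.rename(tmp, snapshot)     # 7) windows.img.tmp -> snap0.img

# Demo with stand-in file contents instead of real disk images.
d = tempfile.mkdtemp()
base = os.path.join(d, "windows.img")
snap = os.path.join(d, "snap0.img")
with open(base, "w") as f:
    f.write("old base")          # the original image, now the snapshot
with open(snap, "w") as f:
    f.write("new overlay")       # the freshly created qcow2 overlay

swap_names(base, snap)
```

After the swap, the path `windows.img` holds the new in-use overlay and
`snap0.img` holds the frozen original, matching the user expectation
Anthony describes.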
>
> Regards,
>
> Anthony Liguori

qemu should call the freeze as part of the process (for all of the
relevant devices), then take the snapshots, then thaw.

> > Otherwise looks good.
> >
> >> Regards,
> >>
> >> Anthony Liguori
> >>
> >>> - Integration with FS freeze for consistent guest app snapshots
> >>>   Many apps do not sync their RAM state to disk correctly or
> >>>   frequently enough. Physical-world backup software calls fs
> >>>   freeze on xfs and VSS on Windows to make the backup consistent.
> >>>   In order to integrate this with live snapshots we need a guest
> >>>   agent to trigger the guest fs freeze.
> >>>   We can either have qemu communicate with the agent directly
> >>>   through virtio-serial, or have a mgmt daemon use virtio-serial
> >>>   to communicate with the guest in addition to QMP messages about
> >>>   the live snapshot state.
> >>>   Preferences? The first solution complicates qemu while the
> >>>   second complicates mgmt.
> >>> --
> >>> To unsubscribe from this list: send the line "unsubscribe kvm" in
> >>> the body of a message to majordomo@vger.kernel.org
> >>> More majordomo info at http://vger.kernel.org/majordomo-info.html
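The freeze -> snapshot-all -> thaw ordering being proposed (whether
driven by qemu itself or by a mgmt daemon over virtio-serial) can be
sketched as follows. The `fs-freeze`/`fs-thaw` command names, the
`Recorder` transport stand-in, and the snapshot command format are all
hypothetical, not an existing API; the point is only the ordering and
the guarantee that the guest is always thawed:

```python
class Recorder:
    """Stand-in for the virtio-serial / QMP transport.
    Records each command sent so the ordering can be inspected."""
    def __init__(self, log, tag):
        self.log, self.tag = log, tag
    def send(self, cmd):
        self.log.append((self.tag, cmd))

def snapshot_all(agent, monitor, devices):
    # Freeze guest filesystems first so on-disk app state is consistent.
    agent.send("fs-freeze")
    try:
        for dev in devices:
            monitor.send("snapshot %s %s-snap.img" % (dev, dev))
    finally:
        # Always thaw, even if a snapshot command fails part-way.
        agent.send("fs-thaw")

log = []
snapshot_all(Recorder(log, "agent"), Recorder(log, "qmp"),
             ["ide0-disk0", "ide0-disk1"])
```

The `try`/`finally` captures the key safety property either design
needs: no snapshot failure may leave the guest frozen.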