From: Dor Laor <dlaor@redhat.com>
Date: Tue, 05 Jul 2011 16:39:06 +0300
Subject: Re: [Qemu-devel] KVM call agenda for June 28
To: Marcelo Tosatti
Cc: Kevin Wolf, Chris Wright, KVM devel mailing list, quintela@redhat.com,
    Stefan Hajnoczi, qemu-devel@nongnu.org, Avi Kivity, Jes Sorensen
Message-ID: <4E1313FA.1060905@redhat.com>
In-Reply-To: <20110705125858.GA21254@amt.cnet>

On 07/05/2011 03:58 PM, Marcelo Tosatti wrote:
> On Tue, Jul 05, 2011 at 01:40:08PM +0100, Stefan Hajnoczi wrote:
>> On Tue, Jul 5, 2011 at 9:01 AM, Dor Laor wrote:
>>> I tried to re-arrange all of the requirements and use cases using this
>>> wiki page: http://wiki.qemu.org/Features/LiveBlockMigration
>>>
>>> It would be best to agree upon the most interesting use cases (while
>>> we make sure we cover future ones) and agree to them.
>>> The next step is to set the interface for all the various verbs, since
>>> the implementation seems to be converging.
>>
>> Live block copy was supposed to support snapshot merge. I think the
>> current favored approach is to make the source image a backing file to
>> the destination image and essentially do image streaming.
>>
>> Using this mechanism for snapshot merge is tricky. The COW file
>> already uses the read-only snapshot base image. So now we cannot
>> trivially copy the COW file contents back into the snapshot base image
>> using live block copy.
>
> It never did. Live copy creates a new image where both the snapshot and
> "current" are copied to.
>
> This is similar to image streaming.

I'm not sure I see what's wrong with doing an in-place merge.

Let's suppose we have this COW chain:

    base <-- s1 <-- s2

Now a live snapshot is created over s2; s2 becomes read-only and s3 is
read-write:

    base <-- s1 <-- s2 <-- s3

Now we're done with s2 (post backup) and would like to merge s3 into s2.

With your approach we use a live copy of s3 into newSnap:

    base <-- s1 <-- s2 <-- s3
    base <-- s1 <-- newSnap

When it is over, s2 and s3 can be erased.
The downside is the I/O for copying s2's data and the temporary storage.
I guess temp storage is cheap, but the extra I/O is expensive.

My approach was to collapse s3 into s2 and erase s3 eventually:

    before: base <-- s1 <-- s2 <-- s3
    after:  base <-- s1 <-- s2

If we use live block copy with the mirror driver it should be safe, as
long as we preserve the ordering of new writes into s3 during the
execution. Even a failure in the middle won't cause harm, since
management will keep using s3 until it gets a success event.
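As a rough illustration of the difference between the two options, here
is a toy Python model (purely illustrative, not QEMU code; the cluster
maps and image names are made up):

# Each image is a dict mapping cluster index -> data.
# A chain is a list ordered base-first.

def read(chain, cluster):
    """Resolve a read through the backing chain, topmost image first."""
    for image in reversed(chain):
        if cluster in image:
            return image[cluster]
    return None  # unallocated in every image

def merge_into_new(base, s1, s2, s3):
    """Live-copy style: build newSnap holding everything from s2 and s3,
    so the chain becomes base <- s1 <- newSnap and s2/s3 can be erased.
    Copies s2's clusters too, hence the extra I/O and temporary space."""
    new_snap = dict(s2)   # copy s2's allocated clusters
    new_snap.update(s3)   # s3 is newer, so it overrides s2
    return [base, s1, new_snap]

def collapse_in_place(base, s1, s2, s3):
    """In-place merge: push s3's allocated clusters down into s2 and drop
    s3. Only s3's clusters move; in a live merge, new guest writes to s3
    would still have to be ordered against the copy of the same cluster."""
    s2.update(s3)
    return [base, s1, s2]

if __name__ == "__main__":
    base, s1 = {0: "b0", 1: "b1"}, {1: "s1-1"}
    s2, s3 = {2: "s2-2"}, {1: "s3-1"}
    chain = collapse_in_place(base, s1, dict(s2), dict(s3))
    assert read(chain, 1) == "s3-1" and read(chain, 0) == "b0"

The point of the sketch is only that collapse_in_place touches s3's
clusters alone, while merge_into_new also re-copies everything in s2.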
>
>> It seems like snapshot merge will require dedicated code that reads
>> the allocated clusters from the COW file and writes them back into the
>> base image.
>>
>> A very inefficient alternative would be to create a third image, the
>> "merge" image file, which has the COW file as its backing file:
>>
>>     snapshot (base) -> cow -> merge
>>
>> All data from snapshot and cow is copied into merge, and then snapshot
>> and cow can be deleted. But this approach results in full data copying
>> and uses potentially 3x space if cow is close to the size of snapshot.
>
> Management can set a higher limit on the size of data that is merged,
> and create a new base once exceeded. This avoids copying excessive
> amounts of data.
>
>> Any other ideas that reuse live block copy for snapshot merge?
>>
>> Stefan
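For completeness, the "dedicated merge code" Stefan describes could look
roughly like this in the same toy model, with the size cap Marcelo
suggests folded in (again illustrative only, not QEMU code; max_bytes is
a made-up parameter):

def commit_cow_into_base(base, cow, max_bytes=None):
    """Walk the COW image's allocated clusters and write them back into
    its base. max_bytes models a management-set cap on how much data is
    merged before a new base is created instead."""
    copied = 0
    for cluster, data in list(cow.items()):
        if max_bytes is not None and copied + len(data) > max_bytes:
            return False      # too much data: caller starts a new base
        base[cluster] = data  # write the cluster back into the base image
        copied += len(data)
    cow.clear()               # the COW image can now be discarded
    return True

base, cow = {0: "b0"}, {0: "c0", 1: "c1"}
assert commit_cow_into_base(base, cow) and base[1] == "c1" and not cow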