Message-ID: <4E89BDCE.2010502@codemonkey.ws>
Date: Mon, 03 Oct 2011 08:51:10 -0500
From: Anthony Liguori
References: <1316443309-23843-1-git-send-email-mdroth@linux.vnet.ibm.com> <4E88C7DB.9090105@linux.vnet.ibm.com> <20111002210802.GC8072@redhat.com> <4E89B0D4.3090203@us.ibm.com> <20111003133802.GD18920@redhat.com>
In-Reply-To: <20111003133802.GD18920@redhat.com>
Subject: Re: [Qemu-devel] [RFC] New Migration Protocol using Visitor Interface
To: "Michael S. Tsirkin"
Cc: aliguori@linux.vnet.ibm.com, Anthony Liguori, Stefan Berger, qemu-devel@nongnu.org, Michael Roth

On 10/03/2011 08:38 AM, Michael S. Tsirkin wrote:
> On Mon, Oct 03, 2011 at 07:55:48AM -0500, Anthony Liguori wrote:
>> On 10/02/2011 04:08 PM, Michael S. Tsirkin wrote:
>>> On Sun, Oct 02, 2011 at 04:21:47PM -0400, Stefan Berger wrote:
>>>>
>>>>> 4) Implement the BERVisitor and make this the default migration protocol.
>>>>>
>>>>> Most of the work will be in 1), though with the implementation in this
>>>>> series we should be able to do it incrementally.  I'm not sure if the
>>>>> best approach is doing the mechanical phase 1 conversion, then doing
>>>>> phase 2 sometime after 4), doing phase 1 + 2 as part of 1), or just
>>>>> doing VMState conversions, which gives basically the same capabilities
>>>>> as phase 1 + 2.
>>>>>
>>>>> Thoughts?
>>>>
>>>> Is anyone working on this?  If not, I may give it a shot (tomorrow++)
>>>> for at least some of the primitives... for enabling vNVRAM metadata,
>>>> of course.  I suppose indefinite-length encoding of constructed data
>>>> types won't be used; otherwise the visitor interface seems wrong for
>>>> parsing and skipping extra data towards the end of a structure.  If
>>>> version n wrote the stream and appended some of its version-n data,
>>>> and version m < n is now trying to read the struct, it needs to skip
>>>> the version [m+1, n] data fields... in that case the de-serialization
>>>> of the stream should probably be stream-driven rather than
>>>> structure-driven.
>>>>
>>>> Stefan
>>>
>>> Yes, I've been struggling with exactly that.
>>> Anthony, any thoughts?
>>
>> It just depends on how you write your visitor.  If you used sequences,
>> you'd probably do something like this:
>>
>> start_struct ->
>>    check for sequence tag, push starting offset and size onto stack
>>    increment offset to next tag
>>
>> type_int (et al) ->
>>    check for explicit type, parse data
>>    increment offset to next tag
>>
>> end_struct ->
>>    pop starting offset and size to temp variables
>>    set offset to starting offset + size
>>
>> This is roughly how the QMP input marshaller works, FWIW.
>>
>> Regards,
>>
>> Anthony Liguori
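
Just to make that less hand-wavy, here is a rough sketch of what such an
input visitor could look like.  The names (BERInputVisitor,
ber_start_struct(), and so on) and the one-byte tag/length handling are
made up for illustration; this is not the actual visitor API, and real
BER tag/length parsing is more involved:

/*
 * Illustrative only: a toy BER-style input visitor that tracks the
 * extent of each SEQUENCE so end_struct can skip fields appended by a
 * newer version.  Assumes one-byte tags and short-form (one-byte)
 * lengths to keep the sketch small.
 */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct BERInputVisitor {
    const uint8_t *buf;     /* encoded stream */
    size_t offset;          /* current read position */
    size_t end_stack[16];   /* end offset of each open SEQUENCE */
    int depth;
} BERInputVisitor;

/* start_struct: check for a SEQUENCE tag, push its end offset,
 * and advance to the first field. */
static void ber_start_struct(BERInputVisitor *v)
{
    assert(v->buf[v->offset] == 0x30);        /* SEQUENCE tag */
    size_t len = v->buf[v->offset + 1];       /* short-form length */
    v->offset += 2;                           /* skip tag + length */
    assert(v->depth < 16);
    v->end_stack[v->depth++] = v->offset + len;
}

/* type_int (et al): check for the expected primitive tag, parse the
 * contents, and advance to the next tag. */
static void ber_type_int(BERInputVisitor *v, int64_t *val)
{
    assert(v->buf[v->offset] == 0x02);        /* INTEGER tag */
    size_t len = v->buf[v->offset + 1];
    v->offset += 2;

    uint64_t x = (v->buf[v->offset] & 0x80) ? ~UINT64_C(0) : 0;
    for (size_t i = 0; i < len; i++) {
        x = (x << 8) | v->buf[v->offset + i]; /* big-endian contents */
    }
    *val = (int64_t)x;
    v->offset += len;
}

/* end_struct: pop the saved extent and jump straight to it.  Anything
 * between the current offset and that point is version [m+1, n] data
 * we don't know about, and it gets skipped for free. */
static void ber_end_struct(BERInputVisitor *v)
{
    assert(v->depth > 0);
    v->offset = v->end_stack[--v->depth];
}

The output side would be the mirror image: start_struct either reserves
space for the length and end_struct patches it in once the SEQUENCE is
complete, or it uses indefinite-length encoding and end_struct just
writes the end-of-contents marker.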

> One thing I worry about is enabling zero copy for
> large string types (e.g. memory migration).

Memory shouldn't be done through Visitors.  It should be handled as a
special case.

> So we need to be able to see a tag for memory page + address,
> read that from socket directly at the correct virtual address.
>
> Probably, we can avoid using visitors for memory, and hope
> everything else can stand an extra copy since it's small.
>
> But then, why do we worry about the size of
> encoded device state as Anthony seems to do?

There's a significant difference between the cost of something on the
wire and the cost of doing a memcpy.

The cost of the data on the wire is directly proportional to downtime.
So if we increase the size of the device state by a factor of 10, we
increase the minimum downtime by a factor of 10.

Of course, *if* the size of the device state is already negligible with
respect to the minimum downtime, then it doesn't matter.

This is easy to quantify, though.  For a normal migration session
today, what's the total size of the device state relative to the amount
of data the link can transfer within the minimum downtime (bandwidth
times downtime)?  If it's very small, then we can add names and not
worry about it.

Regards,

Anthony Liguori
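
P.S. To put a rough number on the proportionality point above (the
figures are made up purely to illustrate the arithmetic): if the
aggregate device state is ~100 KB and the link sustains 1 Gbit/s
(~125 MB/s), the device state alone accounts for about

  100 KB / 125 MB/s  ~=  0.8 ms

of the downtime window; a 10x larger encoding pushes that to ~8 ms,
which starts to be visible against a downtime target of a few tens of
milliseconds.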