Message-ID: <4BE99E32.3020507@redhat.com>
Date: Tue, 11 May 2010 21:13:06 +0300
From: Avi Kivity
Subject: [Qemu-devel] Re: [PATCH v5 4/5] Inter-VM shared memory PCI device
In-Reply-To: <4BE98E4A.3010708@codemonkey.ws>
To: Anthony Liguori
Cc: Cam Macdonell, qemu-devel@nongnu.org, kvm@vger.kernel.org

On 05/11/2010 08:05 PM, Anthony Liguori wrote:
> On 05/11/2010 11:39 AM, Cam Macdonell wrote:
>>
>> Most of the people I hear from who are using my patch are using a peer
>> model to share data between applications (simulations, JVMs, etc.),
>> but guest-to-host applications work as well, of course.
>>
>> I think "transparent migration" can be achieved by making the
>> connected/disconnected state transparent to the application.
>>
>> When using the shared memory server, the server has to be set up on the
>> new host anyway, and copying the memory region could be part of that as
>> well if the application needs the contents preserved.  I don't think it
>> has to be handled by the savevm/loadvm operations.  There's little
>> difference between naming one VM the master and letting the shared
>> memory server act as the master.
>
> Except that to make it work with the shared memory server, you need
> the server to participate in the live migration protocol, which is
> something I'd prefer to avoid as it introduces additional downtime.

We can tunnel its migration data through qemu.  Of course, gathering its
dirty bitmap will be interesting.

DSM may be the way to go here (we can even live migrate qemu through DSM:
share the guest address space and immediately start running on the
destination node; the guest will fault its memory over to the
destination).  An advantage is that the cpu load is transferred
immediately.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.
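[Editorial note: the paragraph above only gestures at the "fault memory over
to the destination" idea, so here is a minimal user-space sketch of that
mechanism.  It is not the ivshmem patch and not qemu's migration code;
fetch_page_from_source() is a made-up placeholder for a fetch over the
migration socket, and the zero-filling stub exists only so the example is
self-contained.  The point it illustrates: map guest RAM with no access
rights on the destination, start running immediately, and service each
first-touch fault by pulling that page from the source.]

#include <signal.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

static uint8_t *guest_ram;       /* destination-side mapping of guest RAM */
static size_t   guest_ram_size;  /* size of the region, page aligned      */
static long     page_size;

/* Hypothetical helper standing in for a fetch over the migration socket.
 * Here it just zero-fills the page so the sketch links and runs. */
static void fetch_page_from_source(size_t offset, void *dst, size_t len)
{
    (void)offset;
    memset(dst, 0, len);
}

/* SIGSEGV is treated as a "remote page fault".  Calling mprotect() and
 * friends from a signal handler is a simplification, not production code. */
static void fault_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    uint8_t *addr = (uint8_t *)si->si_addr;

    if (addr < guest_ram || addr >= guest_ram + guest_ram_size)
        abort();                 /* fault outside guest RAM: a real bug */

    /* Round down to the start of the faulting page. */
    uint8_t *page = (uint8_t *)((uintptr_t)addr &
                                ~(uintptr_t)(page_size - 1));

    /* Make the page accessible, then fill it from the source node. */
    mprotect(page, page_size, PROT_READ | PROT_WRITE);
    fetch_page_from_source((size_t)(page - guest_ram), page, page_size);
}

int main(void)
{
    page_size = sysconf(_SC_PAGESIZE);
    guest_ram_size = 64UL * 1024 * 1024;

    /* Reserve guest RAM with no access rights: the first touch of every
     * page faults, and the handler pulls that page in on demand. */
    guest_ram = mmap(NULL, guest_ram_size, PROT_NONE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (guest_ram == MAP_FAILED)
        return 1;

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_flags = SA_SIGINFO;
    sa.sa_sigaction = fault_handler;
    sigaction(SIGSEGV, &sa, NULL);

    /* ... start running the guest here; its RAM streams over lazily, so
     * the vcpus (and their load) move to the destination right away ... */
    return 0;
}

[The trade-off the sketch makes visible: cpu load moves to the destination
at once, while the network cost of moving memory is paid page by page on
first access.]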