From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <4BE98E4A.3010708@codemonkey.ws>
Date: Tue, 11 May 2010 12:05:14 -0500
From: Anthony Liguori
Subject: [Qemu-devel] Re: [PATCH v5 4/5] Inter-VM shared memory PCI device
To: Cam Macdonell
Cc: Avi Kivity, kvm@vger.kernel.org, qemu-devel@nongnu.org

On 05/11/2010 11:39 AM, Cam Macdonell wrote:
>
> Most of the people I hear from who are using my patch are using a peer
> model to share data between applications (simulations, JVMs, etc.),
> but guest-to-host applications work as well, of course.
>
> I think "transparent migration" can be achieved by making the
> connected/disconnected state transparent to the application.
>
> When using the shared memory server, the server has to be set up on
> the new host anyway, and copying the memory region could be part of
> that as well if the application needs the contents preserved.  I don't
> think it has to be handled by the savevm/loadvm operations.  There's
> little difference between naming one VM the master and letting the
> shared memory server act as the master.

Except that to make it work with the shared memory server, you need the
server to participate in the live migration protocol, which is something
I'd prefer to avoid as it introduces additional downtime.

Regards,

Anthony Liguori

> I think abstractions on top of shared memory could handle
> disconnection issues (sort of how TCP handles them for networks) if
> the application needs it.  Again, my opinion is to leave it to the
> application to decide what is necessary.
>
> Cam
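
[Editorial illustration, not part of Cam's patch or the thread: one way to
read the TCP analogy above is a generation counter at the front of the
shared region, which lets an application detect that the peer or server
restarted (e.g. on the migration target) and resynchronize instead of
trusting stale state.  The region name "/ivshmem-demo", the header layout,
and SHM_MAGIC below are all hypothetical; a real design would also want
atomic accesses to the generation field.]

    /* Minimal sketch: application-level disconnect detection over a
     * POSIX shared memory region.  Compile with -lrt on older glibc. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    struct shm_hdr {
        uint32_t magic;       /* marks a region this protocol owns      */
        uint32_t generation;  /* bumped by the server on each (re)start */
    };

    #define SHM_MAGIC 0x49564d53  /* arbitrary tag value */

    int main(void)
    {
        int fd = shm_open("/ivshmem-demo", O_RDWR, 0);
        if (fd < 0) { perror("shm_open"); return 1; }

        struct shm_hdr *hdr = mmap(NULL, sizeof(*hdr),
                                   PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
        if (hdr == MAP_FAILED) { perror("mmap"); return 1; }

        uint32_t last_gen = hdr->generation;
        for (;;) {
            if (hdr->magic != SHM_MAGIC) {
                fprintf(stderr, "region not initialized, waiting\n");
            } else if (hdr->generation != last_gen) {
                /* Server restarted: drop any cached view of the region
                 * and resynchronize from its current contents. */
                fprintf(stderr, "generation %u -> %u, resyncing\n",
                        last_gen, hdr->generation);
                last_gen = hdr->generation;
            }
            sleep(1);
        }
    }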