From: Avi Kivity
Date: Wed, 10 Mar 2010 19:30:58 +0200
Subject: Re: [Qemu-devel] [PATCH] Inter-VM shared memory PCI device
To: Anthony Liguori
Cc: Cam Macdonell, Paul Brook, kvm@vger.kernel.org, qemu-devel@nongnu.org
Message-ID: <4B97D752.3080700@redhat.com>
In-Reply-To: <4B97D349.1030105@codemonkey.ws>
References: <1267833161-25267-1-git-send-email-cam@cs.ualberta.ca>
 <201003072254.00040.paul@codesourcery.com> <4B94C8CD.2030808@redhat.com>
 <201003081303.45179.paul@codesourcery.com> <4B94F89B.3060504@redhat.com>
 <4B96C15A.2040600@codemonkey.ws> <4B97659E.2090603@redhat.com>
 <4B97D349.1030105@codemonkey.ws>

On 03/10/2010 07:13 PM, Anthony Liguori wrote:
> On 03/10/2010 03:25 AM, Avi Kivity wrote:
>> On 03/09/2010 11:44 PM, Anthony Liguori wrote:
>>>> Ah yes.  For cross tcg environments you can map the memory using
>>>> mmio callbacks instead of directly, and issue the appropriate
>>>> barriers there.
>>>
>>> Not good enough unless you want to severely restrict the use of
>>> shared memory within the guest.
>>>
>>> For instance, it's going to be useful to assume that your atomic
>>> instructions remain atomic.  Crossing architecture boundaries here
>>> makes these assumptions invalid.  A barrier is not enough.
>>
>> You could make the mmio callbacks flow to the shared memory server
>> over the unix-domain socket, which would then serialize them.  Still
>> need to keep RMWs as single operations.  When the host supports it,
>> implement the operation locally (you can't render cmpxchg16b on
>> i386, for example).
>
> But now you have a requirement that the shmem server runs in
> lock-step with the guest VCPU, which has to happen for every single
> word of data transferred.

Alternative implementation: expose a futex in a shared memory object
and use that to serialize access.  Now all accesses happen from vcpu
context, and as long as there is no contention they should be fast, at
least relative to tcg.

> You're much better off using a bulk-data transfer API that relaxes
> coherency requirements.  IOW, shared memory doesn't make sense for
> TCG :-)

Rather, tcg doesn't make sense for shared memory smp.  But we knew
that already.

-- 
error compiling committee.c: too many arguments to function
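For the idea quoted above of making the mmio callbacks flow to the shared
memory server over the unix-domain socket, a minimal sketch of what such
forwarding could look like.  The message layout and helper names are
assumptions for illustration only, not the ivshmem patch's actual protocol:

/* Hedged sketch: every trapped access to the shared region is sent to
 * the server over its unix-domain socket and applied there, which
 * serializes all guests' accesses.  Struct layout and names are
 * invented for illustration. */
#include <stdint.h>
#include <unistd.h>

enum shm_op { SHM_READ, SHM_WRITE, SHM_CMPXCHG };

struct shm_req {
    uint8_t  op;        /* enum shm_op */
    uint8_t  size;      /* access width in bytes: 1, 2, 4 or 8 */
    uint64_t offset;    /* offset into the shared region */
    uint64_t data;      /* value to write, or expected value for cmpxchg */
    uint64_t data2;     /* replacement value for cmpxchg */
};

struct shm_resp {
    uint64_t value;     /* read result, or previous value for cmpxchg */
};

/* Called from the guest-side mmio callback: block until the server has
 * applied the access, so ordering is simply the socket's ordering. */
static int shm_forward(int sock, const struct shm_req *req, uint64_t *out)
{
    struct shm_resp resp;

    if (write(sock, req, sizeof(*req)) != (ssize_t)sizeof(*req))
        return -1;
    if (read(sock, &resp, sizeof(resp)) != (ssize_t)sizeof(resp))
        return -1;
    *out = resp.value;
    return 0;
}

Keeping read-modify-write operations such as cmpxchg in a single request is
what preserves their atomicity when the host cannot execute them natively;
it is also why every word of data ends up in lock-step with the vcpu, which
is the objection raised above.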
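The alternative mentioned above, a futex placed inside the shared memory
object itself, might look roughly like the following.  This is a sketch
under assumptions not in the thread (the object name, lock layout, and use
of GCC __sync builtins are illustrative); it is the classic three-state
futex mutex, with the lock word living in the shared region so every peer
contends on the same word:

/* Illustrative only: the first int of the shared object is a futex-based
 * lock; the rest is data.  Names and layout are assumptions. */
#include <fcntl.h>
#include <linux/futex.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

static long futex(int *uaddr, int op, int val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

/* Lock word states: 0 = free, 1 = held, 2 = held with waiters. */
static void shm_lock(int *f)
{
    int c = __sync_val_compare_and_swap(f, 0, 1);
    if (c != 0) {
        if (c != 2)
            c = __sync_lock_test_and_set(f, 2);   /* atomic exchange */
        while (c != 0) {
            futex(f, FUTEX_WAIT, 2);              /* sleep while contended */
            c = __sync_lock_test_and_set(f, 2);
        }
    }
}

static void shm_unlock(int *f)
{
    if (__sync_fetch_and_sub(f, 1) != 1) {        /* waiters present */
        *f = 0;
        futex(f, FUTEX_WAKE, 1);
    }
}

/* Each peer maps the same object; an uncontended lock/unlock never leaves
 * userspace, so accesses stay in vcpu context. */
static int *shm_map_lock(const char *name, size_t size)
{
    int fd = shm_open(name, O_RDWR, 0600);
    void *p;

    if (fd < 0)
        return NULL;
    p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? NULL : (int *)p;
}

Only contended acquisitions drop into the kernel via FUTEX_WAIT, which is
why this should be fast relative to tcg as long as contention stays low.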