From: Anthony Liguori
Subject: Re: [PATCH v5 4/5] Inter-VM shared memory PCI device
Date: Tue, 11 May 2010 08:10:03 -0500
Message-ID: <4BE9572B.3010104@codemonkey.ws>
In-Reply-To: <4BE90E6D.7070007@redhat.com>
To: Avi Kivity
Cc: Cam Macdonell, kvm@vger.kernel.org, qemu-devel@nongnu.org

On 05/11/2010 02:59 AM, Avi Kivity wrote:
>> (Replying again to list)
>>
>> What data structure would you use?  For a lockless ring queue, you
>> can only support a single producer and a single consumer.  To
>> achieve bidirectional communication in virtio, we always use two
>> queues.
>
> You don't have to use a lockless ring queue.  You can use locks
> (spinlocks without interrupt support, full mutexes with interrupts)
> and any data structure you like.  Say a hash table + LRU for a
> shared cache.

Yeah, the mailslot enables this.  (A minimal sketch of what a lock
living inside the shared region could look like is appended at the end
of this mail.)

I think the question boils down to whether we can support transparent
peer connections and disconnections.  I think that's important in
order to support transparent live migration.

If you have two peers that are disconnected and then connect to each
other, there's simply no way to choose whose content gets preserved.
It's necessary to designate one peer as the master in order to break
the tie.

So this could simply involve an additional option to the shared memory
driver: role=master|peer.  If role=master, then whenever a new shared
memory segment is mapped, the contents of the BAR RAM are memcpy()'d
into the shared memory segment.  In either case, the contents of the
shared memory segment should be memcpy()'d back into the BAR RAM
whenever the shared memory segment is disconnected.  (A rough sketch
of this logic is also appended below.)

I believe role=master should be the default because I think a
master/slave relationship is going to be much more common than
peering.

>> If you're adding additional queues to support other levels of
>> communication, you can always use different areas of shared memory.
>
> You'll need O(n^2) shared memory areas (n = peer count), and it is a
> lot less flexible than real shared memory.  Consider using threading
> where the only communication among threads is a pipe (Erlang?)

I can't think of a use for multiple peers communicating via shared
memory in virtualization today, but I know of lots of master/slave
uses of shared memory.  I agree that it's useful to support from an
academic perspective; I just don't believe it's going to be the common
case.

Regards,

Anthony Liguori
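
The lock sketch I mentioned above, for concreteness: a spinlock word
stored inside the shared region itself, so any peer that maps the same
region can serialize access to the data that follows it.  This is
illustration only, not ivshmem or kernel code; I'm using an ordinary
POSIX shm file as a stand-in for the mapped ivshmem BAR, GCC's __sync
builtins for the atomics, and made-up names throughout.  Link with
-lrt on Linux.

/* shm_lock_sketch.c - illustration only, not ivshmem code.
 *
 * Maps a shared region and uses a spinlock word stored inside the
 * region itself, so any peer mapping the same region can serialize
 * access to the data that follows the lock.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

struct shared_hdr {
    volatile uint32_t lock;     /* 0 = free, 1 = held */
    uint32_t counter;           /* example shared data */
    char msg[128];              /* example shared data */
};

static void shared_lock(volatile uint32_t *l)
{
    while (__sync_lock_test_and_set(l, 1)) {
        ;                       /* spin until the holder releases it */
    }
}

static void shared_unlock(volatile uint32_t *l)
{
    __sync_lock_release(l);     /* store 0 with release semantics */
}

int main(void)
{
    /* Stand-in for the ivshmem BAR mapping: a plain POSIX shm file. */
    int fd = shm_open("/lock_sketch", O_CREAT | O_RDWR, 0600);
    struct shared_hdr *hdr;

    if (fd < 0 || ftruncate(fd, sizeof(*hdr)) < 0) {
        perror("shm_open/ftruncate");
        return 1;
    }
    hdr = mmap(NULL, sizeof(*hdr), PROT_READ | PROT_WRITE,
               MAP_SHARED, fd, 0);
    if (hdr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    shared_lock(&hdr->lock);
    hdr->counter++;
    snprintf(hdr->msg, sizeof(hdr->msg), "update %u",
             (unsigned)hdr->counter);
    shared_unlock(&hdr->lock);

    printf("%s\n", hdr->msg);
    munmap(hdr, sizeof(*hdr));
    close(fd);
    return 0;
}

The point is just that once both guests map the same region, the lock
word is as shared as the data it protects, so any structure (hash
table + LRU included) can live behind it; the usual caveat is that a
guest must not take such a lock from interrupt context.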
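
And the role=master|peer sketch: roughly how I picture the policy on
the device side.  Again, this is only a sketch; IVShmemState, the
bar_ram/shm fields, and the map/disconnect hooks below are invented
for illustration and are not the actual ivshmem code.

/* role_sketch.c - illustration of the proposed role=master|peer
 * policy.  All names here are hypothetical, not the ivshmem device.
 */
#include <stdio.h>
#include <string.h>

typedef enum {
    ROLE_MASTER,                /* proposed default */
    ROLE_PEER,
} IVShmemRole;

typedef struct {
    IVShmemRole role;
    void *bar_ram;              /* device-private backing for the BAR */
    void *shm;                  /* currently mapped shared segment */
    size_t size;
} IVShmemState;

/* Hypothetical hook: a shared memory segment was just (re)mapped,
 * e.g. after migration or a peer reconnect. */
static void segment_mapped(IVShmemState *s, void *shm)
{
    s->shm = shm;
    if (s->role == ROLE_MASTER) {
        /* The master's view wins: seed the new segment from BAR RAM. */
        memcpy(s->shm, s->bar_ram, s->size);
    }
    /* A peer simply adopts whatever the segment already contains. */
}

/* Hypothetical hook: the shared memory segment went away. */
static void segment_disconnected(IVShmemState *s)
{
    /* Regardless of role, preserve the current contents in BAR RAM so
     * the guest keeps a coherent view while disconnected. */
    memcpy(s->bar_ram, s->shm, s->size);
    s->shm = NULL;
}

int main(void)
{
    char bar[16] = "master data";
    char seg[16] = "stale contents";
    IVShmemState s = { ROLE_MASTER, bar, NULL, sizeof(bar) };

    segment_mapped(&s, seg);        /* master seeds the segment */
    printf("segment: %s\n", seg);   /* -> "master data" */

    segment_disconnected(&s);       /* segment copied back to the BAR */
    printf("bar:     %s\n", bar);   /* -> "master data" */
    return 0;
}

Only the master pushes its BAR contents into a freshly mapped segment,
so there is never a tie to break; both roles copy the segment back
into BAR RAM on disconnect, so the guest's view stays intact while it
is detached.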