From: Sasha Levin
Subject: Re: [PATCH] kvm tools: adds a PCI device that exports a host shared segment as a PCI BAR in the guest
Date: Thu, 25 Aug 2011 14:25:48 +0300
Message-ID: <1314271548.3692.22.camel@lappy>
References: <20110824222510.GC14835@dancer.ca.sandia.gov> <232C9ABA-F703-4AE5-83BC-774C715D4D8F@suse.de> <20110825044913.GA24996@dancer.ca.sandia.gov> <1314248794.32391.60.camel@jaguar>
To: Stefan Hajnoczi
Cc: Pekka Enberg, David Evensky, Alexander Graf, kvm@vger.kernel.org, cam@cs.ualberta.ca

On Thu, 2011-08-25 at 11:59 +0100, Stefan Hajnoczi wrote:
> On Thu, Aug 25, 2011 at 11:37 AM, Pekka Enberg wrote:
> > Hi Stefan,
> >
> > On Thu, Aug 25, 2011 at 1:31 PM, Stefan Hajnoczi wrote:
> >>> It's obviously not competing. One thing you might want to consider is
> >>> making the guest interface compatible with ivshmem. Is there any reason
> >>> we shouldn't do that? I don't consider that a requirement, just nice to
> >>> have.
> >>
> >> The point of implementing the same interface as ivshmem is that users
> >> don't need to rejig guests or applications in order to switch between
> >> hypervisors. A different interface also prevents same-to-same
> >> benchmarks.
> >>
> >> There is little benefit to creating another virtual device interface
> >> when a perfectly good one already exists. The question should be: how
> >> is this shmem device different and better than ivshmem? If there is
> >> no justification then implement the ivshmem interface.
> >
> > So which interface are we actually talking about? Userspace/kernel in
> > the guest or hypervisor/guest kernel?
>
> The hardware interface. Same PCI BAR layout and semantics.
>
> > Either way, while it would be nice to share the interface, it's not a
> > *requirement* for tools/kvm unless ivshmem is specified in the virtio
> > spec or the driver is in mainline Linux. We don't intend to require
> > people to implement non-standard and non-Linux QEMU interfaces. OTOH,
> > ivshmem would make the PCI ID problem go away.
>
> Introducing yet another non-standard and non-Linux interface doesn't
> help though. If there is no significant improvement over ivshmem then
> it makes sense to let ivshmem gain critical mass and more users
> instead of fragmenting the space.

I support making it ivshmem-compatible, though that doesn't have to be a
requirement right now. That is, use this patch as a base and build it
towards ivshmem, which shouldn't be an issue since this patch already
provides the PCI+SHM parts that ivshmem requires anyway.

ivshmem is a good, documented, stable interface backed by a lot of
research and testing. Looking at the spec, it's obvious that Cam had KVM
in mind when designing it, and that's exactly what we want to have in
the KVM tool.

David, did you have any plans to extend it to become ivshmem-compatible?
If not, would turning it into such horribly break any code that
currently depends on it?

-- 
Sasha.