From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
Subject: Re: [PATCH/RFC 7/9] Virtual network guest device driver
Date: Wed, 16 May 2007 12:28:00 -0500
Message-ID: <464B3F20.4030904@us.ibm.com>
References: <1178903957.25135.13.camel@cotte.boeblingen.de.ibm.com>
 <1178904965.25135.34.camel@cotte.boeblingen.de.ibm.com>
 <13426df10705111244w1578ebedy8259bc42ca1f588d@mail.gmail.com>
 <4644CE15.6080505@us.ibm.com> <4644E456.2060507@us.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: Eric Van Hensbergen
Cc: Jimi Xenidis, Christian Borntraeger,
 "jmk-zzFmDc4TPjtKvsKVC3L/VUEOCMrvLtNR@public.gmane.org",
 Martin Schwidefsky,
 "kvm-devel-5NWGOfrQmneRv+LV9MX5uipxlwaOVQ5f@public.gmane.org"
List-Id: kvm.vger.kernel.org

Eric Van Hensbergen wrote:
> On 5/11/07, Anthony Liguori wrote:
>>
>> There's definitely a conversation to have here.  There are going to be a
>> lot of small devices that would benefit from a common transport
>> mechanism.  Someone mentioned a PV entropy device on LKML.  A
>> host=>guest filesystem is another consumer of such an interface.
>>
>> I'm inclined to think though that the abstraction point should be the
>> transport and not the actual protocol.  My concern with standardizing on
>> a protocol like 9p would be that one would lose some potential
>> optimizations (like passing PFNs directly between guest and host).
>>
>
> I think that there are two layers - having a standard, well defined,
> simple shared memory transport between partitions (or between
> emulators and the host system) is certainly a prerequisite.  There are
> lots of different decisions to be made here:

What do you think about a socket interface?
I'm not sure how discovery would work yet, but there are a few PV socket
implementations for Xen at the moment.

> a) does it communicate with userspace, kernelspace, or both?

Sockets are usable from both userspace and kernelspace.

> b) is it multi-channel?  prioritized?  interrupt driven or poll driven?

Of course, arguments can be made for any of these depending on the
circumstance.  I think you'd have to start with something simple that
would cover the largest number of users (non-multiplexed, interrupt
driven).

> c) how big are the buffers?  is it packetized?

This could probably be tweaked with sockopts.  I suspect you would have
an implementation for Xen, KVM, etc. and support a common set of options
(and possibly some per-VM types of options).

> d) can all of these parameters be something controllable from userspace?
> e) I'm sure there are many others that I can't be bothered to think
> of on a Friday

The biggest point of contention would probably be what goes in the
sockaddr structure.

Thoughts?

Regards,

Anthony Liguori

> Regardless of the details, I think we can definitely come together on
> a common mechanism here and avoid lots of duplication in the drivers
> that are already there and in those which will follow.  My personal
> preference is to keep things as simple and flat as possible.  No XML,
> no multiple stacks and daemons to contend with.
>
> What runs on top of the transport is no doubt going to be a touchy
> subject for some time to come.  Many of Ron's arguments for 9p mostly
> apply to this upper level.  I/we will be pursuing this as a unified PV
> resource sharing mechanism over the next few months in combination
> with reorganization and optimization of the Linux 9p code.  LANL has
> also been making progress in this same direction.  I'd have gotten
> started sooner, but I was waiting for my new Thinkpad so that I can
> actually run KVM ;)
>
>>
>> So is there any reason to even tie 9p to KVM?  Why not just have a
>> common PV transport that 9p can use.
>> For certain things, it may make
>> sense (like v9fs).
>>
>
> Well, I think we were discussing tying KVM to 9p, not vice-versa.
>
> My personal view is that developing a generalized solution for
> resource sharing of all manner of devices and services across
> virtualization, emulation, and network boundaries is a better way to
> spend our time than writing a bunch of specific
> drivers/protocols/interfaces for each type of device and each type of
> interconnect.
>
> -eric