From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <497B5546.5060000@codemonkey.ws>
Date: Sat, 24 Jan 2009 11:52:06 -0600
From: Anthony Liguori
MIME-Version: 1.0
References: <4979D80D.307@us.ibm.com> <20090124171928.GA30108@redhat.com>
In-Reply-To: <20090124171928.GA30108@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Subject: [Qemu-devel] Re: A new direction for vmchannel?
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: "Daniel P. Berrange"
Cc: Eric Van Hensbergen, Chris Wright, Gleb Natapov, kvm-devel, Dor Laor,
 "qemu-devel@nongnu.org", Avi Kivity

Daniel P. Berrange wrote:
> On Fri, Jan 23, 2009 at 08:45:33AM -0600, Anthony Liguori wrote:
>
>> The userspace configuration aspects of the current implementation of
>> vmchannel are pretty annoying.  Moreover, we would like to make use of
>> something like vmchannel in a kernel driver and I fear that it's going
>> to be difficult to do that.
>>
>> So here's an alternative proposal.
>>
>> Around 2.6.27ish, Eric and I added 9p over virtio support to v9fs.  This
>> is all upstream.  We backported the v9fs modules all the way back to
>> 2.6.18.  I have a 9p client and server library and patches available for
>> QEMU.  We were using this for a file system pass-through but we could
>> also use it as a synthetic file system in the guest (like sysfs).
>>
>> The guest would just have to mount a directory in a well-known location,
>> and then you could get vmchannel-like semantics by just opening a file
>> read/write.  Better yet, though, would be if we actually exposed vmchannel
>> as 9p so that management applications could implement sysfs-like
>> hierarchies.
>>
>> I think there could be a great deal of utility in something like this.  For
>> portability to Windows (if an app cared), it would have to access the
>> mount point through a library of some sort.  We would need a Windows
>> virtio-9p driver that exposed the 9p session down to userspace.  We
>> could then use our 9p client library in the portability library for Windows.
>>
>> Virtually all of the code is available for this today, the kernel bits
>> are already upstream, there's a reasonable story for Windows, and
>> there's very little that the guest can do to get in the way of things.
>>
>> The only thing that could potentially be an issue is SELinux.  I assume
>> you'd have to do an SELinux policy for the guest application anyway,
>> though, so it shouldn't be a problem.
>>
>
> For use cases where you are exposing metadata from the host to the guest
> this would be a very convenient approach indeed.  As asked elsewhere in
> this thread, my main thought would be about how well it suits an
> application that wants a generic stream-based connection between host &
> guest?  Efficient integration into a poll(2)-based event loop would be
> key to that.

You mean for a very large number of files (determining which property has
changed?).  The way you would do this today, without special inotify
support, is to have a special file in the hierarchy called "change-notify".
You can write a list of delimited file names, and whenever one of those
files becomes readable, the change-notify file itself becomes readable
(returning a delimited list of the files that have changed since the last
read).  This way, you get a single file you can select on for a very large
number of files.

That said, it would be nice to add proper inotify support to v9fs too.
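To make that concrete, here's a rough, hypothetical sketch of what a guest
application could look like against such a change-notify file.  Nothing here
exists today -- the mount point, file names, and the newline-delimited
format are made up purely for illustration:

/*
 * Hypothetical sketch only: assumes a 9p share mounted at /mnt/vmchannel
 * exposing a synthetic "change-notify" file with the semantics described
 * above.  Not an existing interface.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/select.h>

int main(void)
{
    /* Assumed well-known mount point for the vmchannel hierarchy. */
    int fd = open("/mnt/vmchannel/change-notify", O_RDWR);
    if (fd < 0) {
        perror("open change-notify");
        return 1;
    }

    /* Register the files we care about (assumed newline-delimited). */
    const char *watch = "config/hostname\nconfig/balloon-target\n";
    if (write(fd, watch, strlen(watch)) < 0) {
        perror("write watch list");
        return 1;
    }

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);

        /* change-notify becomes readable when any watched file changes. */
        if (select(fd + 1, &rfds, NULL, NULL, NULL) < 0) {
            perror("select");
            return 1;
        }

        /* Read back the delimited list of files that changed. */
        char buf[4096];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n <= 0)
            break;
        buf[n] = '\0';
        printf("changed since last read:\n%s", buf);
    }

    close(fd);
    return 0;
}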
> Regular files don't offer that kind of ability ordinarily, and it's not
> clear whether fifos would be provided for in p9fs between host/guest?
>

I'm going to put together a patch this weekend and I'll include a streaming
example.  Basically, you just ignore the file offset and read/write to the
file to your heart's content (there's a rough sketch of the idea at the end
of this mail).

Regards,

Anthony Liguori

> In any case, if we have a usable p9fs backend for QEMU, I don't see why we
> shouldn't integrate that into QEMU, regardless of whether it serves the
> more general vmchannel use cases.  Sharing filesystems is an interesting
> idea in its own right, after all.
>
> I also really don't like the guest deployment / configuration complexity
> that accompanies the NIC-device-based vmchannel approach.  There are just
> far too many things that can go wrong with it wrt the guest OS & apps
> using networking.  IMHO, the core motivation of vmchannel is to have a
> secure guest <-> host data transport that can be relied upon from the
> moment guest userspace appears, preferably with zero guest admin
> configuration requirements, and no need for authentication keys to
> establish guest identity.  UNIX domain sockets are a great example of this
> ideal, providing a reliable data stream for localhost before the network
> makes any appearance, with built-in client authentication via SCM_CREDS.
>
> Regards,
> Daniel
>
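For the archive, a rough illustration of the streaming usage mentioned
above.  This is hypothetical code against a hypothetical layout -- the mount
point and file name are made up, and the actual patch may look nothing like
this.  The only point is that the fd is treated like a socket: the file
offset is ignored and reads/writes form a byte stream.

/* Hypothetical guest-side streaming use of a 9p "vmchannel" file. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    /* Assumed well-known path; not a real interface today. */
    int fd = open("/mnt/vmchannel/guest-agent", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Write a request; the offset is irrelevant, it's just a stream. */
    const char *req = "ping\n";
    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
        return 1;
    }

    /* Read whatever the host side sends back. */
    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("host replied: %s", buf);
    }

    close(fd);
    return 0;
}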