From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anthony Liguori
Subject: Re: A new direction for vmchannel?
Date: Sat, 24 Jan 2009 13:30:23 -0600
Message-ID: <497B6C4F.1040803@codemonkey.ws>
References: <4979D80D.307@us.ibm.com> <20090124171928.GA30108@redhat.com> <497B5546.5060000@codemonkey.ws> <20090124183912.GA7900@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: "Daniel P. Berrange" , Dor Laor , Avi Kivity , "qemu-devel@nongnu.org" , Eric Van Hensbergen , kvm-devel , Chris Wright
To: Gleb Natapov
Return-path:
Received: from yw-out-2324.google.com ([74.125.46.28]:30948 "EHLO yw-out-2324.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752942AbZAXTaj (ORCPT ); Sat, 24 Jan 2009 14:30:39 -0500
Received: by yw-out-2324.google.com with SMTP id 9so2362723ywe.1 for ; Sat, 24 Jan 2009 11:30:37 -0800 (PST)
In-Reply-To: <20090124183912.GA7900@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Gleb Natapov wrote:
> On Sat, Jan 24, 2009 at 11:52:06AM -0600, Anthony Liguori wrote:
>
>>> For use cases where you are exposing metadata from the host to the guest
>>> this would be a very convenient approach indeed. As asked elsewhere in this
>>> thread, my main thought would be about how well it suits a application that
>>> wants a generic stream based connection between host & guest ?
>>> Efficient integration into a poll(2) based event loop would be key to
>>> that.
>>>
>> You mean for a very large number of files (determining which property
>> has changed?).
>>
>>
> I think what Daniel means is that for file to have stream semantic it is
> not enough to ignore offset on read/write, but poll also should behave
> similar to how it behaves with char device fd. With regular files poll
> will always report that fd is ready for I/O.
>

Thinking more about this, the difficulty is that poll() only has useful
semantics when you're dealing with a buffered stream of some sort.
That is, poll() is only really capable of asking whether there is data
pending in your read buffer.

With 9P, you have to explicitly send a read request. You can implement
buffered IO by simply sending constant read requests such that there is
always one read request pending. I don't think it's useful to do this
in the kernel.

Unfortunately, there's no way to do async IO in userspace that doesn't
suck, so that would make this pretty difficult. We could use a thread
pool, but that's somewhat soul crushing and doesn't scale well. I think
that puts a requirement on v9fs to support linux-aio.

Regards,

Anthony Liguori

> --
> 			Gleb.
>