From: Avi Kivity
Date: Thu, 28 Oct 2010 16:24:04 +0200
Subject: Re: [Qemu-devel] Re: [PATCH] Implement a virtio GPU transport
To: Ian Molton
Cc: virtualization@lists.osdl.org, linux-kernel@vger.kernel.org, QEMU Developers
Message-ID: <4CC98784.7020907@redhat.com>
In-Reply-To: <4CC9647A.50108@collabora.co.uk>
List-Id: qemu-devel.nongnu.org

On 10/28/2010 01:54 PM, Ian Molton wrote:
>> Well, I like to review an implementation against a spec.
>
> True, but then all that would prove is that I can write a spec to
> match the code.

It would also allow us to check that the spec matches the requirements.
Those two steps are easier than checking that the code matches the
requirements.

> The code is proof of concept. The kernel bit is pretty simple, but I'd
> like to get some idea of whether the rest of the code will be accepted,
> given that there's not much point in having any one (or two) of these
> components exist without the others.

I guess some graphics people need to be involved.
>> Better, but still unsatisfying. If the server is busy, the caller would
>> block. I guess it's expected since it's called from ->fsync(). I'm not
>> sure whether that's the best interface; perhaps aio_writev is better.
>
> The caller is intended to block, as the host must perform GL rendering
> before allowing the guest's process to continue.

Why is that? Can't we pipeline the process?

> The only real bottleneck is that processes will block trying to submit
> data while another process is performing rendering, but that will only
> be solved when the renderer is made multithreaded. The same would happen
> on a real GPU if it had only one queue, too.
>
> If you look at the host code, you can see that the data is already
> buffered per-process in a pretty sensible way. If the renderer itself
> were made a separate thread, then this problem magically disappears
> (the queuing code on the host is pretty fast).

Well, this is out of my area of expertise. I don't like it, but if it's
acceptable to the GPU people, okay.

> In testing, the overhead of this was pretty small anyway. Running a
> few dozen glxgears and a copy of ioquake3 simultaneously on an Intel
> video card managed the same framerate with the same CPU utilisation,
> both with the old code and the version I just posted. Contention
> during rendering just isn't much of an issue.

-- 
error compiling committee.c: too many arguments to function