Date: Mon, 11 Jan 2010 20:20:48 +0200
From: "Michael S. Tsirkin"
Message-ID: <20100111182048.GA12121@redhat.com>
In-Reply-To: <4B4B3796.1010106@codemonkey.ws>
Subject: [Qemu-devel] Re: [RFC][PATCH] performance improvement for Windows
 guests, running on top of virtio block device
To: Anthony Liguori
Cc: Vadim Rozenfeld, Dor Laor, Christoph Hellwig, Avi Kivity, qemu-devel

On Mon, Jan 11, 2010 at 08:37:10AM -0600, Anthony Liguori wrote:
> On 01/11/2010 08:29 AM, Avi Kivity wrote:
>> On 01/11/2010 03:49 PM, Anthony Liguori wrote:
>>>> So instead of disabling notify while requests are active we might
>>>> want to only disable it while we are inside virtio_blk_handle_output.
>>>> Something like the following minimally tested patch:
>>>
>>> I'd suggest that we get even more aggressive and install an idle
>>> bottom half that checks the queue for newly submitted requests. If
>>> we keep getting requests submitted before a new one completes, we'll
>>> never take an I/O exit.
>>
>> That has the downside of bouncing a cache line on unrelated exits.
>
> The read and write sides of the ring are widely separated in physical
> memory specifically to avoid cache line bouncing.
>
>> It probably doesn't matter with qemu as it is now, since it will
>> bounce qemu_mutex, but it will hurt with large guests (especially if
>> they have many rings).
>>
>> IMO we should get things to work well without riding on unrelated
>> exits, especially as we're trying to reduce those exits.
>
> A block I/O request can potentially be very, very long lived. By
> serializing requests like this, there's a high likelihood that it's
> going to kill performance with anything capable of processing
> multiple requests.
>
> OTOH, if we aggressively poll the ring when we have an opportunity
> to, there's very little downside to that and it addresses the
> serialization problem.
>
>>> The same approach is probably a good idea for virtio-net.
>>
>> With vhost-net you don't see exits.
>
> The point is, when we've disabled notification, we should poll on the
> ring for additional requests instead of waiting for one to complete
> before looking at another one.
>
> Even with vhost-net, this logic is still applicable, although
> potentially harder to achieve.
>
> Regards,
>
> Anthony Liguori
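The notification handling the thread converges on -- suppress guest
kicks only while draining the ring, then re-check for requests that
raced with re-enabling -- might look roughly like this against the
virtio-blk code of the era. This is a sketch only, assuming
virtio_queue_set_notification(), virtio_queue_empty() and
virtio_blk_get_request() as found in hw/virtio*.c at the time;
handle_one_request() is a hypothetical stand-in for the real dispatch
path:

  static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
  {
      VirtIOBlock *s = to_virtio_blk(vdev);
      VirtIOBlockReq *req;

      do {
          /* Suppress guest->host kicks only while we are actively
           * draining the ring, not for the whole life of a request. */
          virtio_queue_set_notification(vq, 0);

          while ((req = virtio_blk_get_request(s))) {
              handle_one_request(req);  /* hypothetical dispatch helper */
          }

          /* The guest may have queued a request after our last pop but
           * before notifications come back on, so check again. */
          virtio_queue_set_notification(vq, 1);
      } while (!virtio_queue_empty(vq));
  }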
vhost-net does this already: it has a mode where it polls when skbs
have left the send queue: for tap, this is when they have crossed the
bridge; for a packet socket, this is when they have been transmitted.

-- 
MST
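In userspace virtio-blk, the analogous "keep polling while work is
outstanding" behavior -- the idle bottom half Anthony suggests -- could
be sketched along these lines, using qemu_bh_new() and
qemu_bh_schedule_idle(). A sketch only: poll_bh and in_flight are
invented fields here, not part of the real VirtIOBlock:

  static void virtio_blk_poll_bh(void *opaque)
  {
      VirtIOBlock *s = opaque;

      /* Re-scan the ring for requests the guest queued since the last
       * pass, without waiting for a kick (i.e. without an I/O exit). */
      virtio_blk_handle_output(&s->vdev, s->vq);

      /* While requests remain in flight, keep re-arming the idle BH so
       * back-to-back submissions are picked up as they appear. */
      if (s->in_flight > 0) {
          qemu_bh_schedule_idle(s->poll_bh);
      }
  }

  /* at device init:  s->poll_bh = qemu_bh_new(virtio_blk_poll_bh, s);
   * after dispatching a request:  qemu_bh_schedule_idle(s->poll_bh); */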