From: Anthony Liguori
Date: Mon, 11 Jan 2010 08:37:10 -0600
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
To: Avi Kivity
Cc: qemu-devel, Dor Laor, Christoph Hellwig, Vadim Rozenfeld

On 01/11/2010 08:29 AM, Avi Kivity wrote:
> On 01/11/2010 03:49 PM, Anthony Liguori wrote:
>>> So instead of disabling notify while requests are active we might
>>> want to only disable it while we are inside
>>> virtio_blk_handle_output.  Something like the following minimally
>>> tested patch:
>>
>> I'd suggest that we get even more aggressive and install an idle
>> bottom half that checks the queue for newly submitted requests.  If
>> we keep getting requests submitted before a new one completes, we'll
>> never take an I/O exit.
>
> That has the downside of bouncing a cache line on unrelated exits.

The read and write sides of the ring are widely separated in physical
memory specifically to avoid cache line bouncing.

> It probably doesn't matter with qemu as it is now, since it will
> bounce qemu_mutex, but it will hurt with large guests (especially if
> they have many rings).
>
> IMO we should get things to work well without riding on unrelated
> exits, especially as we're trying to reduce those exits.

A block I/O request can potentially be very, very long lived.  By
serializing requests like this, we're very likely to kill performance
for any backend capable of processing multiple requests at once.

OTOH, if we aggressively poll the ring whenever we get the opportunity,
there's very little downside, and it addresses the serialization
problem.

>> The same approach is probably a good idea for virtio-net.
>
> With vhost-net you don't see exits.

The point is that once we've disabled notifications, we should poll the
ring for additional requests instead of waiting for one request to
complete before looking at the next.  Even with vhost-net this logic
still applies, although it's potentially harder to achieve there.

Regards,

Anthony Liguori
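
P.S. Two sketches to make the above concrete.

First, the cache-line point.  In the legacy virtio ring layout, the
guest-written structures (descriptor table and avail ring) and the
host-written structure (used ring) sit in separate regions, with the
used ring padded out to the next align boundary (normally a 4096-byte
page), so the two sides' write-hot cache lines never overlap.  This
mirrors struct vring in Linux's virtio_ring.h; the comments are my
summary, not spec text:

/* One legacy virtio ring.  The guest publishes requests through desc
 * and avail; the host publishes completions through used.  vring_init()
 * pads so that used starts at the next align boundary, keeping the two
 * writers off each other's cache lines. */
struct vring {
    unsigned int        num;    /* ring size (number of descriptors) */
    struct vring_desc  *desc;   /* guest-written descriptor table    */
    struct vring_avail *avail;  /* guest-written: "requests ready"   */
    struct vring_used  *used;   /* host-written:  "requests done"    */
};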
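
Second, the polling loop.  This is an untested sketch of the shape I
mean for virtio_blk_handle_output().  The virtio_queue_*/virtqueue_pop
helpers follow qemu's virtio code, but handle_one_request() is a
stand-in for the existing request setup/submit path, so read this as
illustration rather than a patch:

static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    VirtQueueElement elem;

    do {
        /* Suppress guest->host notifications while we drain the
         * ring.  The guest keeps publishing requests; we just stop
         * taking an I/O exit for each one. */
        virtio_queue_set_notification(vq, 0);

        while (virtqueue_pop(vq, &elem)) {
            /* hypothetical helper: build and submit the aio request */
            handle_one_request(vdev, vq, &elem);
        }

        /* Re-enable notifications, then look at the ring once more:
         * the guest may have published a request after our last pop
         * but before notifications came back on.  (A real version
         * wants a memory barrier between these two steps.) */
        virtio_queue_set_notification(vq, 1);
    } while (!virtio_queue_empty(vq));
}

The outer do/while is what keeps us servicing the ring for as long as
requests keep arriving -- the same effect I was after with the idle
bottom half, but without waiting for a completion or an unrelated exit
to trigger the next look.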