Date: Mon, 11 Jan 2010 09:22:51 -0600
From: Anthony Liguori
To: Avi Kivity
Cc: qemu-devel, Dor Laor, Christoph Hellwig, Vadim Rozenfeld
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for
 windows guests, running on top of virtio block device
Message-ID: <4B4B424B.2070300@codemonkey.ws>
In-Reply-To: <4B4B4199.9050603@redhat.com>
References: <1263195647.2005.44.camel@localhost> <4B4AE1BD.4000400@redhat.com>
 <20100111134248.GA25622@lst.de> <4B4B2C5F.7050403@codemonkey.ws>
 <4B4B35AF.3010706@redhat.com> <4B4B3796.1010106@codemonkey.ws>
 <4B4B39D4.8060405@redhat.com> <4B4B4013.9030706@codemonkey.ws>
 <4B4B4199.9050603@redhat.com>

On 01/11/2010 09:19 AM, Avi Kivity wrote:
>>>> OTOH, if we aggressively poll the ring when we have an opportunity
>>>> to, there's very little downside to that, and it addresses the
>>>> serialization problem.
>>>
>>> But we can't guarantee that we'll get those opportunities, so it
>>> doesn't address the problem in a general way. A guest that doesn't
>>> use hpet and only has a single virtio-blk device will not have any
>>> reason to exit to qemu.
>>
>> We can mitigate this with a timer, but honestly, we need to do perf
>> measurements to see. My feeling is that we will need some more
>> aggressive form of polling than just waiting for I/O completion. I
>> don't think queue depth is enough because it assumes that all
>> requests are equal. When dealing with cache=off, or even just storage
>> with its own cache, that's simply not the case.
>
> Maybe we can adapt behaviour dynamically based on how fast results
> come in.

Based on our experience with virtio-net, what I'd suggest is to expose a
lot of tunable options (ring size, the various tx mitigation schemes,
timeout durations, etc.) and then do some deep performance studies to see
how the options interact with each other.

I think we should do that before making any changes, because I'm deeply
concerned that otherwise we'll introduce significant performance
regressions.

Regards,

Anthony Liguori
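
For illustration, here is a rough standalone sketch of the "adapt
behaviour dynamically based on how fast results come in" idea discussed
above: a polling interval that shrinks while completions arrive quickly
and backs off toward a ceiling when they slow down. This is not QEMU
code; every name and constant below is hypothetical and only meant to
show the shape of the heuristic.

/*
 * Illustrative sketch only -- not QEMU code.  A mitigation timer whose
 * interval adapts to how quickly request completions come back.
 */
#include <stdint.h>
#include <stdio.h>

#define POLL_NS_MIN   2000      /* floor: poll aggressively           */
#define POLL_NS_MAX 500000      /* ceiling: back off to ~0.5 ms       */

struct poll_state {
    int64_t interval_ns;        /* current timer interval             */
};

/* Called after each batch of completions is reaped from the ring. */
static void poll_adapt(struct poll_state *ps, int64_t svc_time_ns)
{
    if (svc_time_ns < ps->interval_ns / 2) {
        /* Results came back fast: halve the interval (bounded below). */
        ps->interval_ns /= 2;
        if (ps->interval_ns < POLL_NS_MIN) {
            ps->interval_ns = POLL_NS_MIN;
        }
    } else if (svc_time_ns > ps->interval_ns * 2) {
        /* Results are slow: back off so we don't burn CPU polling. */
        ps->interval_ns *= 2;
        if (ps->interval_ns > POLL_NS_MAX) {
            ps->interval_ns = POLL_NS_MAX;
        }
    }
}

int main(void)
{
    struct poll_state ps = { .interval_ns = POLL_NS_MAX };
    int64_t samples[] = { 400000, 120000, 30000, 8000, 8000, 250000 };

    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        poll_adapt(&ps, samples[i]);
        printf("svc=%6lld ns -> next poll in %6lld ns\n",
               (long long)samples[i], (long long)ps.interval_ns);
    }
    return 0;
}

In a real implementation the interval would feed a QEMU timer that kicks
ring processing; the point of the sketch is only that the heuristic, like
ring size and timeout duration, is exactly the kind of tunable that would
need the performance studies suggested above before being fixed.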