From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <4B4B2C5F.7050403@codemonkey.ws>
Date: Mon, 11 Jan 2010 07:49:19 -0600
From: Anthony Liguori
Subject: Re: [Qemu-devel] Re: [RFC][PATCH] performance improvement for windows guests, running on top of virtio block device
References: <1263195647.2005.44.camel@localhost> <4B4AE1BD.4000400@redhat.com> <20100111134248.GA25622@lst.de>
In-Reply-To: <20100111134248.GA25622@lst.de>
List-Id: qemu-devel.nongnu.org
To: Christoph Hellwig
Cc: qemu-devel, Dor Laor, Avi Kivity, Vadim Rozenfeld

On 01/11/2010 07:42 AM, Christoph Hellwig wrote:
> On Mon, Jan 11, 2010 at 10:30:53AM +0200, Avi Kivity wrote:
>> The patch has potential to reduce performance on volumes with multiple
>> spindles.  Consider two processes issuing sequential reads into a RAID
>> array.  With this patch, the reads will be executed sequentially rather
>> than in parallel, so I think a follow-on patch to make the minimum depth
>> a parameter (set by the guest? the host?) would be helpful.
>
> Let's think about the life cycle of I/O requests a bit.
>
> We have an idle virtqueue (aka one virtio-blk device).  The first (read)
> request comes in, we get the virtio notify from the guest, which calls
> into virtio_blk_handle_output.  With the new code we now disable the
> notify once we start processing the first request.  If the second
> request hits the queue before we call into virtio_blk_get_request
> the second time, we're fine even with the new code, as we keep picking
> it up.  If, however, it hits after we leave virtio_blk_handle_output,
> but before we complete the first request, we do indeed introduce
> additional latency.
>
> So instead of disabling notify while requests are active, we might want
> to disable it only while we are inside virtio_blk_handle_output.
> Something like the following minimally tested patch:

I'd suggest that we get even more aggressive and install an idle bottom
half that checks the queue for newly submitted requests.  If we keep
getting requests submitted before a new one completes, we'll never take
an I/O exit.

The same approach is probably a good idea for virtio-net.

Regards,

Anthony Liguori