Message-ID: <4A40C085.8050701@redhat.com>
Date: Tue, 23 Jun 2009 13:46:13 +0200
From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] block-raw: Make cache=off default again
References: <1245669483-7076-1-git-send-email-kwolf@redhat.com>
 <20090622113524.GA13583@lst.de> <4A3F6EA3.2010303@redhat.com>
 <4A3F7139.20401@redhat.com> <4A3F79C0.6000804@redhat.com>
 <4A3F7B87.6000605@redhat.com> <4A3F7E32.8090905@redhat.com>
 <20090623103019.GA14437@shareable.org>
In-Reply-To: <20090623103019.GA14437@shareable.org>
To: Jamie Lokier
Cc: qemu-devel@nongnu.org, Avi Kivity, Christoph Hellwig

Jamie Lokier wrote:
> Kevin Wolf wrote:
>> What happens with virtio I still need to understand. Obviously, as soon
>> as virtio decides to fall back to 4k requests, performance becomes
>> terrible.
>
> Does emulating a disk with 4k sector size instead of 512 bytes help this?

I just changed the virtio_blk code to always call blk_queue_hardsect_size
with 4096; it didn't change the behaviour.

I'm not sure if I have mentioned it in this thread: we have found that it
helps to use the deadline elevator instead of cfq in either the host or
the guest. I could accept this if it only helped when it's changed in the
guest (after all, I don't know the Linux block layer very well), but I
certainly don't understand how the host elevator could change the guest
request sizes - and no one else on the internal mailing lists had an
explanation either.

Kevin
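
P.S.: For anyone who wants to reproduce the virtio_blk experiment: the
change amounted to roughly the following one-liner in the guest driver
(drivers/block/virtio_blk.c; a sketch from memory, not the exact diff I
tested - blk_queue_hardsect_size() is the 2.6.x-era interface):

    /* In virtblk_probe(): report a 4k hard sector size to the block
     * layer instead of whatever the host advertises. */
    blk_queue_hardsect_size(vblk->disk->queue, 4096);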
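
P.P.S.: Switching the elevator is just the usual sysfs knob, i.e. the
equivalent of echo deadline > /sys/block/sda/queue/scheduler in both
host and guest. A minimal C sketch of the same write, in case anyone
wants to script it (the device name is only an example):

    #include <stdio.h>

    /* Select the deadline elevator for one disk via sysfs; same
     * effect as the echo above. */
    int main(void)
    {
        FILE *f = fopen("/sys/block/sda/queue/scheduler", "w");
        if (!f) {
            perror("scheduler");
            return 1;
        }
        fputs("deadline", f);
        return fclose(f) ? 1 : 0;
    }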