Date: Wed, 24 Jun 2009 22:23:30 +0100
From: Jamie Lokier
To: Kevin Wolf
Cc: qemu-devel@nongnu.org, Avi Kivity, Christoph Hellwig
Subject: Re: [Qemu-devel] [PATCH] block-raw: Make cache=off default again
Message-ID: <20090624212330.GC14121@shareable.org>
In-Reply-To: <4A40C085.8050701@redhat.com>

Kevin Wolf wrote:
> Jamie Lokier schrieb:
> > Kevin Wolf wrote:
> >> What happens with virtio I still need to understand. Obviously, as
> >> soon as virtio decides to fall back to 4k requests, performance
> >> becomes terrible.
> >
> > Does emulating a disk with a 4k sector size instead of 512 bytes
> > help this?
>
> I just changed the virtio_blk code to always do the
> blk_queue_hardsect_size with 4096; it didn't change the behaviour.

You need quite a bit more than that to emulate a disk with a 4k sector
size.  There are the ATA/SCSI identification pages to update, and the
tricky 512-byte alignment offset.  (A sketch of the guest-side hardsect
call appears at the end of this message.)

> I'm not sure if I have mentioned it in this thread: we have found that
> it helps to use the deadline elevator instead of cfq in either the
> host or the guest.  I would accept this if it only helped when changed
> in the guest (after all, I don't know the Linux block layer very
> well), but I certainly don't understand how the host elevator could
> change the guest request sizes - and no one else on the internal
> mailing lists had an explanation either.

The host elevator certainly affects the timing of the I/O requests it
receives from the guest, and it also affects how those requests are
merged into larger ones.  So it's not surprising that the host elevator
changes the request sizes seen by the host disk.  It shouldn't change
the size of requests inside the guest, _before_ they reach the host.

--
Jamie
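
A minimal sketch of the guest-side change described above, assuming the
2.6.2x-era block API named in the message (later kernels renamed the
call blk_queue_logical_block_size()); the helper name is made up for
illustration and is not taken from any actual patch:

    #include <linux/blkdev.h>

    /*
     * Illustrative only: advertise a 4096-byte hard sector size to the
     * guest block layer so filesystems and the page cache issue
     * 4K-aligned requests.  In a real driver this would run during
     * device probe, once the request queue has been allocated.
     */
    static void virtblk_force_4k_sectors(struct request_queue *q)
    {
            blk_queue_hardsect_size(q, 4096);
    }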