From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gordan Bobic
Subject: Re: Shouldn't cache=none be the default for drives?
Date: Wed, 07 Apr 2010 16:17:44 +0100
Message-ID: <4BBCA218.1020306@bobich.net>
References: <4BBC992D.3050905@gmail.com>
In-Reply-To: <4BBC992D.3050905@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
To: kvm@vger.kernel.org
Sender: kvm-owner@vger.kernel.org
List-ID:

Troels Arvin wrote:
> Hello,
>
> I'm conducting some performance tests with KVM-virtualized CentOSes. One
> thing I noticed is that guest I/O performance seems to be significantly
> better for virtio-based block devices ("drive"s) if the cache=none
> argument is used. (This was with a rather powerful storage system
> backend which is hard to saturate.)
>
> So: Why isn't cache=none the default for drives?

Is that the right question? Or is the right question "Why is cache=none
faster?"

What did you use to measure the performance? I have found in the past
that the virtio block device was slower than IDE block device emulation.

Gordan
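
For reference, the cache mode under discussion is selected per drive on
the qemu-kvm command line. A minimal sketch follows; the disk path,
memory size, and other options are placeholders for illustration, not
taken from the thread:

```shell
# cache=none opens the backing file/device with O_DIRECT, bypassing the
# host page cache; the guest's own page cache is then the only cache layer.
qemu-kvm -m 1024 -drive file=/dev/vg0/guest0,if=virtio,cache=none

# The default at the time of this thread was cache=writethrough: reads can
# be served from the host page cache, writes are forced through to storage.
qemu-kvm -m 1024 -drive file=/dev/vg0/guest0,if=virtio,cache=writethrough
```

One common explanation for cache=none benchmarking faster is that it
avoids double caching (guest page cache plus host page cache) and the
extra data copy through the host, which matters most when the backend
storage itself is hard to saturate.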