From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43) id 1Migun-00005V-Di for qemu-devel@nongnu.org; Tue, 01 Sep 2009 23:53:45 -0400
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43) id 1Migui-0008WK-VQ for qemu-devel@nongnu.org; Tue, 01 Sep 2009 23:53:45 -0400
Received: from [199.232.76.173] (port=45944 helo=monty-python.gnu.org) by lists.gnu.org with esmtp (Exim 4.43) id 1Migui-0008WH-TN for qemu-devel@nongnu.org; Tue, 01 Sep 2009 23:53:40 -0400
Received: from verein.lst.de ([213.95.11.210]:55248) by monty-python.gnu.org with esmtps (TLS-1.0:DHE_RSA_3DES_EDE_CBC_SHA1:24) (Exim 4.60) (envelope-from ) id 1Migui-0000ns-Ao for qemu-devel@nongnu.org; Tue, 01 Sep 2009 23:53:40 -0400
Date: Wed, 2 Sep 2009 05:53:37 +0200
From: Christoph Hellwig
Subject: Re: [Qemu-devel] [PATCH 1/4] block: add enable_write_cache flag
Message-ID: <20090902035337.GA18844@lst.de>
References: <20090831201627.GA4811@lst.de> <20090831201651.GA4874@lst.de> <20090831220950.GB24318@shareable.org> <20090831221622.GA8834@lst.de> <4A9C5463.4090904@codemonkey.ws>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4A9C5463.4090904@codemonkey.ws>
List-Id: qemu-devel.nongnu.org
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: Anthony Liguori
Cc: Christoph Hellwig , qemu-devel@nongnu.org

On Mon, Aug 31, 2009 at 05:53:23PM -0500, Anthony Liguori wrote:
> I think we should pity our poor users and avoid adding yet another
> obscure option that is likely to be misunderstood.
>
> Can someone do some benchmarking with cache=writeback and fdatasync
> first and quantify what the real performance impact is?

Some preliminary numbers, because they are very interesting.  Note that
this is on a RAID controller, not cheap IDE disks.  To make up for that
I used an image file on ext3, which due to its horrible fsync
performance should be kind of a worst case.
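For reference, an invocation of the kind being benchmarked might look like the sketch below. This is my own illustration, not the actual test command line; the image path, memory size, and device options are placeholders.

```shell
# Hypothetical sketch: guest backed by an image file on an ext3 host
# filesystem, with the write cache mode under discussion.  Paths and
# sizes are placeholders, not the real setup.
qemu-system-x86_64 \
    -m 1024 \
    -drive file=/path/to/guest.img,if=virtio,cache=writeback
```

With cache=writeback the host page cache is used and the guest sees a volatile write cache, which is why flushing it via fdatasync on the host matters for integrity.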
All these numbers are with Linux 2.6.31-rc8 + my various barrier fixes
on guest and host, using ext3 with barrier=1 on both.

A kernel defconfig compile takes between 9m40s and 9m42s with
data=writeback and barriers disabled; with fdatasync barriers enabled
it actually is minimally faster, between 9m38s and 9m39s (given that
I've only done three runs each, this might fall within measurement
tolerance).  For comparison, the raw block device node with cache=none
(just one run) is 9m36.759s, which is not far apart.  A completely
native run is 7m39.326s, btw - and I fear much of the slowdown in KVM
isn't I/O related.
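To put those times in perspective, here is a quick back-of-the-envelope calculation. The compile times are taken from the numbers above; using the range midpoints and computing percentages is my own arithmetic, not from the mail.

```python
# Compile times quoted in the mail, converted to seconds.
def secs(minutes, seconds):
    return minutes * 60 + seconds

writeback_no_barrier = secs(9, 41)    # midpoint of 9m40s..9m42s
writeback_fdatasync  = secs(9, 38.5)  # midpoint of 9m38s..9m39s
cache_none_raw       = secs(9, 36.759)
native               = secs(7, 39.326)

# fdatasync barriers vs. barriers disabled: well inside the noise.
barrier_delta = (writeback_fdatasync - writeback_no_barrier) \
                / writeback_no_barrier * 100

# Overall virtualization overhead vs. a native run.
kvm_overhead = (writeback_fdatasync - native) / native * 100

print(f"barrier delta: {barrier_delta:+.2f}%")        # -> -0.43%
print(f"KVM overhead vs native: {kvm_overhead:+.1f}%")  # -> +25.9%
```

So the fdatasync barriers cost well under one percent here, while the gap to a native run is around a quarter of the total time, supporting the point that most of the slowdown is not I/O related.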