From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from [140.186.70.92] (port=46075 helo=eggs.gnu.org) by lists.gnu.org with esmtp (Exim 4.43) id 1OBbDZ-0008Mg-Cu for qemu-devel@nongnu.org; Mon, 10 May 2010 18:12:56 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.69) (envelope-from ) id 1OBbDX-0008Dz-K7 for qemu-devel@nongnu.org; Mon, 10 May 2010 18:12:53 -0400
Received: from mail-vw0-f45.google.com ([209.85.212.45]:47538) by eggs.gnu.org with esmtp (Exim 4.69) (envelope-from ) id 1OBbDX-0008Dt-Fx for qemu-devel@nongnu.org; Mon, 10 May 2010 18:12:51 -0400
Received: by vws10 with SMTP id 10so570813vws.4 for ; Mon, 10 May 2010 15:12:50 -0700 (PDT)
Message-ID: <4BE884E0.5020703@codemonkey.ws>
Date: Mon, 10 May 2010 17:12:48 -0500
From: Anthony Liguori
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [PATCH 0/2] Enable qemu block layer to not flush
References: <1273528310-7051-1-git-send-email-agraf@suse.de> <4BE881CB.3070303@codemonkey.ws>
In-Reply-To: 
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: Alexander Graf
Cc: kwolf@redhat.com, qemu-devel@nongnu.org, hch@lst.de

On 05/10/2010 05:03 PM, Alexander Graf wrote:
> On 10.05.2010, at 23:59, Anthony Liguori wrote:
>
>> On 05/10/2010 04:51 PM, Alexander Graf wrote:
>>
>>> Thanks to recent improvements, qemu flushes guest data to disk when the guest
>>> tells us to do so.
>>>
>>> This is great if we care about data consistency on host disk failures. In cases
>>> where we don't, it just creates additional overhead for no net win. One such use
>>> case is the building of appliances in SUSE Studio. We write the resulting images
>>> out of the build VM, but compress them directly afterwards. So if possible we'd
>>> love to keep them in RAM.
>>>
>>> This patchset introduces a new block parameter to -drive called "flush" which
>>> allows a user to disable flushing in odd scenarios like the above. To show the
>>> difference in performance this makes, I have put together a small test case.
>>> Inside the initrd, I call the following piece of code on a 500MB preallocated
>>> vmdk image:
>>>
>>
>> This seems like it's asking for trouble to me. I'm not sure it's worth the
>> minor performance gain.
>>
> The gain is small on my netbook, where I ran the test. This is part of the
> performance regressions from 0.10 to 0.12, where we're talking about build
> times going from 2 minutes to 30. While writeback accounted for most of the
> difference, flushing still at least doubled the build times, which is
> unacceptable for us.

There's got to be a better place to fix this. Disable barriers in your
guests?

Regards,

Anthony Liguori

> I also fail to see where it's asking for trouble. If we don't flush volatile
> data, things are good, no?
>
> Alex
>
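
[Editor's note: the two approaches discussed in this thread can be sketched as command lines. The cover letter only names the -drive parameter "flush", so the value syntax below is an assumption, not taken from the patch; the mount options are standard ext3/ext4 barrier controls on a Linux guest.]

```shell
# Host side: the proposed per-drive parameter (value syntax assumed)
# would tell qemu to ignore the guest's flush requests for this drive:
qemu-system-x86_64 -drive file=build.vmdk,if=virtio,flush=off

# Guest side: Anthony's alternative is to disable write barriers in the
# guest filesystem instead, e.g. for an ext4 root filesystem:
mount -o remount,nobarrier /
# or, for ext3:
mount -o remount,barrier=0 /
```

Either way, data the guest believes is on stable storage may still sit in a volatile cache, so this only makes sense for throwaway images like the SUSE Studio build case described above.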