From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mark van Walraven
Subject: Re: kvm-83 write performance raw
Date: Tue, 3 Mar 2009 09:53:30 +1300
Message-ID: <20090302205330.GC20969@netvalue.net.nz>
References:
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org
To: Malinka Rellikwodahs
Return-path:
Received: from office.netvalue.net.nz ([202.37.129.7]:54067 "EHLO netvalue.net.nz"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752344AbZCBVTt
	(ORCPT ); Mon, 2 Mar 2009 16:19:49 -0500
Content-Disposition: inline
In-Reply-To:
Sender: kvm-owner@vger.kernel.org
List-ID:

On Mon, Mar 02, 2009 at 03:11:59PM -0500, Malinka Rellikwodahs wrote:
> When running with a raw disk image as a file, or a raw disk image on an
> LVM VG, I'm getting very low write performance (5-10 MB/s). However,
> when using a qcow2 format disk image, the write speed is much better
> (~30 MB/s), which is consistent with a very similar setup running
> kvm-68. Unfortunately, when running the test with qcow2, the system
> becomes unresponsive for a brief time during the test.
>
> The host is running raid5 and drbd (drive replication software), but
> the host itself is performing well, and avoiding the drbd layer in the
> guest does not improve performance, whereas running on qcow2 does.
>
> Any thoughts/suggestions on what could be wrong, or what to do to fix this?

RAID1 has *much* better write performance.  With striping RAIDs,
alignment is important, and RAID controllers sometimes introduce hidden
alignment offsets.  Excessive read-ahead is a waste of time with a lot
of small random I/O, which is what I mostly see with guests on flat
disk images.

With LVM, it pays to make sure the LVs are aligned to the disk.  I
prefer boundaries at multiples of at least 64 sectors, which makes the
LVM overhead virtually disappear.  I align the guest filesystems too,
when I can.
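For what it's worth, the alignment check is just modular arithmetic on
the extent start.  A rough sketch (the sample PE start value and device
name are hypothetical; in real use you'd read it with something like
`pvs --noheadings -o pe_start --units s /dev/sdX`):

```shell
#!/bin/sh
# Check whether an LVM physical extent start is aligned to a
# 64-sector (32 KiB) boundary.
ALIGN=64        # desired alignment, in 512-byte sectors

pe_start=384    # sample PE start in sectors (hypothetical value;
                # read the real one from `pvs -o pe_start --units s`)

if [ $((pe_start % ALIGN)) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned by $((pe_start % ALIGN)) sectors"
fi
```

The same arithmetic applies to partition start sectors and to the
stripe width of the underlying RAID, if you know it.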
I don't think DRBD has an effect on alignment, but you might look at
keeping its metadata on a separate drive.  Block images - rather than
file images - are much faster.

Hope this helps,

Mark.