public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* Poor Write I/O Performance on KVM-79
@ 2009-01-04  6:03 Alexander Atticus
  2009-01-04 13:24 ` Avi Kivity
  2009-01-05 14:27 ` Thomas Mueller
  0 siblings, 2 replies; 8+ messages in thread
From: Alexander Atticus @ 2009-01-04  6:03 UTC (permalink / raw)
  To: kvm

Hello!

I have been experimenting with KVM and have been experiencing poor write I/O
performance.  I'm not sure whether I'm doing something wrong or if this is
just the current state of things.

While writing to the local array on the node running the guests I get about
200MB/s from dd (bs=1M count=1000) or about 90MB/s write performance from
iozone (sequential) when I write to a 2G file with a 16M record length.  The
node is an 8 disk system using 3ware in a RAID50 configuration.  It has 8GB
of RAM.

The guests get much slower disk access. The guests are using file based
backends (tried both qcow2 and raw) with virtio support.  With no other
activity on the machine, I get about 6 to 7MB/s write performance from
iozone with the same test. Guests are running Debian lenny/sid with
2.6.26-1-686.

I don't know whether this is because of context switching or what.  Again,
I'm wondering how I can improve this performance or if there is something
I am doing wrong.  As a side note, I have also noticed some weirdness with
qcow2 files; some windows installations freeze and some disk corruption
running iozone on Linux guests.  All problems go away when I switch to raw
image files though.

I realize I take a hit by running file-based backends, and that the tests
aren't altogether accurate because, with 8GB of RAM, I'm not saturating the
cache, but the numbers are still so disparate that it concerns me.

Finally, does anyone know if KVM is now fully supporting SCSI pass-through
in KVM-79? Does this mean that I would vastly reduce context switching by
using an LVM backend device for guests or am I misunderstanding the benefits
of pass-through?

This is the iozone command:

# iozone -a -r 16M -g 2g -n 16g -m 2000

KVM command to launch guest:

# /usr/bin/kvm -S -M pc -m 1024 -smp 1 -name demo4 \
-uuid 5b474147-f581-9a21-ac7d-cdd0ce881c5c -monitor pty -boot c \
-drive file=/iso/debian-testing-i386-netinst.iso,if=ide,media=cdrom,index=2 \
-drive file=/srv/demo/demo4.img,if=virtio,index=0,boot=on \
-net nic,macaddr=00:16:16:64:e6:de,vlan=0 \
-net tap,fd=15,script=,vlan=0,ifname=vnet1 -serial pty -parallel none \
-usb -vnc 0.0.0.0:1 -k en-us

KVM/QEMU versions:

ii  kvm  79+dfsg-3
ii  qemu 0.9.1-8

Node Kernel:

2.6.26-1-amd64

Thanks,

-Alexander

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Poor Write I/O Performance on KVM-79
  2009-01-04  6:03 Poor Write I/O Performance on KVM-79 Alexander Atticus
@ 2009-01-04 13:24 ` Avi Kivity
  2009-01-04 18:07   ` Rodrigo Campos
  2009-01-05 14:27 ` Thomas Mueller
  1 sibling, 1 reply; 8+ messages in thread
From: Avi Kivity @ 2009-01-04 13:24 UTC (permalink / raw)
  To: Alexander Atticus; +Cc: kvm, Mark McLoughlin

Alexander Atticus wrote:
> Hello!
>
> I have been experimenting with KVM and have been experiencing poor write I/O
> performance.  I'm not sure whether I'm doing something wrong or if this is
> just the current state of things.
>
> While writing to the local array on the node running the guests I get about
> 200MB/s from dd (bs=1M count=1000) or about 90MB/s write performance from
> iozone (sequential) when I write to a 2G file with a 16M record length.  The
> node is an 8 disk system using 3ware in a RAID50 configuration.  It has 8GB
> of RAM.
>
> The guests get much slower disk access. The guests are using file based
> backends (tried both qcow2 and raw) with virtio support.  With no other
> activity on the machine, I get about 6 to 7MB/s write performance from
> iozone with the same test. Guests are running Debian lenny/sid with
> 2.6.26-1-686.
>   

qcow2 will surely lead to miserable performance.  raw files are better.  
best is to use lvm.

> I don't know whether this is because of context switching or what.  Again,
> I'm wondering how I can improve this performance or if there is something
> I am doing wrong.  As a side note, I have also noticed some weirdness with
> qcow2 files; some windows installations freeze and some disk corruption
> running iozone on Linux guests.  All problems go away when I switch to raw
> image files though.
>
> I realize I take a hit by running file-based backends, and that the tests
> aren't altogether accurate because, with 8GB of RAM, I'm not saturating the
> cache, but the numbers are still so disparate that it concerns me.
>
> Finally, does anyone know if KVM is now fully supporting SCSI pass-through
> in KVM-79? Does this mean that I would vastly reduce context switching by
> using an LVM backend device for guests or am I misunderstanding the benefits
> of pass-through?
>   

scsi is much improved in kvm-82 though it still needs a lot more 
testing.  scsi pass-through is almost completely untested.

What is the kernel version in the guest? IIRC there were some serious 
limitations on the request size that were recently removed.


-- 
error compiling committee.c: too many arguments to function


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Poor Write I/O Performance on KVM-79
  2009-01-04 13:24 ` Avi Kivity
@ 2009-01-04 18:07   ` Rodrigo Campos
  2009-01-04 18:48     ` Florent
  2009-01-04 19:48     ` Avi Kivity
  0 siblings, 2 replies; 8+ messages in thread
From: Rodrigo Campos @ 2009-01-04 18:07 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Alexander Atticus, kvm, Mark McLoughlin

On Sun, Jan 4, 2009 at 11:24 AM, Avi Kivity <avi@redhat.com> wrote:
> Alexander Atticus wrote:
>>
>> Hello!
>>
>> I have been experimenting with KVM and have been experiencing poor write
>> I/O
>> performance.  I'm not sure whether I'm doing something wrong or if this is
>> just the current state of things.
>>
>> While writing to the local array on the node running the guests I get
>> about
>> 200MB/s from dd (bs=1M count=1000) or about 90MB/s write performance from
>> iozone (sequential) when I write to a 2G file with a 16M record length.
>>  The
>> node is an 8 disk system using 3ware in a RAID50 configuration.  It has
>> 8GB
>> of RAM.
>>
>> The guests get much slower disk access. The guests are using file based
>> backends (tried both qcow2 and raw) with virtio support.  With no other
>> activity on the machine, I get about 6 to 7MB/s write performance from
>> iozone with the same test. Guests are running Debian lenny/sid with
>> 2.6.26-1-686.
>>
>
> qcow2 will surely lead to miserable performance.  raw files are better.
>  best is to use lvm.
>

What do you mean by "best is to use lvm"?
Are you just saying to use raw images on an LVM volume because you can
easily resize them? Or do images somehow only consume the space actually
written when backed by LVM? Or is there some trick to make that work?

Thanks a lot,

Rodrigo

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Poor Write I/O Performance on KVM-79
  2009-01-04 18:07   ` Rodrigo Campos
@ 2009-01-04 18:48     ` Florent
  2009-01-04 19:48     ` Avi Kivity
  1 sibling, 0 replies; 8+ messages in thread
From: Florent @ 2009-01-04 18:48 UTC (permalink / raw)
  To: kvm

Rodrigo,

KVM can use a logical volume directly as the backing store for a guest's 
disk. Doing this, the overhead of the host filesystem layer is avoided 
and things should be faster.

- Florent


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Poor Write I/O Performance on KVM-79
  2009-01-04 18:07   ` Rodrigo Campos
  2009-01-04 18:48     ` Florent
@ 2009-01-04 19:48     ` Avi Kivity
  2009-01-04 20:12       ` Rodrigo Campos
  1 sibling, 1 reply; 8+ messages in thread
From: Avi Kivity @ 2009-01-04 19:48 UTC (permalink / raw)
  To: Rodrigo Campos; +Cc: Alexander Atticus, kvm, Mark McLoughlin

Rodrigo Campos wrote:
>> qcow2 will surely lead to miserable performance.  raw files are better.
>>  best is to use lvm.
>>
>>     
>
> What do you mean by "best is to use lvm"?
> Are you just saying to use raw images on an LVM volume because you can
> easily resize them? Or do images somehow only consume the space actually
> written when backed by LVM? Or is there some trick to make that work?
>

Using lvm directly (-drive file=/dev/vg/volume) is both the most efficient 
and the most reliable option, as there are only a small number of layers 
involved.  However, you need to commit space in advance (you can grow your 
volume, but that takes guest involvement and cannot be done online at the 
moment).

Using a raw file on a filesystem will be slow, since the host filesystem 
adds overhead and the file can fragment.  Raw files only occupy storage as 
they are used, but they are more difficult to manage than qcow2 files.

Qcow2 files are the most flexible, but the worst performing.
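[A hypothetical sketch of the LVM-backed setup described above: back the
guest disk with a logical volume and point -drive at the block device
directly. The volume group name (vg0), LV name (demo4) and size are
assumptions, not anything from the thread.]

```shell
# One-time setup (needs root and a volume group with free extents):
#   lvcreate -L 10G -n demo4 vg0

# Build the -drive option against the raw block device instead of a file.
# Names below (vg0, demo4) are illustrative assumptions.
DRIVE_OPT="file=/dev/vg0/demo4,if=virtio,index=0,boot=on"

# Print the launch command rather than executing it, since it needs a
# real /dev/vg0/demo4 to run:
echo /usr/bin/kvm -M pc -m 1024 -smp 1 -name demo4 -drive "$DRIVE_OPT"
```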

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Poor Write I/O Performance on KVM-79
  2009-01-04 19:48     ` Avi Kivity
@ 2009-01-04 20:12       ` Rodrigo Campos
  0 siblings, 0 replies; 8+ messages in thread
From: Rodrigo Campos @ 2009-01-04 20:12 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Alexander Atticus, kvm, Mark McLoughlin

On Sun, Jan 4, 2009 at 5:48 PM, Avi Kivity <avi@redhat.com> wrote:
> Rodrigo Campos wrote:
>>>
>>> qcow2 will surely lead to miserable performance.  raw files are better.
>>>  best is to use lvm.
>>>
>>>
>>
>> What do you mean by "best is to use lvm"?
>> Are you just saying to use raw images on an LVM volume because you can
>> easily resize them? Or do images somehow only consume the space actually
>> written when backed by LVM? Or is there some trick to make that work?
>>
>
> Using lvm directly (-drive file=/dev/vg/volume) is both the most efficient
> and the most reliable option, as there are only a small number of layers
> involved.  However, you need to commit space in advance (you can grow your
> volume, but that takes guest involvement and cannot be done online at the
> moment).
>
> Using a raw file on a filesystem will be slow, since the host filesystem
> adds overhead and the file can fragment.  Raw files only occupy storage as
> they are used, but they are more difficult to manage than qcow2 files.
>
> Qcow2 files are the most flexible, but the worst performing.

Ahhh. Thanks a lot for the explanation :)


Rodrigo

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Poor Write I/O Performance on KVM-79
  2009-01-04  6:03 Poor Write I/O Performance on KVM-79 Alexander Atticus
  2009-01-04 13:24 ` Avi Kivity
@ 2009-01-05 14:27 ` Thomas Mueller
  2009-01-05 14:30   ` Thomas Mueller
  1 sibling, 1 reply; 8+ messages in thread
From: Thomas Mueller @ 2009-01-05 14:27 UTC (permalink / raw)
  To: kvm

On Sat, 03 Jan 2009 22:03:20 -0800, Alexander Atticus wrote:

> Hello!
> 
> I have been experimenting with KVM and have been experiencing poor write
> I/O performance.  I'm not sure whether I'm doing something wrong or if
> this is just the current state of things.
> 
> While writing to the local array on the node running the guests I get
> about 200MB/s from dd (bs=1M count=1000) or about 90MB/s write
> performance from iozone (sequential) when I write to a 2G file with a
> 16M record length.  The node is an 8 disk system using 3ware in a RAID50
> configuration.  It has 8GB of RAM.
> 
> The guests get much slower disk access. The guests are using file based
> backends (tried both qcow2 and raw) with virtio support.  With no other
> activity on the machine, I get about 6 to 7MB/s write performance from
> iozone with the same test. Guests are running Debian lenny/sid with
> 2.6.26-1-686.
> 
> ...
> 
> KVM command to launch guest:
> 
> # /usr/bin/kvm -S -M pc -m 1024 -smp 1 -name demo4 \
> -uuid 5b474147-f581-9a21-ac7d-cdd0ce881c5c -monitor pty -boot c \
> -drive file=/iso/debian-testing-i386-netinst.iso,if=ide,media=cdrom,index=2 \
> -drive file=/srv/demo/demo4.img,if=virtio,index=0,boot=on \
> -net nic,macaddr=00:16:16:64:e6:de,vlan=0 \
> -net tap,fd=15,script=,vlan=0,ifname=vnet1 -serial pty -parallel none \
> -usb -vnc 0.0.0.0:1 -k en-us
>

if you use a qcow2 image with kvm-79 you have to use the cache=writeback 
parameter to get some speed.

see also this post from Anthony Liguori:
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/24765/focus=24811

- Thomas


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Poor Write I/O Performance on KVM-79
  2009-01-05 14:27 ` Thomas Mueller
@ 2009-01-05 14:30   ` Thomas Mueller
  0 siblings, 0 replies; 8+ messages in thread
From: Thomas Mueller @ 2009-01-05 14:30 UTC (permalink / raw)
  To: kvm


>>
> if you use a qcow2 image with kvm-79 you have to use the cache=writeback
> parameter to get some speed.

ok, not writeback but cache=writethrough ...
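[A hypothetical sketch of what this would look like: the virtio drive from
the original poster's command line with an explicit cache mode appended.
The image path is taken from the thread; whether writethrough actually
helps on a given setup is something to benchmark, not a guarantee.]

```shell
# Same -drive option as in the original command, plus cache=writethrough:
DRIVE_OPT="file=/srv/demo/demo4.img,if=virtio,index=0,boot=on,cache=writethrough"

# Print rather than execute, since the image file only exists on the
# original poster's host:
echo /usr/bin/kvm -m 1024 -drive "$DRIVE_OPT"
```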
 
-Thomas


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2009-01-05 14:35 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-01-04  6:03 Poor Write I/O Performance on KVM-79 Alexander Atticus
2009-01-04 13:24 ` Avi Kivity
2009-01-04 18:07   ` Rodrigo Campos
2009-01-04 18:48     ` Florent
2009-01-04 19:48     ` Avi Kivity
2009-01-04 20:12       ` Rodrigo Campos
2009-01-05 14:27 ` Thomas Mueller
2009-01-05 14:30   ` Thomas Mueller

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox