* 5x slower guest disk performance with virtio disk
From: Brian J. Murrell @ 2011-12-15 14:28 UTC
To: kvm
I have a CentOS 6 host system running a CentOS 6 KVM guest and the
guest seems to get about 5x slower disk throughput than the host:
host $ sudo dd of=/dev/datavol/disk1 if=/dev/zero bs=1M oflag=direct count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 150.36 s, 349 MB/s
host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 731.007 s, 71.7 MB/s
This is actually a vast improvement over earlier tests, where the
difference was a factor of 40. Even so, I thought virtio disks were
supposed to come much closer to native host speed than a 5x slowdown.
/dev/datavol/disk1 is a RAIDed LV in an LVM volume group consisting of 5 disks:
$ sudo pvs
  PV         VG      Fmt  Attr PSize   PFree
  /dev/sda3          lvm2 a-   420.16g 420.16g
  /dev/sdb   datavol lvm2 a-   465.76g 227.66g
  /dev/sdc   datavol lvm2 a-   465.76g 245.76g
  /dev/sdd   datavol lvm2 a-   465.76g 245.76g
  /dev/sde   datavol lvm2 a-   465.76g 245.76g
  /dev/sdf   datavol lvm2 a-   465.76g 245.76g
$ sudo lvdisplay --maps /dev/datavol/disk1
  --- Logical volume ---
  LV Name                /dev/datavol/disk1
  VG Name                datavol
  LV UUID                3gD1N3-ybAU-GzUO-8QBV-b6op-lsK9-GMNm3w
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Segments ---
  Logical extent 0 to 127999:
    Type                striped
    Stripes             5
    Stripe size         4.00 KiB
    Stripe 0:
      Physical volume   /dev/sdb
      Physical extents  5120 to 30719
    Stripe 1:
      Physical volume   /dev/sdc
      Physical extents  5120 to 30719
    Stripe 2:
      Physical volume   /dev/sdd
      Physical extents  5120 to 30719
    Stripe 3:
      Physical volume   /dev/sde
      Physical extents  5120 to 30719
    Stripe 4:
      Physical volume   /dev/sdf
      Physical extents  5120 to 30719
The KVM command line is as follows:
$ tr '\0' '\n' < /proc/9409/cmdline
/usr/libexec/qemu-kvm
-S
-M
rhel6.0.0
-enable-kvm
-m
1024
-smp
1,sockets=1,cores=1,threads=1
-name
guest
-uuid
e2bf97e0-cfb7-444c-abc3-9efe262d8efe
-nodefconfig
-nodefaults
-chardev
socket,id=monitor,path=/var/lib/libvirt/qemu/guest.monitor,server,nowait
-mon
chardev=monitor,mode=control
-rtc
base=utc
-boot
c
-drive
file=/var/lib/libvirt/images/cdrom.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive
file=/var/lib/libvirt/images/guest.qcow2,if=none,id=drive-virtio-disk0,boot=on,format=qcow2
-device
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive
file=/dev/datavol/disk1,if=none,id=drive-virtio-disk1,format=raw
-device
virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
-netdev
tap,fd=23,id=hostnet0
-device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ec:8d:47,bus=pci.0,addr=0x3
-chardev
pty,id=serial0
-device
isa-serial,chardev=serial0
-usb
-vnc
127.0.0.1:2
-vga
cirrus
-device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
That is started from virt-manager given the following configuration:
<domain type='kvm' id='18'>
  <name>guest</name>
  <uuid>e2bf97e0-cfb7-444c-abc3-9efe262d8efe</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/cdrom.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/datavol/disk1'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:ec:8d:47'/>
      <source network='default'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5902' autoport='yes'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
Any ideas why there is such a huge performance difference?
Cheers,
b.
* Re: 5x slower guest disk performance with virtio disk
From: Stefan Pietsch @ 2011-12-15 15:47 UTC
To: Brian J. Murrell; +Cc: kvm
* Brian J. Murrell <brian@interlinx.bc.ca> [2011-12-15 15:28]:
> I have a CentOS 6 host system running a CentOS 6 KVM guest and the
> guest seems to get about 5x slower disk throughput than the host:
>
> host $ sudo dd of=/dev/datavol/disk1 if=/dev/zero bs=1M oflag=direct count=50000
> 50000+0 records in
> 50000+0 records out
> 52428800000 bytes (52 GB) copied, 150.36 s, 349 MB/s
> host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=50000
> 50000+0 records in
> 50000+0 records out
> 52428800000 bytes (52 GB) copied, 731.007 s, 71.7 MB/s
>
> This is actually a vast improvement over earlier tests, where the
> difference was a factor of 40. Even so, I thought virtio disks were
> supposed to come much closer to native host speed than a 5x slowdown.
--- snip ---
Did you try to set the cache of the virtio disk to "none"?
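(For reference, the cache mode lives on the disk's <driver> element in
the libvirt domain XML. A minimal sketch of the relevant fragment,
using the block-device disk from the config above:

  <disk type='block' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source dev='/dev/datavol/disk1'/>
    <target dev='vdb' bus='virtio'/>
  </disk>

On the qemu-kvm command line this should show up as cache=none in the
corresponding -drive option.)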
Regards,
Stefan
* Re: 5x slower guest disk performance with virtio disk
From: Brian J. Murrell @ 2011-12-15 16:55 UTC
To: kvm
On 11-12-15 10:47 AM, Stefan Pietsch wrote:
>
> Did you try to set the cache of the virtio disk to "none"?
I didn't. It was set to "default" in virt-manager, and I suppose I
just assumed that "default" would be reasonable.

Changing it to "none" has indeed had a good effect:
host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 49.9241 s, 210 MB/s
host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 54.7737 s, 191 MB/s
host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 50.9851 s, 206 MB/s
host $ sudo dd of=/dev/datavol/disk1 if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 31.0545 s, 338 MB/s
So, about 2/3 of host speed now -- which is much better. Is 2/3 about
normal or should I be looking for more?
Thanks much!
b.
* Re: 5x slower guest disk performance with virtio disk
From: Sasha Levin @ 2011-12-15 17:16 UTC
To: Brian J. Murrell; +Cc: kvm
On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:
> So, about 2/3 of host speed now -- which is much better. Is 2/3 about
> normal or should I be looking for more?
aio=native

That's the qemu setting; I'm not sure where libvirt hides it.
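(On the qemu command line that would look something like the
following, extending the -drive option for the LV from the original
config -- a sketch, not verified against this exact qemu-kvm build:

  -drive file=/dev/datavol/disk1,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native
)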
--
Sasha.
* Re: 5x slower guest disk performance with virtio disk
From: Daniel P. Berrange @ 2011-12-15 17:27 UTC
To: Sasha Levin; +Cc: Brian J. Murrell, kvm
On Thu, Dec 15, 2011 at 07:16:22PM +0200, Sasha Levin wrote:
> On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:
> > So, about 2/3 of host speed now -- which is much better. Is 2/3 about
> > normal or should I be looking for more?
>
> aio=native
>
> Thats the qemu setting, I'm not sure where libvirt hides that.
<disk ...>
  <driver io='threads|native'/>
  ...
</disk>
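(Combined with the cache setting discussed earlier, the driver line
would then read, for example:

  <driver name='qemu' type='raw' cache='none' io='native'/>

and you can check whether libvirt accepted and kept the attribute
with something like:

  $ virsh dumpxml guest | grep '<driver'
)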
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
* Re: 5x slower guest disk performance with virtio disk
From: Brian J. Murrell @ 2011-12-15 19:43 UTC
To: kvm; +Cc: Sasha Levin
On 11-12-15 12:27 PM, Daniel P. Berrange wrote:
> On Thu, Dec 15, 2011 at 07:16:22PM +0200, Sasha Levin wrote:
>> On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:
>>> So, about 2/3 of host speed now -- which is much better. Is 2/3 about
>>> normal or should I be looking for more?
>>
>> aio=native
>>
>> That's the qemu setting; I'm not sure where libvirt hides it.
>
> <disk ...>
>   <driver io='threads|native'/>
>   ...
> </disk>
When I try to "virsh edit" and add that "<driver io=.../>" it seems to
get stripped out of the config (as observed with "virsh dumpxml").
Doing some googling I discovered an alternate syntax (in message
https://www.redhat.com/archives/libvirt-users/2011-June/msg00004.html):
<driver name='qemu' type='raw' cache='none' io='native'/>
But the "io='native'" seems to get stripped out of that too.
FWIW, I have:
qemu-kvm-0.12.1.2-2.113.el6.x86_64
libvirt-client-0.8.1-27.el6.x86_64
installed here on CentOS 6.0. Maybe this aio= is not supported in the
above package(s)?
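(For anyone comparing, the versions in play can be confirmed with
standard commands, e.g.:

  $ rpm -q qemu-kvm libvirt
  $ virsh version
)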
Cheers,
b.
* Re: 5x slower guest disk performance with virtio disk
From: Simon Wilson @ 2011-12-16 2:47 UTC
To: Brian J. Murrell; +Cc: kvm, Sasha Levin
----- Message from "Brian J. Murrell" <brian@interlinx.bc.ca> ---------
Date: Thu, 15 Dec 2011 14:43:23 -0500
From: "Brian J. Murrell" <brian@interlinx.bc.ca>
Subject: Re: 5x slower guest disk performance with virtio disk
To: kvm@vger.kernel.org
Cc: Sasha Levin <levinsasha928@gmail.com>
> On 11-12-15 12:27 PM, Daniel P. Berrange wrote:
>> On Thu, Dec 15, 2011 at 07:16:22PM +0200, Sasha Levin wrote:
>>> On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:
>>>> So, about 2/3 of host speed now -- which is much better. Is 2/3 about
>>>> normal or should I be looking for more?
>>>
>>> aio=native
>>>
>>> That's the qemu setting; I'm not sure where libvirt hides it.
>>
>> <disk ...>
>>   <driver io='threads|native'/>
>>   ...
>> </disk>
>
> When I try to "virsh edit" and add that "<driver io=.../>" it seems to
> get stripped out of the config (as observed with "virsh dumpxml").
> Doing some googling I discovered an alternate syntax (in message
> https://www.redhat.com/archives/libvirt-users/2011-June/msg00004.html):
>
> <driver name='qemu' type='raw' cache='none' io='native'/>
>
> But the "io='native'" seems to get stripped out of that too.
>
> FWIW, I have:
>
> qemu-kvm-0.12.1.2-2.113.el6.x86_64
> libvirt-client-0.8.1-27.el6.x86_64
>
> installed here on CentOS 6.0. Maybe this aio= is not supported in the
> above package(s)?
>
You need to update CentOS; I don't believe your version of libvirt
supports io=native. You are running into issues similar to ones I
have posted about (mostly unsuccessfully in terms of getting much
support, LOL) here:
https://www.centos.org/modules/newbb/viewtopic.php?topic_id=33708&forum=55
From that thread:
With libvirt 0.8.7-6, qemu-kvm 0.12.1.2-2.144, and kernel 2.6.32-115,
you can use the "io=native" parameter in the KVM XML files. Bugzilla
591703 has the details, but basically the img file reference in my VM
XML now reads like this:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/vm/pool/server06.img'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
You may also want to search this list for my thread from November
with the title "Improving RAID5 write performance in a KVM VM".
Cheers
Simon.