kvm.vger.kernel.org archive mirror
* Qemu/KVM is 3x slower under libvirt
@ 2011-09-27 18:10 Reeted
  2011-09-28  7:51 ` [libvirt] " Daniel P. Berrange
  0 siblings, 1 reply; 16+ messages in thread
From: Reeted @ 2011-09-27 18:10 UTC (permalink / raw)
  To: kvm, libvir-list

I repost this, this time by also including the libvirt mailing list.

Info on my libvirt: it's the version in Ubuntu 11.04 Natty which is 
0.8.8-1ubuntu6.5 . I didn't recompile this one, while Kernel and 
qemu-kvm are vanilla and compiled by hand as described below.

My original message follows:

This is really strange.

I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1 
compiled by me.

I have created the first VM.
This is on LVM, virtio etc... if I run it directly from bash console, it 
boots in 8 seconds (it's a bare ubuntu with no graphics), while if I 
boot it under virsh (libvirt) it boots in 20-22 seconds. This is the 
time from after Grub to the login prompt, or from after Grub to the 
ssh-server up.

I was almost able to replicate the whole libvirt command line on the 
bash console, and it still goes almost 3x faster when launched from bash 
than with virsh start vmname. The part I wasn't able to replicate is the 
-netdev part because I still haven't understood the semantics of it.

This is my bash commandline:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm -m 
2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid 
ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot 
order=dc,menu=on -drive 
file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native 
-device 
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 
-drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native 
-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -net 
nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no  -usb -vnc 
127.0.0.1:0 -vga cirrus -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


Which was taken from libvirt's command line. The only modifications I 
did to the original libvirt commandline (seen with ps aux) were:

- Removed -S

- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
Has been simplified to: -net nic,model=virtio -net 
tap,ifname=tap0,script=no,downscript=no
and manual bridging of the tap0 interface.
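
For reference, a typical manual tap0/bridge setup would be roughly along
these lines (a sketch: the bridge and physical NIC names br0/eth0 are
assumptions; tunctl comes from uml-utilities, brctl from bridge-utils):

 tunctl -t tap0                 # or: ip tuntap add dev tap0 mode tap
 brctl addbr br0
 brctl addif br0 eth0
 brctl addif br0 tap0
 ip link set tap0 up
 ip link set br0 up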


Firstly I had thought that this could be the fault of the VNC: I have 
compiled qemu-kvm with no separate vnc thread. I thought that libvirt 
might have connected to the vnc server at all times and this could have 
slowed down the whole VM.
But then I also tried connecting with vncviewer to the KVM machine 
launched directly from bash, and the speed of it didn't change. So no, 
it doesn't seem to be that.

BTW: is the slowdown of the VM on "no separate vnc thread" only in 
effect when somebody is actually connected to VNC, or always?

Also, note that the time difference is not visible in dmesg once the 
machine has booted. So it's not a slowdown in detecting devices. Devices 
are always detected within the first 3 seconds, according to dmesg, at 
3.6 seconds the first ext4 mount begins. It seems to be really the OS 
boot that is slow... it seems a hard disk performance problem.

Thank you
R.


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt
  2011-09-27 18:10 Qemu/KVM is 3x slower under libvirt Reeted
@ 2011-09-28  7:51 ` Daniel P. Berrange
  2011-09-28  9:19   ` Reeted
  0 siblings, 1 reply; 16+ messages in thread
From: Daniel P. Berrange @ 2011-09-28  7:51 UTC (permalink / raw)
  To: Reeted; +Cc: kvm, libvir-list

On Tue, Sep 27, 2011 at 08:10:21PM +0200, Reeted wrote:
> I repost this, this time by also including the libvirt mailing list.
> 
> Info on my libvirt: it's the version in Ubuntu 11.04 Natty which is
> 0.8.8-1ubuntu6.5 . I didn't recompile this one, while Kernel and
> qemu-kvm are vanilla and compiled by hand as described below.
> 
> My original message follows:
> 
> This is really strange.
> 
> I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1
> compiled by me.
> 
> I have created the first VM.
> This is on LVM, virtio etc... if I run it directly from bash
> console, it boots in 8 seconds (it's a bare ubuntu with no
> graphics), while if I boot it under virsh (libvirt) it boots in
> 20-22 seconds. This is the time from after Grub to the login prompt,
> or from after Grub to the ssh-server up.
>
> I was almost able to replicate the whole libvirt command line on the
> bash console, and it still goes almost 3x faster when launched from
> bash than with virsh start vmname. The part I wasn't able to
> replicate is the -netdev part because I still haven't understood the
> semantics of it.

-netdev is just an alternative way of setting up networking that
avoids QEMU's nasty VLAN concept. Using -netdev allows QEMU to
use more efficient codepaths for networking, which should improve
the network performance.

> This is my bash commandline:
> 
> /opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
> -m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
> ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
> -boot order=dc,menu=on -drive file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
> -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
> -usb -vnc 127.0.0.1:0 -vga cirrus -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by
doing:

    virsh qemu-monitor-command vmname1 'info kvm'
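
The monitor should answer with a line like "kvm support: enabled"; a
"disabled" answer would mean the guest fell back to plain emulation.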

> Which was taken from libvirt's command line. The only modifications
> I did to the original libvirt commandline (seen with ps aux) were:
> 
> - Removed -S

Fine, has no effect on performance.

> - Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
> Has been simplified to: -net nic,model=virtio -net
> tap,ifname=tap0,script=no,downscript=no
> and manual bridging of the tap0 interface.

You could have equivalently used

 -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3

That said, I don't expect this has anything to do with the performance
since booting a guest rarely involves much network I/O unless you're
doing something odd like NFS-root / iSCSI-root.

> Firstly I had thought that this could be the fault of the VNC: I have
> compiled qemu-kvm with no separate vnc thread. I thought that
> libvirt might have connected to the vnc server at all times and this
> could have slowed down the whole VM.
> But then I also tried connecting with vncviewer to the KVM machine
> launched directly from bash, and the speed of it didn't change. So
> no, it doesn't seem to be that.

Yeah, I have never seen VNC be responsible for the kind of slowdown
you describe.

> BTW: is the slowdown of the VM on "no separate vnc thread" only in
> effect when somebody is actually connected to VNC, or always?

Probably, but again I don't think it is likely to be relevant here.

> Also, note that the time difference is not visible in dmesg once the
> machine has booted. So it's not a slowdown in detecting devices.
> Devices are always detected within the first 3 seconds, according to
> dmesg, at 3.6 seconds the first ext4 mount begins. It seems to be
> really the OS boot that is slow... it seems a hard disk performance
> problem.


There are a couple of things that would be different between running the
VM directly, vs via libvirt.

 - Security drivers - SELinux/AppArmor
 - CGroups

If it was AppArmor causing this slowdown I don't think you would have
been the first person to complain, so let's ignore that. Which leaves
cgroups as a likely culprit. Do a

  grep cgroup /proc/mounts

if any of them are mounted, then for each cgroups mount in turn,

 - Umount the cgroup
 - Restart libvirtd
 - Test your guest boot performance
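
As a rough sketch of that loop (the libvirtd restart command is an
assumption; on Ubuntu it is probably "service libvirt-bin restart"):

 grep cgroup /proc/mounts | awk '{ print $2 }' | while read mnt; do
     echo "testing with $mnt unmounted"
     umount "$mnt"                  # unmount this cgroup hierarchy
     service libvirt-bin restart    # restart libvirtd (distro-specific)
     # ...then start the guest with virsh and time its boot...
 done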


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt
  2011-09-28  7:51 ` [libvirt] " Daniel P. Berrange
@ 2011-09-28  9:19   ` Reeted
  2011-09-28  9:28     ` Daniel P. Berrange
  0 siblings, 1 reply; 16+ messages in thread
From: Reeted @ 2011-09-28  9:19 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: kvm, libvir-list

On 09/28/11 09:51, Daniel P. Berrange wrote:
> On Tue, Sep 27, 2011 at 08:10:21PM +0200, Reeted wrote:
>> I repost this, this time by also including the libvirt mailing list.
>>
>> Info on my libvirt: it's the version in Ubuntu 11.04 Natty which is
>> 0.8.8-1ubuntu6.5 . I didn't recompile this one, while Kernel and
>> qemu-kvm are vanilla and compiled by hand as described below.
>>
>> My original message follows:
>>
>> This is really strange.
>>
>> I just installed a new host with kernel 3.0.3 and Qemu-KVM 0.14.1
>> compiled by me.
>>
>> I have created the first VM.
>> This is on LVM, virtio etc... if I run it directly from bash
>> console, it boots in 8 seconds (it's a bare ubuntu with no
>> graphics), while if I boot it under virsh (libvirt) it boots in
>> 20-22 seconds. This is the time from after Grub to the login prompt,
>> or from after Grub to the ssh-server up.
>>
>> I was almost able to replicate the whole libvirt command line on the
>> bash console, and it still goes almost 3x faster when launched from
>> bash than with virsh start vmname. The part I wasn't able to
>> replicate is the -netdev part because I still haven't understood the
>> semantics of it.
> -netdev is just an alternative way of setting up networking that
> avoids QEMU's nasty VLAN concept. Using -netdev allows QEMU to
> use more efficient codepaths for networking, which should improve
> the network performance.
>
>> This is my bash commandline:
>>
>> /opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
>> -m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
>> ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
>> -boot order=dc,menu=on -drive file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
>> -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>> -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
>> -usb -vnc 127.0.0.1:0 -vga cirrus -device
>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>
> This shows KVM is being requested, but we should validate that KVM is
> definitely being activated when under libvirt. You can test this by
> doing:
>
>      virsh qemu-monitor-command vmname1 'info kvm'

kvm support: enabled

I think I would see a bigger impact if KVM were not enabled.

>> Which was taken from libvirt's command line. The only modifications
>> I did to the original libvirt commandline (seen with ps aux) were:
>>
>> - Removed -S
> Fine, has no effect on performance.
>
>> - Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
>> Has been simplified to: -net nic,model=virtio -net
>> tap,ifname=tap0,script=no,downscript=no
>> and manual bridging of the tap0 interface.
> You could have equivalently used
>
>   -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
>   -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3

It's this! It's this!! (thanks for the line)

It raises boot time by 10-13 seconds

But now I don't know where to look... During boot there is usually a pause 
between /scripts/init-bottom (Ubuntu 11.04 guest) and the appearance of 
the login prompt, but that is not really meaningful because there is 
probably a lot of background activity going on there, with init etc., 
which doesn't display messages


init-bottom does just this

---------------------------------
#!/bin/sh -e
# initramfs init-bottom script for udev

PREREQ=""

# Output pre-requisites
prereqs()
{
         echo "$PREREQ"
}

case "$1" in
     prereqs)
         prereqs
         exit 0
         ;;
esac


# Stop udevd, we'll miss a few events while we run init, but we catch up
pkill udevd

# Move /dev to the real filesystem
mount -n -o move /dev ${rootmnt}/dev
---------------------------------

It doesn't look like it should take time to execute.
So there is probably some other background activity going on... and that 
is slower, but I don't know what that is.


Another thing that can be noticed is that the dmesg message:

[   13.290173] eth0: no IPv6 routers present

(which is also the last message)

happens on average 1 (one) second earlier in the fast case (-net) than 
in the slow case (-netdev)


> That said, I don't expect this has anything to do with the performance
> since booting a guest rarely involves much network I/O unless you're
> doing something odd like NFS-root / iSCSI-root.

No, there is nothing like that. No network disks or NFS.

I had ntpdate, but I removed that and it changed nothing.


>> Firstly I had thought that this could be the fault of the VNC: I have
>> compiled qemu-kvm with no separate vnc thread. I thought that
>> libvirt might have connected to the vnc server at all times and this
>> could have slowed down the whole VM.
>> But then I also tried connecting with vncviewer to the KVM machine
>> launched directly from bash, and the speed of it didn't change. So
>> no, it doesn't seem to be that.
> Yeah, I have never seen VNC be responsible for the kind of slowdown
> you describe.

No, it's not that; now I am using SDL and the command line in both cases 
(fast and slow)

>> BTW: is the slowdown of the VM on "no separate vnc thread" only in
>> effect when somebody is actually connected to VNC, or always?
> Probably, but again I don't think it is likely to be relevant here.

"Probably" always, or "probably" only when somebody is connected?



>> Also, note that the time difference is not visible in dmesg once the
>> machine has booted. So it's not a slowdown in detecting devices.
>> Devices are always detected within the first 3 seconds, according to
>> dmesg, at 3.6 seconds the first ext4 mount begins. It seems to be
>> really the OS boot that is slow... it seems a hard disk performance
>> problem.
>
> There are a couple of things that would be different between running the
> VM directly, vs via libvirt.
>
>   - Security drivers - SELinux/AppArmor

No SELinux on the host or guests

>   - CGroups
>
> If it was AppArmor causing this slowdown I don't think you would have
> been the first person to complain, so let's ignore that. Which leaves
> cgroups as a likely culprit. Do a
>
>    grep cgroup /proc/mounts

No cgroups mounted on the host


> if any of them are mounted, then for each cgroups mount in turn,
>
>   - Umount the cgroup
>   - Restart libvirtd
>   - Test your guest boot performance

Thanks for your help!

Do you have an idea of what to test now?


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt
  2011-09-28  9:19   ` Reeted
@ 2011-09-28  9:28     ` Daniel P. Berrange
  2011-09-28  9:49       ` Reeted
  0 siblings, 1 reply; 16+ messages in thread
From: Daniel P. Berrange @ 2011-09-28  9:28 UTC (permalink / raw)
  To: Reeted; +Cc: kvm, libvir-list

On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:
> On 09/28/11 09:51, Daniel P. Berrange wrote:
> >>This is my bash commandline:
> >>
> >>/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
> >>-m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
> >>ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
> >>-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
> >>-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
> >>-boot order=dc,menu=on -drive file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
> >>-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
> >>-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
> >>-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> >>-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
> >>-usb -vnc 127.0.0.1:0 -vga cirrus -device
> >>virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> >
> >This shows KVM is being requested, but we should validate that KVM is
> >definitely being activated when under libvirt. You can test this by
> >doing:
> >
> >     virsh qemu-monitor-command vmname1 'info kvm'
> 
> kvm support: enabled
> 
> I think I would see a higher impact if it was KVM not enabled.
> 
> >>Which was taken from libvirt's command line. The only modifications
> >>I did to the original libvirt commandline (seen with ps aux) were:


> >>- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
> >>-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
> >>Has been simplified to: -net nic,model=virtio -net
> >>tap,ifname=tap0,script=no,downscript=no
> >>and manual bridging of the tap0 interface.
> >You could have equivalently used
> >
> >  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
> >  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
> 
> It's this! It's this!! (thanks for the line)
> 
> It raises boot time by 10-13 seconds

Ok, that is truly bizarre and I don't really have any explanation
for why that is. I guess you could try 'vhost=off' too and see if that
makes the difference.
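
That would be the same -netdev line as above with the flag flipped, i.e.

 -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=off
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3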

> 
> But now I don't know where to look.... During boot there is a pause
> usually between /scripts/init-bottom  (Ubuntu 11.04 guest) and the
> appearance of login prompt, however that is not really meaningful
> because there is probably much background activity going on there,
> with init etc. which don't display messages
> 
> 
> init-bottom does just this
> 
> ---------------------------------
> #!/bin/sh -e
> # initramfs init-bottom script for udev
> 
> PREREQ=""
> 
> # Output pre-requisites
> prereqs()
> {
>         echo "$PREREQ"
> }
> 
> case "$1" in
>     prereqs)
>         prereqs
>         exit 0
>         ;;
> esac
> 
> 
> # Stop udevd, we'll miss a few events while we run init, but we catch up
> pkill udevd
> 
> # Move /dev to the real filesystem
> mount -n -o move /dev ${rootmnt}/dev
> ---------------------------------
> 
> It doesn't look like it should take time to execute.
> So there is probably some other background activity going on... and
> that is slower, but I don't know what that is.
> 
> 
> Another thing that can be noticed is that the dmesg message:
> 
> [   13.290173] eth0: no IPv6 routers present
> 
> (which is also the last message)
> 
> happens on average 1 (one) second earlier in the fast case (-net)
> than in the slow case (-netdev)

Hmm, none of that looks particularly suspect. So I don't really have
much idea what else to try apart from the 'vhost=off' possibility.


Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt
  2011-09-28  9:28     ` Daniel P. Berrange
@ 2011-09-28  9:49       ` Reeted
  2011-09-28  9:53         ` [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on) Daniel P. Berrange
  2011-09-29  0:39         ` [libvirt] Qemu/KVM is 3x slower under libvirt Chris Wright
  0 siblings, 2 replies; 16+ messages in thread
From: Reeted @ 2011-09-28  9:49 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: kvm, libvir-list

On 09/28/11 11:28, Daniel P. Berrange wrote:
> On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:
>> On 09/28/11 09:51, Daniel P. Berrange wrote:
>>>> This is my bash commandline:
>>>>
>>>> /opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
>>>> -m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
>>>> ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
>>>> -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
>>>> -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
>>>> -boot order=dc,menu=on -drive file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
>>>> -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>>>> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
>>>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>>>> -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
>>>> -usb -vnc 127.0.0.1:0 -vga cirrus -device
>>>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>>> This shows KVM is being requested, but we should validate that KVM is
>>> definitely being activated when under libvirt. You can test this by
>>> doing:
>>>
>>>      virsh qemu-monitor-command vmname1 'info kvm'
>> kvm support: enabled
>>
>> I think I would see a higher impact if it was KVM not enabled.
>>
>>>> Which was taken from libvirt's command line. The only modifications
>>>> I did to the original libvirt commandline (seen with ps aux) were:
>
>>>> - Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
>>>> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
>>>> Has been simplified to: -net nic,model=virtio -net
>>>> tap,ifname=tap0,script=no,downscript=no
>>>> and manual bridging of the tap0 interface.
>>> You could have equivalently used
>>>
>>>   -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
>>>   -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
>> It's this! It's this!! (thanks for the line)
>>
>> It raises boot time by 10-13 seconds
> Ok, that is truly bizarre and I don't really have any explanation
> for why that is. I guess you could try 'vhost=off' too and see if that
> makes the difference.

YES!
It's the vhost. With vhost=on it takes about 12 seconds more time to boot.

...meaning? :-)



* Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)
  2011-09-28  9:49       ` Reeted
@ 2011-09-28  9:53         ` Daniel P. Berrange
  2011-09-28 10:19           ` Reeted
  2011-09-29  0:39         ` [libvirt] Qemu/KVM is 3x slower under libvirt Chris Wright
  1 sibling, 1 reply; 16+ messages in thread
From: Daniel P. Berrange @ 2011-09-28  9:53 UTC (permalink / raw)
  To: Reeted; +Cc: kvm, libvir-list

On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:
> On 09/28/11 11:28, Daniel P. Berrange wrote:
> >On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:
> >>On 09/28/11 09:51, Daniel P. Berrange wrote:
> >>>>This is my bash commandline:
> >>>>
> >>>>/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
> >>>>-m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
> >>>>ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
> >>>>-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
> >>>>-mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
> >>>>-boot order=dc,menu=on -drive file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
> >>>>-device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
> >>>>-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
> >>>>-device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> >>>>-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
> >>>>-usb -vnc 127.0.0.1:0 -vga cirrus -device
> >>>>virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> >>>This shows KVM is being requested, but we should validate that KVM is
> >>>definitely being activated when under libvirt. You can test this by
> >>>doing:
> >>>
> >>>     virsh qemu-monitor-command vmname1 'info kvm'
> >>kvm support: enabled
> >>
> >>I think I would see a higher impact if it was KVM not enabled.
> >>
> >>>>Which was taken from libvirt's command line. The only modifications
> >>>>I did to the original libvirt commandline (seen with ps aux) were:
> >
> >>>>- Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
> >>>>-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
> >>>>Has been simplified to: -net nic,model=virtio -net
> >>>>tap,ifname=tap0,script=no,downscript=no
> >>>>and manual bridging of the tap0 interface.
> >>>You could have equivalently used
> >>>
> >>>  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
> >>>  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
> >>It's this! It's this!! (thanks for the line)
> >>
> >>It raises boot time by 10-13 seconds
> >Ok, that is truly bizarre and I don't really have any explanation
> >for why that is. I guess you could try 'vhost=off' too and see if that
> >makes the difference.
> 
> YES!
> It's the vhost. With vhost=on it takes about 12 seconds more time to boot.
> 
> ...meaning? :-)

I've no idea. I was always under the impression that 'vhost=on' was
the 'make it go much faster' switch. So something is going wrong
here that I can't explain.

Perhaps one of the network people on this list can explain...


To turn vhost off in the libvirt XML, you should be able to use
<driver name='qemu'/> for the interface in question, e.g.


    <interface type='user'>
      <mac address='52:54:00:e5:48:58'/>
      <model type='virtio'/>
      <driver name='qemu'/>
    </interface>
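
To apply that, something along these lines should work (the domain name
is assumed to match the -name used above):

    virsh edit vmname1-1       # change the <interface> as shown
    virsh shutdown vmname1-1   # then, once the guest is down:
    virsh start vmname1-1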

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)
  2011-09-28  9:53         ` [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on) Daniel P. Berrange
@ 2011-09-28 10:19           ` Reeted
  2011-09-28 10:29             ` Daniel P. Berrange
  2011-09-28 12:56             ` Richard W.M. Jones
  0 siblings, 2 replies; 16+ messages in thread
From: Reeted @ 2011-09-28 10:19 UTC (permalink / raw)
  To: Daniel P. Berrange; +Cc: libvir-list, kvm

On 09/28/11 11:53, Daniel P. Berrange wrote:
> On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:
>> YES!
>> It's the vhost. With vhost=on it takes about 12 seconds more time to boot.
>>
>> ...meaning? :-)
> I've no idea. I was always under the impression that 'vhost=on' was
> the 'make it go much faster' switch. So something is going wrong
> here that I can't explain.
>
> Perhaps one of the network people on this list can explain...
>
>
> To turn vhost off in the libvirt XML, you should be able to use
> <driver name='qemu'/>  for the interface in question,eg
>
>
>      <interface type='user'>
>        <mac address='52:54:00:e5:48:58'/>
>        <model type='virtio'/>
>        <driver name='qemu'/>
>      </interface>


Ok, that seems to work: it removes the vhost part from the virsh launch, 
hence cutting 12 seconds off the boot time.

If nobody comes up with an explanation of why, I will open another 
thread on the kvm list for this. I would probably need to test disk 
performance with vhost=on to see whether it degrades, or whether the 
boot time is increased for some other reason.

Thanks so much for your help Daniel,
Reeted


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)
  2011-09-28 10:19           ` Reeted
@ 2011-09-28 10:29             ` Daniel P. Berrange
  2011-09-28 12:56             ` Richard W.M. Jones
  1 sibling, 0 replies; 16+ messages in thread
From: Daniel P. Berrange @ 2011-09-28 10:29 UTC (permalink / raw)
  To: Reeted; +Cc: kvm, libvir-list

On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:
> On 09/28/11 11:53, Daniel P. Berrange wrote:
> >On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:
> >>YES!
> >>It's the vhost. With vhost=on it takes about 12 seconds more time to boot.
> >>
> >>...meaning? :-)
> >I've no idea. I was always under the impression that 'vhost=on' was
> >the 'make it go much faster' switch. So something is going wrong
> >here that I can't explain.
> >
> >Perhaps one of the network people on this list can explain...
> >
> >
> >To turn vhost off in the libvirt XML, you should be able to use
> ><driver name='qemu'/>  for the interface in question,eg
> >
> >
> >     <interface type='user'>
> >       <mac address='52:54:00:e5:48:58'/>
> >       <model type='virtio'/>
> >       <driver name='qemu'/>
> >     </interface>
> 
> 
> Ok that seems to work: it removes the vhost part in the virsh launch
> hence cutting down 12secs of boot time.
> 
> If nobody comes out with an explanation of why, I will open another
> thread on the kvm list for this. I would probably need to test disk
> performance on vhost=on to see if it degrades or it's for another
> reason that boot time is increased.

Be sure to CC the qemu-devel mailing list too next time, since that has
a wider audience who might be able to help


Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)
  2011-09-28 10:19           ` Reeted
  2011-09-28 10:29             ` Daniel P. Berrange
@ 2011-09-28 12:56             ` Richard W.M. Jones
  2011-09-28 14:51               ` Reeted
  1 sibling, 1 reply; 16+ messages in thread
From: Richard W.M. Jones @ 2011-09-28 12:56 UTC (permalink / raw)
  To: Reeted; +Cc: Daniel P. Berrange, libvir-list, kvm

On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:
> On 09/28/11 11:53, Daniel P. Berrange wrote:
> >On Wed, Sep 28, 2011 at 11:49:01AM +0200, Reeted wrote:
> >>YES!
> >>It's the vhost. With vhost=on it takes about 12 seconds more time to boot.
> >>
> >>...meaning? :-)
> >I've no idea. I was always under the impression that 'vhost=on' was
> >the 'make it go much faster' switch. So something is going wrong
> >here that I can't explain.
> >
> >Perhaps one of the network people on this list can explain...
> >
> >
> >To turn vhost off in the libvirt XML, you should be able to use
> ><driver name='qemu'/>  for the interface in question,eg
> >
> >
> >     <interface type='user'>
> >       <mac address='52:54:00:e5:48:58'/>
> >       <model type='virtio'/>
> >       <driver name='qemu'/>
> >     </interface>
> 
> 
> Ok that seems to work: it removes the vhost part in the virsh launch
> hence cutting down 12secs of boot time.
> 
> If nobody comes out with an explanation of why, I will open another
> thread on the kvm list for this. I would probably need to test disk
> performance on vhost=on to see if it degrades or it's for another
> reason that boot time is increased.

Is it using CPU during this time, or is the qemu-kvm process idle?

It wouldn't be the first time that a network option ROM sat around
waiting for an imaginary console user to press a key.
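
One way to check that (a sketch; the process-matching pattern is an
assumption):

  top -H -p $(pgrep -f 'qemu.*vmname1-1')        # per-thread CPU of the qemu-kvm process
  pidstat -t -p $(pgrep -f 'qemu.*vmname1-1') 1  # same, sampled every second (from sysstat)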

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
libguestfs lets you edit virtual machines.  Supports shell scripting,
bindings from many languages.  http://libguestfs.org


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)
  2011-09-28 12:56             ` Richard W.M. Jones
@ 2011-09-28 14:51               ` Reeted
  2011-09-28 23:27                 ` Reeted
  0 siblings, 1 reply; 16+ messages in thread
From: Reeted @ 2011-09-28 14:51 UTC (permalink / raw)
  To: Richard W.M. Jones; +Cc: Daniel P. Berrange, libvir-list, kvm

On 09/28/11 14:56, Richard W.M. Jones wrote:
> On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:
>> Ok that seems to work: it removes the vhost part in the virsh launch
>> hence cutting down 12secs of boot time.
>>
>> If nobody comes out with an explanation of why, I will open another
>> thread on the kvm list for this. I would probably need to test disk
>> performance on vhost=on to see if it degrades or it's for another
>> reason that boot time is increased.
> Is it using CPU during this time, or is the qemu-kvm process idle?
>
> It wouldn't be the first time that a network option ROM sat around
> waiting for an imaginary console user to press a key.
>
> Rich.

Of the two qemu-kvm processes (threads?) which I see consuming CPU for 
that VM, one is at about 20%, the other at about 10%. I think it's doing 
something but maybe not much, or maybe it's really I/O bound and the I/O 
is slow (as I originally thought). I will perform some disk benchmarks 
and follow up, but I can't do that right now...
Thank you


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on)
  2011-09-28 14:51               ` Reeted
@ 2011-09-28 23:27                 ` Reeted
  0 siblings, 0 replies; 16+ messages in thread
From: Reeted @ 2011-09-28 23:27 UTC (permalink / raw)
  To: Richard W.M. Jones; +Cc: Daniel P. Berrange, libvir-list, kvm

On 09/28/11 16:51, Reeted wrote:
> On 09/28/11 14:56, Richard W.M. Jones wrote:
>> On Wed, Sep 28, 2011 at 12:19:09PM +0200, Reeted wrote:
>>> Ok that seems to work: it removes the vhost part in the virsh launch
>>> hence cutting down 12secs of boot time.
>>>
>>> If nobody comes out with an explanation of why, I will open another
>>> thread on the kvm list for this. I would probably need to test disk
>>> performance on vhost=on to see if it degrades or it's for another
>>> reason that boot time is increased.
>> Is it using CPU during this time, or is the qemu-kvm process idle?
>>
>> It wouldn't be the first time that a network option ROM sat around
>> waiting for an imaginary console user to press a key.
>>
>> Rich.
>
> Of the two qemu-kvm processes (threads?) which I see consuming CPU for 
> that VM, one is at about 20%, the other at about 10%. I think it's 
> doing something but maybe not much, or maybe it's really I/O bound and 
> the I/O is slow (as I originally thought). I will perform some disk 
> benchmarks and follow up, but I can't do that right now...
> Thank you

Ok, I still didn't do the benchmarks, but I am now fairly convinced that 
it's either a disk performance problem or a CPU problem with vhost_net on, 
not a network performance problem or idle wait.

That is because I have now installed another virtual machine, which is a 
Fedora Core 6 (old!), but with an Ubuntu Natty kernel vmlinuz + initrd so 
that it supports virtio devices. The initrd part from Ubuntu is extremely 
short so it finishes immediately, but the Fedora Core 6 boot is much 
longer than with my previous barebone Ubuntu virtual machine and has more 
messages, so I can see the various daemons being brought up one by one, 
and I can tell you that boot (and also the teardown of services during 
shutdown) is very much faster with vhost_net disabled.

With vhost_net disabled it takes 30 seconds to come up (from after Grub), 
and 28 seconds to shut down.
With vhost_net enabled it takes 1m19s to come up (from after Grub), 
and 1m04s to shut down.


I have some ideas for disk benchmarking: fio or simple dd. What could I 
use for CPU benchmarking? Would "openssl speed" be too simple?

Thank you


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt
  2011-09-28  9:49       ` Reeted
  2011-09-28  9:53         ` [libvirt] Qemu/KVM is 3x slower under libvirt (due to vhost=on) Daniel P. Berrange
@ 2011-09-29  0:39         ` Chris Wright
  2011-09-29 10:16           ` Reeted
  1 sibling, 1 reply; 16+ messages in thread
From: Chris Wright @ 2011-09-29  0:39 UTC (permalink / raw)
  To: Reeted; +Cc: Daniel P. Berrange, kvm, libvir-list

* Reeted (reeted@shiftmail.org) wrote:
> On 09/28/11 11:28, Daniel P. Berrange wrote:
> >On Wed, Sep 28, 2011 at 11:19:43AM +0200, Reeted wrote:
> >>On 09/28/11 09:51, Daniel P. Berrange wrote:
> >>>You could have equivalently used
> >>>
> >>>  -netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
> >>>  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
> >>It's this! It's this!! (thanks for the line)
> >>
> >>It raises boot time by 10-13 seconds
> >Ok, that is truly bizarre and I don't really have any explanation
> >for why that is. I guess you could try 'vhost=off' too and see if that
> >makes the difference.
> 
> YES!
> It's the vhost. With vhost=on it takes about 12 seconds more time to boot.

Can you help narrow down what is happening during the additional 12
seconds in the guest?  For example, does a quick simple boot to single
user mode happen at the same boot speed w/ and w/out vhost_net?

I'm guessing (hoping) that it's the network bring-up that is slow.
Are you using dhcp to get an IP address?  Does static IP have the same
slow down?

If it's just dhcp, can you recompile qemu with this patch and see if it
causes the same slowdown you saw w/ vhost?

diff --git a/hw/virtio-net.c b/hw/virtio-net.c
index 0b03b57..0c864f7 100644
--- a/hw/virtio-net.c
+++ b/hw/virtio-net.c
@@ -496,7 +496,7 @@ static int receive_header(VirtIONet *n, struct iovec *iov, int iovcnt,
     if (n->has_vnet_hdr) {
         memcpy(hdr, buf, sizeof(*hdr));
         offset = sizeof(*hdr);
-        work_around_broken_dhclient(hdr, buf + offset, size - offset);
+//        work_around_broken_dhclient(hdr, buf + offset, size - offset);
     }
 
     /* We only ever receive a struct virtio_net_hdr from the tapfd,
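
If it helps, a rough way to apply and rebuild (the patch file name and
configure flags are assumptions; the prefix matches the install path used
earlier in this thread):

  cd qemu-kvm-0.14.1
  patch -p1 < workaround-dhclient-off.diff   # the hunk above, saved to a file
  ./configure --prefix=/opt/qemu-kvm-0.14.1
  make && make install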


* Re: [libvirt] Qemu/KVM is 3x slower under libvirt
  2011-09-29  0:39         ` [libvirt] Qemu/KVM is 3x slower under libvirt Chris Wright
@ 2011-09-29 10:16           ` Reeted
  2011-09-29 16:40             ` Chris Wright
  0 siblings, 1 reply; 16+ messages in thread
From: Reeted @ 2011-09-29 10:16 UTC (permalink / raw)
  To: Chris Wright; +Cc: Daniel P. Berrange, kvm, libvir-list

On 09/29/11 02:39, Chris Wright wrote:
> Can you help narrow down what is happening during the additional 12
> seconds in the guest?  For example, does a quick simple boot to single
> user mode happen at the same boot speed w/ and w/out vhost_net?

Not tried (it would probably be too short to measure effectively), but I'd 
guess it would be the same as for multi-user; see also the FC6 sub-thread

> I'm guessing (hoping) that it's the network bring-up that is slow.
> Are you using dhcp to get an IP address?  Does static IP have the same
> slow down?

It's all static IP.

And please see my previous post, 1 hour before yours, regarding Fedora 
Core 6: the bring-up of eth0 in Fedora Core 6 is not particularly faster 
or slower than the rest. This is an overall system slowdown (I'd say 
either CPU or disk I/O) not related to the network (apart from being 
triggered by vhost_net).




* Re: [libvirt] Qemu/KVM is 3x slower under libvirt
  2011-09-29 10:16           ` Reeted
@ 2011-09-29 16:40             ` Chris Wright
  2011-10-04 23:12               ` Qemu/KVM guest boots 2x slower with vhost_net Reeted
  0 siblings, 1 reply; 16+ messages in thread
From: Chris Wright @ 2011-09-29 16:40 UTC (permalink / raw)
  To: Reeted; +Cc: Chris Wright, Daniel P. Berrange, kvm, libvir-list

* Reeted (reeted@shiftmail.org) wrote:
> On 09/29/11 02:39, Chris Wright wrote:
> >Can you help narrow down what is happening during the additional 12
> >seconds in the guest?  For example, does a quick simple boot to single
> >user mode happen at the same boot speed w/ and w/out vhost_net?
> 
> Not tried (would probably be too short to measure effectively) but
> I'd guess it would be the same as for multiuser, see also the FC6
> sub-thread
> 
> >I'm guessing (hoping) that it's the network bring-up that is slow.
> >Are you using dhcp to get an IP address?  Does static IP have the same
> >slow down?
> 
> It's all static IP.
> 
> And please see my previous post, 1 hour before yours, regarding
> Fedora Core 6: the bring-up of eth0 in Fedora Core 6 is not
> particularly faster or slower than the rest. This is an overall
> system slowdown (I'd say either CPU or disk I/O) not related to the
> network (apart from being triggered by vhost_net).

OK, I re-read it (pretty sure FC6 had the old dhclient, which is why
I wondered).  That is odd.  No ideas are springing to mind.

thanks,
-chris


* Qemu/KVM guest boots 2x slower with vhost_net
  2011-09-29 16:40             ` Chris Wright
@ 2011-10-04 23:12               ` Reeted
  2011-10-09 21:47                 ` Reeted
  0 siblings, 1 reply; 16+ messages in thread
From: Reeted @ 2011-10-04 23:12 UTC (permalink / raw)
  To: kvm, libvir-list, qemu-devel
  Cc: Chris Wright, Richard W.M. Jones, Daniel P. Berrange

Hello all,
for people on the qemu-devel list: you might want to have a look at the 
previous thread about this topic, at
http://www.spinics.net/lists/kvm/msg61537.html
but I will try to recap here.

I found that virtual machines on my host booted 2x slower (on average 
it's 2x slower, but some parts are probably at least 3x slower) under 
libvirt compared to a manual qemu-kvm launch. With Daniel's help I 
narrowed it down to the presence of vhost_net (active by default when 
launched by libvirt), i.e. with vhost_net the boot process is *UNIFORMLY* 
2x slower.
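
A quick way to confirm that vhost is really in use for a given run is to
look for the vhost kernel threads and the module on the host (a sketch;
thread naming may vary with the kernel version):

 ps -ef | grep '\[vhost-'     # vhost worker threads, named after the qemu PID
 lsmod | grep vhost_net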

The problem is still reproducible on my systems but these are going to 
go to production soon and I am quite busy, I might not have many more 
days for testing left. Might be just next saturday and sunday for 
testing this problem, so if you can write here some of your suggestions 
by saturday that would be most appreciated.


I have performed some benchmarks now, which I hadn't performed in the 
old thread:

openssl speed -multi 2 rsa (CPU benchmark): shows no performance 
difference with or without vhost_net.
Disk benchmarks: show no performance difference with or without vhost_net.
The disk benchmarks were (both with cache=none and cache=writeback):
- dd streaming read
- dd streaming write
- fio 4k random read, in all cases of cache=none, cache=writeback with 
the host cache dropped before the test, and cache=writeback with all fio 
data in the host cache (which measures context switching)
- fio 4k random write
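
Roughly, a sketch of what such commands might look like (file names, sizes
and the device path are assumptions, not the exact invocations used):

 openssl speed -multi 2 rsa
 dd if=/dev/vda of=/dev/null bs=1M count=4096 iflag=direct      # streaming read
 dd if=/dev/zero of=/root/ddtest bs=1M count=4096 oflag=direct  # streaming write
 fio --name=randread --filename=/root/fiotest --size=1G --bs=4k \
     --rw=randread --direct=1 --ioengine=libaio --runtime=60
 fio --name=randwrite --filename=/root/fiotest --size=1G --bs=4k \
     --rw=randwrite --direct=1 --ioengine=libaio --runtime=60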

So I couldn't reproduce the problem with any benchmark that came to my mind.

But in the boot process this is very visible.
I'll continue the description below; before that, here are the system 
specifications:
---------------------------------------
Host: kernel 3.0.3 and Qemu-KVM 0.14.1, both vanilla and compiled by me.
Libvirt is the version in Ubuntu 11.04 Natty, which is 0.8.8-1ubuntu6.5; 
I didn't recompile this one.

VM disks are LVM LVs on an MD RAID array.
The problem shows identically with both cache=none and cache=writeback. 
Aio native.

Physical CPUs: dual Westmere 6-core (12 cores total, plus hyperthreading).
2 vCPUs per VM.
All VMs are idle or off except the VM being tested.

Guests are:
- multiple Ubuntu 11.04 Natty 64bit with their 2.6.38-8-virtual kernel: 
very minimal Ubuntu installs made with debootstrap (not from the Ubuntu 
installer)
- one Fedora Core 6 32bit with a 32bit 2.6.38-8-virtual kernel + initrd, 
both taken from Ubuntu Natty 32bit (so I could have virtio). Standard 
install (except the kernel replaced afterwards).
All guests always use static IP addresses.
---------------------------------------

All types of guests show this problem, but it is more visible in the FC6 
guest because its boot process is MUCH longer than in the 
debootstrap-installed Ubuntus.

Please note that most of the boot process, at least from a certain point 
onwards, appears to the eye uniformly 2x or 3x slower under vhost_net. 
By boot process I mean, roughly, the following (copied by hand from some 
screenshots):


Loading default keymap
Setting hostname
Setting up LVM - no volume groups found
checking filesystems... clean ...
remounting root filesystem in read-write mode
mounting local filesystems
enabling local filesystems quotas
enabling /etc/fstab swaps
INIT entering runlevel 3
entering non-interactive startup
Starting sysstat: calling the system activity data collector (sadc)
Starting background readahead

********** starting from here everything, or almost everything, is 
much slower

Checking for hardware changes
Bringing up loopback interface
Bringing up interface eth0
starting system logger
starting kernel logger
starting irqbalance
starting portmap
starting nfs statd
starting rpc idmapd
starting system message bus
mounting other filesystems
starting PC/SC smart card daemon (pcscd)
starting hidd ... can't open HIDP control socket: address family not 
supported by protocol (this is an error due to backporting a new Ubuntu 
kernel to FC6)
starting autofs: loading autofs4
starting automount
starting acpi daemon
starting hpiod
starting hpssd
starting cups
starting sshd
starting ntpd
starting sendmail
starting sm-client
starting console mouse services
starting crond
starting xfs
starting anacron
starting atd
starting yum-updatesd
starting Avahi daemon
starting HAL daemon


From the point I marked onwards, most of these are services, i.e. daemons 
listening on sockets, so I thought that maybe binding to a socket could be 
slower under vhost_net, but putting nc into listening mode with 
"nc -l 15000" is instantaneous, so I am not sure.

The shutdown of FC6, which tears down basically the same services as 
above, is *also* much slower with vhost_net.

Thanks for any suggestions
R.


* Re: Qemu/KVM guest boots 2x slower with vhost_net
  2011-10-04 23:12               ` Qemu/KVM guest boots 2x slower with vhost_net Reeted
@ 2011-10-09 21:47                 ` Reeted
  0 siblings, 0 replies; 16+ messages in thread
From: Reeted @ 2011-10-09 21:47 UTC (permalink / raw)
  To: kvm, libvir-list, qemu-devel
  Cc: Chris Wright, Richard W.M. Jones, Daniel P. Berrange

On 10/05/11 01:12, Reeted wrote:
> .....
> I found that virtual machines in my host booted 2x slower ... to the 
> vhost_net presence
> ...

Just a small update,

Firstly: I cannot reproduce any slowness after boot by doing:

# time /etc/init.d/chrony restart
Restarting time daemon: Starting /usr/sbin/chronyd...
chronyd is running and online.
real    0m3.022s
user    0m0.000s
sys     0m0.000s

since this is a network service I expected it to show the problem, but 
it doesn't. It takes exactly the same time with and without vhost_net.


Secondly, vhost_net appears to work correctly, because I have performed 
an NPtcp performance test between two guests on the same host, and these 
are the results:

vhost_net deactivated for both guests:

NPtcp -h 192.168.7.81
     Send and receive buffers are 16384 and 87380 bytes
     (A bug in Linux doubles the requested buffer sizes)
     Now starting the main loop
     0:       1 bytes    917 times -->      0.08 Mbps in      92.07 usec
     1:       2 bytes   1086 times -->      0.18 Mbps in      86.04 usec
     2:       3 bytes   1162 times -->      0.27 Mbps in      85.08 usec
     3:       4 bytes    783 times -->      0.36 Mbps in      85.34 usec
     4:       6 bytes    878 times -->      0.54 Mbps in      85.42 usec
     5:       8 bytes    585 times -->      0.72 Mbps in      85.31 usec
     6:      12 bytes    732 times -->      1.07 Mbps in      85.52 usec
     7:      13 bytes    487 times -->      1.16 Mbps in      85.52 usec
     8:      16 bytes    539 times -->      1.43 Mbps in      85.26 usec
     9:      19 bytes    659 times -->      1.70 Mbps in      85.43 usec
     10:      21 bytes    739 times -->      1.77 Mbps in      90.71 usec
     11:      24 bytes    734 times -->      2.13 Mbps in      86.13 usec
     12:      27 bytes    822 times -->      2.22 Mbps in      92.80 usec
     13:      29 bytes    478 times -->      2.35 Mbps in      94.02 usec
     14:      32 bytes    513 times -->      2.60 Mbps in      93.75 usec
     15:      35 bytes    566 times -->      3.15 Mbps in      84.77 usec
     16:      45 bytes    674 times -->      4.01 Mbps in      85.56 usec
     17:      48 bytes    779 times -->      4.32 Mbps in      84.70 usec
     18:      51 bytes    811 times -->      4.61 Mbps in      84.32 usec
     19:      61 bytes    465 times -->      5.08 Mbps in      91.57 usec
     20:      64 bytes    537 times -->      5.22 Mbps in      93.46 usec
     21:      67 bytes    551 times -->      5.73 Mbps in      89.20 usec
     22:      93 bytes    602 times -->      8.28 Mbps in      85.73 usec
     23:      96 bytes    777 times -->      8.45 Mbps in      86.70 usec
     24:      99 bytes    780 times -->      8.71 Mbps in      86.72 usec
     25:     125 bytes    419 times -->     11.06 Mbps in      86.25 usec
     26:     128 bytes    575 times -->     11.38 Mbps in      85.80 usec
     27:     131 bytes    591 times -->     11.60 Mbps in      86.17 usec
     28:     189 bytes    602 times -->     16.55 Mbps in      87.14 usec
     29:     192 bytes    765 times -->     16.80 Mbps in      87.19 usec
     30:     195 bytes    770 times -->     17.11 Mbps in      86.94 usec
     31:     253 bytes    401 times -->     22.04 Mbps in      87.59 usec
     32:     256 bytes    568 times -->     22.64 Mbps in      86.25 usec
     33:     259 bytes    584 times -->     22.68 Mbps in      87.12 usec
     34:     381 bytes    585 times -->     33.19 Mbps in      87.58 usec
     35:     384 bytes    761 times -->     33.54 Mbps in      87.36 usec
     36:     387 bytes    766 times -->     33.91 Mbps in      87.08 usec
     37:     509 bytes    391 times -->     44.23 Mbps in      87.80 usec
     38:     512 bytes    568 times -->     44.70 Mbps in      87.39 usec
     39:     515 bytes    574 times -->     45.21 Mbps in      86.90 usec
     40:     765 bytes    580 times -->     66.05 Mbps in      88.36 usec
     41:     768 bytes    754 times -->     66.73 Mbps in      87.81 usec
     42:     771 bytes    760 times -->     67.02 Mbps in      87.77 usec
     43:    1021 bytes    384 times -->     88.04 Mbps in      88.48 usec
     44:    1024 bytes    564 times -->     88.30 Mbps in      88.48 usec
     45:    1027 bytes    566 times -->     88.63 Mbps in      88.40 usec
     46:    1533 bytes    568 times -->     71.75 Mbps in     163.00 usec
     47:    1536 bytes    408 times -->     72.11 Mbps in     162.51 usec
     48:    1539 bytes    410 times -->     71.71 Mbps in     163.75 usec
     49:    2045 bytes    204 times -->     95.40 Mbps in     163.55 usec
     50:    2048 bytes    305 times -->     95.26 Mbps in     164.02 usec
     51:    2051 bytes    305 times -->     95.33 Mbps in     164.14 usec
     52:    3069 bytes    305 times -->    141.16 Mbps in     165.87 usec
     53:    3072 bytes    401 times -->    142.19 Mbps in     164.83 usec
     54:    3075 bytes    404 times -->    150.68 Mbps in     155.70 usec
     55:    4093 bytes    214 times -->    192.36 Mbps in     162.33 usec
     56:    4096 bytes    307 times -->    193.21 Mbps in     161.74 usec
     57:    4099 bytes    309 times -->    213.24 Mbps in     146.66 usec
     58:    6141 bytes    341 times -->    330.80 Mbps in     141.63 usec
     59:    6144 bytes    470 times -->    328.09 Mbps in     142.87 usec
     60:    6147 bytes    466 times -->    330.53 Mbps in     141.89 usec
     61:    8189 bytes    235 times -->    437.29 Mbps in     142.87 usec
     62:    8192 bytes    349 times -->    436.23 Mbps in     143.27 usec
     63:    8195 bytes    349 times -->    436.99 Mbps in     143.08 usec
     64:   12285 bytes    349 times -->    625.88 Mbps in     149.75 usec
     65:   12288 bytes    445 times -->    626.27 Mbps in     149.70 usec
     66:   12291 bytes    445 times -->    626.15 Mbps in     149.76 usec
     67:   16381 bytes    222 times -->    793.58 Mbps in     157.48 usec
     68:   16384 bytes    317 times -->    806.90 Mbps in     154.91 usec
     69:   16387 bytes    322 times -->    796.81 Mbps in     156.90 usec
     70:   24573 bytes    318 times -->   1127.58 Mbps in     166.26 usec
     71:   24576 bytes    400 times -->   1125.20 Mbps in     166.64 usec
     72:   24579 bytes    400 times -->   1124.84 Mbps in     166.71 usec
     73:   32765 bytes    200 times -->   1383.86 Mbps in     180.64 usec
     74:   32768 bytes    276 times -->   1376.05 Mbps in     181.68 usec
     75:   32771 bytes    275 times -->   1377.47 Mbps in     181.51 usec
     76:   49149 bytes    275 times -->   1824.90 Mbps in     205.48 usec
     77:   49152 bytes    324 times -->   1813.95 Mbps in     206.73 usec
     78:   49155 bytes    322 times -->   1765.68 Mbps in     212.40 usec
     79:   65533 bytes    156 times -->   2193.44 Mbps in     227.94 usec
     80:   65536 bytes    219 times -->   2186.79 Mbps in     228.65 usec
     81:   65539 bytes    218 times -->   2186.98 Mbps in     228.64 usec
     82:   98301 bytes    218 times -->   2831.01 Mbps in     264.92 usec
     83:   98304 bytes    251 times -->   2804.76 Mbps in     267.40 usec
     84:   98307 bytes    249 times -->   2824.62 Mbps in     265.53 usec
     85:  131069 bytes    125 times -->   3106.48 Mbps in     321.90 usec
     86:  131072 bytes    155 times -->   3033.71 Mbps in     329.63 usec
     87:  131075 bytes    151 times -->   3044.89 Mbps in     328.43 usec
     88:  196605 bytes    152 times -->   4196.94 Mbps in     357.40 usec
     89:  196608 bytes    186 times -->   4358.25 Mbps in     344.17 usec
     90:  196611 bytes    193 times -->   4362.34 Mbps in     343.86 usec
     91:  262141 bytes     96 times -->   4654.49 Mbps in     429.69 usec
     92:  262144 bytes    116 times -->   4727.16 Mbps in     423.09 usec
     93:  262147 bytes    118 times -->   4697.22 Mbps in     425.79 usec
     94:  393213 bytes    117 times -->   5452.51 Mbps in     550.20 usec
     95:  393216 bytes    121 times -->   5360.27 Mbps in     559.67 usec
     96:  393219 bytes    119 times -->   5358.03 Mbps in     559.91 usec
     97:  524285 bytes     59 times -->   5053.83 Mbps in     791.47 usec
     98:  524288 bytes     63 times -->   5033.86 Mbps in     794.62 usec
     99:  524291 bytes     62 times -->   5691.44 Mbps in     702.81 usec
     100:  786429 bytes     71 times -->   5750.68 Mbps in    1043.35 usec
     101:  786432 bytes     63 times -->   5809.21 Mbps in    1032.84 usec
     102:  786435 bytes     64 times -->   5864.45 Mbps in    1023.12 usec
     103: 1048573 bytes     32 times -->   5755.24 Mbps in    1390.03 usec
     104: 1048576 bytes     35 times -->   6001.51 Mbps in    1333.00 usec
     105: 1048579 bytes     37 times -->   6099.40 Mbps in    1311.61 usec
     106: 1572861 bytes     38 times -->   6061.69 Mbps in    1979.64 usec
     107: 1572864 bytes     33 times -->   6144.15 Mbps in    1953.08 usec
     108: 1572867 bytes     34 times -->   6108.20 Mbps in    1964.58 usec
     109: 2097149 bytes     16 times -->   6128.72 Mbps in    2610.65 usec
     110: 2097152 bytes     19 times -->   6271.35 Mbps in    2551.29 usec
     111: 2097155 bytes     19 times -->   6273.55 Mbps in    2550.39 usec
     112: 3145725 bytes     19 times -->   6146.28 Mbps in    3904.79 usec
     113: 3145728 bytes     17 times -->   6288.29 Mbps in    3816.62 usec
     114: 3145731 bytes     17 times -->   6234.73 Mbps in    3849.41 usec
     115: 4194301 bytes      8 times -->   5852.76 Mbps in    5467.50 usec
     116: 4194304 bytes      9 times -->   5886.74 Mbps in    5435.94 usec
     117: 4194307 bytes      9 times -->   5887.35 Mbps in    5435.39 usec
     118: 6291453 bytes      9 times -->   4502.11 Mbps in   10661.67 usec
     119: 6291456 bytes      6 times -->   4541.26 Mbps in   10569.75 usec
     120: 6291459 bytes      6 times -->   4465.98 Mbps in   10747.93 usec
     121: 8388605 bytes      3 times -->   4601.84 Mbps in   13907.47 usec
     122: 8388608 bytes      3 times -->   4590.50 Mbps in   13941.84 usec
     123: 8388611 bytes      3 times -->   4195.17 Mbps in   15255.65 usec



vhost_net activated for both hosts:

NPtcp -h 192.168.7.81
     Send and receive buffers are 16384 and 87380 bytes
     (A bug in Linux doubles the requested buffer sizes)
     Now starting the main loop
     0:       1 bytes   1013 times -->      0.10 Mbps in      75.89 usec
     1:       2 bytes   1317 times -->      0.21 Mbps in      74.03 usec
     2:       3 bytes   1350 times -->      0.30 Mbps in      76.90 usec
     3:       4 bytes    866 times -->      0.43 Mbps in      71.27 usec
     4:       6 bytes   1052 times -->      0.60 Mbps in      76.02 usec
     5:       8 bytes    657 times -->      0.79 Mbps in      76.88 usec
     6:      12 bytes    812 times -->      1.24 Mbps in      73.72 usec
     7:      13 bytes    565 times -->      1.40 Mbps in      70.60 usec
     8:      16 bytes    653 times -->      1.58 Mbps in      77.05 usec
     9:      19 bytes    730 times -->      1.90 Mbps in      76.25 usec
     10:      21 bytes    828 times -->      1.98 Mbps in      80.85 usec
     11:      24 bytes    824 times -->      2.47 Mbps in      74.22 usec
     12:      27 bytes    954 times -->      2.73 Mbps in      75.45 usec
     13:      29 bytes    589 times -->      3.06 Mbps in      72.23 usec
     14:      32 bytes    668 times -->      3.26 Mbps in      74.84 usec
     15:      35 bytes    709 times -->      3.46 Mbps in      77.09 usec
     16:      45 bytes    741 times -->      4.50 Mbps in      76.35 usec
     17:      48 bytes    873 times -->      4.83 Mbps in      75.90 usec
     18:      51 bytes    905 times -->      5.50 Mbps in      70.72 usec
     19:      61 bytes    554 times -->      6.36 Mbps in      73.14 usec
     20:      64 bytes    672 times -->      6.28 Mbps in      77.77 usec
     21:      67 bytes    663 times -->      6.39 Mbps in      80.06 usec
     22:      93 bytes    671 times -->      9.44 Mbps in      75.15 usec
     23:      96 bytes    887 times -->      9.52 Mbps in      76.90 usec
     24:      99 bytes    880 times -->     10.55 Mbps in      71.57 usec
     25:     125 bytes    508 times -->     12.63 Mbps in      75.49 usec
     26:     128 bytes    657 times -->     12.30 Mbps in      79.38 usec
     27:     131 bytes    639 times -->     12.72 Mbps in      78.57 usec
     28:     189 bytes    660 times -->     18.36 Mbps in      78.55 usec
     29:     192 bytes    848 times -->     18.84 Mbps in      77.75 usec
     30:     195 bytes    864 times -->     18.91 Mbps in      78.69 usec
     31:     253 bytes    443 times -->     24.04 Mbps in      80.28 usec
     32:     256 bytes    620 times -->     26.61 Mbps in      73.40 usec
     33:     259 bytes    686 times -->     26.09 Mbps in      75.75 usec
     34:     381 bytes    672 times -->     40.04 Mbps in      72.59 usec
     35:     384 bytes    918 times -->     39.67 Mbps in      73.86 usec
     36:     387 bytes    906 times -->     40.68 Mbps in      72.58 usec
     37:     509 bytes    469 times -->     51.70 Mbps in      75.11 usec
     38:     512 bytes    664 times -->     51.55 Mbps in      75.77 usec
     39:     515 bytes    662 times -->     49.61 Mbps in      79.19 usec
     40:     765 bytes    637 times -->     75.91 Mbps in      76.89 usec
     41:     768 bytes    867 times -->     76.03 Mbps in      77.07 usec
     42:     771 bytes    866 times -->     76.21 Mbps in      77.19 usec
     43:    1021 bytes    436 times -->     99.46 Mbps in      78.32 usec
     44:    1024 bytes    637 times -->    100.04 Mbps in      78.10 usec
     45:    1027 bytes    641 times -->    100.06 Mbps in      78.31 usec
     46:    1533 bytes    641 times -->    113.15 Mbps in     103.36 usec
     47:    1536 bytes    644 times -->    127.72 Mbps in      91.75 usec
     48:    1539 bytes    727 times -->    102.87 Mbps in     114.14 usec
     49:    2045 bytes    293 times -->    177.68 Mbps in      87.81 usec
     50:    2048 bytes    569 times -->    103.58 Mbps in     150.85 usec
     51:    2051 bytes    331 times -->    107.53 Mbps in     145.52 usec
     52:    3069 bytes    344 times -->    204.05 Mbps in     114.75 usec
     53:    3072 bytes    580 times -->    207.53 Mbps in     112.93 usec
     54:    3075 bytes    590 times -->    211.37 Mbps in     110.99 usec
     55:    4093 bytes    301 times -->    285.23 Mbps in     109.48 usec
     56:    4096 bytes    456 times -->    317.27 Mbps in      98.50 usec
     57:    4099 bytes    507 times -->    332.92 Mbps in      93.93 usec
     58:    6141 bytes    532 times -->    462.96 Mbps in     101.20 usec
     59:    6144 bytes    658 times -->    451.75 Mbps in     103.76 usec
     60:    6147 bytes    642 times -->    478.19 Mbps in      98.07 usec
     61:    8189 bytes    340 times -->    743.81 Mbps in      84.00 usec
     62:    8192 bytes    595 times -->    695.89 Mbps in      89.81 usec
     63:    8195 bytes    556 times -->    702.95 Mbps in      88.94 usec
     64:   12285 bytes    562 times -->    945.94 Mbps in      99.08 usec
     65:   12288 bytes    672 times -->    870.86 Mbps in     107.65 usec
     66:   12291 bytes    619 times -->    954.94 Mbps in      98.20 usec
     67:   16381 bytes    339 times -->   1003.02 Mbps in     124.60 usec
     68:   16384 bytes    401 times -->    652.84 Mbps in     191.47 usec
     69:   16387 bytes    261 times -->    872.02 Mbps in     143.37 usec
     70:   24573 bytes    348 times -->   1105.61 Mbps in     169.57 usec
     71:   24576 bytes    393 times -->   1037.52 Mbps in     180.72 usec
     72:   24579 bytes    368 times -->   1066.39 Mbps in     175.85 usec
     73:   32765 bytes    189 times -->   1271.24 Mbps in     196.64 usec
     74:   32768 bytes    254 times -->   1253.73 Mbps in     199.41 usec
     75:   32771 bytes    250 times -->   1101.71 Mbps in     226.94 usec
     76:   49149 bytes    220 times -->   1704.99 Mbps in     219.93 usec
     77:   49152 bytes    303 times -->   1678.17 Mbps in     223.46 usec
     78:   49155 bytes    298 times -->   1648.32 Mbps in     227.52 usec
     79:   65533 bytes    146 times -->   1940.36 Mbps in     257.67 usec
     80:   65536 bytes    194 times -->   1785.37 Mbps in     280.05 usec
     81:   65539 bytes    178 times -->   2079.85 Mbps in     240.41 usec
     82:   98301 bytes    207 times -->   2840.36 Mbps in     264.04 usec
     83:   98304 bytes    252 times -->   3441.30 Mbps in     217.94 usec
     84:   98307 bytes    305 times -->   3575.33 Mbps in     209.78 usec
     85:  131069 bytes    158 times -->   3145.83 Mbps in     317.87 usec
     86:  131072 bytes    157 times -->   3283.65 Mbps in     304.54 usec
     87:  131075 bytes    164 times -->   3610.07 Mbps in     277.01 usec
     88:  196605 bytes    180 times -->   4921.05 Mbps in     304.81 usec
     89:  196608 bytes    218 times -->   4953.98 Mbps in     302.79 usec
     90:  196611 bytes    220 times -->   4841.76 Mbps in     309.81 usec
     91:  262141 bytes    107 times -->   4546.37 Mbps in     439.91 usec
     92:  262144 bytes    113 times -->   4730.30 Mbps in     422.81 usec
     93:  262147 bytes    118 times -->   5211.50 Mbps in     383.77 usec
     94:  393213 bytes    130 times -->   7191.67 Mbps in     417.15 usec
     95:  393216 bytes    159 times -->   7423.89 Mbps in     404.10 usec
     96:  393219 bytes    164 times -->   7321.70 Mbps in     409.74 usec
     97:  524285 bytes     81 times -->   7631.75 Mbps in     524.12 usec
     98:  524288 bytes     95 times -->   7287.79 Mbps in     548.86 usec
     99:  524291 bytes     91 times -->   7253.28 Mbps in     551.48 usec
     100:  786429 bytes     90 times -->   8451.33 Mbps in     709.94 usec
     101:  786432 bytes     93 times -->   8755.43 Mbps in     685.29 usec
     102:  786435 bytes     97 times -->   8740.15 Mbps in     686.49 usec
     103: 1048573 bytes     48 times -->   9220.97 Mbps in     867.59 usec
     104: 1048576 bytes     57 times -->   8512.15 Mbps in     939.83 usec
     105: 1048579 bytes     53 times -->   8556.70 Mbps in     934.94 usec
     106: 1572861 bytes     53 times -->   9566.40 Mbps in    1254.39 usec
     107: 1572864 bytes     53 times -->  10165.18 Mbps in    1180.50 usec
     108: 1572867 bytes     56 times -->  11420.63 Mbps in    1050.73 usec
     109: 2097149 bytes     31 times -->  11295.29 Mbps in    1416.52 usec
     110: 2097152 bytes     35 times -->  11869.30 Mbps in    1348.02 usec
     111: 2097155 bytes     37 times -->  11407.22 Mbps in    1402.62 usec
     112: 3145725 bytes     35 times -->  12821.47 Mbps in    1871.86 usec
     113: 3145728 bytes     35 times -->  11727.57 Mbps in    2046.46 usec
     114: 3145731 bytes     32 times -->  12803.10 Mbps in    1874.55 usec
     115: 4194301 bytes     17 times -->  10009.28 Mbps in    3197.03 usec
     116: 4194304 bytes     15 times -->  10283.54 Mbps in    3111.77 usec
     117: 4194307 bytes     16 times -->  10923.95 Mbps in    2929.34 usec
     118: 6291453 bytes     17 times -->  11959.10 Mbps in    4013.68 usec
     119: 6291456 bytes     16 times -->  10674.76 Mbps in    4496.59 usec
     120: 6291459 bytes     14 times -->  10868.07 Mbps in    4416.61 usec
     121: 8388605 bytes      7 times -->   9456.16 Mbps in    6768.07 usec
     122: 8388608 bytes      7 times -->   9303.58 Mbps in    6879.07 usec
     123: 8388611 bytes      7 times -->  10048.79 Mbps in    6368.93 usec

So vhost_net is indeed faster in NPtcp, by a factor of almost 2, but that only 
becomes visible at the very high throughputs and large buffer sizes. Also note 
that throughput in the vhost_net case is much more variable, even though it is 
higher on average.
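
For anyone repeating this comparison, here is a minimal sketch of the host-side 
checks I use to confirm which backend a running guest actually has (generic 
commands, nothing specific to this setup; the vhost kernel threads should only 
exist while vhost=on is in effect):

     lsmod | grep vhost_net                       # is the vhost_net module loaded at all?
     ps aux | grep '\[vhost-'                     # vhost kernel threads show up as [vhost-<qemu pid>]
     ps aux | grep qemu | grep -o 'vhost=[a-z]*'  # what the qemu command line asked for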

I expected more of a difference...
Actually, I expected the no-vhost case to be slower than it is.
Kudos to the developers.


If you have any more ideas about the slow boot, please tell me. I am of course 
not worried about waiting 10-30 more seconds at boot time; what worries me is 
that if there is a 2x or 3x slowdown somewhere, it could bite me in production 
without my even realizing it.

And after having seen the above no-vhost_net TCP benchmarks, I guess I don't 
really need vhost_net active for these VMs in production, so I will just 
disable vhost_net (roughly as sketched below) to be on the safe side until I 
can somehow track down the boot-time slowness.
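
This is roughly the per-interface override I intend to use, in case it is 
useful to someone else. It is only a sketch: the bridge name is a placeholder 
and whether this libvirt version honours it is an assumption on my part; the 
relevant bit is driver name='qemu', which requests the userspace virtio 
backend instead of vhost:

     <interface type='bridge'>
       <source bridge='br0'/>          <!-- placeholder bridge name -->
       <model type='virtio'/>
       <driver name='qemu'/>           <!-- userspace backend: vhost off for this NIC -->
     </interface>

The other option, I suppose, is simply not loading the vhost_net module on the 
host at all, in which case I believe libvirt falls back to the userspace 
backend on its own.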

Thanks for your help
R.

