public inbox for kvm@vger.kernel.org
* Networking latency - what to expect?
@ 2012-11-27 23:26 David Mohr
  2012-11-29 14:48 ` Julian Stecklina
  0 siblings, 1 reply; 4+ messages in thread
From: David Mohr @ 2012-11-27 23:26 UTC (permalink / raw)
  To: kvm

Hi,

we were investigating a performance issue with an application running on 
our kvm VMs and noticed that network latency is much worse in a VM than 
on actual hardware.

Initially we were using virtio network cards, but not vhost-net. After 
enabling vhost the performance improved quite a bit, but it is still 
only around 50% of what the actual hardware can do.

The question is: what can we expect? CPU utilization never exceeded 50% 
for kvm and 20% for vhost. Other details are below; please let me know 
if you can recommend other tests or important details that I could 
provide.

I ran the following tests using netperf -t UDP_RR (all results are 
transactions/sec):
* host->host            19k
* vm->its host          17k
* vm->vm (same host)    22k
* vm->vm (diff. hosts)   7k
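For reference, netperf's UDP_RR test counts round trips of a single 
small request/response pair, so transactions/sec is effectively the 
inverse of one-way-plus-return latency. A minimal Python stand-in for 
the same ping-pong pattern over loopback (not part of the original 
mail; host, port, and transaction count are arbitrary choices):

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9999  # loopback stand-in for the real network path
N = 2000                        # number of request/response transactions

ready = threading.Event()

def responder():
    # Echo each request straight back, like the server side of UDP_RR.
    srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    srv.bind((HOST, PORT))
    ready.set()
    for _ in range(N):
        data, addr = srv.recvfrom(64)
        srv.sendto(data, addr)
    srv.close()

t = threading.Thread(target=responder)
t.start()
ready.wait()  # don't send before the responder has bound its socket

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
start = time.perf_counter()
for _ in range(N):
    cli.sendto(b"x", (HOST, PORT))  # one request ...
    cli.recvfrom(64)                # ... then block until its response
elapsed = time.perf_counter() - start
t.join()
cli.close()

print(f"{N / elapsed:.0f} transactions/sec")
```

Because only one transaction is ever in flight, the reported rate is 
dominated by per-round-trip latency, not bandwidth, which is why the 
numbers above track latency so directly.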

Host:     Debian squeeze using a 3.5.2 kernel
KVM:      1.1.2 (bpo)
Host CPU: Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
VM:       Debian squeeze using a 3.2.20 kernel
Network:  e1000 & gigabit switch
Command-line:
-cpu host -name testvm -m 12288 -smp 16 -pidfile 
/var/run/ganeti/kvm-hypervisor/pid/testvm -balloon virtio -daemonize 
-monitor 
unix:/var/run/ganeti/kvm-hypervisor/ctrl/testvm.monitor,server,nowait 
-serial 
unix:/var/run/ganeti/kvm-hypervisor/ctrl/testvm.serial,server,nowait 
-usbdevice tablet -vnc 127.0.0.1:5102 -netdev 
type=tap,id=netdev0,fd=8,vhost=on -device 
virtio-net-pci,mac=52:54:00:00:01:e6,netdev=netdev0 -netdev 
type=tap,id=netdev1,fd=9,vhost=on -device 
virtio-net-pci,mac=52:54:00:00:01:e7,netdev=netdev1 -qmp 
unix:/var/run/ganeti/kvm-hypervisor/ctrl/testvm.qmp,server,nowait -S

Thanks,
~David

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: Networking latency - what to expect?
  2012-11-27 23:26 Networking latency - what to expect? David Mohr
@ 2012-11-29 14:48 ` Julian Stecklina
  2012-11-29 15:50   ` David Mohr
  0 siblings, 1 reply; 4+ messages in thread
From: Julian Stecklina @ 2012-11-29 14:48 UTC (permalink / raw)
  To: kvm

Thus spake David Mohr <damailings@mcbf.net>:

> * vm->vm (same host)    22k

This number is in the same ballpark as what I am seeing on pretty much
the same hardware.

AFAICS, there is little you can do to the current virtio->virtio code
path that would make this substantially faster.

Julian



* Re: Networking latency - what to expect?
  2012-11-29 14:48 ` Julian Stecklina
@ 2012-11-29 15:50   ` David Mohr
  2012-11-29 16:35     ` Julian Stecklina
  0 siblings, 1 reply; 4+ messages in thread
From: David Mohr @ 2012-11-29 15:50 UTC (permalink / raw)
  To: kvm

On 2012-11-29 07:48, Julian Stecklina wrote:
> Thus spake David Mohr <damailings@mcbf.net>:
>
>> * vm->vm (same host)    22k
>
> This number is in the same ballpark as what I am seeing on pretty 
> much
> the same hardware.
>
> AFAICS, there is little you can do to the current virtio->virtio code
> path that would make this substantially faster.

Thanks for the feedback. Considering that it's better than the hardware 
network performance, my main issue is actually the latency of 
communication between VMs on different hosts:
* vm->vm (diff. hosts)   7k

Obviously there is more going on than in same-host communication, but 
why only ~30% of the performance, when the network hardware should not 
be slowing things down (too) much?

Thanks,
~David


* Re: Networking latency - what to expect?
  2012-11-29 15:50   ` David Mohr
@ 2012-11-29 16:35     ` Julian Stecklina
  0 siblings, 0 replies; 4+ messages in thread
From: Julian Stecklina @ 2012-11-29 16:35 UTC (permalink / raw)
  To: David Mohr; +Cc: kvm

Thus spake David Mohr <damailings@mcbf.net>:

> On 2012-11-29 07:48, Julian Stecklina wrote:
>> Thus spake David Mohr <damailings@mcbf.net>:
>>
>>> * vm->vm (same host)    22k
>>
>> This number is in the same ballpark as what I am seeing on pretty
>> much
>> the same hardware.
>>
>> AFAICS, there is little you can do to the current virtio->virtio code
>> path that would make this substantially faster.
>
> Thanks for the feedback. Considering that it's better than the
> hardware network performance, my main issue is actually the latency of
> communication between VMs on different hosts:
> * vm->vm (diff. hosts)   7k
>
> Obviously there is more going on than in same-host communication, but
> why only ~30% of the performance, when the network hardware should not
> be slowing things down (too) much?

You are probably better off using SR-IOV NICs with PCI passthrough in
this case.

Maybe someone can comment on whether virtual interrupt delivery and
posted interrupts[1] are already usable. The former should help in
both the virtio and SR-IOV scenarios; the latter only applies to
SR-IOV (and PCI passthrough in general).
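For illustration, an SR-IOV setup on the host could look roughly like
the following. The driver name, VF count, and PCI address are all
placeholders, not details from this thread:

```shell
# Ask the PF driver for 4 virtual functions (driver and count are examples;
# newer kernels also expose /sys/class/net/<if>/device/sriov_numvfs)
modprobe ixgbe max_vfs=4

# Each VF shows up as its own PCI device
lspci | grep -i "Virtual Function"

# Hand one VF to the guest (PCI address below is a placeholder)
qemu-system-x86_64 ... -device pci-assign,host=01:10.0
```

The guest then drives the VF with the vendor's native driver, so the
virtio/vhost path is bypassed entirely for that NIC.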

Julian

[1] http://www.spinics.net/lists/kvm/msg82762.html


end of thread