kvm.vger.kernel.org archive mirror
* vhost-[pid] 100% CPU
@ 2014-04-06 14:06 Simon Chen
  2014-04-06 17:35 ` Bronek Kozicki
  0 siblings, 1 reply; 5+ messages in thread
From: Simon Chen @ 2014-04-06 14:06 UTC (permalink / raw)
  To: kvm

Hello,

I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net in
a typical OpenStack setup: VM1->tap->linux
bridge->OVS->host1->physical network->host2->OVS->linux
bridge->tap->VM2.
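
For context, this is roughly how the tap is wired to vhost-net on the
QEMU command line. It is only a sketch - OpenStack/libvirt generates the
real arguments, and the interface and MAC names here are made up:

  -netdev tap,id=hostnet0,ifname=tap0,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:12:34:56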

It seems that under heavy network load, the vhost-[pid] processes on
the receiving side are using 100% CPU. On the sending side they are
over 85% utilized.
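
This is how I am measuring it - per-thread CPU usage of the vhost worker
threads (the thread names depend on the QEMU PID, so the grep is just a
sketch):

  # vhost worker threads show up as vhost-<qemu-pid>
  ps -eLo pid,tid,comm,pcpu | grep vhost
  # or interactively: run top -H and filter on "vhost"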

I am seeing unsatisfactory VM to VM network performance (using iperf
with 16 concurrent TCP connections, I can only get 1.5Gbps, while I've
heard of people getting over 6Gbps), and I wonder if it has something
to do with vhost-net maxing out on CPU. If so, is there anything I can
tune on the system?
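
For reference, this is the kind of test I am running (a sketch; the
receiver address is a placeholder):

  # on the receiving VM
  iperf -s
  # on the sending VM: 16 parallel TCP streams for 30 seconds
  iperf -c <receiver-ip> -P 16 -t 30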

On the host side, I have already tuned sysctl settings, spread the NIC
interrupts, and I believe RSS is on too.
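
Roughly what I mean by that, as a sketch - the sysctl values and IRQ
numbers below are illustrative, not recommendations:

  # raise the socket buffer ceilings
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  # pin each NIC RX/TX interrupt to its own core (IRQ numbers from /proc/interrupts)
  echo 2 > /proc/irq/<irq>/smp_affinity
  # check the RSS queue count and indirection table on the host NIC
  ethtool -l eth0
  ethtool -x eth0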

Thanks.
-Simon


* Re: vhost-[pid] 100% CPU
  2014-04-06 14:06 vhost-[pid] 100% CPU Simon Chen
@ 2014-04-06 17:35 ` Bronek Kozicki
  2014-04-06 19:03   ` Simon Chen
  0 siblings, 1 reply; 5+ messages in thread
From: Bronek Kozicki @ 2014-04-06 17:35 UTC (permalink / raw)
  To: Simon Chen, kvm

On 06/04/2014 15:06, Simon Chen wrote:
> Hello,
>
> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net in
> a typical OpenStack setup: VM1->tap->linux
> bridge->OVS->host1->physical network->host2->OVS->linux
> bridge->tap->VM2.
>
> It seems that under heavy network load, the vhost-[pid] processes on
> the receiving side are using 100% CPU. On the sending side they are
> over 85% utilized.
>
> I am seeing unsatisfactory VM to VM network performance (using iperf
> with 16 concurrent TCP connections, I can only get 1.5Gbps, while I've
> heard of people getting over 6Gbps), and I wonder if it has something
> to do with vhost-net maxing out on CPU. If so, is there anything I can
> tune on the system?

You could dedicate a network card to your virtual machine, using PCI
passthrough.
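
Very roughly, assuming VT-d/IOMMU is enabled and using vfio - the PCI
address and device IDs below are only examples:

  # detach the NIC from its host driver and bind it to vfio-pci
  modprobe vfio-pci
  echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
  echo 8086 10fb > /sys/bus/pci/drivers/vfio-pci/new_id
  # then hand the whole device to the guest
  qemu-system-x86_64 ... -device vfio-pci,host=01:00.0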


B.




* Re: vhost-[pid] 100% CPU
  2014-04-06 17:35 ` Bronek Kozicki
@ 2014-04-06 19:03   ` Simon Chen
  2014-04-08 20:49     ` Simon Chen
  0 siblings, 1 reply; 5+ messages in thread
From: Simon Chen @ 2014-04-06 19:03 UTC (permalink / raw)
  To: Bronek Kozicki; +Cc: kvm

Yes, I am aware of SR-IOV and its pros and cons.. I don't think
OpenStack supports the orchestration very well at this point, and you
lose the flexible filtering provided by iptables at the hypervisor layer.

At this point, I am trying to see how much throughput a more
software-based solution can achieve. Like I said, I've seen people
achieving 6Gbps+ VM to VM throughput using OpenVSwitch and VXLAN
software tunneling. I am more curious to find out why my setup cannot
do that...

Thanks.

On Sun, Apr 6, 2014 at 1:35 PM, Bronek Kozicki <brok@spamcop.net> wrote:
> On 06/04/2014 15:06, Simon Chen wrote:
>>
>> Hello,
>>
>> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net in
>> a typical OpenStack setup: VM1->tap->linux
>> bridge->OVS->host1->physical network->host2->OVS->linux
>> bridge->tap->VM2.
>>
>> It seems that under heavy network load, the vhost-[pid] processes on
>> the receiving side are using 100% CPU. On the sending side they are
>> over 85% utilized.
>>
>> I am seeing unsatisfactory VM to VM network performance (using iperf
>> with 16 concurrent TCP connections, I can only get 1.5Gbps, while I've
>> heard of people getting over 6Gbps), and I wonder if it has something
>> to do with vhost-net maxing out on CPU. If so, is there anything I can
>> tune on the system?
>
>
> You could dedicate a network card to your virtual machine, using PCI
> passthrough.
>
>
> B.
>
>


* Re: vhost-[pid] 100% CPU
  2014-04-06 19:03   ` Simon Chen
@ 2014-04-08 20:49     ` Simon Chen
  2014-04-09  4:28       ` Jason Wang
  0 siblings, 1 reply; 5+ messages in thread
From: Simon Chen @ 2014-04-08 20:49 UTC (permalink / raw)
  To: Bronek Kozicki; +Cc: kvm

A little update on this..

I turned on multiqueue for vhost-net. Now the receiving VM is getting
traffic over all four queues, based on the CPU usage of the four
vhost-[pid] threads. For some reason, the sender is now pegging one
vhost-[pid] thread at 100%, although four are available.

Do I need to change anything inside the VM to leverage all four TX
queues? I did run "ethtool -L eth0 combined 4", but that doesn't seem
to be sufficient.
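
For completeness, this is roughly the multiqueue setup I have end to
end. It is a sketch - interface names and the queue count are just what
I happen to use:

  # host side: 4 queue pairs on the tap and the virtio-net device
  -netdev tap,id=hostnet0,vhost=on,queues=4 \
  -device virtio-net-pci,netdev=hostnet0,mq=on,vectors=10
  # guest side: enable the channels and verify
  ethtool -L eth0 combined 4
  ethtool -l eth0
  grep virtio /proc/interrupts    # expect one input/output interrupt pair per queue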

Thanks.
-Simon


On Sun, Apr 6, 2014 at 3:03 PM, Simon Chen <simonchennj@gmail.com> wrote:
> Yes, I am aware of SR-IOV and its pros and cons.. I don't think
> OpenStack supports the orchestration very well at this point, and you
> lose the flexible filtering provided by iptables at the hypervisor layer.
>
> At this point, I am trying to see how much throughput a more
> software-based solution can achieve. Like I said, I've seen people
> achieving 6Gbps+ VM to VM throughput using OpenVSwitch and VXLAN
> software tunneling. I am more curious to find out why my setup cannot
> do that...
>
> Thanks.
>
> On Sun, Apr 6, 2014 at 1:35 PM, Bronek Kozicki <brok@spamcop.net> wrote:
>> On 06/04/2014 15:06, Simon Chen wrote:
>>>
>>> Hello,
>>>
>>> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net in
>>> a typical OpenStack setup: VM1->tap->linux
>>> bridge->OVS->host1->physical network->host2->OVS->linux
>>> bridge->tap->VM2.
>>>
>>> It seems that under heavy network load, the vhost-[pid] processes on
>>> the receiving side are using 100% CPU. On the sending side they are
>>> over 85% utilized.
>>>
>>> I am seeing unsatisfactory VM to VM network performance (using iperf
>>> with 16 concurrent TCP connections, I can only get 1.5Gbps, while I've
>>> heard of people getting over 6Gbps), and I wonder if it has something
>>> to do with vhost-net maxing out on CPU. If so, is there anything I can
>>> tune on the system?
>>
>>
>> You could dedicate a network card to your virtual machine, using PCI
>> passthrough.
>>
>>
>> B.
>>
>>


* Re: vhost-[pid] 100% CPU
  2014-04-08 20:49     ` Simon Chen
@ 2014-04-09  4:28       ` Jason Wang
  0 siblings, 0 replies; 5+ messages in thread
From: Jason Wang @ 2014-04-09  4:28 UTC (permalink / raw)
  To: Simon Chen; +Cc: Bronek Kozicki, kvm

On Tue, 2014-04-08 at 16:49 -0400, Simon Chen wrote:
> A little update on this..
> 
> I turned on multiqueue for vhost-net. Now the receiving VM is getting
> traffic over all four queues, based on the CPU usage of the four
> vhost-[pid] threads. For some reason, the sender is now pegging one
> vhost-[pid] thread at 100%, although four are available.
> 

You need to check how many vCPUs the sender uses; multiqueue chooses the
TX queue based on the processor id. If only one vCPU is used, this
result is expected.
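
For example, something like this in the sending guest spreads the
streams across vCPUs (just a sketch; the receiver address is a
placeholder):

  nproc                                  # how many vCPUs the sender has
  # pin one stream per vCPU so different TX queues get selected
  taskset -c 0 iperf -c <receiver-ip> -t 30 &
  taskset -c 1 iperf -c <receiver-ip> -t 30 &
  taskset -c 2 iperf -c <receiver-ip> -t 30 &
  taskset -c 3 iperf -c <receiver-ip> -t 30 &
  wait
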
> Do I need to change anything inside the VM to leverage all four TX
> queues? I did run "ethtool -L eth0 combined 4", but that doesn't seem
> to be sufficient.

No other configuration is needed. I don't use iperf, but I can easily
make full use of all queues when I start multiple sessions of netperf.
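
Roughly what I run (a sketch; the destination is a placeholder):

  # several parallel TCP_STREAM sessions against the receiving VM
  for i in $(seq 4); do
      netperf -H <receiver-ip> -t TCP_STREAM -l 30 &
  done
  wait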

By the way, on my Xeon(R) CPU E5-2650 machine I can easily get 15Gbps+
of VM-to-VM throughput without any optimization, using the net-next tree.
> 
> Thanks.
> -Simon
> 
> 
> On Sun, Apr 6, 2014 at 3:03 PM, Simon Chen <simonchennj@gmail.com> wrote:
> > Yes, I am aware of SR-IOV and its pros and cons.. I don't think
> > OpenStack supports the orchestration very well at this point, and you
> > lose the flexible filtering provided by iptables at the hypervisor layer.
> >
> > At this point, I am trying to see how much throughput a more
> > software-based solution can achieve. Like I said, I've seen people
> > achieving 6Gbps+ VM to VM throughput using OpenVSwitch and VXLAN
> > software tunneling. I am more curious to find out why my setup cannot
> > do that...
> >
> > Thanks.
> >
> > On Sun, Apr 6, 2014 at 1:35 PM, Bronek Kozicki <brok@spamcop.net> wrote:
> >> On 06/04/2014 15:06, Simon Chen wrote:
> >>>
> >>> Hello,
> >>>
> >>> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net in
> >>> a typical OpenStack setup: VM1->tap->linux
> >>> bridge->OVS->host1->physical network->host2->OVS->linux
> >>> bridge->tap->VM2.
> >>>
> >>> It seems that under heavy network load, the vhost-[pid] processes on
> >>> the receiving side are using 100% CPU. On the sending side they are
> >>> over 85% utilized.
> >>>
> >>> I am seeing unsatisfactory VM to VM network performance (using iperf
> >>> with 16 concurrent TCP connections, I can only get 1.5Gbps, while I've
> >>> heard of people getting over 6Gbps), and I wonder if it has something
> >>> to do with vhost-net maxing out on CPU. If so, is there anything I can
> >>> tune on the system?
> >>
> >>
> >> You could dedicate a network card to your virtual machine, using PCI
> >> passthrough.
> >>
> >>
> >> B.
> >>
> >>




end of thread

Thread overview: 5+ messages
2014-04-06 14:06 vhost-[pid] 100% CPU Simon Chen
2014-04-06 17:35 ` Bronek Kozicki
2014-04-06 19:03   ` Simon Chen
2014-04-08 20:49     ` Simon Chen
2014-04-09  4:28       ` Jason Wang
