From: Jason Wang <jasowang@redhat.com>
To: Simon Chen <simonchennj@gmail.com>
Cc: Bronek Kozicki <brok@spamcop.net>, kvm@vger.kernel.org
Subject: Re: vhost-[pid] 100% CPU
Date: Wed, 09 Apr 2014 12:28:16 +0800
Message-ID: <1397017696.31545.13.camel@localhost>
In-Reply-To: <CANj2Ebd+gyuwLqQER05-o+3csTaK9GL_2Bo2Hjp5kYHNtNxF-w@mail.gmail.com>
On Tue, 2014-04-08 at 16:49 -0400, Simon Chen wrote:
> A little update on this..
>
> I turned on multiqueue for vhost-net. Now the receiving VM is getting
> traffic over all four queues - based on the CPU usage of the four
> vhost-[pid] threads. For some reason, the sender is now pegging 100%
> on one vhost-[pid] thread, although four are available.
>
Need to check how many vcpus the sender uses: multiqueue chooses the txq
based on the processor id, so if only one vcpu is transmitting, this result
is expected.
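For example (the interface name, domain name and addresses below are only
illustrative; adjust to your setup), you can confirm the vcpu count and
per-queue activity with something like:

  # inside the sending guest
  lscpu | grep '^CPU(s):'          # how many vcpus the guest sees
  ethtool -l eth0                  # current vs. maximum queue counts
  grep virtio /proc/interrupts     # per-queue interrupt activity

  # on the host, if the guest is managed by libvirt
  virsh vcpucount <domain>

If only one vcpu is transmitting, all packets go through a single txq and
only one vhost thread will be busy.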
> Do I need to change anything inside of the VM to leverage all four TX
> queues? I did do "ethtool -L eth0 combined 4" and that doesn't seem to
> be sufficient.
No other configuration is needed. I don't use iperf, but I can easily make
full use of all queues when I start multiple netperf sessions.
Btw, on my Xeon(R) CPU E5-2650 machine I can easily get 15Gbps+ of VM-to-VM
throughput without any optimization, using the net-next tree.
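To exercise all of the queues from the guest, something along these lines
should work (the receiver address is a placeholder; taskset pins each
stream to a different vcpu so the streams map to different txqs):

  # on the sending VM: one TCP_STREAM per vcpu, run in parallel
  for i in 0 1 2 3; do
      taskset -c $i netperf -H <receiver-ip> -t TCP_STREAM -l 60 &
  done
  wait

With the streams spread over the vcpus you should see all four vhost-[pid]
threads on the sender become busy.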
>
> Thanks.
> -Simon
>
>
> On Sun, Apr 6, 2014 at 3:03 PM, Simon Chen <simonchennj@gmail.com> wrote:
> > Yes, I am aware of SR-IOV and its pros and cons.. I don't think
> > OpenStack supports the orchestration very well at this point, and you
> > lose the flexible filtering provided by iptables at the hypervisor layer.
> >
> > At this point, I am trying to see how much throughput a more
> > software-based solution can achieve. Like I said, I've seen people
> > achieving 6Gbps+ VM to VM throughput using OpenVSwitch and VXLAN
> > software tunneling. I am more curious to find out why my setup cannot
> > do that...
> >
> > Thanks.
> >
> > On Sun, Apr 6, 2014 at 1:35 PM, Bronek Kozicki <brok@spamcop.net> wrote:
> >> On 06/04/2014 15:06, Simon Chen wrote:
> >>>
> >>> Hello,
> >>>
> >>> I am using QEMU 1.6.0 on Linux 3.10.21. My VMs are using vhost-net in
> >>> a typical OpenStack setup: VM1->tap->linux
> >>> bridge->OVS->host1->physical network->host2->OVS->linux
> >>> bridge->tap->VM2.
> >>>
> >>> It seems that under heavy network load, the vhost-[pid] processes on
> >>> the receiving side are using 100% CPU. The sender side is over 85%
> >>> utilized.
> >>>
> >>> I am seeing unsatisfactory VM-to-VM network performance (using iperf
> >>> with 16 concurrent TCP connections, I can only get 1.5Gbps, while I've
> >>> heard people get over 6Gbps at least), and I wonder if it has
> >>> something to do with vhost-net maxing out on CPU. If so, is there
> >>> anything I can tune in the system?
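(Just for context, a test like that usually amounts to something like the
following, with the address as a placeholder:

  iperf -s                               # on the receiving VM
  iperf -c <receiver-ip> -P 16 -t 60     # on the sender: 16 parallel streams
)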
> >>
> >>
> >> You could dedicate a network card to your virtual machine, using PCI
> >> passthrough.
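(A minimal sketch of that approach, assuming the NIC sits at PCI address
0000:03:00.0 and the vfio-pci driver is available; the address is a
placeholder:

  # bind the NIC to vfio-pci on the host, then hand it to the guest
  qemu-system-x86_64 ... -device vfio-pci,host=0000:03:00.0 ...

The guest then drives the NIC directly, bypassing vhost-net entirely.)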
> >>
> >>
> >> B.
> >>
> >>