From: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
To: Wei Xu <wexu@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>,
mst@redhat.com, netdev@vger.kernel.org, davem@davemloft.net
Subject: Re: Regression in throughput between kvm guests over virtual bridge
Date: Tue, 7 Nov 2017 20:02:48 -0500
Message-ID: <1611b26f-0997-3b22-95f5-debf57b7be8c@linux.vnet.ibm.com>
In-Reply-To: <20171104233519.7jwja7t2itooyeak@Wei-Dev>
On 11/04/2017 07:35 PM, Wei Xu wrote:
> On Fri, Nov 03, 2017 at 12:30:12AM -0400, Matthew Rosato wrote:
>> On 10/31/2017 03:07 AM, Wei Xu wrote:
>>> On Thu, Oct 26, 2017 at 01:53:12PM -0400, Matthew Rosato wrote:
>>>>
>>>>>
>>>>> Are you using the same binding as mentioned in your previous mail? It
>>>>> might be caused by CPU contention between pktgen and vhost; could you
>>>>> please try running pktgen from another, idle CPU by adjusting the binding?
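(For reference, one way to move the guest-side pktgen work onto an otherwise
idle CPU is via the /proc/net/pktgen interface; the sketch below is purely
illustrative -- the device name, CPU number, destination address and MAC are
placeholders, not the setup actually used in these tests.)

modprobe pktgen
# hand the guest NIC to the pktgen kernel thread that runs on CPU 3
echo "rem_device_all" > /proc/net/pktgen/kpktgend_3
echo "add_device eth0" > /proc/net/pktgen/kpktgend_3
# basic stream parameters
echo "count 10000000" > /proc/net/pktgen/eth0
echo "pkt_size 60" > /proc/net/pktgen/eth0
echo "dst 10.0.0.2" > /proc/net/pktgen/eth0
echo "dst_mac 52:54:00:12:34:56" > /proc/net/pktgen/eth0
# start the run
echo "start" > /proc/net/pktgen/pgctrl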
>>>>
>>>> I don't think that's the case -- I can cause pktgen to hang in the guest
>>>> without any cpu binding, and with vhost disabled even.
>>>
>>> Yes, I did a test and it also hangs in the guest. Before we figure that
>>> out, maybe you can try UDP with uperf for these cases?
>>>
>>> VM -> Host
>>> Host -> VM
>>> VM -> VM
>>>
>>
>> Here are averaged run numbers (Gbps throughput) across 4.12, 4.13 and
>> net-next with and without Jason's recent "vhost_net: conditionally
>> enable tx polling" applied (referred to as 'patch' below). 1 uperf
>> instance in each case:
>
> Thanks a lot for the test.
>
>>
>> uperf TCP:
>>             4.12    4.13    4.13+patch  net-next  net-next+patch
>> ----------------------------------------------------------------------
>> VM->VM      35.2    16.5    20.84       22.2      24.36
>
> Are you using the same server/test suite? You mentioned the number was around
> 28Gb for 4.12 and that it dropped about 40% for 4.13; it seems something has
> changed. Are there any performance-tuning options on the server to maximize
> CPU utilization?
I experience some volatility because I am running on one of multiple LPARs on
this system (they share physical resources). But I think the real issue was
that I had left my guest environment set to 4 vcpus while binding as if there
were only 1 vcpu (I was working on something else and forgot to change it
back). This likely tainted my most recent results, sorry.
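For completeness, one way to do this kind of binding is with taskset against
the individual threads; a rough sketch of the intended 1-vcpu setup (the TIDs
and CPU numbers below are placeholders -- the vcpu thread id comes from e.g.
"ps -eLf", and the vhost kernel thread shows up as vhost-<qemu-pid>):

taskset -pc 2 "$VCPU_TID"    # pin the single guest vcpu thread to host CPU 2
taskset -pc 3 "$VHOST_TID"   # pin the vhost-<qemu-pid> thread to host CPU 3
taskset -c 4 uperf -s        # start the host-side uperf on host CPU 4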
>
> I had a similar experience on x86 servers and desktops before; it made the
> result numbers fluctuate quite a bit.
>
>> VM->Host    42.15   43.57   44.90       30.83     32.26
>> Host->VM    53.17   41.51   42.18       37.05     37.30
>
> This is a bit odd; I remember you said there was no regression when testing
> Host->VM, didn't you?
>
>>
>> uperf UDP:
>>             4.12    4.13    4.13+patch  net-next  net-next+patch
>> ----------------------------------------------------------------------
>> VM->VM      24.93   21.63   25.09       8.86      9.62
>> VM->Host    40.21   38.21   39.72       8.74      9.35
>> Host->VM    31.26   30.18   31.25       7.2       9.26
>
> This case should be quite similar to pktgen; if you see an improvement with
> pktgen, it is usually the same for UDP. Could you please disable tso, gso,
> gro and ufo on all host tap devices and guest virtio-net devices? Currently
> the most significant tests would look like this, AFAICT:
>
> Host->VM 4.12 4.13
> TCP:
> UDP:
> pktgen:
>
> I don't want to bother you too much, so 4.12 & 4.13 without Jason's patch
> should be enough, since we have already seen positive numbers for that; you
> can also temporarily skip net-next.
Here are the requested numbers, averaged over numerous runs -- the guest is
4GB + 1 vcpu, host uperf/pktgen is bound to 1 host CPU, and the qemu and vhost
threads are pinned to other, unique host CPUs. tso, gso, gro and ufo are
disabled on the host taps / guest virtio-net devs as requested:
Host->VM        4.12            4.13
TCP:            9.92 Gb/s       6.44 Gb/s
UDP:            5.77 Gb/s       6.63 Gb/s
pktgen:         1572403 pps     1904265 pps
UDP/pktgen both show improvement from 4.12->4.13. More interesting,
however, is that I am seeing the TCP regression for the first time from
host->VM. I wonder if the combination of CPU binding + disabling of one
or more of tso/gso/gro/ufo is related.
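(One way to disable those offloads is with ethtool -K; a minimal sketch, where
"tap0" and "eth0" are placeholders for the actual host tap and guest
virtio-net device names:)

# on the host, for every tap device attached to the bridge:
ethtool -K tap0 tso off gso off gro off ufo off
# inside each guest, on the virtio-net device:
ethtool -K eth0 tso off gso off gro off ufo off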
>
> If you see that UDP and pktgen are aligned, then it would make sense to
> continue with the other two cases; otherwise we fail at the first step.
I will start gathering those numbers tomorrow.
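For the uperf side of those runs, a minimal UDP streaming setup could look
like the sketch below; the profile contents, addresses and sizes are
placeholders based on uperf's documented XML profile format, not the exact
workload used here.

cat > udp_stream.xml <<'EOF'
<?xml version="1.0"?>
<profile name="udp-stream">
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h protocol=udp"/>
    </transaction>
    <transaction duration="60s">
      <flowop type="write" options="count=16 size=8k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>
EOF

# receiver side (host or guest, depending on direction):
uperf -s
# sender side, e.g. VM -> Host:
h=10.0.0.1 uperf -m udp_stream.xml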
>
>> The net is that Jason's recent patch definitely improves things across the
>> board at 4.13 as well as at net-next -- but the VM<->VM TCP numbers I am
>> observing are still lower than base 4.12.
>
> Cool.
>
>>
>> A separate concern is why my UDP numbers look so bad on net-next (have
>> not bisected this yet).
>
> This might be another issue. I am on vacation; I will try it on x86 once I
> am back at work next Wednesday.
>
> Wei
>
>>
>