From mboxrd@z Thu Jan 1 00:00:00 1970
From: Matthew Rosato
Subject: Re: Regression in throughput between kvm guests over virtual bridge
Date: Sat, 11 Nov 2017 15:59:54 -0500
Message-ID: <101d1fdf-9df1-44bd-73a7-e7d8fbc09160@linux.vnet.ibm.com>
References: <78678f33-c9ba-bf85-7778-b2d0676b78dd@linux.vnet.ibm.com>
 <038445a6-9dd5-30c2-aac0-ab5efbfa7024@linux.vnet.ibm.com>
 <20171012183132.qrbgnmvki6lpgt4a@Wei-Dev>
 <376f8939-1990-abf6-1f5f-57b3822f94fe@redhat.com>
 <20171026094415.uyogf2iw7yoavnoc@Wei-Dev>
 <20171031070717.wcbgrp6thrjmtrh3@Wei-Dev>
 <56710dc8-f289-0211-db97-1a1ea29e38f7@linux.vnet.ibm.com>
 <20171104233519.7jwja7t2itooyeak@Wei-Dev>
 <1611b26f-0997-3b22-95f5-debf57b7be8c@linux.vnet.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Cc: Jason Wang, mst@redhat.com, netdev@vger.kernel.org, davem@davemloft.net
To: Wei Xu
In-Reply-To: <1611b26f-0997-3b22-95f5-debf57b7be8c@linux.vnet.ibm.com>
Content-Language: en-US
Sender: netdev-owner@vger.kernel.org
List-ID:

>> This case should be quite similar to pktgen: if you got an improvement
>> with pktgen, it was usually the same for UDP. Could you please try
>> disabling tso, gso, gro, and ufo on all host tap devices and guest
>> virtio-net devices? Currently the most significant tests would be
>> these, AFAICT:
>>
>> Host->VM     4.12    4.13
>> TCP:
>> UDP:
>> pktgen:
>>
>> I don't want to bother you too much, so maybe 4.12 & 4.13 without
>> Jason's patch should work, since we have seen positive numbers for
>> that; you can also temporarily skip net-next as well.
>
> Here are the requested numbers, averaged over numerous runs -- guest is
> 4GB+1vcpu, host uperf/pktgen bound to 1 host CPU + qemu and vhost
> threads pinned to other unique host CPUs. tso, gso, gro, ufo disabled
> on host taps / guest virtio-net devs as requested:
>
> Host->VM     4.12          4.13
> TCP:         9.92Gb/s      6.44Gb/s
> UDP:         5.77Gb/s      6.63Gb/s
> pktgen:      1572403pps    1904265pps
>
> UDP/pktgen both show improvement from 4.12->4.13. More interesting,
> however, is that I am seeing the TCP regression for the first time from
> host->VM. I wonder if the combination of CPU binding + disabling of one
> or more of tso/gso/gro/ufo is related.
>
>> If you see UDP and pktgen are aligned, then it might be helpful to
>> continue with the other two cases; otherwise we fail in the first
>> place.

I continued running many iterations of these tests between 4.12 and
4.13. My throughput findings can be summarized as:

VM->VM:
  UDP: roughly equivalent
  TCP: consistent regression (5-10%)

VM->Host:
  Both UDP and TCP traffic are roughly equivalent.
Host->VM:
  UDP + pktgen: improvement (5-10%), but inconsistent
  TCP: consistent regression (25-30%)

Host->VM UDP and pktgen seemed to show improvement in some runs, and in
others seemed to mirror 4.12-level performance.

The TCP regression for VM->VM is no surprise; we started with that. It
is still consistent, but smaller in this specific environment. The TCP
regression in Host->VM is interesting because I wasn't seeing it
consistently before binding CPUs + disabling tso/gso/gro/ufo, and also
because of how large it is. By any chance can you see this regression on
x86 with the same configuration?
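For reference, the offload-disable + CPU-binding setup used for these
runs can be sketched roughly as below. The device names (tap0 on the
host, eth0 in the guest), CPU numbers, and the $QEMU_PID/$VHOST_PID
variables are placeholders for this environment, not the exact values
from the runs above:

```shell
#!/bin/sh
# Disable segmentation/receive offloads on the host tap device
# (tap0 is a placeholder -- use the tap backing your guest):
ethtool -K tap0 tso off gso off gro off ufo off

# Run the equivalent inside the guest against its virtio-net device,
# e.g.: ethtool -K eth0 tso off gso off gro off ufo off

# Bind the host uperf process to a single host CPU:
taskset -c 0 uperf -s &

# Pin the qemu thread and the vhost kernel thread to other, unique
# host CPUs ($QEMU_PID / $VHOST_PID are placeholders):
taskset -cp 1 "$QEMU_PID"
taskset -cp 2 "$VHOST_PID"
```

It is worth re-checking the result with `ethtool -k` afterwards, since
some offloads are fixed or silently re-coupled depending on driver
support.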