From: Matthew Rosato
Subject: Re: Regression in throughput between kvm guests over virtual bridge
Date: Fri, 3 Nov 2017 00:30:12 -0400
Message-ID: <56710dc8-f289-0211-db97-1a1ea29e38f7@linux.vnet.ibm.com>
References: <129a01d9-de9b-f3f1-935c-128e73153df6@linux.vnet.ibm.com>
 <3f824b0e-65f9-c69c-5421-2c5f6b349b09@redhat.com>
 <78678f33-c9ba-bf85-7778-b2d0676b78dd@linux.vnet.ibm.com>
 <038445a6-9dd5-30c2-aac0-ab5efbfa7024@linux.vnet.ibm.com>
 <20171012183132.qrbgnmvki6lpgt4a@Wei-Dev>
 <376f8939-1990-abf6-1f5f-57b3822f94fe@redhat.com>
 <20171026094415.uyogf2iw7yoavnoc@Wei-Dev>
 <20171031070717.wcbgrp6thrjmtrh3@Wei-Dev>
In-Reply-To: <20171031070717.wcbgrp6thrjmtrh3@Wei-Dev>
To: Wei Xu
Cc: Jason Wang, mst@redhat.com, netdev@vger.kernel.org, davem@davemloft.net

On 10/31/2017 03:07 AM, Wei Xu wrote:
> On Thu, Oct 26, 2017 at 01:53:12PM -0400, Matthew Rosato wrote:
>>
>>>
>>> Are you using the same binding as mentioned in your previous mail? It
>>> might be caused by CPU contention between pktgen and vhost; could you
>>> please try running pktgen from another idle CPU by adjusting the binding?
>>
>> I don't think that's the case -- I can cause pktgen to hang in the guest
>> without any CPU binding, and even with vhost disabled.
>
> Yes, I ran a test and it also hangs in the guest. Before we figure that
> out, maybe you could try UDP with uperf for these cases?
>
> VM -> Host
> Host -> VM
> VM -> VM
>

Here are averaged run numbers (Gbps throughput) across 4.12, 4.13 and
net-next, with and without Jason's recent "vhost_net: conditionally
enable tx polling" applied (referred to as 'patch' below). One uperf
instance in each case:

uperf TCP:
              4.12    4.13    4.13+patch    net-next    net-next+patch
----------------------------------------------------------------------
VM->VM       35.2    16.5      20.84         22.2          24.36
VM->Host     42.15   43.57     44.90         30.83         32.26
Host->VM     53.17   41.51     42.18         37.05         37.30

uperf UDP:
              4.12    4.13    4.13+patch    net-next    net-next+patch
----------------------------------------------------------------------
VM->VM       24.93   21.63     25.09          8.86          9.62
VM->Host     40.21   38.21     39.72          8.74          9.35
Host->VM     31.26   30.18     31.25          7.2           9.26

The net is that Jason's recent patch definitely improves things across
the board at 4.13 as well as at net-next -- but the VM<->VM TCP numbers
I am observing are still lower than on base 4.12. A separate concern is
why my UDP numbers look so bad on net-next (I have not bisected this
yet).
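
For anyone who wants to reproduce a run like the above: below is a
minimal sketch of driving a single-instance uperf stream test between
two endpoints. The profile contents, peer address, and helper names are
illustrative assumptions (patterned on uperf's shipped netperf-style
profiles), not the exact configuration behind the numbers above. It
assumes the peer side is already running "uperf -s" as the listener.

#!/usr/bin/env python3
# Illustrative driver for uperf stream tests -- a sketch, not the exact
# profile or settings used for the numbers in this thread.
import os
import subprocess
import tempfile

# Single-threaded bulk stream. uperf substitutes $h and $proto from the
# environment at run time. Duration and write size here are guesses.
PROFILE = """<?xml version="1.0"?>
<profile name="stream">
  <group nthreads="1">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=$h protocol=$proto"/>
    </transaction>
    <transaction duration="30s">
      <flowop type="write" options="count=16 size=64k"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>
"""

def run_stream(peer, proto):
    """Run one uperf stream against `peer` using "tcp" or "udp"."""
    with tempfile.NamedTemporaryFile("w", suffix=".xml",
                                     delete=False) as f:
        f.write(PROFILE)
        profile = f.name
    env = dict(os.environ, h=peer, proto=proto)
    # -m names the profile, -a collects all statistics.
    subprocess.run(["uperf", "-m", profile, "-a"], env=env, check=True)
    os.unlink(profile)

if __name__ == "__main__":
    # One direction per invocation; VM->VM, VM->Host and Host->VM are
    # covered by choosing where the client and the listener run.
    for proto in ("tcp", "udp"):
        run_stream("10.0.0.2", proto)  # hypothetical peer address

Throughput for each direction is read from uperf's summary output; the
three cases in the matrix differ only in which endpoint runs the client
script and which runs "uperf -s".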