From: "Hanoch Haim (hhaim)"
Subject: Re: DPDK 17.02 RC-3 performance degradation of ~10%
Date: Tue, 14 Feb 2017 12:31:56 +0000
Message-ID: <6ee7449acb434fafb80c5cb1b970be15@XCH-RTP-017.cisco.com>
To: "Mcnamara, John", "dev@dpdk.org"
Cc: "Ido Barnea (ibarnea)", "Hanoch Haim (hhaim)"
List-Id: DPDK patches and discussions

Hi John, thank you for the fast response.
I assume the Intel tests are more like rx->tx tests.
In our case we are doing mostly tx, which is more similar to dpdk-pkt-gen.
The cases in which we cached the mbuf were affected the most.
We expect to see the same issue with a simple DPDK application.

Thanks,
Hanoh

-----Original Message-----
From: Mcnamara, John [mailto:john.mcnamara@intel.com]
Sent: Tuesday, February 14, 2017 2:19 PM
To: Hanoch Haim (hhaim); dev@dpdk.org
Cc: Ido Barnea (ibarnea)
Subject: RE: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hanoch Haim (hhaim)
> Sent: Tuesday, February 14, 2017 11:45 AM
> To: dev@dpdk.org
> Cc: Ido Barnea (ibarnea); Hanoch Haim (hhaim)
> Subject: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%
>
> Hi,
>
> We (the TRex traffic generator project) upgraded DPDK from version 16.07
> to 17.02 RC-3 and experienced a performance degradation on the following
> NICs:
>
> XL710 : 10-15%
> ixgbe : 8% in one case
> mlx5  : 8% in two cases
> X710  : no impact (same driver as XL710)
> VIC   : no impact
>
> Since more than one driver is affected, it might be related to a DPDK
> infrastructure change (maybe mbuf?).
> We wanted to know whether this is expected before investing more time in
> it. The Y-axis numbers in all the following charts (from Grafana) are
> MPPS/core, which reflects how many CPU cycles each packet costs: a higher
> MPPS/core means fewer CPU cycles to send a packet.

Hi,

Thanks for the update. From a quick check with the Intel test team, they
haven't seen this. They would have flagged it if they had. Maybe someone
from Mellanox/Netronome could confirm as well.

Could you do a git-bisect to identify the change that caused this?

John
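
For reference, the usual bisect flow between the two versions discussed
above would look roughly like this (the tag names v16.07 and v17.02-rc3 are
assumed here; substitute whatever tags the tree actually carries):

    git bisect start
    git bisect bad  v17.02-rc3   # first version showing the regression
    git bisect good v16.07       # last known-good version
    # at each step: rebuild, rerun the benchmark, then mark the result
    git bisect good              # or: git bisect bad
    # repeat until git reports the first bad commit, then clean up
    git bisect reset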
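
For reference on the MPPS/core unit mentioned above, the conversion to CPU
cycles per packet is just the core clock divided by the packet rate; the
3.0 GHz clock below is only an example figure, not the actual test setup:

    cycles/packet = core clock [Hz] / (MPPS/core * 1e6)
    e.g. at 3.0 GHz:  15.0 MPPS/core               -> 200 cycles/packet
                      13.5 MPPS/core (a ~10% drop) -> ~222 cycles/packet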
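
As background on the "cached mbuf" case mentioned above, here is a minimal
sketch of the two tx paths in plain DPDK. The calls used are standard DPDK
API (rte_pktmbuf_alloc, rte_mbuf_refcnt_update, rte_eth_tx_burst,
rte_pktmbuf_free); the surrounding functions are illustrative only and are
not TRex's actual code.

/* Sketch only: contrasts allocating a fresh mbuf per packet with reusing
 * one prebuilt ("cached") mbuf whose refcnt is bumped before each send so
 * the PMD's post-tx free does not release it. Assumes an already
 * configured port/queue and mempool; error handling is omitted. */
#include <rte_mbuf.h>
#include <rte_ethdev.h>

/* Path 1: allocate a fresh mbuf for every packet. */
static void tx_fresh(uint8_t port, uint16_t queue, struct rte_mempool *mp)
{
        struct rte_mbuf *m = rte_pktmbuf_alloc(mp);
        if (m == NULL)
                return;
        /* ... build headers/payload in m ... */
        if (rte_eth_tx_burst(port, queue, &m, 1) == 0)
                rte_pktmbuf_free(m);                /* not sent, return it */
}

/* Path 2: reuse a single prebuilt mbuf (the "cached" case). */
static void tx_cached(uint8_t port, uint16_t queue, struct rte_mbuf *cached)
{
        rte_mbuf_refcnt_update(cached, 1);          /* keep ownership after tx */
        if (rte_eth_tx_burst(port, queue, &cached, 1) == 0)
                rte_mbuf_refcnt_update(cached, -1); /* not sent, undo */
}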