From: "Hanoch Haim (hhaim)"
Subject: DPDK 17.02 RC-3 performance degradation of ~10%
Date: Tue, 14 Feb 2017 11:44:38 +0000
To: "dev@dpdk.org"
Cc: "Ido Barnea (ibarnea)", "Hanoch Haim (hhaim)"

Hi,

We (the TRex traffic generator project) upgraded DPDK from 16.07 to 17.02-rc3 and saw a performance degradation on the following NICs:

XL710 : 10-15%
ixgbe : 8% in one case
mlx5  : 8% in two cases
X710  : no impact (same driver as XL710)
VIC   : no impact

Since the regression affects more than one driver, it might be caused by a DPDK infrastructure change (maybe mbuf?). We wanted to know whether this is expected before investing more in the investigation.

The Y-axis numbers in all the following charts (from Grafana) are MPPS/core, which reflects how many CPU cycles the core invests per packet: a bigger MPPS/core means fewer cycles to send a packet. For example, at a 3 GHz clock, 15 MPPS/core corresponds to roughly 200 cycles per packet. (A sketch of how such a number can be measured follows the signature.)

Labels:
trex-08 (XL710)
trex-09 (X710)
trex-07 (mlx5)
kiwi02 (ixgbe)

The Grafana charts can be seen here (we can't share links to our Grafana itself):
https://snag.gy/wGYp3c.jpg

More information on the tests and setups can be found in this document:
https://trex-tgn.cisco.com/trex/doc/trex_analytics.html

Link to the project:
https://github.com/cisco-system-traffic-generator/trex-core

Thanks,
Hanoh
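
P.S. For anyone who wants to reproduce the metric: below is a minimal sketch (not TRex's actual instrumentation) that times a DPDK TX loop with rte_rdtsc() to derive cycles/packet and its inverse, MPPS/core. It assumes the port, TX queue, and mempool are already configured; "pool", BURST, and the dummy 64-byte frames are illustrative choices, and error handling is trimmed.

#include <stdio.h>
#include <stdint.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

static void
measure_tx_rate(uint8_t port, uint16_t queue,
                struct rte_mempool *pool, uint64_t iterations)
{
    uint64_t sent = 0;
    const uint64_t start = rte_rdtsc();

    for (uint64_t i = 0; i < iterations; i++) {
        struct rte_mbuf *burst[BURST];
        uint16_t n = 0;

        /* Build a burst of dummy 64B frames; real payload setup omitted. */
        while (n < BURST && (burst[n] = rte_pktmbuf_alloc(pool)) != NULL) {
            burst[n]->data_len = 64;
            burst[n]->pkt_len = 64;
            n++;
        }

        uint16_t nb_tx = rte_eth_tx_burst(port, queue, burst, n);
        sent += nb_tx;

        /* The PMD frees transmitted mbufs; free whatever it rejected. */
        while (nb_tx < n)
            rte_pktmbuf_free(burst[nb_tx++]);
    }

    const uint64_t cycles = rte_rdtsc() - start;
    if (sent == 0)
        return;

    const double seconds = (double)cycles / rte_get_tsc_hz();
    /* Cycles per packet, and its inverse, MPPS/core (the Grafana Y axis). */
    printf("%.0f cycles/pkt  ->  %.2f MPPS/core\n",
           (double)cycles / sent, sent / seconds / 1e6);
}

Note that rte_eth_tx_burst() takes a uint8_t port id in the 16.07/17.02 API, and that mbuf allocation sits inside the timed window here, which matches an "everything the core does" style of accounting.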