From: Thomas Monjalon
To: Zhihong Wang
Cc: dev@dpdk.org, Pablo de Lara
Subject: Re: [RFC PATCH 0/2] performance utility in testpmd
Date: Thu, 21 Apr 2016 11:54:12 +0200
Message-ID: <1946900.ocWSxO32dE@xps13>
In-Reply-To: <1461192195-104070-1-git-send-email-zhihong.wang@intel.com>
References: <1461192195-104070-1-git-send-email-zhihong.wang@intel.com>

2016-04-20 18:43, Zhihong Wang:
> This RFC patch proposes a general purpose forwarding engine in testpmd,
> namely "portfwd", to enable performance analysis and tuning for poll mode
> drivers in vSwitching scenarios.
>
>
> Problem statement
> -----------------
>
> vSwitching is more I/O bound in a lot of cases, since there are a lot of
> LLC/cross-core memory accesses.
>
> In order to reveal memory/cache behavior in real usage scenarios and enable
> efficient performance analysis and tuning for vSwitching, DPDK needs a
> sample application that supports traffic flows close to real deployments,
> e.g. multi-tenancy, service chaining.
>
> There is currently a vhost sample application to enable simple vSwitching
> scenarios, but it comes with several limitations:
>
> 1) Traffic flow is too simple and not flexible
>
> 2) Switching based on MAC/VLAN only
>
> 3) Not enough performance metrics
>
>
> Proposed solution
> -----------------
>
> The testpmd sample application is a good choice: it is a powerful poll mode
> driver management framework that hosts various forwarding engines.

Not sure it is a good choice.
The goal of testpmd is to test every PMD feature.
How far can we go in adding some stack processing while keeping it
easily maintainable?

> Now with the vhost PMD feature, it can also handle vhost devices; only a
> new forwarding engine is needed to make use of it.

Why is a new forwarding engine needed for vhost?

> portfwd is implemented to this end.
>
> Features of portfwd:
>
> 1) Build up traffic from simple rx/tx to complex scenarios easily
>
> 2) Rich performance statistics for all ports

Have you checked CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES and
CONFIG_RTE_TEST_PMD_RECORD_BURST_STATS?

> 3) Core affinity manipulation
>
> 4) Commands for run time configuration
>
> Notice that portfwd has fair performance, but it's not for getting the
> "maximum" numbers:
>
> 1) It buffers packets for burst send efficiency analysis, which increases
> latency
>
> 2) It touches the packet header and collects performance statistics, which
> adds overhead
>
> These "extra" overheads are actually what happens in real applications.
[...]
> Implementation details
> ----------------------
>
> To enable flexible traffic flow setup, each port has 2 ways to forward
> packets in portfwd:

Should it not be 2 forwarding engines?
Please first describe the existing engines to help make a decision.

> 1) Forward based on dst ip
[...]
> 2) Forward to a fixed port
[...]
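
For reference, the existing forwarding engines all plug into testpmd through
a small common interface, and a new mode is registered by adding one more
entry to the fwd_engines[] array in app/test-pmd. Below is a minimal sketch
of that shape; the typedefs are simplified here so the snippet stands on its
own, and the "portfwd" names are only hypothetical placeholders for the
proposed engine, not existing code:

/*
 * Minimal sketch of the interface testpmd forwarding engines implement
 * (simplified from app/test-pmd/testpmd.h so it compiles standalone).
 * The "portfwd" names are hypothetical, only to show how the proposed
 * engine would plug in.
 */
#include <stdint.h>
#include <stddef.h>

typedef uint16_t portid_t;
struct fwd_stream;                        /* per-stream rx/tx state */

typedef void (*port_fwd_begin_t)(portid_t pi);
typedef void (*port_fwd_end_t)(portid_t pi);
typedef void (*packet_fwd_t)(struct fwd_stream *fs);

struct fwd_engine {
	const char       *fwd_mode_name;  /* name used to select the mode */
	port_fwd_begin_t port_fwd_begin;  /* optional per-port setup */
	port_fwd_end_t   port_fwd_end;    /* optional per-port teardown */
	packet_fwd_t     packet_fwd;      /* mandatory rx/process/tx burst loop */
};

/* Hypothetical forwarding loop for the proposed engine. */
static void
portfwd_packet_fwd(struct fwd_stream *fs)
{
	(void)fs;
	/* rx burst, pick dst (ip lookup or fixed port), tx burst, count stats */
}

struct fwd_engine portfwd_engine = {
	.fwd_mode_name  = "portfwd",
	.port_fwd_begin = NULL,
	.port_fwd_end   = NULL,
	.packet_fwd     = portfwd_packet_fwd,
};

The mode is then selected at run time with the existing "set fwd <name>"
command, like the other engines.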