From: Simon Guo
Subject: Re: [PATCH v5] app/testpmd: add option ring-bind-lcpu to bind Q with CPU
Date: Thu, 25 Jan 2018 11:40:02 +0800
Message-ID: <20180125034002.GA3674@simonLocalRHEL7.x64>
In-Reply-To: <2601191342CEEE43887BDE71AB9772588628010D@irsmsx105.ger.corp.intel.com>
To: "Ananyev, Konstantin"
Cc: "Lu, Wenzhuo", "dev@dpdk.org", Thomas Monjalon

Hi Konstantin,

On Thu, Jan 18, 2018 at 12:14:05PM +0000, Ananyev, Konstantin wrote:
> Hi Simon,
>
> > Hi, Konstantin,
> > On Tue, Jan 16, 2018 at 12:38:35PM +0000, Ananyev, Konstantin wrote:
> > >
> > > > -----Original Message-----
> > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of wei.guo.simon@gmail.com
> > > > Sent: Saturday, January 13, 2018 2:35 AM
> > > > To: Lu, Wenzhuo
> > > > Cc: dev@dpdk.org; Thomas Monjalon; Simon Guo
> > > > Subject: [dpdk-dev] [PATCH v5] app/testpmd: add option ring-bind-lcpu to bind Q with CPU
> > > >
> > > > From: Simon Guo
> > > >
> > > > Currently the rx/tx queue is allocated from the buffer pool on the socket of:
> > > > - the port's socket if --port-numa-config is specified
> > > > - or the ring-numa-config setting per port
> > > >
> > > > Both of the above "bind" a queue to a single socket per port configuration.
> > > > But better performance can actually be achieved if one port's queues are
> > > > spread across multiple NUMA nodes, with each rx/tx queue allocated
> > > > on the socket of the lcore (lcpu) that uses it.
> > > >
> > > > This patch adds a new option "--ring-bind-lcpu" (no parameter). With
> > > > it, testpmd can utilize the PCIe bus bandwidth of other NUMA
> > > > nodes.
> > > >
> > > > When --port-numa-config or --ring-numa-config is specified, the
> > > > --ring-bind-lcpu option is suppressed.
> > >
> > > Instead of introducing one more option - wouldn't it be better to
> > > allow the user to manually define flows and assign them to particular lcores?
> > > Then the user would be able to create any FWD configuration he/she likes.
> > > Something like:
> > > lcore X add flow rxq N,Y txq M,Z
> > >
> > > Which would mean - on lcore X receive packets from port=N, rx_queue=Y,
> > > and send them through port=M, tx_queue=Z.
> > Thanks for the comment.
> > Wouldn't that be too complicated a solution for users, since it would have to be
> > defined specifically for each lcore? We might have hundreds of lcores on current
> > modern platforms.
> Why for all lcores?
> Only for the ones that will do packet forwarding.
> Also, if the configuration becomes too complex (or too big) to be done manually,
> the user can write a script that generates the set of testpmd commands
> needed to achieve the desired layout.
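Just to illustrate what the option automates internally: with --ring-bind-lcpu, a queue's mbuf pool and descriptor ring are allocated on the NUMA socket of the lcore that polls the queue, rather than on the port's socket. A rough sketch of the idea is below (illustrative only, not the actual patch code; the helper name and the sizing constants are made up for this example):

#include <stdio.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

#define NB_MBUF 8192
#define NB_RXD   128

/* Set up one rx queue so that both its descriptor ring and its mbuf
 * pool live on the socket of the lcore that will poll it. */
static int
setup_rxq_on_lcore_socket(uint16_t port_id, uint16_t queue_id,
                          unsigned int lcore_id)
{
        /* socket of the polling lcore, not of the port */
        unsigned int socket_id = rte_lcore_to_socket_id(lcore_id);
        struct rte_mempool *mp;
        char name[RTE_MEMPOOL_NAMESIZE];

        snprintf(name, sizeof(name), "rx_pool_p%u_q%u", port_id, queue_id);
        mp = rte_pktmbuf_pool_create(name, NB_MBUF, 256, 0,
                                     RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
        if (mp == NULL)
                return -1;

        return rte_eth_rx_queue_setup(port_id, queue_id, NB_RXD,
                                      socket_id, NULL, mp);
}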
Writing such a script might not be an issue for skilled users, but it will be difficult for others; --ring-bind-lcpu will help to simplify this for them.

Thanks,
- Simon