From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jerin Jacob
Subject: Re: [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
Date: Mon, 20 Jun 2016 19:52:07 +0530
Message-ID: <20160620142205.GA4118@localhost.localdomain>
References: <1463669335-30378-1-git-send-email-david.hunt@intel.com>
 <1466428091-115821-2-git-send-email-david.hunt@intel.com>
 <20160620132506.GA3301@localhost.localdomain>
 <3416153.NDoMD8TpjF@xps13>
 <2601191342CEEE43887BDE71AB97725836B73750@irsmsx105.ger.corp.intel.com>
In-Reply-To: <2601191342CEEE43887BDE71AB97725836B73750@irsmsx105.ger.corp.intel.com>
To: "Ananyev, Konstantin"
Cc: Thomas Monjalon, "dev@dpdk.org", "Hunt, David", "olivier.matz@6wind.com",
 "viktorin@rehivetech.com", "shreyansh.jain@nxp.com"
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline

On Mon, Jun 20, 2016 at 01:58:04PM +0000, Ananyev, Konstantin wrote:
> > -----Original Message-----
> > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Thomas Monjalon
> > Sent: Monday, June 20, 2016 2:54 PM
> > To: Jerin Jacob
> > Cc: dev@dpdk.org; Hunt, David; olivier.matz@6wind.com; viktorin@rehivetech.com; shreyansh.jain@nxp.com
> > Subject: Re: [dpdk-dev] [PATCH v3 1/2] mempool: add stack (lifo) mempool handler
> >
> > 2016-06-20 18:55, Jerin Jacob:
> > > On Mon, Jun 20, 2016 at 02:08:10PM +0100, David Hunt wrote:
> > > > This is a mempool handler that is useful for pipelining apps, where
> > > > the mempool cache doesn't really work; for example, where one core
> > > > does Rx (and alloc) and another core does Tx (and return). In such a
> > > > case, the mempool ring simply cycles through all the mbufs, resulting
> > > > in an LLC miss on every mbuf allocated when the number of mbufs is
> > > > large. A stack recycles buffers more effectively in this case.
> > > >
> > > > Signed-off-by: David Hunt
> > > > ---
> > > >  lib/librte_mempool/Makefile            |   1 +
> > > >  lib/librte_mempool/rte_mempool_stack.c | 145 +++++++++++++++++++++++++++++++++
> > >
> > > How about moving the new mempool handlers to drivers/mempool (or similar)?
> > > In future, adding HW-specific handlers in lib/librte_mempool/ may be a bad idea.
> >
> > You're probably right.
> > However, we need to check and understand what a HW mempool handler will be.
> > I imagine the first of them will have to move the handlers into drivers/.
>
> Does it mean we'll have to move mbuf into drivers too?
> Again, other libs use mempool too.
> Why not just lib/librte_mempool/arch/?

I was proposing to move only the new handler
(lib/librte_mempool/rte_mempool_stack.c), not the library or any other
common code.

Just like the DPDK crypto devices: even if it is a software implementation,
it is better placed in drivers/crypto than in lib/librte_cryptodev.

"lib/librte_mempool/arch/" is not the correct place, as the handler is
platform-specific rather than architecture-specific, and a HW mempool
device may be a PCIe or a platform device.

> Konstantin
>
> > Jerin, are you volunteering?
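
For readers following the thread, the LIFO behaviour described in David's
commit message can be sketched roughly as below. This is an illustrative
sketch only, not the code from the patch: the real handler in
lib/librte_mempool/rte_mempool_stack.c plugs into the mempool ops API and
uses an rte_spinlock, while the names obj_stack, stack_put and stack_get
here are hypothetical stand-ins.

        /* Illustrative sketch of the LIFO idea behind the stack handler.
         * Hypothetical names; a pthread mutex stands in for the
         * rte_spinlock_t used by the real handler. */
        #include <pthread.h>

        struct obj_stack {
                pthread_mutex_t lock;   /* protects len and objs[] */
                unsigned int len;       /* number of objects currently held */
                unsigned int size;      /* capacity of objs[] */
                void *objs[];           /* top of stack is objs[len - 1] */
        };

        /* Free path: push returned objects onto the top of the stack, so
         * the most recently used (cache-hot) buffers are handed out next. */
        static int
        stack_put(struct obj_stack *s, void * const *table, unsigned int n)
        {
                unsigned int i;

                pthread_mutex_lock(&s->lock);
                if (s->len + n > s->size) {
                        pthread_mutex_unlock(&s->lock);
                        return -1;      /* no room left */
                }
                for (i = 0; i < n; i++)
                        s->objs[s->len++] = table[i];
                pthread_mutex_unlock(&s->lock);
                return 0;
        }

        /* Alloc path: pop from the top. A FIFO ring would instead return
         * the coldest buffer, cycling through the whole pool and taking an
         * LLC miss on each allocation when the pool is large. */
        static int
        stack_get(struct obj_stack *s, void **table, unsigned int n)
        {
                unsigned int i;

                pthread_mutex_lock(&s->lock);
                if (s->len < n) {
                        pthread_mutex_unlock(&s->lock);
                        return -1;      /* not enough objects */
                }
                for (i = 0; i < n; i++)
                        table[i] = s->objs[--s->len];
                pthread_mutex_unlock(&s->lock);
                return 0;
        }

A single lock is a reasonable fit here because the handler targets
pipelined apps with one core allocating and one core freeing, and the
per-lcore mempool caches still sit in front of the common pool.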