From: Shreyansh Jain
Subject: Re: [PATCH 6/6] app/crypto-perf: use single mempool
Date: Mon, 11 Sep 2017 18:40:41 +0530
To: "De Lara Guarch, Pablo", Akhil Goyal
Cc: "Doherty, Declan", "Trahe, Fiona", "Jain, Deepak K", "Griffin, John",
 jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com, dev@dpdk.org

Hello Pablo,

I have a comment inline:

On Monday 11 September 2017 04:38 PM, De Lara Guarch, Pablo wrote:
>> -----Original Message-----
>> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Akhil Goyal
>> Sent: Wednesday, August 30, 2017 9:31 AM
>> To: De Lara Guarch, Pablo; Doherty, Declan; Trahe, Fiona;
>> Jain, Deepak K; Griffin, John; jerin.jacob@caviumnetworks.com;
>> hemant.agrawal@nxp.com
>> Cc: dev@dpdk.org
>> Subject: Re: [dpdk-dev] [PATCH 6/6] app/crypto-perf: use single
>> mempool
>>
>> Hi Pablo,
>> On 8/18/2017 1:35 PM, Pablo de Lara wrote:
>>> In order to improve memory utilization, a single mempool is created,
>>> containing the crypto operation and mbufs (one if the operation is
>>> in-place, two if out-of-place).
>>> This way, a single object is allocated and freed per operation,
>>> reducing the amount of memory in cache, which improves scalability.
>>>
>>> Signed-off-by: Pablo de Lara
>>> ---
>>>  app/test-crypto-perf/cperf_ops.c              |  96 ++++++--
>>>  app/test-crypto-perf/cperf_ops.h              |   2 +-
>>>  app/test-crypto-perf/cperf_test_latency.c     | 350 ++++++++++++----------
>>>  app/test-crypto-perf/cperf_test_throughput.c  | 347 ++++++++++++----------
>>>  app/test-crypto-perf/cperf_test_verify.c      | 356 ++++++++++++----------
>>>  5 files changed, 553 insertions(+), 598 deletions(-)
>>>
>> NACK.
>> This patch replaces rte_pktmbuf_pool_create with rte_mempool_create
>> for mbufs, which is not the preferred way to allocate memory for
>> pktmbufs.
>>
>> Any example/test application in DPDK should not be using this, as
>> this kind of usage will not be compatible with all DPDK drivers in
>> general.
>>
>> This kind of usage of rte_mempool_create will not work for any
>> devices using hardware-offloaded memory pools for pktmbufs;
>> one such example is dpaa2.
>
> Hi Akhil,
>
> Sorry for the delay on this reply and thanks for the review.
>
> I think, since we are not getting the buffers from the NIC, but we are
> allocating them ourselves, it is not strictly required to call
> rte_pktmbuf_pool_create. In the end, we only need them as memory for the
> crypto PMDs and we are not touching anything in them, so I think calling
> rte_mempool_create should work ok.
> Having a single mempool would be way more performant and would avoid
> the scalability issues that we are having in this application now.
> Knowing that this application was created to test crypto PMD
> performance, I think it is worth trying this out.
>
> What exactly is needed for dpaa2? Is it the mempool handler?

If I recall correctly, this is the call flow when rte_pktmbuf_pool_create
is called:

- rte_pktmbuf_pool_create
  `-> rte_mempool_create_empty
      `-> allocate and fill mempool object with defaults
  `-> rte_mempool_set_ops_byname
      `-> sets mempool handler to RTE_MBUF_DEFAULT_MEMPOOL_OPS
  `-> rte_mempool_populate_default
      `-> calls pool-handler-specific enqueue/dequeue

but that of rte_mempool_create is:

- rte_mempool_create
  `-> rte_mempool_create_empty
      `-> allocate and fill mempool object with defaults
  `-> rte_mempool_set_ops_byname
      `-> set to one of ring_*_*; no check/logic for the handler defined
          in the configuration (RTE_MBUF_DEFAULT_MEMPOOL_OPS)
  `-> rte_mempool_populate_default
      `-> calls ring*-handler-specific enqueue/dequeue

Calling rte_mempool_create bypasses the check for any mempool handler
configured through the build system.

> Would it work for you if I create the mempool in a similar way as what
> rte_pktmbuf_pool_create is doing? Calling rte_mempool_set_ops_byname?

Yes, but that would mean using the combination of rte_mempool_create_empty
and rte_mempool_set_ops_byname, which would eventually be equivalent to
using rte_pktmbuf_pool_create.

Calling rte_mempool_set_ops_byname on a mempool created by
rte_mempool_create would mean changing the enqueue/dequeue operations
*after* the mempool has already been populated. That would be incorrect.

I am not sure what the intent is - whether these buffers should be allowed
to be offloaded to hardware. If yes, then rte_mempool_create wouldn't help.

>
> Thanks!
> Pablo
>
>
>>
>> -Akhil
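
For completeness, a minimal sketch of what creating the mempool "in a
similar way to rte_pktmbuf_pool_create" could look like. The helper name
and the default sizes below are illustrative assumptions, not taken from
the patch; the relevant part is only that the ops handler is selected
before the pool is populated:

#include <rte_mbuf.h>
#include <rte_mempool.h>

/*
 * Sketch: create an mbuf pool "by hand" while still honouring the mempool
 * handler configured at build time (RTE_MBUF_DEFAULT_MEMPOOL_OPS), roughly
 * mirroring what rte_pktmbuf_pool_create() does internally.
 * Helper name and default sizes are illustrative only.
 */
static struct rte_mempool *
create_mbuf_pool_with_default_ops(const char *name, unsigned int n,
		unsigned int cache_size, int socket_id)
{
	struct rte_pktmbuf_pool_private mbp_priv = {
		.mbuf_data_room_size = RTE_MBUF_DEFAULT_BUF_SIZE,
		.mbuf_priv_size = 0,
	};
	unsigned int elt_size = sizeof(struct rte_mbuf) +
			RTE_MBUF_DEFAULT_BUF_SIZE;
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
			sizeof(struct rte_pktmbuf_pool_private),
			socket_id, 0);
	if (mp == NULL)
		return NULL;

	/*
	 * The step rte_mempool_create() skips: pick the configured handler
	 * (which may be hardware backed, e.g. on dpaa2) instead of falling
	 * back to one of the ring_*_* handlers.
	 */
	if (rte_mempool_set_ops_byname(mp, RTE_MBUF_DEFAULT_MEMPOOL_OPS,
			NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* Initialise the pktmbuf private area (data room/private size). */
	rte_pktmbuf_pool_init(mp, &mbp_priv);

	/* Populate only after the ops have been set... */
	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* ...and then initialise each mbuf in the pool. */
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);

	return mp;
}

The only real difference from rte_pktmbuf_pool_create is that the ordering
is explicit: the handler must be chosen between rte_mempool_create_empty
and rte_mempool_populate_default, which rte_mempool_create does not allow.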