From: Hemant Agrawal
Subject: Re: [PATCH 0/2] Allow application set mempool handle
Date: Mon, 19 Jun 2017 17:22:46 +0530
To: Santosh Shukla
In-Reply-To: <20170601080559.10684-1-santosh.shukla@caviumnetworks.com>
List-Id: DPDK patches and discussions

On 6/1/2017 1:35 PM, Santosh Shukla wrote:
> Some platforms can have two different NICs, for example an external
> PCI Intel 40G card and an integrated NIC like vNIC/octeontx/dpaa2.
>
> Each NIC would like to use its preferred pool: the external PCI
> card's / vNIC's preferred pool would be the ring-based pool, while
> octeontx/dpaa2 would prefer ext-mempools.
> Right now the framework doesn't support such a case; only one pool
> can be used across the two different NICs. For that, the user has to
> statically set CONFIG_RTE_MEMPOOL_DEFAULT_OPS=<pool-name>.
>
> So we propose two approaches:
> Patch 1) Introduce the EAL option --pkt-mempool=<pool-name>.
> Patch 2) Introduce an ethdev API called _get_preferred_pool(), which
> gives the PMD a chance to advertise its pool capability to the
> application; based on that hint, the application creates pools for
> that driver.
>

The idea is good. It will help the vendors with HW mempool support.

Along similar lines, I also submitted a patch to check the existence
of a mempool instance:
http://dpdk.org/dev/patchwork/patch/15877/

Option 1) requires manual knowledge of the underlying NIC and
different commands for different machines.

Option 2) will help more, as it allows the application to take the
decision autonomously. A sketch of how an application might use it is
at the end of this mail.

In addition, we can also extend the overall MEMPOOL_OPS support:

3) Currently we support defining only one "RTE_MBUF_DEFAULT_MEMPOOL_OPS".
This can be extended to publish a priority list of MEMPOOL_OPS in the
config; if one is not available, the application can try the next one
in the priority list, as supported by the platform.

4) We can also try something where existing applications keep working
unchanged:
- The default mempool ops is configured as an alias, with empty ops.
- Based on the mempools detected on the bus, the bus configures the
  mempool ops internally with the actual ones.

> Santosh Shukla (2):
>   eal: Introducing option to set mempool handle
>   ether/ethdev: Allow pmd to advertise preferred pool capability
>
>  lib/librte_eal/bsdapp/eal/eal.c                 |  9 +++++++
>  lib/librte_eal/bsdapp/eal/rte_eal_version.map   |  7 +++++
>  lib/librte_eal/common/eal_common_options.c      |  3 +++
>  lib/librte_eal/common/eal_internal_cfg.h        |  2 ++
>  lib/librte_eal/common/eal_options.h             |  2 ++
>  lib/librte_eal/common/include/rte_eal.h         |  9 +++++++
>  lib/librte_eal/linuxapp/eal/eal.c               | 36 +++++++++++++++++++++++++
>  lib/librte_eal/linuxapp/eal/rte_eal_version.map |  7 +++++
>  lib/librte_ether/rte_ethdev.c                   | 16 +++++++++++
>  lib/librte_ether/rte_ethdev.h                   | 21 +++++++++++++++
>  lib/librte_ether/rte_ether_version.map          |  7 +++++
>  lib/librte_mbuf/rte_mbuf.c                      |  8 ++++--
>  12 files changed, 125 insertions(+), 2 deletions(-)
>
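
For approach 2), the application side could look roughly like the
sketch below. This is only a sketch: I am assuming the proposed API
has a signature along the lines of
int rte_eth_dev_get_preferred_pool(uint8_t port_id, char *pool_name);
the actual name and signature are whatever patch 2 defines, and the
helper create_pool_for_port() is purely illustrative. It follows the
same steps as rte_pktmbuf_pool_create(), but picks the ops name at
run time instead of the build-time RTE_MBUF_DEFAULT_MEMPOOL_OPS.

#include <string.h>
#include <rte_lcore.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_ethdev.h>

static struct rte_mempool *
create_pool_for_port(uint8_t port_id, unsigned int nb_mbufs)
{
	char ops_name[RTE_MEMPOOL_OPS_NAMESIZE];
	struct rte_mempool *mp;

	/* Ask the PMD which mempool handler it prefers; fall back to
	 * the compile-time default if it has no preference.
	 * (Hypothetical API from patch 2, see note above.) */
	if (rte_eth_dev_get_preferred_pool(port_id, ops_name) != 0)
		strcpy(ops_name, RTE_MBUF_DEFAULT_MEMPOOL_OPS);

	mp = rte_mempool_create_empty("port_pool", nb_mbufs,
			sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
			256, sizeof(struct rte_pktmbuf_pool_private),
			rte_socket_id(), 0);
	if (mp == NULL)
		return NULL;

	/* Bind the pool to the handler the PMD asked for. */
	if (rte_mempool_set_ops_byname(mp, ops_name, NULL) != 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	/* NULL opaque: default mbuf data room size is derived from
	 * the element size given above. */
	rte_pktmbuf_pool_init(mp, NULL);

	if (rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}
	rte_mempool_obj_iter(mp, rte_pktmbuf_init, NULL);
	return mp;
}

With something like this, the same binary can create a ring-based
pool for the Intel NIC and a HW-backed pool for octeontx/dpaa2,
without rebuilding with a different CONFIG_RTE_MEMPOOL_DEFAULT_OPS.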