From: "Hunt, David"
To: Jerin Jacob
Cc: dev@dpdk.org
Subject: Re: [PATCH 0/5] add external mempool manager
Date: Fri, 29 Jan 2016 13:40:40 +0000
Message-ID: <56AB6BD8.9000403@intel.com>
In-Reply-To: <20160128172631.GA11992@localhost.localdomain>
References: <1453829155-1366-1-git-send-email-david.hunt@intel.com>
 <20160128172631.GA11992@localhost.localdomain>

On 28/01/2016 17:26, Jerin Jacob wrote:
> On Tue, Jan 26, 2016 at 05:25:50PM +0000, David Hunt wrote:
>> Hi all on the list.
>>
>> Here's a proposed patch for an external mempool manager.
>>
>> The External Mempool Manager is an extension to the mempool API that
>> allows users to add and use an external mempool manager, which allows
>> external memory subsystems such as external hardware memory management
>> systems and software-based memory allocators to be used with DPDK.
>
> I like this approach. It will be useful for external hardware memory
> pool managers.
>
> BTW, do you encounter any performance impact on changing to the
> function pointer based approach?

Jerin,
    Thanks for your comments. The performance impact I've seen depends on
whether I'm using an object cache for the mempool or not. Without an
object cache, I see between 0% and 10% degradation. With an object cache,
I see a slight performance gain of between 0% and 5%, but that will most
likely vary from system to system.

>> The existing API to the internal DPDK mempool manager will remain
>> unchanged and will be backward compatible.
>>
>> There are two aspects to the external mempool manager.
>> 1. Adding the code for your new mempool handler. This is achieved by
>>    adding a new mempool handler source file into the librte_mempool
>>    library, and using the REGISTER_MEMPOOL_HANDLER macro.
>> 2. Using the new API to call rte_mempool_create_ext to create a new
>>    mempool, using the name parameter to identify which handler to use.
>>
>> New API calls added:
>> 1. A new mempool 'create' function which accepts the mempool handler name.
>> 2. A new mempool 'rte_get_mempool_handler' function which accepts the
>>    mempool handler name, and returns the index of the relevant set of
>>    callbacks for that mempool handler.
>>
>> Several external mempool managers may be used in the same application.
>> A new mempool can then be created by using the new 'create' function,
>> providing the mempool handler name to point the mempool to the relevant
>> mempool manager callback structure.
>>
>> The old 'create' function can still be called by legacy programs, and
>> will internally work out the mempool handle based on the flags provided
>> (single producer, single consumer, etc). By default, handles are created
>> internally to implement the built-in DPDK mempool manager and mempool
>> types.
>>
>> The external mempool manager needs to provide the following functions.
>> 1. alloc - allocates the mempool memory, and adds each object onto a ring
>> 2. put - puts an object back into the mempool once an application has
>>    finished with it
>> 3. get - gets an object from the mempool for use by the application
>> 4. get_count - gets the number of available objects in the mempool
>> 5. free - frees the mempool memory
>>
>> Every time a get/put/get_count is called from the application/PMD, the
>> callback for that mempool is called. These functions are in the fastpath,
>> and any unoptimised handlers may limit performance.
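
To make the handler side of that a bit more concrete: implementing a
handler essentially means providing those five callbacks and registering
them under a name. The skeleton below is deliberately simplified for this
mail; the struct layout, callback prototypes and macro usage are
abbreviated here rather than copied from the patch:

/* Simplified sketch only; see the patch for the real definitions. */
static void     *my_hw_alloc(struct rte_mempool *mp);  /* create pool, add objects */
static int       my_hw_put(void *p, void * const *obj_table, unsigned n);
static int       my_hw_get(void *p, void **obj_table, unsigned n);
static unsigned  my_hw_get_count(void *p);
static void      my_hw_free(struct rte_mempool *mp);   /* free the pool memory */

static struct rte_mempool_handler my_hw_handler = {
    .name      = "my_hw_pool",   /* name later passed to rte_mempool_create_ext() */
    .alloc     = my_hw_alloc,
    .put       = my_hw_put,
    .get       = my_hw_get,
    .get_count = my_hw_get_count,
    .free      = my_hw_free,
};

REGISTER_MEMPOOL_HANDLER(my_hw_handler);
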
>> The new APIs are as follows:
>>
>> 1. rte_mempool_create_ext
>>
>> struct rte_mempool *
>> rte_mempool_create_ext(const char *name, unsigned n,
>>         unsigned cache_size, unsigned private_data_size,
>>         int socket_id, unsigned flags,
>>         const char *handler_name);
>>
>> 2. rte_get_mempool_handler
>>
>> int16_t
>> rte_get_mempool_handler(const char *name);
>
> Do we need the above public API, as in any case we need the rte_mempool*
> pointer to operate on mempools (which has the index anyway)?
>
> Maybe a similar functional API with a different name/return value would
> be better, to figure out whether a given "name" is registered or not, for
> an ethernet driver which has a dependency on a particular HW pool manager.

Good point. An earlier revision required getting the index first, then
passing that to the create_ext call. Now that the call is by name, the
'get' is mostly redundant. As you suggest, we may need an API for checking
the existence of a particular manager/handler. Then again, we could always
return an error from the create_ext API if it fails to find that handler.
I'll remove the 'get' for the moment.

Thanks,
David.
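
P.S. To make the usage side concrete, creating a mempool through the
proposed API would look roughly like the snippet below. The handler name
"my_hw_pool" is just an example of a name registered with
REGISTER_MEMPOOL_HANDLER, and the snippet is illustrative rather than
lifted from the patch:

struct rte_mempool *mp;

/* 8192 objects, 256-object per-lcore cache, no private data, caller's
 * NUMA socket, default flags, objects supplied by the "my_hw_pool"
 * handler registered earlier */
mp = rte_mempool_create_ext("ext_obj_pool", 8192, 256, 0,
                            rte_socket_id(), 0, "my_hw_pool");
if (mp == NULL)
    rte_panic("Cannot create mempool with handler my_hw_pool\n");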