From: Jerin Jacob
Subject: Re: usages issue with external mempool
Date: Wed, 27 Jul 2016 15:21:29 +0530
Message-ID: <20160727095128.GA11679@localhost.localdomain>
To: Hemant Agrawal
Cc: David Hunt, "dev@dpdk.org", Thomas Monjalon, "olivier.matz@6wind.com",
    "viktorin@rehivetech.com", Shreyansh Jain
List-Id: patches and discussions about DPDK

On Tue, Jul 26, 2016 at 10:11:13AM +0000, Hemant Agrawal wrote:
> Hi,
> There were lengthy discussions w.r.t. the external mempool patches. However,
> I am still finding usage issues with the agreed approach.
>
> The existing API to create a packet mempool, "rte_pktmbuf_pool_create", does
> not provide the option to change the object init iterator. This may be the
> reason that many applications (e.g. OVS) use rte_mempool_create to create
> packet mempools with their own object iterator (e.g. ovs_rte_pktmbuf_init).
>
> e.g. the existing usage is:
>
> 	dmp->mp = rte_mempool_create(mp_name, mp_size, MBUF_SIZE(mtu),
> 				     MP_CACHE_SZ,
> 				     sizeof(struct rte_pktmbuf_pool_private),
> 				     rte_pktmbuf_pool_init, NULL,
> 				     ovs_rte_pktmbuf_init, NULL,
> 				     socket_id, 0);
>
> With the new API set for packet pool creation, this needs to be changed to:
>
> 	dmp->mp = rte_mempool_create_empty(mp_name, mp_size, MBUF_SIZE(mtu),
> 					   MP_CACHE_SZ,
> 					   sizeof(struct rte_pktmbuf_pool_private),
> 					   socket_id, 0);
> 	if (dmp->mp == NULL)
> 		break;
>
> 	rte_errno = rte_mempool_set_ops_byname(dmp->mp,
> 					       RTE_MBUF_DEFAULT_MEMPOOL_OPS, NULL);
> 	if (rte_errno != 0) {
> 		RTE_LOG(ERR, MBUF, "error setting mempool handler\n");
> 		return NULL;
> 	}
> 	rte_pktmbuf_pool_init(dmp->mp, NULL);
>
> 	ret = rte_mempool_populate_default(dmp->mp);
> 	if (ret < 0) {
> 		rte_mempool_free(dmp->mp);
> 		rte_errno = -ret;
> 		return NULL;
> 	}
>
> 	rte_mempool_obj_iter(dmp->mp, ovs_rte_pktmbuf_init, NULL);
>
> Asking users to change 1 API into 6 new APIs is not a user-friendly approach.
> Or am I missing something?

I agree. To me, this is very bad. I have raised this concern earlier as well.

Since applications like OVS go through "rte_mempool_create" even for packet
buffer pool creation, IMO it makes sense to extend "rte_mempool_create" to
take one more argument that provides the external pool handler name (NULL for
the default). I don't see any valid technical reason to treat the external
pool handler based mempool creation API differently from the default handler.
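Something along the lines of the below, for example (just a rough sketch to
illustrate the idea; "rte_mempool_create_with_ops" and the extra "ops_name"
argument are only placeholder names, not an existing API):

	#include <rte_mempool.h>

	/*
	 * Illustrative only: a possible extended create API that also lets
	 * the caller pick the external pool handler in one shot.
	 */
	struct rte_mempool *
	rte_mempool_create_with_ops(const char *name, unsigned n, unsigned elt_size,
		unsigned cache_size, unsigned private_data_size,
		rte_mempool_ctor_t *mp_init, void *mp_init_arg,
		rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
		int socket_id, unsigned flags, const char *ops_name)
	{
		struct rte_mempool *mp;

		mp = rte_mempool_create_empty(name, n, elt_size, cache_size,
					      private_data_size, socket_id, flags);
		if (mp == NULL)
			return NULL;

		/* NULL ops_name selects the default (ring based) handler */
		if (rte_mempool_set_ops_byname(mp,
				ops_name ? ops_name : "ring_mp_mc", NULL) != 0)
			goto fail;

		if (mp_init != NULL)
			mp_init(mp, mp_init_arg);

		if (rte_mempool_populate_default(mp) < 0)
			goto fail;

		if (obj_init != NULL)
			rte_mempool_obj_iter(mp, obj_init, obj_init_arg);

		return mp;

	fail:
		rte_mempool_free(mp);
		return NULL;
	}

With something like that, the OVS snippet above would stay a single call:

	dmp->mp = rte_mempool_create_with_ops(mp_name, mp_size, MBUF_SIZE(mtu),
			MP_CACHE_SZ,
			sizeof(struct rte_pktmbuf_pool_private),
			rte_pktmbuf_pool_init, NULL,
			ovs_rte_pktmbuf_init, NULL,
			socket_id, 0, NULL);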
Olivier, David, thoughts?

If we agree on this, then maybe I can send the API deprecation notice for
rte_mempool_create for v16.11.

Jerin

> I think we should do one of the following:
>
> 1. Enhance "rte_pktmbuf_pool_create" to optionally accept
>    "rte_mempool_obj_cb_t *obj_init, void *obj_init_arg" as inputs. If
>    obj_init is not present, the default can be used.
>
> 2. Create a new wrapper API (e.g. rte_pktmbuf_pool_create_new) with the
>    above said behavior, e.g.:
>
> 	/* helper to create a mbuf pool */
> 	struct rte_mempool *
> 	rte_pktmbuf_pool_create_new(const char *name, unsigned n,
> 		unsigned cache_size, uint16_t priv_size, uint16_t data_room_size,
> 		rte_mempool_obj_cb_t *obj_init, void *obj_init_arg,
> 		int socket_id)
>
> 3. Let the existing rte_mempool_create accept a flag such as
>    "MEMPOOL_F_HW_PKT_POOL". Obviously, if this flag is set, all other flag
>    values should be ignored. This was discussed earlier as well.
>
> Please share your opinion.
>
> Regards,
> Hemant