From: Cunming Liang <cunming.liang@intel.com>
To: dev@dpdk.org
Subject: [PATCH v2 13/15] mempool: add support to non-EAL thread
Date: Wed, 28 Jan 2015 14:59:23 +0800
Message-ID: <1422428365-5875-14-git-send-email-cunming.liang@intel.com>
In-Reply-To: <1422428365-5875-1-git-send-email-cunming.liang@intel.com>
References: <1421914598-2747-1-git-send-email-cunming.liang@intel.com>
 <1422428365-5875-1-git-send-email-cunming.liang@intel.com>
List-Id: patches and discussions about DPDK

For a non-EAL thread, bypass the per-lcore cache and use the ring pool
directly. This allows rte_mempool to be used from either an EAL thread
or any user-created pthread.

Since a non-EAL thread relies directly on rte_ring, which is not
preemptible, running multiple pthreads on the same core that compete
for the same rte_mempool is not recommended: performance will be poor,
and under a real-time (RT) scheduling policy there is a serious risk
of deadlock.

Signed-off-by: Cunming Liang <cunming.liang@intel.com>
---
 lib/librte_mempool/rte_mempool.h | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 3314651..4845f27 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -198,10 +198,12 @@ struct rte_mempool {
  * Number to add to the object-oriented statistics.
  */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-#define __MEMPOOL_STAT_ADD(mp, name, n) do {			\
-		unsigned __lcore_id = rte_lcore_id();		\
-		mp->stats[__lcore_id].name##_objs += n;		\
-		mp->stats[__lcore_id].name##_bulk += 1;		\
+#define __MEMPOOL_STAT_ADD(mp, name, n) do {			\
+		unsigned __lcore_id = rte_lcore_id();		\
+		if (__lcore_id < RTE_MAX_LCORE) {		\
+			mp->stats[__lcore_id].name##_objs += n;	\
+			mp->stats[__lcore_id].name##_bulk += 1;	\
+		}						\
 	} while(0)
 #else
 #define __MEMPOOL_STAT_ADD(mp, name, n) do {} while(0)
@@ -767,8 +769,9 @@ __mempool_put_bulk(struct rte_mempool *mp, void * const *obj_table,
 	__MEMPOOL_STAT_ADD(mp, put, n);

 #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
-	/* cache is not enabled or single producer */
-	if (unlikely(cache_size == 0 || is_mp == 0))
+	/* cache is not enabled or single producer or non-EAL thread */
+	if (unlikely(cache_size == 0 || is_mp == 0 ||
+		     lcore_id >= RTE_MAX_LCORE))
 		goto ring_enqueue;

 	/* Go straight to ring if put would overflow mem allocated for cache */
@@ -952,7 +955,8 @@ __mempool_get_bulk(struct rte_mempool *mp, void **obj_table,
 	uint32_t cache_size = mp->cache_size;

 	/* cache is not enabled or single consumer */
-	if (unlikely(cache_size == 0 || is_mc == 0 || n >= cache_size))
+	if (unlikely(cache_size == 0 || is_mc == 0 ||
+		     n >= cache_size || lcore_id >= RTE_MAX_LCORE))
 		goto ring_dequeue;

 	cache = &mp->local_cache[lcore_id];
--
1.8.1.4
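
For illustration only (not part of the patch): a minimal sketch of what this
change enables, i.e. an ordinary pthread sharing a mempool with EAL threads.
The pool name, object count/size and the user_thread() helper are arbitrary
choices for the example, not anything defined by the patch.

#include <pthread.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

static void *
user_thread(void *arg)
{
	struct rte_mempool *mp = arg;
	void *obj;

	/* In a non-EAL thread rte_lcore_id() is not a valid lcore index,
	 * so with this patch get/put skip the per-lcore cache and go
	 * straight to the underlying ring. */
	if (rte_mempool_get(mp, &obj) == 0) {
		/* ... use obj ... */
		rte_mempool_put(mp, obj);
	}
	return NULL;
}

int
main(int argc, char **argv)
{
	struct rte_mempool *mp;
	pthread_t tid;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* 2^n - 1 objects is the optimum size for the underlying ring. */
	mp = rte_mempool_create("example_pool", 1023, 64, 32, 0,
				NULL, NULL, NULL, NULL,
				rte_socket_id(), 0);
	if (mp == NULL)
		return -1;

	/* A plain pthread, not an EAL lcore thread. */
	pthread_create(&tid, NULL, user_thread, mp);
	pthread_join(tid, NULL);
	return 0;
}

Note that the cache_size argument still only benefits EAL threads: a non-EAL
thread always pays the cost of the ring's atomic compare-and-set on every
get/put, which is one more reason not to run several competing pthreads on
one core.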