From: Xiao Wang
To: olivier.matz@6wind.com
Cc: dev@dpdk.org, arybchenko@solarflare.com, Xiao Wang
Date: Tue, 21 May 2019 17:03:21 +0800
Message-Id: <20190521090321.138062-1-xiao.w.wang@intel.com>
X-Mailer: git-send-email 2.15.1
Subject: [dpdk-dev] [PATCH] mempool: optimize copy in cache get

Use rte_memcpy to improve the pointer array copy. This optimization
method has already been applied to __mempool_generic_put() [1]; this
patch applies it to __mempool_generic_get(). A slight performance gain
can be observed in the testpmd txonly test.

[1] 863bfb47449 ("mempool: optimize copy in cache")

Signed-off-by: Xiao Wang
---
 lib/librte_mempool/rte_mempool.h | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/lib/librte_mempool/rte_mempool.h b/lib/librte_mempool/rte_mempool.h
index 8053f7a04..975da8d22 100644
--- a/lib/librte_mempool/rte_mempool.h
+++ b/lib/librte_mempool/rte_mempool.h
@@ -1344,15 +1344,11 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 		      unsigned int n, struct rte_mempool_cache *cache)
 {
 	int ret;
-	uint32_t index, len;
-	void **cache_objs;
 
 	/* No cache provided or cannot be satisfied from cache */
 	if (unlikely(cache == NULL || n >= cache->size))
 		goto ring_dequeue;
 
-	cache_objs = cache->objs;
-
 	/* Can this be satisfied from the cache? */
 	if (cache->len < n) {
 		/* No. Backfill the cache first, and then fill from it */
@@ -1375,8 +1371,7 @@ __mempool_generic_get(struct rte_mempool *mp, void **obj_table,
 	}
 
 	/* Now fill in the response ... */
-	for (index = 0, len = cache->len - 1; index < n; ++index, len--, obj_table++)
-		*obj_table = cache_objs[len];
+	rte_memcpy(obj_table, &cache->objs[cache->len - n], sizeof(void *) * n);
 
 	cache->len -= n;
 
-- 
2.15.1
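
For context, the copy touched by this patch sits on the mempool cache "get" fast
path, which is taken whenever a bulk get smaller than the per-lcore cache is
requested. The sketch below is not part of the patch: it is a minimal standalone
program, written against the public mempool API (rte_mempool_create,
rte_mempool_get_bulk, rte_mempool_put_bulk), that drives this path. The pool name
and the POOL_SIZE/ELT_SIZE/CACHE/BURST values are illustrative assumptions, not
taken from the patch or from testpmd.

#include <stdio.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

#define POOL_SIZE 8191   /* number of objects in the pool (2^k - 1 is recommended) */
#define ELT_SIZE  2048   /* size of each object in bytes */
#define CACHE     256    /* per-lcore cache size; gets of n < CACHE use the cached copy path */
#define BURST     32     /* burst size for the bulk get */

int
main(int argc, char **argv)
{
	void *objs[BURST];
	struct rte_mempool *mp;

	if (rte_eal_init(argc, argv) < 0) {
		fprintf(stderr, "EAL init failed\n");
		return -1;
	}

	/* A non-zero cache_size enables the per-lcore cache handled by
	 * __mempool_generic_get()/__mempool_generic_put(). */
	mp = rte_mempool_create("test_pool", POOL_SIZE, ELT_SIZE, CACHE,
				0, NULL, NULL, NULL, NULL,
				rte_socket_id(), 0);
	if (mp == NULL) {
		fprintf(stderr, "mempool creation failed\n");
		return -1;
	}

	/* A bulk get of BURST (< cache size) objects is served from the
	 * cache, i.e. through the pointer-array copy optimized above. */
	if (rte_mempool_get_bulk(mp, objs, BURST) == 0)
		rte_mempool_put_bulk(mp, objs, BURST);

	rte_mempool_free(mp);
	return 0;
}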