From: Thomas Monjalon
Subject: Re: [PATCH v8 1/3] eal/x86: run-time dispatch over memcpy
Date: Thu, 19 Oct 2017 08:58:54 +0200
Message-ID: <1661434.2yK3chXuTC@xps>
References: <1507206794-79941-1-git-send-email-xiaoyun.li@intel.com>
To: "Li, Xiaoyun"
Cc: "Ananyev, Konstantin", "Richardson, Bruce", dev@dpdk.org,
 "Lu, Wenzhuo", "Zhang, Helin", "ophirmu@mellanox.com"

19/10/2017 04:45, Li, Xiaoyun:
> Hi
> > > >
> > > > The significant change of this patch is to call a function pointer
> > > > for packet sizes > 128 bytes (RTE_X86_MEMCPY_THRESH).
> > > The perf drop is due to the function call replacing the inline.
> > >
> > > > Could you please provide some benchmark numbers?
> > > I ran memcpy_perf_test, which shows the time cost of memcpy. I ran it
> > > on Broadwell with SSE and AVX2.
> > > But I only drew plots and looked at the trend; I did not compute the
> > > exact percentage. Sorry about that.
> > > The plots show results for copy sizes of 2, 4, 6, 8, 9, 12, 16, 32,
> > > 64, 128, 192, 256, 320, 384, 448, 512, 768, 1024, 1518, 1522, 1536,
> > > 1600, 2048, 2560, 3072, 3584, 4096, 4608, 5120, 5632, 6144, 6656,
> > > 7168, 7680 and 8192 bytes.
> > > In my test, as the size grows, the drop shrinks. (Copy time is used
> > > to indicate the perf.) From the trend, when the size is smaller than
> > > 128 bytes, the perf drops a lot, almost 50%. Above 128 bytes, it
> > > approaches the original DPDK.
> > > I computed it just now: between 128 and 1024 bytes, the perf drops
> > > about 15%; above 1024 bytes, it drops about 4%.
> > >
> > > > From a test done at Mellanox, there might be a performance
> > > > degradation of about 15% in testpmd txonly with AVX2.
>
> I did tests on X710, XXV710, X540 and MT27710 but didn't see any
> performance degradation.
> I used the command "./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4
> -- -i" and set fwd txonly.
> I tested on v17.11-rc1, then reverted my patch and tested again.
> I ran "show port stats all" and looked at the throughput in pps.
> The results are similar, with no drop.
>
> Did I miss something?

I do not understand. Yesterday you confirmed a 15% drop with buffers
between 128 and 1024 bytes.
But you do not see this drop in your txonly tests, right?

> > Another thing: I will test testpmd txonly with Intel NICs and Mellanox
> > NICs in the coming days,
> > and try adjusting RTE_X86_MEMCPY_THRESH to see if there is any
> > improvement.

Is there someone else seeing a performance degradation?
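
For readers following the thread, here is a minimal sketch of the
run-time dispatch scheme under discussion: small copies stay inline,
larger copies go through a function pointer resolved at startup from
the CPU's SIMD support. Only RTE_X86_MEMCPY_THRESH comes from the
patch; every other name below is illustrative, not the actual patch
code.

/* Sketch only: inline small copies, indirect call for large ones. */
#include <stddef.h>
#include <string.h>

#define RTE_X86_MEMCPY_THRESH 128

/* Stand-in for an AVX2-optimized copy routine. */
static void *
memcpy_avx2(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n);
}

/* Resolved once at startup according to the CPU's SIMD support. */
static void *(*memcpy_ptr)(void *, const void *, size_t) = memcpy;

__attribute__((constructor))
static void
memcpy_dispatch_init(void)
{
	/* DPDK code would rather check
	 * rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2); the GCC builtin
	 * is used here to keep the sketch self-contained. */
	__builtin_cpu_init();
	if (__builtin_cpu_supports("avx2"))
		memcpy_ptr = memcpy_avx2;
}

static inline void *
sketch_rte_memcpy(void *dst, const void *src, size_t n)
{
	if (n <= RTE_X86_MEMCPY_THRESH)
		return memcpy(dst, src, n); /* small copies stay inline */
	return memcpy_ptr(dst, src, n);     /* indirect call: the overhead discussed above */
}

The indirect call on the large-size path is the overhead being
measured in this thread, and RTE_X86_MEMCPY_THRESH controls where the
switch from the inline path happens.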