From: "Hunt, David"
Subject: Re: [PATCH] mbuf: replace c memcpy code semantics with optimized rte_memcpy
Date: Fri, 27 May 2016 14:45:19 +0100
Message-ID: <57484F6F.40204@intel.com>
In-Reply-To: <57482079.1050605@intel.com>
References: <1464101442-10501-1-git-send-email-jerin.jacob@caviumnetworks.com> <57446C63.4040605@6wind.com> <20160524151654.GA10870@localhost.localdomain> <57482079.1050605@intel.com>
To: Jerin Jacob, Olivier Matz
Cc: dev@dpdk.org, thomas.monjalon@6wind.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com

On 5/27/2016 11:24 AM, Hunt, David wrote:
>
> On 5/24/2016 4:17 PM, Jerin Jacob wrote:
>> On Tue, May 24, 2016 at 04:59:47PM +0200, Olivier Matz wrote:
>>
>>> Are you seeing some performance improvement by using rte_memcpy()?
>>
>> Yes, in some cases. In the default case, the copy was replaced with
>> memcpy by the compiler itself (gcc 5.3). But when I tried the external
>> mempool manager patch, performance dropped by almost 800 Kpps.
>> Debugging further, it turns out that an unrelated change in the
>> external mempool manager was knocking out the memcpy. An explicit
>> rte_memcpy brought back 500 Kpps. The remaining 300 Kpps drop is still
>> unknown (in my test setup the packets are in the local cache, so it
>> must be something to do with the __mempool_put_bulk text alignment
>> change or similar).
>>
>> Has anyone else observed a performance drop with the external pool
>> manager?
>>
>> Jerin
>
> Jerin,
> I'm seeing a 300 Kpps drop in throughput when I apply this on top of
> the external mempool manager patch. If you're seeing an increase when
> you apply this patch first, and then a drop when applying the mempool
> manager patch, the two patches must be conflicting in some way. We
> probably need to investigate further.
> Regards,
> Dave.

On further investigation, I now have a setup with no performance
degradation. My previous tests were accessing the NICs on a different
NUMA node. Once I launched testpmd with the correct coremask, the
difference between the pre- and post-rte_memcpy-patch runs is
negligible (maybe a 0.1% drop).

Regards,
Dave.
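
For context on the change being measured: the patch in the Subject line
replaces a plain pointer-copy loop in the mempool put fast path with an
explicit rte_memcpy() of the pointer array into the per-lcore cache. The
sketch below illustrates that kind of substitution only; the cache
structure is a simplified stand-in rather than the real rte_mempool
cache, and the two functions are illustrative, not the literal patch.

    #include <rte_memcpy.h>

    /* Simplified stand-in for a per-lcore object cache (illustration only). */
    struct cache_sketch {
            unsigned len;           /* number of cached object pointers */
            void *objs[512];        /* cached object pointers */
    };

    /* Before: per-pointer copy loop. Whether the compiler turns this into
     * an inlined memcpy depends on the compiler version and flags. */
    static inline void
    put_bulk_loop(struct cache_sketch *cache, void * const *obj_table,
                  unsigned n)
    {
            unsigned i;

            for (i = 0; i < n; i++)
                    cache->objs[cache->len + i] = obj_table[i];
            cache->len += n;
    }

    /* After: explicit rte_memcpy of the whole pointer array, the kind of
     * change discussed in this thread. */
    static inline void
    put_bulk_rte_memcpy(struct cache_sketch *cache, void * const *obj_table,
                        unsigned n)
    {
            rte_memcpy(&cache->objs[cache->len], obj_table,
                       sizeof(void *) * n);
            cache->len += n;
    }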
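
On the NUMA observation at the end: the numbers only line up when
testpmd's forwarding cores sit on the same socket as the NICs. As an
illustrative invocation (the coremask and channel count are example
values, not taken from this thread):

    ./testpmd -c 0xf0 -n 4 -- -i

with the coremask chosen so that the polling lcores are local to the
NICs' NUMA node.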