From: Saber Rezvani
Subject: Re: IXGBE throughput loss with 4+ cores
Date: Tue, 28 Aug 2018 21:35:33 +0430
To: Stephen Hemminger
Cc: dev@dpdk.org

On 08/28/2018 08:31 PM, Stephen Hemminger wrote:
> On Tue, 28 Aug 2018 17:34:27 +0430
> Saber Rezvani wrote:
>
>> Hi,
>>
>> I have run the multi_process/symmetric_mp example from the DPDK examples
>> directory. With a single process its throughput is line rate, but as I
>> increase the number of cores I see a drop in throughput. For example, if
>> the number of queues is set to 4, with each queue assigned to its own
>> core, the throughput is about 9.4 Gb/s; with 8 queues it falls to about
>> 8.5 Gb/s.
>>
>> I have read the following thread, but it was not convincing:
>>
>> http://mails.dpdk.org/archives/dev/2015-October/024960.html
>>
>> I am eagerly looking forward to hearing from you all.
>>
>> Best wishes,
>>
>> Saber
>>
> Not completely surprising. If you have more cores than are needed to keep
> up with the packet line rate, then the number of packets returned by each
> call to rx_burst will be smaller. With a large number of cores, most of
> the time will be spent doing reads of PCI registers that find no packets!

Indeed, pktgen says it is generating traffic at line rate, yet it receives
back less than 10 Gb/s. So in that case there must be something that causes
the reduction in throughput :(
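
For reference, one way to check Stephen's point would be to instrument the
receive loop and count how many polls come back empty. Below is a minimal
sketch, not taken from symmetric_mp itself: BURST_SIZE, the stats interval,
and the queue_id == lcore_id mapping are illustrative assumptions, and the
usual EAL/port/queue setup is assumed to have been done already.

	/*
	 * Per-lcore RX loop that counts how often rte_eth_rx_burst()
	 * returns zero packets, i.e. how many polls are just PCI reads
	 * that find nothing.
	 */
	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	#include <rte_ethdev.h>
	#include <rte_lcore.h>
	#include <rte_mbuf.h>

	#define BURST_SIZE 32
	#define STATS_INTERVAL (1ULL << 24)	/* print every ~16M polls */

	static int
	rx_loop(void *arg)
	{
		uint16_t port_id = *(const uint16_t *)arg;
		uint16_t queue_id = (uint16_t)rte_lcore_id(); /* assumed mapping */
		struct rte_mbuf *bufs[BURST_SIZE];
		uint64_t polls = 0, empty_polls = 0, pkts = 0;

		for (;;) {
			uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id,
							  bufs, BURST_SIZE);
			if (++polls % STATS_INTERVAL == 0)
				printf("lcore %u: %" PRIu64 " of %" PRIu64
				       " polls empty, %" PRIu64 " pkts\n",
				       rte_lcore_id(), empty_polls, polls,
				       pkts);
			if (nb_rx == 0) {
				empty_polls++;	/* PCI read, no packets */
				continue;
			}
			pkts += nb_rx;
			for (uint16_t i = 0; i < nb_rx; i++)
				rte_pktmbuf_free(bufs[i]); /* real code would
							      forward here */
		}
		return 0;
	}

If the empty-poll fraction grows as the queue count goes up while the
offered load stays at line rate, that would match the explanation above
rather than point at a NIC limit.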