From: Saber Rezvani
Subject: Re: IXGBE RX packet loss with 5+ cores
Date: Thu, 1 Nov 2018 10:12:32 +0330
To: dev@dpdk.org
In-Reply-To: <561D2498.1060707@intel.com>

Regarding the problem discussed in this thread --> http://mails.dpdk.org/archives/dev/2015-October/024966.html
Venky Venkatesan gave the following reason:

"To add a little more detail - this ends up being both a bandwidth and a transaction bottleneck. Not only do you add an increased transaction count, you also add a huge amount of bandwidth overhead (each 16 byte descriptor is preceded by a PCI-E TLP which is about the same size). So what ends up happening in the case where the incoming packets are bifurcated to different queues (1 per queue) is that you have 2x the number of transactions (1 for the packet and one for the descriptor) and then we essentially double the bandwidth used because you now have the TLP overhead per descriptor write."

But I still can't figure out why this ends up as both a bandwidth and a transaction bottleneck. Can anyone help me?

Best regards,
Saber
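
P.S. To check that I'm reading the explanation correctly, here is my rough back-of-the-envelope accounting. The constants are assumptions, not measurements: I'm guessing roughly 24 bytes of TLP header plus link-layer overhead per posted PCIe write, a 16-byte ixgbe RX descriptor, 64-byte packets, and a hypothetical write-back batch of 4 descriptors when traffic stays on one queue. Please correct me if this is not what is meant:

/* Rough PCIe accounting for ixgbe RX descriptor write-back.
 * All constants are assumptions for illustration, not measured values. */
#include <stdio.h>

#define TLP_OVERHEAD 24   /* assumed TLP header + link-layer bytes per write */
#define DESC_SIZE    16   /* ixgbe RX descriptor size in bytes */
#define PKT_SIZE     64   /* small-packet worst case */

static void account(const char *label, int descs_per_writeback)
{
    /* Per packet: one write for the packet data, plus a 1/N share of a
     * descriptor write-back that covers N = descs_per_writeback descriptors. */
    double tlps_per_pkt = 1.0 + 1.0 / descs_per_writeback;
    double desc_bytes   = (DESC_SIZE * descs_per_writeback + TLP_OVERHEAD)
                          / (double)descs_per_writeback;
    double total_bytes  = (PKT_SIZE + TLP_OVERHEAD) + desc_bytes;

    printf("%-40s %.2f TLPs/pkt, %.1f desc bytes/pkt, %.1f bytes/pkt total\n",
           label, tlps_per_pkt, desc_bytes, total_bytes);
}

int main(void)
{
    /* Traffic stays on one queue: the NIC can (hypothetically) coalesce
     * 4 descriptor write-backs into a single TLP. */
    account("coalesced (4 descriptors per write-back):", 4);

    /* Traffic sprayed 1 packet per queue: every packet forces its own
     * descriptor write-back TLP. */
    account("1 packet per queue (no coalescing):", 1);
    return 0;
}

If I have that arithmetic right, spreading packets one per queue roughly doubles the PCIe transactions per packet (1.25 -> 2) and roughly doubles the bytes spent on descriptor write-back (22 -> 40), which seems to be the doubling Venky refers to - but I'd appreciate confirmation.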