From: Declan Doherty
Subject: Re: [PATCH v3] net/bonding: reduce slave starvation on rx poll
Date: Tue, 21 Mar 2017 12:24:00 +0000
To: Keith Wiles, dev@dpdk.org
References: <20170307223918.33906-1-keith.wiles@intel.com>
In-Reply-To: <20170307223918.33906-1-keith.wiles@intel.com>

Acking the correct version of the patch this time.

On 07/03/2017 10:39 PM, Keith Wiles wrote:
> When polling the bonded ports for RX packets, the old driver would
> always start with the first slave in the list. If the requested
> number of packets is filled by the first port in a two-port
> configuration, the second port can be starved or see a larger number
> of missed-packet errors.
>
> The code now attempts to start with a different slave each time an RX
> poll is done, to help eliminate starvation of slave ports. The effect
> of the previous code was much lower performance with two slaves in
> the bond than with just one slave.
>
> The performance drop was observed when the application could not poll
> the RX rings fast enough and the packets-per-second rate for two or
> more ports was at the threshold throughput of the application. At
> this threshold the slaves would see very few or no drops in the
> one-slave case. Enabling the second slave then produced a large drop
> rate on the two-slave bond and a reduction in throughput.
>
> Signed-off-by: Keith Wiles
> ---
...

Acked-by: Declan Doherty
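
The rotation described in the commit message can be illustrated with a
minimal sketch (not the actual bonding PMD code; bond_ctx, slave_ports
and next_slave are hypothetical names): each RX poll starts at a
different slave and wraps around, so a slave that fills the burst on
one poll cannot permanently shadow the others.

/*
 * Minimal sketch of the rotating start-index idea. Assumes a
 * hypothetical bond_ctx holding the slave port ids; only
 * rte_eth_rx_burst() is a real DPDK call.
 */
#include <stdint.h>
#include <rte_ethdev.h>

struct bond_ctx {
	uint16_t slave_ports[8];   /* slave port ids */
	uint16_t num_slaves;
	uint16_t next_slave;       /* slave to start with on the next poll */
};

static uint16_t
bond_rx_sketch(struct bond_ctx *ctx, uint16_t queue_id,
	       struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t total = 0;
	uint16_t idx = ctx->next_slave;

	/* Visit every slave at most once, starting at next_slave. */
	for (uint16_t i = 0; i < ctx->num_slaves && total < nb_pkts; i++) {
		total += rte_eth_rx_burst(ctx->slave_ports[idx], queue_id,
					  pkts + total, nb_pkts - total);
		if (++idx == ctx->num_slaves)
			idx = 0;
	}

	/* Rotate the starting slave for the next poll. */
	if (++ctx->next_slave == ctx->num_slaves)
		ctx->next_slave = 0;

	return total;
}

Rotating next_slave between polls spreads the "first pick" over time,
which is what the patch relies on to keep the second port from being
starved when the first port alone can satisfy the requested burst.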