From: "Ravinandan Arakali"
Subject: RE: High CPU utilization with Bonding driver ?
Date: Tue, 29 Mar 2005 11:13:30 -0800
Message-ID: <001f01c53493$654896a0$3a10100a@pc.s2io.com>
References:
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Cc: , , "'Leonid. Grossman \(E-mail\)'" , "'Raghavendra. Koushik \(E-mail\)'"
Return-path:
To: "'Arthur Kepner'"
In-Reply-To:
Sender: netdev-bounce@oss.sgi.com
Errors-to: netdev-bounce@oss.sgi.com
List-Id: netdev.vger.kernel.org

Arthur,
Thanks for the reply. Not yet. Will try out the patch.

Ravi

-----Original Message-----
From: Arthur Kepner [mailto:akepner@sgi.com]
Sent: Tuesday, March 29, 2005 10:29 AM
To: Ravinandan Arakali
Cc: netdev@oss.sgi.com; bonding-devel@lists.sourceforge.net; Leonid. Grossman (E-mail); Raghavendra. Koushik (E-mail)
Subject: Re: High CPU utilization with Bonding driver ?

On Tue, 29 Mar 2005, Ravinandan Arakali wrote:

> ....
> Results (8 nttcp/chariot streams):
> ----------------------------------
> 1. Combined throughputs (no bonding):
>    3.1 + 6.2 = 9.3 Gbps with 58% CPU idle.
>
> 2. eth0 and eth1 bonded together in LACP mode:
>    8.2 Gbps with 1% CPU idle.
>
> From the above results, when the bonding driver is used (#2), the CPUs
> are completely maxed out, compared to the case when traffic is run
> simultaneously on both cards without bonding (#1).
> Can anybody suggest some reasons for this behavior?
>

Ravi;

Have you tried this patch?

http://marc.theaimsgroup.com/?l=linux-netdev&m=111091146828779&w=2

If not, it will likely go a long way to solving your problem.

--
Arthur
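
[For readers reproducing the setup above: a bond of eth0 and eth1 in LACP
(802.3ad) mode on a 2.6-era kernel is typically configured along these
lines. This is a hedged sketch only; the module options, interface names,
and IP address below are illustrative assumptions, not details taken from
the original report. Requires root and a bonding-capable switch.]

```shell
# Load the bonding driver in 802.3ad (LACP) mode; miimon=100 polls link
# state every 100 ms. (Illustrative values, not from the report.)
modprobe bonding mode=802.3ad miimon=100

# Bring up the bond interface with an example address (assumption), then
# enslave the two physical NICs mentioned in the thread.
ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1

# Current bonding state (mode, LACP partner info, slave status) can be
# inspected via procfs:
cat /proc/net/bonding/bond0
```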