From: Stephen Hemminger
Subject: Re: under-performing bonded interfaces
Date: Wed, 21 Dec 2011 17:36:08 -0800
To: Simon Chen
Cc: Ben Hutchings, Ben Greear, netdev@vger.kernel.org

On Wed, 21 Dec 2011 20:26:04 -0500 Simon Chen wrote:

> Hi folks,
>
> I added an Intel X520 card to both the sender and receiver... Now I
> have two 10G ports on a PCIe 2.0 x8 slot (5Gx8), so the bandwidth of
> the PCI bus shouldn't be the bottleneck.
>
> Now the throughput test gives me around 16Gbps in aggregate. Any ideas
> how I can push closer to 20G? I don't quite understand where the
> bottleneck is now.

In my experience, Intel dual-port cards cannot run at full speed when
both ports are in use. You need separate slots to hit full line rate.
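
As a back-of-the-envelope check of the PCIe arithmetic Simon refers to
(a sketch only; the ~80% protocol-efficiency figure is an assumed
typical value, not something measured on his system):

    # PCIe 2.0 x8 usable bandwidth vs. 2 x 10G line rate
    lanes = 8
    gt_per_lane = 5.0        # PCIe 2.0: 5 GT/s per lane
    encoding = 8.0 / 10.0    # 8b/10b line encoding
    protocol_eff = 0.80      # assumed TLP/DLLP overhead factor

    raw_gbps = lanes * gt_per_lane * encoding   # 32 Gbit/s per direction
    usable_gbps = raw_gbps * protocol_eff       # ~25.6 Gbit/s
    print(f"usable ~{usable_gbps:.1f} Gbit/s vs. 20 Gbit/s line rate")

That leaves headroom above 20 Gbit/s, which is consistent with Simon's
point that the PCIe slot itself shouldn't be the bottleneck.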