netdev.vger.kernel.org archive mirror
From: Simon Chen <simonchennj@gmail.com>
To: Ben Hutchings <bhutchings@solarflare.com>
Cc: Ben Greear <greearb@candelatech.com>, netdev@vger.kernel.org
Subject: Re: under-performing bonded interfaces
Date: Wed, 21 Dec 2011 21:28:45 -0500	[thread overview]
Message-ID: <CANj2Ebd_6YrPV333mAk2QCE2Mu05rtM_EcHr5d9bKnNaHxHoLQ@mail.gmail.com> (raw)
In-Reply-To: <CANj2Eben0hrP6KwxyA1WPqiqzm84w=J2_sdtrKtGvxdftuksqg@mail.gmail.com>

Just a bit more info...
I have 24 cores, and the interrupts for each 10G NIC are distributed
across all 24 cores.
I am using the most recent 3.7.17 ixgbe driver from Intel, with the
layer3+4 xmit policy.
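
For anyone wondering why a single stream tops out well below the bond's
aggregate, here is a rough sketch of how the layer3+4 policy pins a flow
to one slave. It is modeled on bond_xmit_hash_policy_l34() from kernels
of that era, simplified to IPv4/TCP; the addresses and ports below are
made up for illustration.

```python
# Simplified sketch of the bonding driver's layer3+4 transmit hash,
# modeled on bond_xmit_hash_policy_l34() (illustrative, not the exact
# kernel source). IPv4/TCP only; fragments and corner cases ignored.
import ipaddress

def l34_hash(src_ip: str, dst_ip: str, sport: int, dport: int,
             n_slaves: int) -> int:
    """Return the slave index a given flow is pinned to."""
    saddr = int(ipaddress.ip_address(src_ip))
    daddr = int(ipaddress.ip_address(dst_ip))
    layer4_xor = sport ^ dport
    return (layer4_xor ^ ((saddr ^ daddr) & 0xFFFF)) % n_slaves

# The hash is per flow: one TCP connection always lands on one slave
# and can never exceed a single NIC's 10G. Pushing toward 20G in
# aggregate needs many flows with differing ports or addresses.
print(l34_hash("10.0.0.1", "10.0.0.2", 40000, 5001, 2))  # one slave...
print(l34_hash("10.0.0.1", "10.0.0.2", 40001, 5001, 2))  # ...the other
```

The practical upshot is that throughput tests need enough distinct
flows (e.g. many parallel iperf streams) for the hash to spread load
across both slaves.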

-Simon

On Wed, Dec 21, 2011 at 8:26 PM, Simon Chen <simonchennj@gmail.com> wrote:
> Hi folks,
>
> I added an Intel X520 card to both the sender and receiver... Now I
> have two 10G ports on a PCIe 2.0 x8 slot (5 GT/s x 8 lanes), so the
> bandwidth of the PCIe bus shouldn't be the bottleneck.
>
> Now the throughput test gives me around 16Gbps in aggregate. Any ideas
> how I can push closer to 20G? I don't quite understand where the
> bottleneck is now.
>
> Thanks.
> -Simon
>
> On Wed, Nov 16, 2011 at 9:51 PM, Ben Hutchings
> <bhutchings@solarflare.com> wrote:
>> On Wed, 2011-11-16 at 20:38 -0500, Simon Chen wrote:
>>> Thanks, Ben. That's good discovery...
>>>
>>> Are you saying that both 10G NICs are on the same PCIe x4 slot, so
>>> that they're subject to the 12G throughput bottleneck?
>>
>> I assumed you were using 2 ports on the same board, i.e. the same slot.
>> If you were using 1 port each of 2 boards then I would have expected
>> them both to be usable at full speed.  So far as I can remember, PCIe
>> bridges are usually set up so there isn't contention for bandwidth
>> between slots.
>>
>> Ben.
>>
>> --
>> Ben Hutchings, Staff Engineer, Solarflare
>> Not speaking for my employer; that's the marketing department's job.
>> They asked us to note that Solarflare product names are trademarked.
>>
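
To sanity-check the PCIe budget discussed above: the per-lane rate and
8b/10b coding below are standard PCIe 2.0 figures, but the ~20%
protocol-overhead figure is only a rough assumption, not a measurement.

```python
# Back-of-envelope PCIe 2.0 x8 bandwidth check.
# 5 GT/s per lane and 8b/10b line coding come from the PCIe 2.0 spec;
# the 20% TLP/DLLP protocol overhead is a rough assumption.
lane_gtps = 5.0          # PCIe 2.0 raw signaling rate per lane
encoding_eff = 8 / 10    # 8b/10b line coding: 8 data bits per 10 on the wire
lanes = 8

usable_gbps = lane_gtps * lanes * encoding_eff    # 32 Gb/s per direction
effective_gbps = usable_gbps * (1 - 0.20)         # after assumed overhead

print(usable_gbps, effective_gbps)  # usable line rate and rough estimate
```

By this estimate roughly 25 Gb/s per direction is available, so two 10G
ports on one x8 card do fit, consistent with the reasoning above that
the slot itself shouldn't be the bottleneck.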


Thread overview: 15+ messages
2011-11-16 23:44 under-performing bonded interfaces Simon Chen
2011-11-17  0:01 ` Ben Greear
2011-11-17  0:05   ` Simon Chen
2011-11-17  0:07     ` Ben Greear
2011-11-17  0:57     ` Ben Hutchings
2011-11-17  1:38       ` Simon Chen
2011-11-17  1:45         ` Simon Chen
2011-11-17  1:45         ` Rick Jones
2011-11-17  2:51         ` Ben Hutchings
2011-12-22  1:26           ` Simon Chen
2011-12-22  1:36             ` Stephen Hemminger
2011-12-22  3:31               ` Ben Greear
2011-12-22  2:28             ` Simon Chen [this message]
2011-12-22  5:43             ` Eric Dumazet
2011-12-23 15:03               ` Simon Chen
