From: Breno Leitao <leitao@linux.vnet.ibm.com>
To: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>, rick.jones2@hp.com
Cc: netdev@vger.kernel.org
Subject: RE: e1000 performance issue in 4 simultaneous links
Date: Fri, 11 Jan 2008 14:20:44 -0200	[thread overview]
Message-ID: <1200068444.9349.20.camel@cafe> (raw)
In-Reply-To: <36D9DB17C6DE9E40B059440DB8D95F5204275B04@orsmsx418.amr.corp.intel.com>

On Thu, 2008-01-10 at 12:52 -0800, Brandeburg, Jesse wrote:
> Breno Leitao wrote:
> > When I run netperf in just one interface, I get 940.95 * 10^6 bits/sec
> > of transfer rate. If I run 4 netperf against 4 different interfaces, I
> > get around 720 * 10^6 bits/sec.
> 
> I hope this explanation makes sense, but what it comes down to is that
> combining hardware round robin balancing with NAPI is a BAD IDEA.  In
> general the behavior of hardware round robin balancing is bad and I'm
> sure it is causing all sorts of other performance issues that you may
> not even be aware of.
I've run another test: with the ppc IRQ round-robin scheme disabled, I
bound each interface's IRQ (eth6, eth7, eth16 and eth17) to a different
CPU (CPU1, CPU2, CPU3 and CPU4), and I still get around 720 * 10^6
bits/s on average.
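
The binding itself was done with the usual /proc/irq/<N>/smp_affinity
mask writes; roughly this (hex CPU masks, IRQ numbers as in the table
below):

  # pin the eth6/eth7/eth16/eth17 IRQs to CPU1..CPU4
  echo 2  > /proc/irq/277/smp_affinity    # mask 0x2  = CPU1
  echo 4  > /proc/irq/278/smp_affinity    # mask 0x4  = CPU2
  echo 8  > /proc/irq/323/smp_affinity    # mask 0x8  = CPU3
  echo 10 > /proc/irq/324/smp_affinity    # mask 0x10 = CPU4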

Take a look at the interrupt table this time: 

io-dolphins:~/leitao # cat /proc/interrupts  | grep eth[1]*[67]
277:         15    1362450         13         14         13         14         15         18   XICS      Level     eth6
278:         12         13    1348681         19         13         15         10         11   XICS      Level     eth7
323:         11         18         17    1348426         18         11         11         13   XICS      Level     eth16
324:         12         16         11         19    1402709         13         14         11   XICS      Level     eth17


I also tried binding all four interface IRQs to a single CPU (CPU0)
using the noirqdistrib boot parameter, and the performance was a little
worse.

Rick, 
  The two-interface test that I showed in my first email was run on two
different NICs. I am running netperf with the command
"netperf -H <hostname> -T 0,8", while netserver runs with no arguments
at all. Running vmstat in parallel shows that the CPU is not the
bottleneck. Take a look:

procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 6714732  16168 227440    0    0     8     2  203   21  0  1 98  0  0
 0  0      0 6715120  16176 227440    0    0     0    28 16234  505  0 16 83  0  1
 0  0      0 6715516  16176 227440    0    0     0     0 16251  518  0 16 83  0  1
 1  0      0 6715252  16176 227440    0    0     0     1 16316  497  0 15 84  0  1
 0  0      0 6716092  16176 227440    0    0     0     0 16300  520  0 16 83  0  1
 0  0      0 6716320  16180 227440    0    0     0     1 16354  486  0 15 84  0  1
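
For reference, the four-link case is just four of those netperf
invocations run in parallel, one per interface; a rough sketch (peer1
through peer4 are placeholder hostnames for the hosts behind eth6,
eth7, eth16 and eth17, and the -T CPU pairs and 60s duration are only
illustrative):

  for i in 1 2 3 4; do
      netperf -H peer$i -T $i,8 -l 60 &   # local netperf pinned to CPU $i
  done
  wait                                    # wait for the four throughput reports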
 

Thanks!

-- 
Breno Leitao <leitao@linux.vnet.ibm.com>

