netdev.vger.kernel.org archive mirror
* e1000 full-duplex TCP performance well below wire speed
@ 2008-01-30 12:23 Bruce Allen
  2008-01-30 17:36 ` Brandeburg, Jesse
  2008-01-30 19:17 ` Ben Greear
  0 siblings, 2 replies; 32+ messages in thread
From: Bruce Allen @ 2008-01-30 12:23 UTC (permalink / raw)
  To: netdev; +Cc: Carsten Aulbert, Henning Fehrmann, Bruce Allen

[-- Attachment #1: Type: TEXT/PLAIN, Size: 1769 bytes --]

(Pádraig Brady has suggested that I post this to Netdev.  It was 
originally posted to LKML here: http://lkml.org/lkml/2008/1/30/141 )


Dear NetDev,

We've connected a pair of modern high-performance boxes with integrated copper 
Gb/s Intel NICs, with an ethernet crossover cable, and have run some netperf 
full duplex TCP tests.  The transfer rates are well below wire speed.  We're 
reporting this as a kernel bug, because we expect a vanilla kernel with default 
settings to give wire speed (or close to wire speed) performance in this case. 
We DO see wire speed in simplex transfers. The behavior has been verified on 
multiple machines with identical hardware.
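For a rough sense of what "close to wire speed" means here, the theoretical
maximum TCP goodput over gigabit ethernet can be computed from per-frame
overheads (this is a back-of-envelope sketch assuming IPv4 and TCP with no
options):

```shell
# Per frame on the wire: preamble 8 + ethernet header 14 + FCS 4 +
# interframe gap 12 = 38 bytes of overhead, plus 40 bytes of IPv4+TCP
# headers inside the payload.
awk 'BEGIN {
  for (mtu = 1500; mtu <= 9000; mtu += 7500) {
    mss  = mtu - 40      # TCP payload per segment
    wire = mtu + 38      # bytes actually occupying the wire per frame
    printf "MTU %4d: max goodput %.0f Mb/s\n", mtu, 1000 * mss / wire
  }
}'
# MTU 1500: max goodput 949 Mb/s
# MTU 9000: max goodput 991 Mb/s
```

So at MTU 1500 roughly 949 Mb/s of goodput per direction is the ceiling, and
the observed 0.6-0.8 Gb/s is well below it.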

Details:
Kernel version: 2.6.23.12
ethernet NIC: Intel 82573L
ethernet driver: e1000 version 7.3.20-k2
motherboard: Supermicro PDSML-LN2+ (one quad core Intel Xeon X3220, Intel 3000 
chipset, 8GB memory)

The test was done with various MTU sizes ranging from 1500 to 9000, with 
ethernet flow control switched on and off, and with reno and cubic as the TCP 
congestion control algorithms.
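For reference, the three parameters varied above can be set roughly as follows
(a sketch only: the interface name eth0 is an assumption, and pause-frame
support via ethtool depends on the driver):

```shell
#!/bin/sh
# Assumed interface name; substitute the NIC under test.
IFACE=eth0

# MTU: the tests ranged from 1500 up to 9000 (jumbo frames).
ip link set dev "$IFACE" mtu 9000

# Ethernet (802.3x pause-frame) flow control: switched on...
ethtool -A "$IFACE" autoneg off rx on tx on
# ...or off:
# ethtool -A "$IFACE" autoneg off rx off tx off

# TCP congestion control: reno or cubic.
sysctl -w net.ipv4.tcp_congestion_control=cubic
```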

The behavior depends on the setup. In one test we used cubic congestion 
control, flow control off. The transfer rate in one direction was above 0.9Gb/s 
while in the other direction it was 0.6 to 0.8 Gb/s. After 15-20s the rates 
flipped. Perhaps the two streams are fighting for resources. (The performance of 
a full duplex stream should be close to 1Gb/s in both directions.)  A graph of 
the transfer speed as a function of time is here: 
https://n0.aei.uni-hannover.de/networktest/node19-new20-noflow.jpg
Red shows transmit and green shows receive (please ignore the other plots).
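A full-duplex run of the kind described can be approximated with two
concurrent unidirectional netperf streams (sketch only: the peer hostname is a
placeholder, and netserver must already be running on the peer):

```shell
#!/bin/sh
# Assumed peer hostname running netserver on the default port.
PEER=node20

# Drive traffic both ways at once for 60 seconds:
netperf -H "$PEER" -t TCP_STREAM -l 60 &   # local -> peer
netperf -H "$PEER" -t TCP_MAERTS -l 60 &   # peer -> local
wait
```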

We're happy to do additional testing, if that would help, and very grateful for 
any advice!

Bruce Allen
Carsten Aulbert
Henning Fehrmann

^ permalink raw reply	[flat|nested] 32+ messages in thread

end of thread, other threads:[~2008-02-01 19:58 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-01-30 12:23 e1000 full-duplex TCP performance well below wire speed Bruce Allen
2008-01-30 17:36 ` Brandeburg, Jesse
2008-01-30 18:45   ` Rick Jones
2008-01-30 23:15     ` Bruce Allen
2008-01-31 11:35     ` Carsten Aulbert
2008-01-31 17:55       ` Rick Jones
2008-02-01 19:57         ` Carsten Aulbert
2008-01-30 23:07   ` Bruce Allen
2008-01-31  5:43     ` Brandeburg, Jesse
2008-01-31  8:31       ` Bruce Allen
2008-01-31 18:08         ` Kok, Auke
2008-01-31 18:38           ` Rick Jones
2008-01-31 18:47             ` Kok, Auke
2008-01-31 19:07               ` Rick Jones
2008-01-31 19:13           ` Bruce Allen
2008-01-31 19:32             ` Kok, Auke
2008-01-31 19:48               ` Bruce Allen
2008-02-01  6:27                 ` Bill Fink
2008-02-01  7:54                   ` Bruce Allen
2008-01-31 15:12       ` Carsten Aulbert
2008-01-31 17:20         ` Brandeburg, Jesse
2008-01-31 17:27           ` Carsten Aulbert
2008-01-31 17:33             ` Brandeburg, Jesse
2008-01-31 18:11             ` running aggregate netperf TCP_RR " Rick Jones
2008-01-31 18:03         ` Rick Jones
2008-01-31 15:18       ` Carsten Aulbert
2008-01-31  9:17     ` Andi Kleen
2008-01-31  9:59       ` Bruce Allen
2008-01-31 16:09       ` Carsten Aulbert
2008-01-31 18:15         ` Kok, Auke
2008-01-30 19:17 ` Ben Greear
2008-01-30 22:33   ` Bruce Allen
