Subject: Achieved 10Gbit/s bidirectional routing
From: Jesper Dangaard Brouer @ 2009-07-15 16:50 UTC
  To: netdev@vger.kernel.org
  Cc: David S. Miller, Robert Olsson, Waskiewicz Jr, Peter P,
	Ronciak, John, jesse.brandeburg, Stephen Hemminger,
	Linux Kernel Mailing List


I'm giving a talk at LinuxCon about 10Gbit/s routing on standard
hardware running Linux.

  http://linuxcon.linuxfoundation.org/meetings/1585
  https://events.linuxfoundation.org/lc09o17

I'm getting some really good 10Gbit/s bidirectional routing results
with Intel's latest 82599 chip.  (I got two pre-release engineering
samples directly from Intel, thanks Peter.)

The machine is a Core i7-920 on an ASUS P6T6 WS Revolution
motherboard, with the memory tuned according to the RAM's X.M.P.
settings (DDR3-1600MHz); notice this also increases the QPI speed to
6.4GT/s.
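
(To double check that the X.M.P. profile actually took effect,
without rebooting into the BIOS, something like the following works;
the exact output format varies by board and BIOS:)

  # memory speed as reported by the SMBIOS/DMI tables
  dmidecode --type memory | grep -i speed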

With big 1514-byte packets, I can basically do 10Gbit/s wirespeed
bidirectional routing.

Notice that bidirectional routing means we actually have to move
approx 40Gbit/s through memory and in and out of the interfaces
(10Gbit/s in plus 10Gbit/s out on each of the two ports).
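
(The routing setup itself is nothing exotic; a minimal sketch with
example addresses, not my actual production config:)

  # enable IPv4 forwarding on the router
  sysctl -w net.ipv4.ip_forward=1

  # example addressing, one /24 per 10GbE port
  ip addr add 10.10.1.1/24 dev eth31
  ip addr add 10.10.2.1/24 dev eth32

  # example routes towards the traffic generators on either side
  ip route add 10.20.1.0/24 via 10.10.1.2 dev eth31
  ip route add 10.20.2.0/24 via 10.10.2.2 dev eth32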

Formatted quick view using 'ifstat -b'

  eth31-in   eth31-out   eth32-in  eth32-out
    9.57  +    9.52  +     9.51 +     9.60  = 38.20 Gbit/s
    9.60  +    9.55  +     9.52 +     9.62  = 38.29 Gbit/s
    9.61  +    9.53  +     9.52 +     9.62  = 38.28 Gbit/s
    9.61  +    9.53  +     9.54 +     9.62  = 38.30 Gbit/s
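
(For reference, roughly the command behind the table; -b gives rates
in bits/sec instead of bytes/sec, and the interface selection and the
per-line sum are my own formatting:)

  # sample both 10GbE ports once per second, rates in kbit/s
  ifstat -b -i eth31,eth32 1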

[Adding an extra NIC]

Another observation is that I'm hitting some kind of bottleneck on the
PCI-express switch.  Adding an extra NIC in a PCIe slot connected to
the same PCIe switch does not scale beyond 40Gbit/s collective
throughput.
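
(To see which upstream switch/root port each NIC actually sits
behind, the lspci bus tree is handy; the bus address below is just an
example:)

  # show the PCIe bus topology
  lspci -tv

  # negotiated link width/speed for a given NIC
  lspci -vv -s 07:00.0 | grep -E 'LnkCap|LnkSta'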

But I happen to have a special motherboard, the ASUS P6T6 WS
Revolution, which has an additional PCIe switch chip, NVIDIA's NF200.

Connecting two dual-port 10GbE NICs via two different PCI-express
switch chips makes things scale again!  I have achieved a collective
throughput of 66.25 Gbit/s.  This result is also influenced by my
pktgen machines not being able to keep up, and by the fact that I'm
getting closer to the memory bandwidth limits.
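
(The traffic is generated with pktgen on separate machines; a minimal
sketch of that kind of setup, with the device name, destination IP
and MAC all being placeholders:)

  # on each generator box (not the router)
  modprobe pktgen
  echo "rem_device_all"  > /proc/net/pktgen/kpktgend_0
  echo "add_device eth0" > /proc/net/pktgen/kpktgend_0

  echo "count 0"       > /proc/net/pktgen/eth0   # 0 = run until stopped
  echo "clone_skb 64"  > /proc/net/pktgen/eth0   # reuse skbs, lower alloc cost
  echo "pkt_size 1514" > /proc/net/pktgen/eth0
  echo "dst 10.10.2.2" > /proc/net/pktgen/eth0
  echo "dst_mac 00:1b:21:00:00:01" > /proc/net/pktgen/eth0  # router's RX port

  echo "start" > /proc/net/pktgen/pgctrl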

FYI: I found a really good reference explaining the PCI-express
architecture, written by Intel:

 http://download.intel.com/design/intarch/papers/321071.pdf

I'm not sure how to explain the PCI-express chip bottleneck I'm
seeing, but my guess is that I'm limited by the number of outstanding
packets/DMA-transfers and the latency for the DMA operations.
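
(I don't know how to read the outstanding-transaction limits of these
chips directly from Linux; the closest thing easily visible is the
negotiated MaxPayload/MaxReadReq per device, which is not the same
thing, but worth checking anyway; bus address is again an example:)

  # MaxPayload/MaxReadReq show up under the DevCap/DevCtl lines
  lspci -vv -s 07:00.0 | grep -E 'MaxPayload|MaxReadReq'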

Does anyone have datasheets on the X58 and NVIDIA's NF200 PCI-express
chips that can tell me the number of outstanding transfers they
support?

-- 
Med venlig hilsen / Best regards
  Jesper Brouer
  ComX Networks A/S
  Linux Network developer
  Cand. Scient Datalog / MSc.
  Author of http://adsl-optimizer.dk
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview (7 messages):
2009-07-15 16:50 Achieved 10Gbit/s bidirectional routing Jesper Dangaard Brouer
2009-07-16  3:22 ` Bill Fink
2009-07-16  9:39   ` Jesper Dangaard Brouer
2009-07-16 15:38     ` Bill Fink
2009-07-17 20:35       ` Willy Tarreau
2009-07-17 23:38         ` Bill Fink
2009-07-18  7:14         ` Jesper Dangaard Brouer
