public inbox for linux-kernel@vger.kernel.org
* How to optimize routing performance
@ 2001-03-15  7:23 Mårten Wikström
  2001-03-15 12:32 ` Rik van Riel
  2001-03-15 18:09 ` J Sloan
  0 siblings, 2 replies; 21+ messages in thread
From: Mårten Wikström @ 2001-03-15  7:23 UTC (permalink / raw)
  To: 'linux-kernel@vger.kernel.org'

I've performed a test of the routing capacity of a Linux 2.4.2 box versus a
FreeBSD 4.2 box. I used two Pentium Pro 200 MHz computers with 64 MB memory
and two DEC 100 Mbit ethernet cards. I used a Smartbits test tool to measure
the packet throughput, and the packet size was set to 64 bytes. Linux dropped
no packets up to about 27000 packets/s, but then it started to drop packets
at higher rates. Worse yet, the output rate actually decreased, so at an
input rate of 40000 packets/s almost no packets got through. The behaviour
of FreeBSD was different: it showed a steadily increasing output rate up to
about 70000 packets/s before the output rate decreased. (At that point the
output rate was approx. 40000 packets/s.)
I have not made any special optimizations, aside from not having any
background processes running.

So, my question is: are these figures accurate, or is it possible to optimize
the kernel somehow? The only change I have made to the kernel config was to
disable advanced routing.

Thanks,

Mårten


^ permalink raw reply	[flat|nested] 21+ messages in thread
* Re: How to optimize routing performance
@ 2001-03-15 14:19 Robert Olsson
  2001-03-16  0:38 ` Rik van Riel
  0 siblings, 1 reply; 21+ messages in thread
From: Robert Olsson @ 2001-03-15 14:19 UTC (permalink / raw)
  To: Mårten_Wikström
  Cc: Rik van Riel, 'linux-kernel@vger.kernel.org', netdev


Rik van Riel writes:
 > On Thu, 15 Mar 2001, Mårten Wikström wrote:
 > 
 > > I've performed a test on the routing capacity of a Linux 2.4.2 box
 > > versus a FreeBSD 4.2 box. I used two Pentium Pro 200Mhz computers with
 > > 64Mb memory, and two DEC 100Mbit ethernet cards. I used a Smartbits
 > > test-tool to measure the packet throughput and the packet size was set
 > > to 64 bytes. Linux dropped no packets up to about 27000 packets/s, but
 > > then it started to drop packets at higher rates. Worse yet, the output
 > > rate actually decreased, so at the input rate of 40000 packets/s
 
 It is a known problem, yes. And just as Rik says, it was first addressed
 in 2.1.x by Alexey.


 > > almost no packets got through. The behaviour of FreeBSD was different,
 > > it showed a steadily increased output rate up to about 70000 packets/s
 > > before the output rate decreased. (Then the output rate was apprx.
 > > 40000 packets/s).
 > 
 > > So, my question is: are these figures true, or is it possible to
 > > optimize the kernel somehow? The only changes I have made to the
 > > kernel config was to disable advanced routing.
 > 
 > There are some flow control options in the kernel which should
 > help. From your description, it looks like they aren't enabled
 > by default ...

 CONFIG_NET_HW_FLOWCONTROL enables the kernel code for it, but device
 drivers have to support it as well. Unfortunately, very few drivers
 do.

 Also, we have done experiments where we move the device RX processing to
 SoftIRQ rather than IRQ. With this, RX is in better balance with
 other kernel tasks and TX. Under very high load and under DoS
 attacks the system is now manageable. It's in practical use already.


 > At the NordU/USENIX conference in Stockholm (this february) I
 > saw a nice presentation on the flow control code in the Linux
 > networking code and how it improved networking performance.
 > I'm pretty convinced that flow control _should_ be saving your
 > system in this case.

 Thanks Rik. 

 This is work/experiments by Jamal and me, with support from Gurus. :-)
 Jamal gave this presentation at OLS 2000. At NordU/USENIX I gave an
 updated presentation of it. The presentation is not yet available from
 the USENIX web site, I think.

 It can be fetched via FTP from robur.slu.se:
 /pub/Linux/tmp/FF-NordUSENIX.pdf or .ps

 In summary, Linux is a very decent router: wire speed with small packets
 @ 100 Mbps, and capable of Gigabit routing (1440 pkts tested).
 
 Also, if people are interested, we have done profiling on a Linux
 production router with full BGP at a pretty loaded site. This gives us
 the costs for route lookup, skb malloc/free, interrupts, etc.

 http://robur.slu.se/Linux/net-development/experiments/010313

 I'm on netdev but not the kernel list.

 Cheers.

						--ro

* RE: How to optimize routing performance
@ 2001-03-15 20:16 Jonathan Earle
  0 siblings, 0 replies; 21+ messages in thread
From: Jonathan Earle @ 2001-03-15 20:16 UTC (permalink / raw)
  To: 'Rik van Riel', 'Linux Kernel List'



> > Or are you saying that the bottleneck is somewhere
> > else completely,
> 
> Indeed. The bottleneck is with processing the incoming network
> packets, at the interrupt level.

Where is the counter for these dropped packets?  If we run a few Mbit of
traffic through the box, we see noticeable percentages of lost packets (via
stats from the Ixia traffic generator).  But where in Linux are these counts
stored?  ifconfig does not appear to have the number.

Cheers!
Jon

[parent not found: <3AB12640.79E7B4FB@colorfullife.com>]
* RE: How to optimize routing performance
@ 2001-03-16  7:21 Mårten Wikström
  2001-03-16  8:08 ` Martin Josefsson
  0 siblings, 1 reply; 21+ messages in thread
From: Mårten Wikström @ 2001-03-16  7:21 UTC (permalink / raw)
  To: 'Martin Josefsson'
  Cc: Rik van Riel, 'linux-kernel@vger.kernel.org', netdev



> 
> You want to have CONFIG_NET_HW_FLOWCONTROL enabled. If you don't, the
> kernel gets a _lot_ of interrupts from the NIC and doesn't have
> any cycles
> left to do anything. So you want to turn this on!
> 
> > At the NordU/USENIX conference in Stockholm (this february) I
> > saw a nice presentation on the flow control code in the Linux
> > networking code and how it improved networking performance.
> > I'm pretty convinced that flow control _should_ be saving your
> > system in this case.
> 
> That was probably Jamal Hadi and Robert Olsson. They have
> been optimizing
> the tulip driver. These optimizations haven't been integrated with the
> "vanilla" driver yet, but I hope they can integrate them soon.
> 
> They have one version that is very optimized, and then they have one
> version that has even more optimizations, i.e. it uses polling at high
> interrupt load.
> 
> you will find these drivers here:
> ftp://robur.slu.se/pub/Linux/net-development/
> The latest versions are:
> tulip-ss010111.tar.gz
> and
> tulip-ss010116-poll.tar.gz
> 
> > OTOH, if they _are_ enabled, the networking people seem to have
> > a new item for their TODO list. ;)
> 
> Yup.
> 
> You can take a look here too:
> 
> http://robur.slu.se/Linux/net-development/jamal/FF-html/
> 
> This is the presentation they gave at OLS (IIRC)
> 
> And this is the final result:
> 
> http://robur.slu.se/Linux/net-development/jamal/FF-html/img26.htm
> 
> As you can see the throughput is a _lot_ higher with this driver.
> 
> One final note: The makefile in at least
> tulip-ss010111.tar.gz is in the
> old format (not the new one that 2.4.0-testX introduced), but you
> can copy the
> makefile from the "vanilla" driver and it'll work like a charm.
> 
> Please redo your tests with this driver and report the 
> results to me and
> this list. I really want to know how it compares against FreeBSD.
> 
> /Martin

Thanks! I'll try that out. How can I tell if the driver supports
CONFIG_NET_HW_FLOWCONTROL? I'm not sure, but I think the cards are
tulip-based; can I then use Robert & Jamal's optimised drivers?
It'll probably take some time before I can do further testing. (My employer
thinks I've spent too much time on it already...)

FYI, Linux had _much_ better delay variation characteristics than FreeBSD.
Typically no packet was delayed more than 100 usec, whereas FreeBSD had some
packets delayed by about 2-3 msec.

/Mårten


end of thread, other threads:[~2001-03-16  8:09 UTC | newest]

Thread overview: 21+ messages
-- links below jump to the message on this page --
2001-03-15  7:23 How to optimize routing performance Mårten Wikström
2001-03-15 12:32 ` Rik van Riel
2001-03-15 16:20   ` Martin Josefsson
2001-03-15 18:09 ` J Sloan
2001-03-16  2:05   ` Rik van Riel
2001-03-15 19:17     ` J Sloan
2001-03-15 19:36       ` Gregory Maxwell
2001-03-15 19:45         ` J Sloan
2001-03-15 19:44       ` Mike Kravetz
2001-03-16  2:35       ` Rik van Riel
2001-03-15 19:28         ` J Sloan
  -- strict thread matches above, loose matches on Subject: below --
2001-03-15 14:19 Robert Olsson
2001-03-16  0:38 ` Rik van Riel
2001-03-15 18:45   ` Robert Olsson
2001-03-15 19:30   ` Jonathan Morton
2001-03-15 19:54     ` Robert Olsson
2001-03-15 21:01       ` jamal
2001-03-15 20:16 Jonathan Earle
     [not found] <3AB12640.79E7B4FB@colorfullife.com>
2001-03-15 21:39 ` Robert Olsson
2001-03-16  7:21 Mårten Wikström
2001-03-16  8:08 ` Martin Josefsson
