public inbox for linux-kernel@vger.kernel.org
* TCP Segmentation Offloading (TSO)
@ 2002-09-02 17:45 Feldman, Scott
  2002-09-02 18:58 ` kuznet
                   ` (3 more replies)
  0 siblings, 4 replies; 39+ messages in thread
From: Feldman, Scott @ 2002-09-02 17:45 UTC (permalink / raw)
  To: linux-kernel, linux-net, 'Dave Hansen',
	'Manand@us.ibm.com'
  Cc: kuznet, 'David S. Miller', Leech, Christopher

TCP Segmentation Offloading (TSO) is enabled[1] in 2.5.33, along with a
TSO-capable e1000 driver.  Other capable devices can be enabled the same
way as e1000; the driver interface (NETIF_F_TSO) is very simple.

So, fire up your favorite networking performance tool and compare the
performance between 2.5.32 and 2.5.33 using e1000.  I ran a quick test
on a dual-P4 workstation using the commercial tool Chariot:

Tx/Rx TCP file send long (bi-directional Rx/Tx)
  w/o TSO: 1500Mbps, 82% CPU
  w/  TSO: 1633Mbps, 75% CPU

Tx TCP file send long (Tx only)
  w/o TSO: 940Mbps, 40% CPU
  w/  TSO: 940Mbps, 19% CPU

A good bump in throughput for the bi-directional test.  The Tx-only test was
already at wire speed, so the gains are pure CPU savings.

I'd like to see SPECweb results w/ and w/o TSO, and any other relevant
testing.  UDP fragmentation is not offloaded, so keep testing to TCP.

-scott

[1] Kudos to Alexey Kuznetsov for enabling the stack with TSO support, to
Chris Leech for providing the e1000 bits and a prototype stack, and to David
Miller for consultation.

^ permalink raw reply	[flat|nested] 39+ messages in thread
[parent not found: <288F9BF66CD9D5118DF400508B68C4460283E564@orsmsx113.jf.intel.com.suse.lists.linux.kernel>]
* RE: TCP Segmentation Offloading (TSO)
@ 2002-09-03 17:50 Feldman, Scott
  0 siblings, 0 replies; 39+ messages in thread
From: Feldman, Scott @ 2002-09-03 17:50 UTC (permalink / raw)
  To: 'Jordi Ros', David S. Miller
  Cc: Feldman, Scott, linux-kernel, linux-net, haveblue, Manand, kuznet,
	Leech, Christopher

Jordi Ros wrote:

> What i am wondering is how come we only get a few percentage 
> improvement in throughput. Theoretically, since 64KB/1.5KB ~= 
> 40, we should get a throughput improvement of 40 times. 

You're confusing number of packets with throughput.  Tap the wire, and you
can't tell the difference with or without TSO: it's the same amount of data
on the wire.  As David pointed out, the savings come from how much data is
DMA'ed across the bus and how much segmentation work is lifted off the CPU.
A 64K TSO send would be one pseudo header and the rest payload.  Without
TSO you would add ~40 more headers.  That's the savings across the bus.
 
> Is there any other bottleneck in the system that prevents 
> us to see the 300% improvement? (i am assuming the card can 
> do tso at wire speed)

My numbers are against 64-bit/66MHz PCI, so that's a limiting factor.
You're not going to get much more than 940Mbps unidirectional at 1GbE.
That's why all of the savings in the unidirectional Tx test show up as
CPU reduction.

-scott

^ permalink raw reply	[flat|nested] 39+ messages in thread
* Re: TCP Segmentation Offloading (TSO)
@ 2002-09-03 18:09 Manfred Spraul
  2002-09-03 23:08 ` Hirokazu Takahashi
  0 siblings, 1 reply; 39+ messages in thread
From: Manfred Spraul @ 2002-09-03 18:09 UTC (permalink / raw)
  To: Hirokazu Takahashi, linux-kernel


Hirokazu Takahashi wrote:
> P.S.
>     Using "bswap" is little bit tricky.
> 

bswap was added with the 80486; the 80386 doesn't have that instruction, 
and perhaps it's missing in some embedded-system CPUs, too.  Is it 
possible to avoid it?

--
	Manfred



^ permalink raw reply	[flat|nested] 39+ messages in thread

end of thread, other threads:[~2002-09-08  4:25 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2002-09-02 17:45 TCP Segmentation Offloading (TSO) Feldman, Scott
2002-09-02 18:58 ` kuznet
2002-09-03  7:42   ` Hirokazu Takahashi
2002-09-03  7:51     ` David S. Miller
2002-09-03 11:27       ` Paul Mackerras
2002-09-03 11:29         ` David S. Miller
2002-09-03 12:21     ` kuznet
2002-09-03 13:03       ` Hirokazu Takahashi
2002-09-03 13:19         ` Hirokazu Takahashi
2002-09-03 13:22         ` kuznet
2002-09-03 21:05           ` David S. Miller
2002-09-03 21:20             ` David S. Miller
2002-09-04  1:02     ` H. Peter Anvin
2002-09-04  1:54       ` David S. Miller
2002-09-04 22:39       ` Gabriel Paubert
2002-09-04 22:41         ` H. Peter Anvin
2002-09-05  2:13           ` Hirokazu Takahashi
2002-09-05  2:21             ` David S. Miller
2002-09-05 10:28             ` Gabriel Paubert
2002-09-05 11:17               ` Jamie Lokier
2002-09-05 13:21                 ` Gabriel Paubert
2002-09-05 13:17                   ` David S. Miller
2002-09-08  4:20                   ` Hirokazu Takahashi
2002-09-08  4:29                     ` H. Peter Anvin
2002-09-04 23:17         ` Alan Cox
2002-09-05  0:09           ` Jamie Lokier
2002-09-02 19:06 ` Jeff Garzik
2002-09-02 23:13 ` David S. Miller
2002-09-03  4:58 ` Jordi Ros
2002-09-03  6:52   ` David S. Miller
2002-09-03  7:26     ` Jordi Ros
2002-09-03  7:39       ` David S. Miller
     [not found] <288F9BF66CD9D5118DF400508B68C4460283E564@orsmsx113.jf.intel.com.suse.lists.linux.kernel>
     [not found] ` <200209021858.WAA00388@sex.inr.ac.ru.suse.lists.linux.kernel>
     [not found]   ` <20020903.164243.21934772.taka@valinux.co.jp.suse.lists.linux.kernel>
     [not found]     ` <20020903.005119.50342945.davem@redhat.com.suse.lists.linux.kernel>
2002-09-03  9:05       ` Andi Kleen
2002-09-03 10:00         ` David S. Miller
2002-09-03 10:10           ` Andi Kleen
2002-09-03 10:09             ` David S. Miller
  -- strict thread matches above, loose matches on Subject: below --
2002-09-03 17:50 Feldman, Scott
2002-09-03 18:09 Manfred Spraul
2002-09-03 23:08 ` Hirokazu Takahashi
