From: "Kok, Auke" <auke-jan.h.kok@intel.com>
To: Bruce Allen <ballen@gravity.phys.uwm.edu>
Cc: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>,
	netdev@vger.kernel.org,
	Carsten Aulbert <carsten.aulbert@aei.mpg.de>,
	Henning Fehrmann <henning.fehrmann@aei.mpg.de>,
	Bruce Allen <bruce.allen@aei.mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed
Date: Thu, 31 Jan 2008 10:08:30 -0800
Message-ID: <47A20E9E.7070503@intel.com>
In-Reply-To: <Pine.LNX.4.63.0801310213010.3240@trinity.phys.uwm.edu>

Bruce Allen wrote:
> Hi Jesse,
> 
>>> It's good to be talking directly to one of the e1000 developers and
>>> maintainers.  Although at this point I am starting to think that the
>>> issue may be TCP stack related and nothing to do with the NIC.  Am I
>>> correct that these are quite distinct parts of the kernel?
>>
>> Yes, quite.
> 
> OK.  I hope that there is also someone knowledgeable about the TCP stack
> who is following this thread. (Perhaps you also know this part of the
> kernel, but I am assuming that your expertise is on the e1000/NIC bits.)
> 
>>> Important note: we ARE able to get full duplex wire speed (over 900
>>> Mb/s simultaneously in both directions) using UDP.  The problems occur
>>> only with TCP connections.
>>
>> That eliminates bus bandwidth issues, probably, but small packets take
>> up a lot of extra descriptors, bus bandwidth, CPU, and cache resources.
> 
> I see.  Your concern is the extra ACK packets associated with TCP.  Even
> though these represent a small volume of data (around 5% with MTU=1500,
> and less at larger MTU) they double the number of packets that must be
> handled by the system compared to UDP transmission at the same data
> rate. Is that correct?
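
As a back-of-envelope check of that ~5% figure (illustrative only, assuming
standard Ethernet framing and one pure ACK per full-sized data segment; with
delayed ACKs the byte overhead roughly halves, but the packet count still
grows by ~1.5x):

  # Rough check of the ACK byte overhead at MTU 1500 (illustrative).
  mtu = 1500
  data_frame = mtu + 18 + 20     # payload + Ethernet header/FCS + preamble/IFG = 1538
  ack_frame = 64 + 20            # minimum Ethernet frame + preamble/IFG = 84
  print(ack_frame / data_frame)  # ~0.055, i.e. around 5% extra bytes on the wire
  # ...but in packet terms the host now handles twice as many frames.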


A lot of people tend to forget that while the PCI Express bus appears to have
enough bandwidth at first glance - 2.5 Gbit/sec for 1 Gbit of traffic - there is
significant overhead on top of the data going over it: each packet requires
transmit, cleanup and buffer transactions, and there are many IRQ register clears
per second (slow ioreads/iowrites). The transactions double for TCP ACK
processing, and this all accumulates and starts to introduce latency, higher CPU
utilization, etc...
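
As a rough sketch of how that adds up (illustrative numbers, not measured e1000
figures), at 1 Gbit/s in each direction with MTU-1500 frames and one ACK per
segment the host is handling on the order of 300k+ packets per second:

  # Illustrative packet-rate estimate at 1 Gbit/s full duplex.
  frame_bytes = 1538                    # full MTU-1500 frame incl. preamble/IFG
  pkts_per_dir = 1e9 / 8 / frame_bytes  # ~81,000 full-sized frames/sec each way
  acks_per_dir = pkts_per_dir           # worst case: one pure ACK per segment
  print(int(2 * (pkts_per_dir + acks_per_dir)))   # ~325,000 packets/second
  # Each of these needs descriptor and buffer DMA transactions, plus MMIO
  # register reads/writes for interrupt handling on top.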

Auke

Thread overview: 47+ messages
2008-01-30 12:23 e1000 full-duplex TCP performance well below wire speed Bruce Allen
2008-01-30 17:36 ` Brandeburg, Jesse
2008-01-30 18:45   ` Rick Jones
2008-01-30 23:15     ` Bruce Allen
2008-01-31 11:35     ` Carsten Aulbert
2008-01-31 17:55       ` Rick Jones
2008-02-01 19:57         ` Carsten Aulbert
2008-01-30 23:07   ` Bruce Allen
2008-01-31  5:43     ` Brandeburg, Jesse
2008-01-31  8:31       ` Bruce Allen
2008-01-31 18:08         ` Kok, Auke [this message]
2008-01-31 18:38           ` Rick Jones
2008-01-31 18:47             ` Kok, Auke
2008-01-31 19:07               ` Rick Jones
2008-01-31 19:13           ` Bruce Allen
2008-01-31 19:32             ` Kok, Auke
2008-01-31 19:48               ` Bruce Allen
2008-02-01  6:27                 ` Bill Fink
2008-02-01  7:54                   ` Bruce Allen
2008-01-31 15:12       ` Carsten Aulbert
2008-01-31 17:20         ` Brandeburg, Jesse
2008-01-31 17:27           ` Carsten Aulbert
2008-01-31 17:33             ` Brandeburg, Jesse
2008-01-31 18:11             ` running aggregate netperf TCP_RR " Rick Jones
2008-01-31 18:03         ` Rick Jones
2008-01-31 15:18       ` Carsten Aulbert
2008-01-31  9:17     ` Andi Kleen
2008-01-31  9:59       ` Bruce Allen
2008-01-31 16:09       ` Carsten Aulbert
2008-01-31 18:15         ` Kok, Auke
2008-01-30 19:17 ` Ben Greear
2008-01-30 22:33   ` Bruce Allen
     [not found] <Pine.LNX.4.63.0801300324000.6391@trinity.phys.uwm.edu>
2008-01-30 13:53 ` David Miller
2008-01-30 14:01   ` Bruce Allen
2008-01-30 16:21     ` Stephen Hemminger
2008-01-30 22:25       ` Bruce Allen
2008-01-30 22:33         ` Stephen Hemminger
2008-01-30 23:23           ` Bruce Allen
2008-01-31  0:17         ` SANGTAE HA
2008-01-31  8:52           ` Bruce Allen
2008-01-31 11:45           ` Bill Fink
2008-01-31 14:50             ` David Acker
2008-01-31 15:57               ` Bruce Allen
2008-01-31 15:54             ` Bruce Allen
2008-01-31 17:36               ` Bill Fink
2008-01-31 19:37                 ` Bruce Allen
2008-01-31 18:26             ` Brandeburg, Jesse
