From: Rick Jones <rick.jones2@hp.com>
To: Carsten Aulbert <carsten.aulbert@aei.mpg.de>
Cc: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>,
	Bruce Allen <ballen@gravity.phys.uwm.edu>,
	netdev@vger.kernel.org,
	Henning Fehrmann <henning.fehrmann@aei.mpg.de>,
	Bruce Allen <bruce.allen@aei.mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed
Date: Thu, 31 Jan 2008 09:55:45 -0800	[thread overview]
Message-ID: <47A20BA1.8070206@hp.com> (raw)
In-Reply-To: <47A1B294.8080609@aei.mpg.de>

> netperf was used without any special tuning parameters. Usually we start 
> two processes on two hosts (almost) simultaneously; they last for 20-60 
> seconds and simply use UDP_STREAM (which works well) and TCP_STREAM, i.e.
> 
> on 192.168.0.202: netperf -H 192.168.2.203 -t TCP_STREAM -l 20
> on 192.168.0.203: netperf -H 192.168.2.202 -t TCP_STREAM -l 20
> 
> 192.168.0.20[23] here is on eth0, which cannot do jumbo frames, so we 
> use the 192.168.2.x addresses on eth1 for a range of MTUs.
> 
> The netperf server (netserver) is started on both nodes with 
> start-stop-daemon and no special parameters that I'm aware of.


So long as you are relying on external (netperf-relative) means to 
report the throughput, those command lines would be fine.  I wouldn't be 
comfortable relying on the sum of the netperf-reported throughputs with 
those command lines, though.  Netperf2 has no test synchronization, so 
two separate commands, particularly ones initiated on different systems, 
are subject to skew errors.  99 times out of ten the skew might be 
epsilon, but I get a _little_ paranoid there.

There are three alternatives:

1) Use netperf4.  It is not as convenient for "quick" testing at 
present, but it has explicit test synchronization, so you "know" that 
the numbers presented are from when all connections were actively 
transferring data.

2) Use the aforementioned "burst" TCP_RR test.  This is then a single 
netperf with data flowing both ways on a single connection, so there is 
no issue of skew, though perhaps an issue of being one connection, and 
so one process, on each end.  (A sample command line is sketched below.)

3) Start both tests from the same system and follow the suggestions 
contained in:

<http://www.netperf.org/svn/netperf2/tags/netperf-2.4.4/doc/netperf.html>

particularly:

<http://www.netperf.org/svn/netperf2/tags/netperf-2.4.4/doc/netperf.html#Using-Netperf-to-Measure-Aggregate-Performance>

and use a combination of TCP_STREAM and TCP_MAERTS (STREAM backwards) 
tests, as sketched below.
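
For (2), the general shape is something like the following (an untested 
sketch - it assumes netperf was configured with --enable-burst, and the 
burst size, request/response size and run length are just placeholders 
to be tuned to taste):

   netperf -t TCP_RR -H 192.168.2.203 -l 60 -- -b 6 -r 65536 -D

The test-specific -b keeps several transactions in flight at once, -r 
sets the request and response sizes, and -D sets TCP_NODELAY, so the one 
connection carries bulk data in both directions and netperf reports a 
single transaction rate covering both.

For (3), the rough shape would be to launch both directions from one 
host and let the shell wait for both (again only a sketch - the host and 
run length are placeholders, and longer runs help hide any skew from the 
not-quite-simultaneous starts):

   netperf -H 192.168.2.203 -t TCP_STREAM -l 60 &
   netperf -H 192.168.2.203 -t TCP_MAERTS -l 60 &
   wait

TCP_STREAM sends from the local system to 192.168.2.203 while TCP_MAERTS 
pulls data back in the other direction, so summing the two reported 
throughputs gives the aggregate for the two directions.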

happy benchmarking,

rick jones

Thread overview: 47+ messages
2008-01-30 12:23 e1000 full-duplex TCP performance well below wire speed Bruce Allen
2008-01-30 17:36 ` Brandeburg, Jesse
2008-01-30 18:45   ` Rick Jones
2008-01-30 23:15     ` Bruce Allen
2008-01-31 11:35     ` Carsten Aulbert
2008-01-31 17:55       ` Rick Jones [this message]
2008-02-01 19:57         ` Carsten Aulbert
2008-01-30 23:07   ` Bruce Allen
2008-01-31  5:43     ` Brandeburg, Jesse
2008-01-31  8:31       ` Bruce Allen
2008-01-31 18:08         ` Kok, Auke
2008-01-31 18:38           ` Rick Jones
2008-01-31 18:47             ` Kok, Auke
2008-01-31 19:07               ` Rick Jones
2008-01-31 19:13           ` Bruce Allen
2008-01-31 19:32             ` Kok, Auke
2008-01-31 19:48               ` Bruce Allen
2008-02-01  6:27                 ` Bill Fink
2008-02-01  7:54                   ` Bruce Allen
2008-01-31 15:12       ` Carsten Aulbert
2008-01-31 17:20         ` Brandeburg, Jesse
2008-01-31 17:27           ` Carsten Aulbert
2008-01-31 17:33             ` Brandeburg, Jesse
2008-01-31 18:11             ` running aggregate netperf TCP_RR " Rick Jones
2008-01-31 18:03         ` Rick Jones
2008-01-31 15:18       ` Carsten Aulbert
2008-01-31  9:17     ` Andi Kleen
2008-01-31  9:59       ` Bruce Allen
2008-01-31 16:09       ` Carsten Aulbert
2008-01-31 18:15         ` Kok, Auke
2008-01-30 19:17 ` Ben Greear
2008-01-30 22:33   ` Bruce Allen
     [not found] <Pine.LNX.4.63.0801300324000.6391@trinity.phys.uwm.edu>
2008-01-30 13:53 ` David Miller
2008-01-30 14:01   ` Bruce Allen
2008-01-30 16:21     ` Stephen Hemminger
2008-01-30 22:25       ` Bruce Allen
2008-01-30 22:33         ` Stephen Hemminger
2008-01-30 23:23           ` Bruce Allen
2008-01-31  0:17         ` SANGTAE HA
2008-01-31  8:52           ` Bruce Allen
2008-01-31 11:45           ` Bill Fink
2008-01-31 14:50             ` David Acker
2008-01-31 15:57               ` Bruce Allen
2008-01-31 15:54             ` Bruce Allen
2008-01-31 17:36               ` Bill Fink
2008-01-31 19:37                 ` Bruce Allen
2008-01-31 18:26             ` Brandeburg, Jesse
