From: Rick Jones <rick.jones2@hp.com>
To: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>
Cc: Bruce Allen <ballen@gravity.phys.uwm.edu>,
	netdev@vger.kernel.org,
	Carsten Aulbert <carsten.aulbert@aei.mpg.de>,
	Henning Fehrmann <henning.fehrmann@aei.mpg.de>,
	Bruce Allen <bruce.allen@aei.mpg.de>
Subject: Re: e1000 full-duplex TCP performance well below wire speed
Date: Wed, 30 Jan 2008 10:45:06 -0800
Message-ID: <47A0C5B2.1000500@hp.com>
In-Reply-To: <36D9DB17C6DE9E40B059440DB8D95F52044F81DF@orsmsx418.amr.corp.intel.com>

> As asked in LKML thread, please post the exact netperf command used to
> start the client/server, whether or not you're using irqbalanced (aka
> irqbalance) and what cat /proc/interrupts looks like (you ARE using MSI,
> right?)

In particular, it would be good to know whether you are running two 
concurrent streams, or using the "burst mode" TCP_RR method with large 
request/response sizes, which uses only one connection.
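
That is, are you doing something like the following?  (Hostname and 
sizes are illustrative, and the -b burst option requires netperf to 
have been compiled with --enable-burst.)

   # two concurrent unidirectional streams, one in each direction:
   netperf -H remotehost -t TCP_STREAM -l 60 &
   netperf -H remotehost -t TCP_MAERTS -l 60 &
   wait

   # or a single-connection "burst mode" TCP_RR with large
   # request/response sizes:
   netperf -H remotehost -t TCP_RR -l 60 -- -r 65536,65536 -b 8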

> I've recently discovered that particularly with the most recent kernels
> if you specify any socket options (-- -SX -sY) to netperf it does worse
> than if it just lets the kernel auto-tune.

That is the bit where explicit setsockopt() buffer sizes are capped by 
the core [rw]mem sysctls but the autotuning is not, correct?
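
For anyone following along, these are the knobs involved (standard 
Linux sysctl names; the netperf buffer sizes are just an example):

   # cap applied to explicit setsockopt(SO_SNDBUF/SO_RCVBUF) requests:
   sysctl net.core.rmem_max net.core.wmem_max

   # min/default/max used by TCP autotuning; the max here is not
   # bounded by the net.core values above:
   sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem

   # asking for 1MB socket buffers explicitly; the kernel caps the
   # request at [rw]mem_max, and setting the buffers by hand also
   # takes autotuning out of the picture for that connection:
   netperf -H remotehost -t TCP_STREAM -- -s 1048576 -S 1048576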

rick jones
BTW, a bit of netperf news - the "omni" (two routines to measure it 
all) tests seem to be more or less working now in the top-of-trunk 
netperf.  It of course still needs work/polish, but if folks would like 
to play with them, I'd love the feedback.  Output is a bit different 
from classic netperf, and includes an option to emit the results as csv 
(test-specific -o presently) rather than "human readable" 
(test-specific -O).  You get the omni stuff via ./configure 
--enable-omni and use "omni" as the test name.  No docs yet; for 
options and their effects, you need to look at scan_omni_args in 
src/nettest_omni.c.
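
In rough strokes, usage looks like this (built from the top of trunk; 
the exact output will no doubt change):

   ./configure --enable-omni && make

   # "human readable" output, test-specific -O:
   src/netperf -H remotehost -t omni -- -O

   # csv output, test-specific -o:
   src/netperf -H remotehost -t omni -- -o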

One other addition in the omni tests is retrieving not just the 
initial SO_*BUF sizes but also the final SO_*BUF sizes, so one can see 
where autotuning took things based on netperf output alone.

If the general consensus is that the overhead of the omni stuff isn't 
too dear (there are more conditionals in the mainline than with classic 
netperf), I will convert the classic netperf tests to use the omni code.

BTW, don't have a heart attack when you see the quantity of csv output 
currently emitted - I do plan on letting the user specify which values 
should be included :)
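
Something like the following is what I have in mind - the selector 
names are purely hypothetical at this point, with whatever 
scan_omni_args ends up accepting being authoritative:

   # hypothetical: pick out the start/end socket buffer sizes and
   # the throughput from the csv output
   src/netperf -H remotehost -t omni -- \
       -o LSS_SIZE,LSS_SIZE_END,RSR_SIZE,RSR_SIZE_END,THROUGHPUT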

Thread overview: 47+ messages
2008-01-30 12:23 e1000 full-duplex TCP performance well below wire speed Bruce Allen
2008-01-30 17:36 ` Brandeburg, Jesse
2008-01-30 18:45   ` Rick Jones [this message]
2008-01-30 23:15     ` Bruce Allen
2008-01-31 11:35     ` Carsten Aulbert
2008-01-31 17:55       ` Rick Jones
2008-02-01 19:57         ` Carsten Aulbert
2008-01-30 23:07   ` Bruce Allen
2008-01-31  5:43     ` Brandeburg, Jesse
2008-01-31  8:31       ` Bruce Allen
2008-01-31 18:08         ` Kok, Auke
2008-01-31 18:38           ` Rick Jones
2008-01-31 18:47             ` Kok, Auke
2008-01-31 19:07               ` Rick Jones
2008-01-31 19:13           ` Bruce Allen
2008-01-31 19:32             ` Kok, Auke
2008-01-31 19:48               ` Bruce Allen
2008-02-01  6:27                 ` Bill Fink
2008-02-01  7:54                   ` Bruce Allen
2008-01-31 15:12       ` Carsten Aulbert
2008-01-31 17:20         ` Brandeburg, Jesse
2008-01-31 17:27           ` Carsten Aulbert
2008-01-31 17:33             ` Brandeburg, Jesse
2008-01-31 18:11             ` running aggregate netperf TCP_RR " Rick Jones
2008-01-31 18:03         ` Rick Jones
2008-01-31 15:18       ` Carsten Aulbert
2008-01-31  9:17     ` Andi Kleen
2008-01-31  9:59       ` Bruce Allen
2008-01-31 16:09       ` Carsten Aulbert
2008-01-31 18:15         ` Kok, Auke
2008-01-30 19:17 ` Ben Greear
2008-01-30 22:33   ` Bruce Allen
     [not found] <Pine.LNX.4.63.0801300324000.6391@trinity.phys.uwm.edu>
2008-01-30 13:53 ` David Miller
2008-01-30 14:01   ` Bruce Allen
2008-01-30 16:21     ` Stephen Hemminger
2008-01-30 22:25       ` Bruce Allen
2008-01-30 22:33         ` Stephen Hemminger
2008-01-30 23:23           ` Bruce Allen
2008-01-31  0:17         ` SANGTAE HA
2008-01-31  8:52           ` Bruce Allen
2008-01-31 11:45           ` Bill Fink
2008-01-31 14:50             ` David Acker
2008-01-31 15:57               ` Bruce Allen
2008-01-31 15:54             ` Bruce Allen
2008-01-31 17:36               ` Bill Fink
2008-01-31 19:37                 ` Bruce Allen
2008-01-31 18:26             ` Brandeburg, Jesse
