From: Rick Jones <rick.jones2@hp.com>
To: Linux Network Development list <netdev@vger.kernel.org>
Subject: e1000 autotuning doesn't get along with itself
Date: Thu, 16 Aug 2007 12:42:00 -0700
Message-ID: <46C4A888.9090006@hp.com>
Folks -
I was trying to look at bonding vs. discrete links, and so put a couple of
dual-port e1000-driven NICs:
4a:01.1 Ethernet controller: Intel Corporation 82546GB Gigabit Ethernet Controller (rev 03)
        Subsystem: Hewlett-Packard Company HP Dual Port 1000Base-T [A9900A]
into a pair of 8-core systems running a 2.6.22.2 kernel. This gave me:
hpcpc109:~/netperf2_trunk# ethtool -i eth2
driver: e1000
version: 7.3.20-k2-NAPI
firmware-version: N/A
bus-info: 0000:49:02.0
for the e1000 driver. I connected the two systems back-to-back and started
running some tests. In the course of trying to look at something else
(verifying the results reported by bwm-ng) I enabled demo mode in netperf
(./configure --enable-demo) and noticed considerable oscillation in the interim
results. I undid the
bond and repeated the experiment with a discrete NIC:
hpcpc109:~/netperf2_trunk# src/netperf -t TCP_RR -H 192.168.2.105 -D 1.0 -l 15
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.105
(192.168.2.105) port 0 AF_INET : demo : first burst 0
Interim result: 10014.93 Trans/s over 1.00 seconds
Interim result: 10015.79 Trans/s over 1.00 seconds
Interim result: 10014.30 Trans/s over 1.00 seconds
Interim result: 10016.29 Trans/s over 1.00 seconds
Interim result: 10085.80 Trans/s over 1.00 seconds
Interim result: 17526.61 Trans/s over 1.00 seconds
Interim result: 20007.60 Trans/s over 1.00 seconds
Interim result: 19626.46 Trans/s over 1.02 seconds
Interim result: 10616.44 Trans/s over 1.85 seconds
Interim result: 10014.88 Trans/s over 1.06 seconds
Interim result: 10015.79 Trans/s over 1.00 seconds
Interim result: 10014.80 Trans/s over 1.00 seconds
Interim result: 10035.30 Trans/s over 1.00 seconds
Interim result: 13974.69 Trans/s over 1.00 seconds
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       15.00    12225.77
16384  87380
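(For anyone wanting to watch the moderation change during a run, something as
crude as the following in another window should make the interrupt rate changes
visible - eth2 is just the interface I happened to be using, and this assumes
the e1000 IRQ shows up in /proc/interrupts under the interface name:

hpcpc109:~# while true; do grep eth2 /proc/interrupts; sleep 1; done

and then eyeball the deltas in the count column from one second to the next.)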
On a slightly informed whim I tried disabling the interrupt throttle on both
sides (modprobe e1000 InterruptThrottleRate=0,0,0,0,0,0,0,0) and re-ran:
hpcpc109:~/netperf2_trunk# src/netperf -t TCP_RR -H 192.168.2.105 -D 1.0 -l 15
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.105
(192.168.2.105) port 0 AF_INET : demo : first burst 0
Interim result: 18673.68 Trans/s over 1.00 seconds
Interim result: 18685.01 Trans/s over 1.00 seconds
Interim result: 18682.30 Trans/s over 1.00 seconds
Interim result: 18681.05 Trans/s over 1.00 seconds
Interim result: 18680.25 Trans/s over 1.00 seconds
Interim result: 18742.44 Trans/s over 1.00 seconds
Interim result: 18739.45 Trans/s over 1.00 seconds
Interim result: 18723.52 Trans/s over 1.00 seconds
Interim result: 18736.53 Trans/s over 1.00 seconds
Interim result: 18737.61 Trans/s over 1.00 seconds
Interim result: 18744.76 Trans/s over 1.00 seconds
Interim result: 18728.54 Trans/s over 1.00 seconds
Interim result: 18738.91 Trans/s over 1.00 seconds
Interim result: 18735.53 Trans/s over 1.00 seconds
Interim result: 18741.03 Trans/s over 1.00 seconds
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       15.00    18717.94
16384  87380
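(Getting a side back to the default throttle behaviour is just a matter of
reloading the driver there without the option - roughly:

hpcpc109:~# rmmod e1000 && modprobe e1000

- and then re-plumbing the interface addresses afterwards, of course.)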
and then just for grins I tried just disabling it on one side, leaving the other
at defaults:
hpcpc109:~/netperf2_trunk# src/netperf -t TCP_RR -H 192.168.2.105 -D 1.0 -l 15
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.2.105
(192.168.2.105) port 0 AF_INET : demo : first burst 0
Interim result: 19980.84 Trans/s over 1.00 seconds
Interim result: 19997.60 Trans/s over 1.00 seconds
Interim result: 19995.60 Trans/s over 1.00 seconds
Interim result: 20002.60 Trans/s over 1.00 seconds
Interim result: 20011.58 Trans/s over 1.00 seconds
Interim result: 19985.66 Trans/s over 1.00 seconds
Interim result: 20002.60 Trans/s over 1.00 seconds
Interim result: 20010.58 Trans/s over 1.00 seconds
Interim result: 20012.60 Trans/s over 1.00 seconds
Interim result: 19993.63 Trans/s over 1.00 seconds
Interim result: 19979.63 Trans/s over 1.00 seconds
Interim result: 19991.58 Trans/s over 1.00 seconds
Interim result: 20011.60 Trans/s over 1.00 seconds
Interim result: 19948.84 Trans/s over 1.00 seconds
Local /Remote
Socket Size   Request  Resp.   Elapsed  Trans.
Send   Recv   Size     Size    Time     Rate
bytes  Bytes  bytes    bytes   secs.    per sec

16384  87380  1        1       15.00    19990.14
16384  87380
It looks like the e1000 interrupt throttle autotuning works very nicely when the
other side isn't doing any throttling, but if the other side is also trying to
autotune, it doesn't seem to stabilize. At least not during a netperf TCP_RR test.
Does anyone else see this? To rule out netperf demo mode as a factor, I re-ran
without it and got the same end results.
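(For the non-demo runs that was, e.g., simply the same command line minus the
-D 1.0: src/netperf -t TCP_RR -H 192.168.2.105 -l 15)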
rick jones