From: Rick Jones <rick.jones2@hp.com>
To: Breno Leitao <leitao@linux.vnet.ibm.com>
Cc: bhutchings@solarflare.com,
Linux Network Development list <netdev@vger.kernel.org>
Subject: Re: e1000 performance issue in 4 simultaneous links
Date: Thu, 10 Jan 2008 10:37:32 -0800
Message-ID: <478665EC.7040206@hp.com>
In-Reply-To: <1199986291.8931.62.camel@cafe>
> I also tried to increase my interface MTU to 9000, but I am afraid that
> netperf only transmits packets with less than 1500. Still investigating.
It may seem like picking a tiny nit, but netperf never transmits
packets. It only provides buffers of specified size to the stack. It is
then the stack which transmits and determines the size of the packets on
the network.
Drifting a bit more...
While there are settings, conditions and known stack behaviours where
one can be confident of the packet size on the network based on the
options passed to netperf, generally speaking one should not ass-u-me a
direct relationship between the options one passes to netperf and the
size of the packets on the network.
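For instance (the host name and interface below are only placeholders), you can hand netperf 64 KB sends and then watch what actually goes out on the wire with tcpdump; the stack will still segment down to the path MSS:

  netperf -H remotehost -t TCP_STREAM -- -m 65536   # ask netperf for 64 KB sends to the stack
  tcpdump -i eth4 -n host remotehost                # observe the actual packet sizes on the wire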
And for JumboFrames to be effective, the larger MTU must be set on both ends;
otherwise the TCP MSS exchange will result in the smaller of the two
MTUs "winning", as it were.
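So, before the test (eth4 here is just a stand-in for whichever interfaces carry the traffic), something like the following on *both* the sending and the receiving system:

  ifconfig eth4 mtu 9000   # must match on both ends, or the MSS exchange
                           # negotiates back down to the smaller MTU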
>>single CPU this can become a bottleneck. Does the test system have
>>multiple CPUs? Are IRQs for the multiple NICs balanced across
>>multiple CPUs?
>
> Yes, this machine has 8 ppc 1.9Ghz CPUs. And the IRQs are balanced
> across the CPUs, as I see in /proc/interrupts:
That suggests to me, anyway, that the dreaded irqbalance daemon is running,
shuffling the interrupts as you go. Not often a happy place for running
netperf when one wants consistent results.
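If you want to check (the service name and init script path vary a bit from distro to distro), something along these lines will show whether it is running and let you stop it for the duration of the tests:

  pgrep irqbalance                # is the daemon running?
  /etc/init.d/irqbalance stop     # or simply: killall irqbalance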
>
> # cat /proc/interrupts
>           CPU0    CPU1    CPU2    CPU3    CPU4    CPU5    CPU6    CPU7
>  16:       940     760    1047     904     993     777     975     813  XICS  Level  IPI
>  18:         4       3       4       1       3       6       8       3  XICS  Level  hvc_console
>  19:         0       0       0       0       0       0       0       0  XICS  Level  RAS_EPOW
> 273:     10728   10850   10937   10833   10884   10788   10868   10776  XICS  Level  eth4
> 275:         0       0       0       0       0       0       0       0  XICS  Level  ehci_hcd:usb1, ohci_hcd:usb2, ohci_hcd:usb3
> 277:    234933  230275  229770  234048  235906  229858  229975  233859  XICS  Level  eth6
> 278:    266225  267606  262844  265985  268789  266869  263110  267422  XICS  Level  eth7
> 279:       893     919     857     909     867     917     894     881  XICS  Level  eth0
> 305:    439246  439117  438495  436072  438053  440111  438973  438951  XICS  Level  eth0 Neterion Xframe II 10GbE network adapter
> 321:      3268    3088    3143    3113    3305    2982    3326    3084  XICS  Level  ipr
> 323:    268030  273207  269710  271338  270306  273258  270872  273281  XICS  Level  eth16
> 324:    215012  221102  219494  216732  216531  220460  219718  218654  XICS  Level  eth17
> 325:      7103    3580    7246    3475    7132    3394    7258    3435  XICS  Level  pata_pdc2027x
> BAD:      4216
IMO, what you want (in the absence of multi-queue NICs) is one CPU
taking the interrupts of one port/interface, and each port/interface's
interrupts going to a separate CPU. So, something that looks roughly
like this concocted example:
        CPU0   CPU1   CPU2   CPU3
  1:    1234      0      0      0   eth0
  2:       0   1234      0      0   eth1
  3:       0      0   1234      0   eth2
  4:       0      0      0   1234   eth3
which you should be able to achieve via the method I think someone else
has already mentioned: echoing values into
/proc/irq/<irq>/smp_affinity - after you have slain the dreaded
irqbalance daemon.
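As a purely illustrative assignment (the IRQ numbers are the ones from your /proc/interrupts above; the masks are just one possible mapping), that would look something like:

  # smp_affinity takes a hex CPU bitmask: CPU0=1, CPU1=2, CPU2=4, CPU3=8, ...
  echo 1 > /proc/irq/273/smp_affinity    # eth4  -> CPU0
  echo 2 > /proc/irq/277/smp_affinity    # eth6  -> CPU1
  echo 4 > /proc/irq/278/smp_affinity    # eth7  -> CPU2
  echo 8 > /proc/irq/323/smp_affinity    # eth16 -> CPU3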
rick jones