* performance issue of netdev_budget and dev_weight with ixgbe
From: Terry @ 2009-03-20 8:48 UTC (permalink / raw)
To: Brandeburg, Jesse; +Cc: netdev
Hi Jesse,
I was doing some tuning work for IP forwarding with Oplin cards.
I found that when I set netdev_budget to 64 (the default is 300), the
same value as dev_weight in ixgbe, I got the highest forwarding rate.
This held not only in the forwarding scenario but also in the
"receive and send back" scenario.
If I change netdev_budget or the weight in the driver to any other
value, performance gets worse.
How did the ixgbe driver folks pick the value of 64, and how does it
affect performance?
thanks & rgds
terry
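Both knobs Terry is adjusting are ordinary sysctls under /proc/sys/net/core. A minimal sketch of reading and setting them from user space (assuming root; roughly what `sysctl -w net.core.netdev_budget=64` does):

	/* Read and set net.core.netdev_budget and net.core.dev_weight via procfs.
	 * Minimal sketch; requires root. */
	#include <stdio.h>

	static int read_sysctl(const char *path)
	{
		FILE *f = fopen(path, "r");
		int val = -1;

		if (f) {
			if (fscanf(f, "%d", &val) != 1)
				val = -1;
			fclose(f);
		}
		return val;
	}

	static int write_sysctl(const char *path, int val)
	{
		FILE *f = fopen(path, "w");

		if (!f)
			return -1;
		fprintf(f, "%d\n", val);
		fclose(f);
		return 0;
	}

	int main(void)
	{
		const char *budget = "/proc/sys/net/core/netdev_budget";
		const char *weight = "/proc/sys/net/core/dev_weight";

		printf("netdev_budget=%d dev_weight=%d\n",
		       read_sysctl(budget), read_sysctl(weight));

		/* Terry's setting: lower the softirq budget to match the driver weight. */
		if (write_sysctl(budget, 64) != 0)
			perror("write netdev_budget");
		return 0;
	}

Note that ixgbe supplies its own NAPI weight (64) when it registers its poll routine, which is why Terry changed the weight in the driver source rather than through net.core.dev_weight.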
* Re: performance issue of netdev_budget and dev_weight with ixgbe
From: Brandeburg, Jesse @ 2009-03-20 17:27 UTC (permalink / raw)
To: Terry; +Cc: netdev@vger.kernel.org, jesse.brandeburg
On Fri, 20 Mar 2009, Terry wrote:
> I was doing some tuning work for IP forwarding with Oplin cards.
> I found that when I set netdev_budget to 64 (the default is 300), the
netdev_budget controls how many times poll can be called for multiple
devices on a single CPU.
> same value as dev_weight in ixgbe, I got the highest forwarding rate.
> This held not only in the forwarding scenario but also in the
> "receive and send back" scenario.
You're changing the scheduling (scheduler interaction) behavior by
decreasing netdev_budget. You're also affecting the fairness between two
interfaces that might be running NAPI on the same CPU.
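To make that interaction concrete, here is a rough user-space model of the scheduling being described: one global budget (netdev_budget) shared round-robin between devices, each of which may process at most its weight per poll. This is a hedged sketch for illustration only, not the kernel's actual net_rx_action(); the device names and packet counts are made up.

	/* Simplified model of the softirq polling loop: a shared budget is
	 * spent round-robin across devices, `weight` packets per poll. */
	#include <stdio.h>

	struct dev {
		const char *name;
		int weight;   /* per-poll quota, like the NAPI weight (64 in ixgbe) */
		int backlog;  /* packets waiting on this device */
	};

	static int poll_dev(struct dev *d, int quota)
	{
		int done = d->backlog < quota ? d->backlog : quota;

		d->backlog -= done;
		return done;
	}

	int main(void)
	{
		struct dev devs[] = {
			{ "eth0", 64, 500 },
			{ "eth1", 64, 500 },
		};
		int budget = 300;   /* net.core.netdev_budget default */
		int round = 0;

		while (budget > 0 && (devs[0].backlog || devs[1].backlog)) {
			for (int i = 0; i < 2 && budget > 0; i++) {
				int quota = devs[i].weight < budget ? devs[i].weight : budget;
				int done = poll_dev(&devs[i], quota);

				budget -= done;
				printf("round %d: %s polled %d, budget now %d\n",
				       round, devs[i].name, done, budget);
			}
			round++;
		}
		/* When the budget runs out, the real kernel reschedules the softirq
		 * (and bumps time_squeeze if it also ran out of time). */
		return 0;
	}

With budget equal to the weight (64), a single busy device can consume the entire budget in one poll; with the 300 default the loop keeps cycling between devices, which is the fairness trade-off being discussed.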
> If I change netdev_budget or the weight in the driver to any other
> value, performance gets worse.
> How did the ixgbe driver folks pick the value of 64, and how does it
> affect performance?
64 is pretty much the global default for all drivers, not just ixgbe; we
didn't "pick" it at all. At 10 Gb Ethernet speeds we get a LOT of
packets. When you play with the budget you're decreasing the amount of
cache coherency you get from handling lots of packets at once.
I think if you look at the time_squeeze counter in /proc/net/softnet_stat
you'll see that when routing we often take more than a jiffy, which means
that the next time through the poll loop our budget is smaller. You might
want to add code to check what the minimum value of the budget passed to
ixgbe is. If it gets too small, all your CPU time is spent thrashing
between scheduling NAPI and never getting much work done.
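A quick way to watch the time_squeeze counter Jesse mentions: it is the third hexadecimal column of /proc/net/softnet_stat, one row per CPU. A minimal reader follows; checking the minimum budget actually passed to ixgbe would additionally need a counter or printk added inside the driver's poll routine, which is not shown here.

	/* Print processed/dropped/time_squeeze from /proc/net/softnet_stat,
	 * one line per CPU. */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/net/softnet_stat", "r");
		unsigned int processed, dropped, squeezed;
		char line[256];
		int cpu = 0;

		if (!f) {
			perror("/proc/net/softnet_stat");
			return 1;
		}
		while (fgets(line, sizeof(line), f)) {
			if (sscanf(line, "%x %x %x", &processed, &dropped, &squeezed) == 3)
				printf("cpu%d: processed=%u dropped=%u time_squeeze=%u\n",
				       cpu, processed, dropped, squeezed);
			cpu++;
		}
		fclose(f);
		return 0;
	}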
The per-packet cost for routing is so high, once all the transmit work is
merged into the netif_receive_skb path, that I bet 64 packets often
exceed a jiffy.
This is probably an area where the kernel stack could improve by batching
packets on receive (possibly similar to what Yanmin Zhang has been posting
recently), especially when routing.
* Re: performance issue of netdev_budget and dev_weight with ixgbe
From: David Miller @ 2009-03-20 22:59 UTC (permalink / raw)
To: jesse.brandeburg; +Cc: hanfang, netdev
From: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>
Date: Fri, 20 Mar 2009 10:27:17 -0700 (Pacific Daylight Time)
> 64 is pretty much the global default for all drivers, not just ixgbe; we
> didn't "pick" it at all. At 10 Gb Ethernet speeds we get a LOT of
> packets. When you play with the budget you're decreasing the amount of
> cache coherency you get from handling lots of packets at once.
You're also potentially decreasing fairness with other devices
in the system.
That's the purpose of this value: to allow preemption in favor of other
active NAPI contexts in the current softirq processing run.