netdev.vger.kernel.org archive mirror
* NUMA and multiQ interaction
@ 2009-11-24 17:04 Tom Herbert
  2009-11-24 17:36 ` Eric Dumazet
  2009-11-24 18:06 ` David Miller
  0 siblings, 2 replies; 5+ messages in thread
From: Tom Herbert @ 2009-11-24 17:04 UTC (permalink / raw)
  To: Linux Netdev List

This is a question about the expected interaction between NUMA and
receive multiqueue.  Our test setup is a 16-core AMD system with 4
sockets (one NUMA node per socket) and a bnx2x NIC.  The test runs
500 netperf RR streams with one-byte requests and responses on
net-next-2.6.

The highest throughput we are seeing is with 4 queues (one queue
processed per socket), giving 361862 tps at 67% CPU.  16 queues (one
queue per CPU) gives 226722 tps at 30.43% CPU.

However, with a modified kernel that does RX skb allocations from the
local node rather than the device's NUMA node, I'm getting 923422 tps
at 100% CPU.  This is much higher tps and better CPU utilization than
the case where allocations come from the device's NUMA node.  It
appears that cross-node allocations are causing a significant
performance hit.  For a 2.5x performance improvement I'm kind of
motivated to revert netdev_alloc_skb to the days when it paid no
attention to the NUMA node :-)
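
For reference, here is a rough sketch contrasting the two allocation
policies being compared; it is illustrative only, not the actual
net-next-2.6 source or my patch.  __alloc_skb(), dev_to_node(),
numa_node_id(), NET_SKB_PAD and skb_reserve() are existing kernel
interfaces; the wrapper and its use_local_node flag exist only for the
example.

static struct sk_buff *alloc_rx_skb(struct net_device *dev,
				    unsigned int length, gfp_t gfp_mask,
				    bool use_local_node)
{
	/* Device-node policy: allocate on the node the NIC is attached to. */
	int node = dev->dev.parent ? dev_to_node(dev->dev.parent) : -1;
	struct sk_buff *skb;

	/* Local-node policy: allocate on the node of the CPU doing RX. */
	if (use_local_node)
		node = numa_node_id();

	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, node);
	if (likely(skb)) {
		skb_reserve(skb, NET_SKB_PAD);
		skb->dev = dev;
	}
	return skb;
}

The only difference between the two cases is which node is passed to
__alloc_skb(); the rest of the RX allocation path is unchanged.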

What is the expected interaction here, and would these results be
typical?  If so, would this warrant associating each RX queue with a
NUMA node, instead of just the device?
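
As a purely hypothetical illustration of that last question, per-queue
node association could look something like this; the struct and helper
names are invented for the example, and only __alloc_skb(), NET_SKB_PAD
and skb_reserve() are real kernel interfaces here.

struct rx_queue_ctx {
	struct net_device *dev;
	int numa_node;		/* node this queue's IRQ/NAPI is bound to */
};

static struct sk_buff *rx_queue_alloc_skb(struct rx_queue_ctx *rxq,
					  unsigned int length, gfp_t gfp_mask)
{
	struct sk_buff *skb;

	/* Allocate from the queue's node rather than the device's node. */
	skb = __alloc_skb(length + NET_SKB_PAD, gfp_mask, 0, rxq->numa_node);
	if (likely(skb)) {
		skb_reserve(skb, NET_SKB_PAD);
		skb->dev = rxq->dev;
	}
	return skb;
}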

Thanks,
Tom

Thread overview: 5+ messages
2009-11-24 17:04 NUMA and multiQ interaction Tom Herbert
2009-11-24 17:36 ` Eric Dumazet
2009-11-24 18:06 ` David Miller
2009-11-24 19:51   ` Tom Herbert
2009-11-24 20:10     ` David Miller
