* Re: e1000 softirq load balancing
From: David Miller @ 2008-10-14 19:51 UTC
To: porterde; +Cc: linux-net, netdev
From: Don Porter <porterde@cs.utexas.edu>
Date: Tue, 14 Oct 2008 14:05:34 -0500
> It seems to me that with 4 independent NICs and plenty of CPUs to
> spare, I ought to be able to assign one softirq daemon to each NIC
> rather than funnelling all of the traffic through 1 or 2.
Traffic doesn't get distributed unless the NIC has support
for RX flow separation and PCI MSI-X interrupts. Your NICs
do not.
So no matter how hard you try, each NIC is going to have its
packets processed essentially on one CPU.
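A quick way to check what David describes is to count the interrupt
vectors each NIC registers in /proc/interrupts: a multiqueue, MSI-X
capable NIC shows one line per queue (e.g. eth0-rx-0, eth0-rx-1, ...),
while a legacy e1000 shows exactly one line, so all of its RX work lands
wherever that single interrupt is serviced. A minimal sketch, assuming
ethN interface naming:

    #!/usr/bin/env python
    # Count interrupt vectors per NIC from /proc/interrupts (illustrative only).
    # One vector per NIC means no hardware RX flow separation is possible.
    import re
    from collections import defaultdict

    vectors = defaultdict(int)
    with open("/proc/interrupts") as f:
        for line in f:
            m = re.search(r'\b(eth\d+)', line)   # assumes ethN naming
            if m:
                vectors[m.group(1)] += 1

    for nic, count in sorted(vectors.items()):
        print("%s: %d interrupt vector(s)" % (nic, count))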
* Re: e1000 softirq load balancing
From: Don Porter @ 2008-10-14 23:46 UTC
To: David Miller; +Cc: linux-net, netdev
Thanks David.
Would you mind giving me a bit of intuition for why I can't have a 1:1
mapping of CPUs to NICs?
I am a bit out of my depth here, but I'd like to learn.
Best,
Don
David Miller wrote:
> From: Don Porter <porterde@cs.utexas.edu>
> Date: Tue, 14 Oct 2008 14:05:34 -0500
>
>
>> It seems to me that with 4 independent NICs and plenty of CPUs to
>> spare, I ought to be able to assign one softirq daemon to each NIC
>> rather than funnelling all of the traffic through 1 or 2.
>>
>
> Traffic doesn't get distributed unless the NIC has support
> for RX flow separation and PCI MSI-X interrupts. Your NICs
> do not.
>
> So no matter how hard you try, each NIC is going to have its
> packets processed essentially on one CPU.
>
* Re: e1000 softirq load balancing
From: David Miller @ 2008-10-14 23:51 UTC
To: porterde; +Cc: linux-net, netdev
From: Don Porter <porterde@cs.utexas.edu>
Date: Tue, 14 Oct 2008 18:46:11 -0500
> Would you mind giving me a bit of intuition for why I can't have a 1:1
> mapping of CPUs to NICs?
I didn't say that.
I said that without HW flow separation support, you can only
expect N CPUs to be busy, where N is the number of NICs you
have.
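The practical consequence: with NAPI, the NET_RX softirq work for a NIC
runs on whichever CPU serviced that NIC's hardware interrupt, so spreading
the four interrupts across four CPUs is what gets those N CPUs busy. A
minimal sketch, assuming the IRQ numbers have already been looked up in
/proc/interrupts (the numbers below are placeholders) and that irqbalance
is not running to overwrite the masks:

    #!/usr/bin/env python
    # Pin each NIC's IRQ to its own CPU by writing a hex CPU bitmask
    # (bit N = CPU N) to /proc/irq/<irq>/smp_affinity. Requires root.
    nic_irqs = {"eth0": 16, "eth1": 17, "eth2": 18, "eth3": 19}  # placeholder IRQs

    for cpu, (nic, irq) in enumerate(sorted(nic_irqs.items())):
        mask = 1 << cpu                      # one distinct CPU per NIC
        with open("/proc/irq/%d/smp_affinity" % irq, "w") as f:
            f.write("%x\n" % mask)
        print("%s (irq %d) -> CPU %d" % (nic, irq, cpu))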
* Re: e1000 softirq load balancing
From: Donald Porter @ 2008-10-14 23:55 UTC
To: David Miller; +Cc: linux-net, netdev
Ok. That seems very reasonable.
So the behavior I am seeing is that I have 4 NICs, but all of the
traffic is being funneled to 1-2 softirq handlers, despite the fact
that the hardware interrupts are being delivered to 4 different CPUs.
Any tips on how to debug this? Or perhaps there is some configuration
step I am missing?
Thanks,
Don
On Oct 14, 2008, at 6:51 PM, David Miller wrote:
> From: Don Porter <porterde@cs.utexas.edu>
> Date: Tue, 14 Oct 2008 18:46:11 -0500
>
>> Would you mind giving me a bit of intuition for why I can't have a 1:1
>> mapping of CPUs to NICs?
>
> I didn't say that.
>
> I said that without HW flow separation support, you can only
> expect N CPUs to be busy, where N is the number of NICs you
> have.
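One way to see where the NET_RX work Don describes is actually landing
(and whether it really is piling up on one or two CPUs while the hardware
interrupts are spread across four) is to sample the per-CPU softirq
counters and compare them with /proc/interrupts. A minimal sketch,
assuming a kernel new enough to expose /proc/softirqs (later than the
2.6.2x kernels of this thread):

    #!/usr/bin/env python
    # Sample per-CPU NET_RX softirq counts twice and print the delta,
    # to show which CPUs are doing the receive-side protocol work.
    import time

    def net_rx_counts():
        with open("/proc/softirqs") as f:
            for line in f:
                if line.strip().startswith("NET_RX:"):
                    return [int(x) for x in line.split()[1:]]
        return []

    before = net_rx_counts()
    time.sleep(5)                     # sampling interval in seconds
    after = net_rx_counts()

    for cpu, (a, b) in enumerate(zip(before, after)):
        print("CPU%d: %d NET_RX softirqs in 5s" % (cpu, b - a))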