* Simple question about network stack
From: Badalian Vyacheslav @ 2007-12-24 9:52 UTC
To: netdev
Hi all.
Sorry if this is off-topic.
I have a problem balancing CPU load for networking.
I have two e1000 Ethernet adapters and 8 CPUs (4 physical cores).
The machine works as a traffic shaper; it uses only TC rules to shape and iptables to drop.
RX on eth0 goes to CPU0; above 400 Mbit/s of traffic it causes about 90% softirq (SI) load.
RX on eth1 goes to CPU1; above 400 Mbit/s of traffic it causes about 90% softirq (SI) load.
All the other CPUs are 100% idle.
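(This mapping can be confirmed with something like the following, assuming
standard procfs and the sysstat package for mpstat:)
# cat /proc/interrupts    (per-CPU interrupt counters; check the eth0/eth1 rows)
# mpstat -P ALL 1         (per-CPU utilisation; %soft is the softirq share)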
Questions:
1. Can I balance the load onto the other CPUs? I understand that I cannot
move the place where polling happens, but could hashing in TC or iptables
spread the work across CPUs?
2. When the softirq load on one CPU would need more than 100% (around
600 Mbit/s of traffic) I see something strange: the softirq handling takes
100% of that CPU, throughput drops from 400 Mbit/s to 100 Mbit/s, pings
through the machine go from 0.5 ms to 100 ms, and one CPU runs at 100%
while all the others stay 100% idle. When the traffic goes down, after
some time the load again moves to a different CPU.
P.S. It is very strange that a machine with 4 (8) CPUs delivers roughly the
network performance of a single HT CPU.
P.P.S. Sorry for my English.
Thanks for any answers.
Slavon
* Re: Simple question about network stack
From: Badalian Vyacheslav @ 2007-12-25 8:52 UTC
To: Marek Kierdelewicz, netdev
Marek Kierdelewicz:
> Hi,
>
>
>> I have two e1000 Ethernet adapters and 8 CPUs (4 physical cores).
>> The machine works as a traffic shaper; it uses only TC rules to shape
>> and iptables to drop.
>> Questions:
>> 1. Can I balance the load onto the other CPUs? I understand that I
>> cannot move the place where polling happens, but could hashing in TC
>> or iptables spread the work across CPUs?
>>
>
> You need as many NICs as CPUs to effectively use all your processing
> power. You can pair up NICs and CPUs by configuring the appropriate IRQ
> SMP affinity. Read [1] from line 350 onward.
>
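As a minimal sketch of that pairing, assuming eth0 and eth1 sit on IRQs 16
and 17 as in the commands below (smp_affinity is a hexadecimal CPU bitmask):
# grep eth /proc/interrupts             (find each NIC's IRQ number)
# echo 1 > /proc/irq/16/smp_affinity    (mask 0x1 = CPU0 only, for eth0)
# echo 2 > /proc/irq/17/smp_affinity    (mask 0x2 = CPU1 only, for eth1)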
Interesting. Sorry if my questions are stupid. SMP affinity is exactly what
I need, but it does not work for me, and as far as I remember it never has =(
Maybe a networking guru can point out where I am making a simple mistake?
In theory
# echo ffffffff > /proc/irq/ID/smp_affinity
sets a mask so that all CPUs should receive the interrupts round-robin,
but in cat /proc/interrupts I see that all interrupts go to CPU0 only;
CPU1 gets 0 interrupts.
I do:
# echo 2 > /proc/irq/16/smp_affinity
# echo 2 > /proc/irq/17/smp_affinity
# cat /proc/irq/1[67]/smp_affinity
00000002
00000002
Great, the interrupts go to CPU1.
# echo 1 > /proc/irq/16/smp_affinity
# echo 1 > /proc/irq/17/smp_affinity
# cat /proc/irq/1[67]/smp_affinity
00000001
00000001
Great, the interrupts go to CPU0.
# echo 3 > /proc/irq/16/smp_affinity
# echo 3 > /proc/irq/17/smp_affinity
# cat /proc/irq/1[67]/smp_affinity
00000003
00000003
Strange: the interrupts go to CPU0 only.
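(Whether the interrupts really follow the mask over time can be watched,
assuming the watch utility is available, with:
# watch -n1 'grep eth /proc/interrupts'
The counters should grow under the CPU selected by the mask.)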
Where is my mistake? Why do I not get round-robin? Or am I misunderstanding
the idea of SMP affinity?
Thanks for any answers!
Slavon
> Another option is to use recent e1000 NICs with multiqueue capability.
> Read [2]; Google for more information. I'm not sure, but you'll probably
> need recent out-of-kernel e1000 drivers from SourceForge.
>
> [1]http://www.mjmwired.net/kernel/Documentation/filesystems/proc.txt
> [2]http://www.mjmwired.net/kernel/Documentation/networking/multiqueue.txt
>
> cheers,
> Marek Kierdelewicz
>
>
* Re: Simple question about network stack
From: Denys Fedoryshchenko @ 2007-12-25 15:41 UTC
To: Badalian Vyacheslav, Marek Kierdelewicz, netdev
You probably have "Enable kernel irq balancing" (CONFIG_IRQBALANCE)
enabled in your kernel. That is wrong; it has to be disabled.
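A quick way to check, assuming the kernel config is available under /boot
or as /proc/config.gz, and to rule out the userspace daemon as well:
# grep IRQBALANCE /boot/config-$(uname -r)   (CONFIG_IRQBALANCE=y means the in-kernel balancer is active)
# ps ax | grep -v grep | grep irqbalance     (a running irqbalance daemon would also rewrite the masks)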
On Tue, 25 Dec 2007 11:52:48 +0300, Badalian Vyacheslav wrote
> Marek Kierdelewicz:
> > Hi,
> >
> >
> >> I have two e1000 Ethernet adapters and 8 CPUs (4 physical cores).
> >> The machine works as a traffic shaper; it uses only TC rules to shape
> >> and iptables to drop.
> >> Questions:
> >> 1. Can I balance the load onto the other CPUs? I understand that I
> >> cannot move the place where polling happens, but could hashing in TC
> >> or iptables spread the work across CPUs?
> >>
> >
> > You need as many NICs as CPUs to effectively use all your processing
> > power. You can pair up NICs and CPUs by configuring the appropriate IRQ
> > SMP affinity. Read [1] from line 350 onward.
> >
> Interesting. Sorry if my questions are stupid. SMP affinity is exactly what
> I need, but it does not work for me, and as far as I remember it never has =(
> Maybe a networking guru can point out where I am making a simple mistake?
>
> In theory
> # echo ffffffff > /proc/irq/ID/smp_affinity
> sets a mask so that all CPUs should receive the interrupts round-robin,
> but in cat /proc/interrupts I see that all interrupts go to CPU0 only;
> CPU1 gets 0 interrupts.
>
> I do:
> # echo 2 > /proc/irq/16/smp_affinity
> # echo 2 > /proc/irq/17/smp_affinity
> # cat /proc/irq/1[67]/smp_affinity
> 00000002
> 00000002
>
> Great, the interrupts go to CPU1.
>
> # echo 1 > /proc/irq/16/smp_affinity
> # echo 1 > /proc/irq/17/smp_affinity
> # cat /proc/irq/1[67]/smp_affinity
> 00000001
> 00000001
>
> Great, the interrupts go to CPU0.
>
> # echo 3 > /proc/irq/16/smp_affinity
> # echo 3 > /proc/irq/17/smp_affinity
> # cat /proc/irq/1[67]/smp_affinity
> 00000003
> 00000003
>
> Strange: the interrupts go to CPU0 only.
>
> Where is my mistake? Why do I not get round-robin? Or am I misunderstanding
> the idea of SMP affinity? Thanks for any answers!
>
> Slavon
>
> > Another option is to use recent e1000 NICs with multiqueue capability.
> > Read [2]; Google for more information. I'm not sure, but you'll probably
> > need recent out-of-kernel e1000 drivers from SourceForge.
> >
> > [1]http://www.mjmwired.net/kernel/Documentation/filesystems/proc.txt
> > [2]http://www.mjmwired.net/kernel/Documentation/networking/multiqueue.txt
> >
> > cheers,
> > Marek Kierdelewicz
> >
> >
>
--
Denys Fedoryshchenko
Technical Manager
Virtual ISP S.A.L.