From: gmate.amit@gmail.com (Kumar amit mehta)
To: kernelnewbies@lists.kernelnewbies.org
Subject: relationship between cpu_affinity and Packet RX Processing
Date: Tue, 26 Mar 2013 11:31:33 -0700
Message-ID: <20130326183133.GA6146@gmail.com>
Hi All,
I was reading up on interrupts and IRQ lines today and thought I would
experiment with the network RX path. To start with, I have a virtual machine
running a 3.8 Linux kernel. The machine has 4 CPU cores, the network (eth)
interface is driven by the AMD pcnet32 driver, and it is tied to IRQ line #19.
I started some network traffic and noticed that, out of those 4 CPUs, only one
is being used, and even after changing the CPU affinity I still don't see the
other cores being used for this network traffic. Based on this behavior
(please see the logs below), I have the following queries:
i) Does it mean that this network card does not have multiple RX queues?
(A quick way to check this is sketched just before the logs below.)
ii) I think all modern NICs must implement multiple RX queues, so can someone
please point me to the simplest such implementation among the in-tree drivers?
iii) I'm just doing a simple 'ping' to google with large packets (roughly as
shown right below), as I do not have a peer for the pktgen/netperf/iperf
utilities.
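For reference, the traffic I am generating is just a large ping, roughly along
these lines (the payload size and interval here are only an example, not the
exact values I used):

$ ping -s 1400 -i 0.2 www.google.com

i.e. 1400 data bytes per packet, five packets per second.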
ref: my comments in the logs below are enclosed in double quotes.
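Also, regarding question i), the quickest checks I know of are the per-device
queue directories in sysfs and ethtool's channel query (ethtool may well report
'Operation not supported' for an old driver such as pcnet32):

$ ls /sys/class/net/eth0/queues/
$ ethtool -l eth0

A single rx-0/tx-0 pair in sysfs suggests one RX and one TX queue; multi-queue
NICs report their RX/TX/combined channel counts via ethtool -l.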
<logs>
$ uname -r
3.8.0-rc6-next-20130208
$ cat /proc/cpuinfo |grep processor
processor : 0
processor : 1
processor : 2
processor : 3
"Total 4 cpu cores"
$ cat /proc/interrupts|egrep 'eth0|CPU'
CPU0 CPU1 CPU2 CPU3
19: 5103 74 33 5 IO-APIC-fasteoi eth0
"IRQ Line #19 for the network device"
$ lspci|grep -i ethernet
02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
"AMD NIC"
$ lspci -s 02:01.0 -vvv|grep module
Kernel modules: pcnet32
$ lsmod|grep pcnet32
pcnet32 40671 0
"driver"
# whoami
root
# cat /proc/irq/19/smp_affinity
03
# cat /proc/irq/19/affinity_hint
00
"I think smp_affinity is a bit map, therefore for all the four cores to be
utilized, all 4 bits should be set to 1, which leads to 15(0xf), hence Chaning
the cpu affinity"
# echo 15 > /proc/irq/19/smp_affinity
# cat /proc/irq/19/smp_affinity
15
"started network traffic here and monitoring it"
# cat /proc/interrupts|grep eth0
19: 5452 78 33 5 IO-APIC-fasteoi eth0
# cat /proc/interrupts|grep eth0
19: 5488 78 35 5 IO-APIC-fasteoi eth0
# cat /proc/interrupts|grep eth0
19: 5492 78 35 5 IO-APIC-fasteoi eth0
# cat /proc/interrupts|grep eth0
19: 5500 78 35 5 IO-APIC-fasteoi eth0
.................................
.........after some time.........
# cat /proc/interrupts|grep eth0
19: 6035 78 42 5 IO-APIC-fasteoi eth0
"Most of the packets are still getting routed through CPU0 *Only*."
</logs>
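One more thing I noticed while re-reading Documentation/IRQ-affinity.txt: if I
read it correctly, smp_affinity takes a *hexadecimal* CPU mask, so the "15" I
wrote above is interpreted as 0x15 (CPUs 0, 2 and 4) rather than 0xf (CPUs
0-3). To make all four cores eligible, the write should probably have been:

# echo f > /proc/irq/19/smp_affinity
# cat /proc/irq/19/smp_affinity
"should now read back as 0f"

Even so, my understanding is that the APIC may still deliver each interrupt of
a single-queue device to one CPU at a time (often the lowest-numbered CPU in
the mask), which would match the skew towards CPU0 seen above.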
-Amit