From: Thomas Gleixner <tglx@linutronix.de>
To: Chris Friesen <chris.friesen@windriver.com>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [IRQ] IRQ affinity not working properly?
Date: Sun, 28 Mar 2021 20:45:34 +0200
Message-ID: <87blb3ce29.ffs@nanos.tec.linutronix.de>
In-Reply-To: <a0d5551e-761b-1571-b8e1-5586ec4d9f3b@windriver.com>

On Fri, Jan 29 2021 at 13:17, Chris Friesen wrote:
> I have a CentOS 7 linux system with 48 logical CPUs and a number of

Kernel version?

> Intel NICs running the i40e driver. It was booted with
> irqaffinity=0-1,24-25 in the kernel boot args, resulting in
> /proc/irq/default_smp_affinity showing "0000,03000003". CPUs 2-11 are
> set as "isolated" in the kernel boot args. The irqbalance daemon is not
> running.
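
As a side note, that default mask is self-consistent: CPUs 0-1,24-25 map to
exactly the bits shown. A quick shell sketch (the printf below only covers
the low 32-bit group of the comma-separated procfs mask):

```shell
# Build the affinity mask for CPUs 0, 1, 24 and 25, then print it the
# way the low group of /proc/irq/default_smp_affinity is formatted.
mask=0
for cpu in 0 1 24 25; do
    mask=$(( mask | (1 << cpu) ))
done
printf '%08x\n' "$mask"    # prints 03000003, matching "0000,03000003"
```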
>
> The problem I'm seeing is that /proc/interrupts shows iavf interrupts
> (associated with physical devices running the i40e driver) on other CPUs
> than the expected affinity. For example, here are some iavf interrupts
> on CPU 4 where I would not expect to see any interrupts given that "cat
> /proc/irq/<NUM>/smp_affinity_list" reports "0-1,24-25" for all these
> interrupts. (Sorry for the line wrapping.)
>
> cat /proc/interrupts | grep -e CPU -e 941: -e 942: -e 943: -e 944: \
>     -e 945: -e 961: -e 962: -e 963: -e 964: -e 965:
>
>           CPU0   CPU1   CPU2   CPU3     CPU4   CPU5
>  941:        0      0      0      0    28490      0   IR-PCI-MSI-edge   iavf-0000:b5:03.6:mbx
>  942:        0      0      0      0   333832      0   IR-PCI-MSI-edge   iavf-net1-TxRx-0
>  943:        0      0      0      0   300842      0   IR-PCI-MSI-edge   iavf-net1-TxRx-1
>  944:        0      0      0      0   333845      0   IR-PCI-MSI-edge   iavf-net1-TxRx-2
>  945:        0      0      0      0   333822      0   IR-PCI-MSI-edge   iavf-net1-TxRx-3
>  961:        0      0      0      0    28492      0   IR-PCI-MSI-edge   iavf-0000:b5:02.7:mbx
>  962:        0      0      0      0   435608      0   IR-PCI-MSI-edge   iavf-net1-TxRx-0
>  963:        0      0      0      0   394832      0   IR-PCI-MSI-edge   iavf-net1-TxRx-1
>  964:        0      0      0      0   398414      0   IR-PCI-MSI-edge   iavf-net1-TxRx-2
>  965:        0      0      0      0   192847      0   IR-PCI-MSI-edge   iavf-net1-TxRx-3
>
> There were IRQs coming in on the "iavf-0000:b5:02.7:mbx" interrupt at
> roughly one per second even with no traffic, while the rate on the
> "iavf-net1-TxRx-<X>" interrupts seemed to track traffic.
>
> Is this expected? It seems like the IRQ subsystem is not respecting the
> configured SMP affinity for the interrupt in question. I've also seen
> the same behaviour with igb interrupts.

No, it's not expected. Do you see the same behaviour with a recent
mainline kernel, i.e. 5.10 or 5.11?
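
It would also help to compare the configured mask with the mask the hardware
was actually programmed with. A minimal sketch (the helper name is made up
here; effective_affinity_list only exists on v4.15+ kernels, hence the
fallback; the IRQ numbers are taken from the report above):

```shell
# print_irq_affinity IRQ [BASE]: show configured vs. effective affinity
# for one interrupt. BASE defaults to /proc/irq. On kernels older than
# v4.15 effective_affinity_list is absent, so print "n/a" instead.
print_irq_affinity() {
    irq=$1
    base=${2:-/proc/irq}
    cfg=$(cat "$base/$irq/smp_affinity_list" 2>/dev/null || echo "?")
    eff=$(cat "$base/$irq/effective_affinity_list" 2>/dev/null || echo "n/a")
    echo "irq $irq: configured=$cfg effective=$eff"
}

# For the vectors above:
for irq in 941 942 943 944 945 961 962 963 964 965; do
    print_irq_affinity "$irq"
done
```

If the effective mask has drifted outside the configured one, that narrows
the problem down to affinity-setting rather than accounting.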

Thanks,
tglx
Thread overview: 5+ messages
2021-01-29 19:17 [IRQ] IRQ affinity not working properly? Chris Friesen
2021-03-28 18:45 ` Thomas Gleixner [this message]
2021-04-21 13:31 ` Nitesh Narayan Lal
2021-04-22 15:42 ` Thomas Gleixner
2021-04-22 17:00 ` Chris Friesen