* Controlling cpu for interrupts
From: Per Oberg @ 2025-06-09 11:27 UTC (permalink / raw)
To: xenomai
Hi
I have a setup with
- Xenomai 3.1
- Linux 4.14-ish
- RT Net e1000e + igb
- Peak PCAN
When I look closely at the rt-threads I can see that the IRQs from the CAN seem to be coming in on certain cores, but not always the same ones.
Here is an example of my ../xenomai/irq file:
IRQ        CPU0  CPU1  CPU2       CPU3
 18:          0     0     0  130419960  pcan pcan pcan pcan pcan pcan
 19:   88638780     0     0          0  pcan pcan pcan pcan
127:       3215     0     0          0  rteth0-TxRx-0
Thus, in this case it seems like IRQ19 is handled by CPU0 while IRQ18 is handled on CPU3.
What heuristics, if any, are used for deciding this?
Thanks
Per Öberg
* Re: Controlling cpu for interrupts
From: Jan Kiszka @ 2025-06-10 14:09 UTC (permalink / raw)
To: Per Oberg, xenomai
On 09.06.25 13:27, Per Oberg wrote:
> Hi
>
> I have a setup with
> - Xenomai 3.1
> - Linux 4.14 ish
Ouch - dead horse (kernel) warning!
> - RT Net e1000e + igb
> - Peak PCAN
Standard Linux driver for CAN, right? This is at least my interpretation
based on the irq names below.
>
> When I look closely at the rt-threads I can see that the IRQs from the CAN seem to be coming in on certain cores, but not always the same ones.
>
> Here is an example of my ../xenomai/irq file:
>
> IRQ        CPU0  CPU1  CPU2       CPU3
>  18:          0     0     0  130419960  pcan pcan pcan pcan pcan pcan
>  19:   88638780     0     0          0  pcan pcan pcan pcan
> 127:       3215     0     0          0  rteth0-TxRx-0
>
> Thus, in this case it seems like IRQ19 is handled by CPU0 while IRQ18 is handled on CPU3.
>
> What heuristics, if any, are used for deciding this?
/proc/irq/*/smp_affinity applies here, at least for standard IRQs. If the
mask contains more than one core, it's up to the IRQ controller and the
CPUs which one takes an event first.
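[A minimal sketch of the above: smp_affinity takes a hex CPU bitmask, so pinning an IRQ to CPU3 means writing 0x8. The IRQ and CPU numbers below are just the ones from the listing in this thread, not prescriptive.]

```shell
# Compute the smp_affinity bitmask that pins an IRQ to a single CPU.
# CPU3 -> bit 3 set -> mask 0x8 (matching IRQ 18 landing on CPU3 above).
cpu=3
irq=18
mask=$(printf '%x' $((1 << cpu)))
echo "CPU${cpu} affinity mask: 0x${mask}"

# Applying it requires root and a real IRQ line; uncomment to use:
# echo "${mask}" > /proc/irq/${irq}/smp_affinity
# cat /proc/irq/${irq}/smp_affinity   # read back to verify
```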
Jan
--
Siemens AG, Foundational Technologies
Linux Expert Center
* Re: Controlling cpu for interrupts
From: Per Oberg @ 2025-06-10 16:00 UTC (permalink / raw)
To: xenomai
----- On 10 Jun 2025, at 16:09, Jan Kiszka jan.kiszka@siemens.com wrote:
>> Hi
>> I have a setup with
>> - Xenomai 3.1
>> - Linux 4.14 ish
> Ouch - dead horse (kernel) warning!
Thanks, yes, I know. I am currently debugging an older setup.
>> - RT Net e1000e + igb
>> - Peak PCAN
> Standard Linux driver for CAN, right? This is at least my interpretation
> based on the irq names below.
No, actually not, I should have said: it's the Peak PCAN Xenomai driver.
>> When I look closely at the rt-threads I can see that the IRQs from the CAN seem
>> to be coming in on certain cores, but not always the same ones.
>>
>> Here is an example of my ../xenomai/irq file:
>>
>> IRQ        CPU0  CPU1  CPU2       CPU3
>>  18:          0     0     0  130419960  pcan pcan pcan pcan pcan pcan
>>  19:   88638780     0     0          0  pcan pcan pcan pcan
>> 127:       3215     0     0          0  rteth0-TxRx-0
>> Thus, in this case it seems like IRQ19 is handled by CPU0 while IRQ18 is
>> handled on CPU3.
>>
>> What heuristics, if any, are used for deciding this?
>>
> /proc/irq/*/smp_affinity applies here, at least for standard IRQs. If the
> mask contains more than one core, it's up to the IRQ controller and the
> CPUs which one takes an event first.
Thanks. What would be the corresponding place to look for the RTDM stuff?
> Jan
Thanks
Per Öberg
> --
> Siemens AG, Foundational Technologies
> Linux Expert Center