Linux PCI subsystem development
* where is the irq effective affinity set from pci_alloc_irq_vectors_affinity()?
From: Olivier Langlois @ 2024-08-04 20:14 UTC
  To: linux-pci

I am trying to understand the result that the nvme driver gets when it
calls pci_alloc_irq_vectors_affinity() from nvme_setup_irqs()
(drivers/nvme/host/pci.c).
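
For reference, the call in nvme_setup_irqs() looks roughly like this
(paraphrased from drivers/nvme/host/pci.c; treat it as a sketch, the
details vary between kernel versions):

	struct irq_affinity affd = {
		.pre_vectors	= 1,	/* vector 0 is the admin queue */
		.calc_sets	= nvme_calc_irq_sets,
		.priv		= dev,
	};

	result = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);

PCI_IRQ_AFFINITY makes these managed interrupts, so the kernel computes
and assigns the affinity masks itself.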

$ cat /proc/interrupts | grep nvme
 63:          9          0          0          0  PCI-MSIX-0000:00:04.0   0-edge      nvme0q0
 64:          0          0          0     237894  PCI-MSIX-0000:00:04.0   1-edge      nvme0q1

$ cat /proc/irq/64/smp_affinity_list
0-3

$ cat /proc/irq/64/effective_affinity_list 
3

I think this happens somewhere below pci_msi_setup_msi_irqs()
(drivers/pci/msi/irqdomain.c), but I lose track of what is done
precisely because I am not sure which irq_domain is in use on my
system.
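
If CONFIG_GENERIC_IRQ_DEBUGFS is enabled, the domain hierarchy does not
have to be guessed at; it can be read directly (assuming debugfs is
mounted in the usual place):

$ ls /sys/kernel/debug/irq/domains
$ cat /sys/kernel/debug/irq/irqs/64

On x86, an MSI-X interrupt should end up stacked on top of the vector
domain (created in arch/x86/kernel/apic/vector.c, shown as VECTOR in
debugfs), if I read the code correctly.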

I have experimented by playing with the number of nvme I/O queues that
is passed down to pci_alloc_irq_vectors_affinity() as the max_vecs
parameter.

The effective affinity that gets set always appears to be the last CPU
of the affinity mask.

I would like to have some control over the selected effective affinity,
as I am trying to use NOHZ_FULL effectively on my system.

NOTE:
I am NOT using irqbalance

thank you,



* Re: where is the irq effective affinity set from pci_alloc_irq_vectors_affinity()?
From: Olivier Langlois @ 2024-08-05  7:08 UTC
  To: linux-pci; +Cc: paulmck

On Sun, 2024-08-04 at 16:14 -0400, Olivier Langlois wrote:
> I am trying to understand the result that the nvme driver gets when
> it calls pci_alloc_irq_vectors_affinity() from nvme_setup_irqs()
> (drivers/nvme/host/pci.c).
>
> [...]
>
> I would like to have some control over the selected effective
> affinity, as I am trying to use NOHZ_FULL effectively on my system.
>
I have found the relevant code:

arch/x86/kernel/apic/vector.c and kernel/irq/matrix.c

I think that matrix_find_best_cpu() should consider whether a CPU is
NOHZ_FULL and not report it as the best CPU when there are other
options...
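
Something along these lines is what I have in mind -- a completely
untested sketch against kernel/irq/matrix.c (struct cpumap and m->maps
are the allocator's internal data; tick_nohz_full_cpu() comes from
<linux/tick.h>):

	static unsigned int matrix_find_best_cpu(struct irq_matrix *m,
						 const struct cpumask *msk)
	{
		unsigned int cpu, best_cpu = UINT_MAX, maxavl = 0;
		struct cpumap *cm;

		/* First pass: consider housekeeping CPUs only. */
		for_each_cpu(cpu, msk) {
			if (tick_nohz_full_cpu(cpu))
				continue;
			cm = per_cpu_ptr(m->maps, cpu);
			if (!cm->online || cm->available <= maxavl)
				continue;
			best_cpu = cpu;
			maxavl = cm->available;
		}

		/*
		 * Second pass: the mask contains only nohz_full CPUs,
		 * so fall back to the current behaviour.
		 */
		if (best_cpu == UINT_MAX) {
			for_each_cpu(cpu, msk) {
				cm = per_cpu_ptr(m->maps, cpu);
				if (!cm->online || cm->available <= maxavl)
					continue;
				best_cpu = cpu;
				maxavl = cm->available;
			}
		}
		return best_cpu;
	}

An alternative might be to build on the existing isolcpus=managed_irq /
HK_TYPE_MANAGED_IRQ housekeeping machinery, which already tries to keep
managed interrupts off isolated CPUs where possible.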

I'll try to play with the idea and report back if I get some success
with it...

Greetings,


