linux-nvme.lists.infradead.org archive mirror
* Why NVMe MSIx vectors affinity set across NUMA nodes?
@ 2018-01-22  4:25 Ganapatrao Kulkarni
  2018-01-22 17:14 ` Keith Busch
  0 siblings, 1 reply; 13+ messages in thread
From: Ganapatrao Kulkarni @ 2018-01-22  4:25 UTC (permalink / raw)


Hi,

I have observed that the NVMe driver splits the interrupt affinity of its
MSI-X vectors across the available NUMA nodes. Is there a specific reason
for that?

I see this happens because the PCI flag PCI_IRQ_AFFINITY is set in
nvme_setup_io_queues:

   nr_io_queues = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
                        PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);

IMO, keeping all vectors on CPUs of the same node would give lower
interrupt latency than distributing them across all nodes.


thanks
Ganapat



Thread overview: 13+ messages
2018-01-22  4:25 Why NVMe MSIx vectors affinity set across NUMA nodes? Ganapatrao Kulkarni
2018-01-22 17:14 ` Keith Busch
2018-01-22 17:22   ` Ganapatrao Kulkarni
2018-01-22 17:32     ` Keith Busch
2018-01-22 17:55       ` Ganapatrao Kulkarni
2018-01-22 18:05         ` Keith Busch
2018-01-22 18:12           ` Ganapatrao Kulkarni
2018-01-22 18:20             ` Keith Busch
2018-01-23 13:30               ` Sagi Grimberg
2018-01-24  2:17                 ` Ganapatrao Kulkarni
2018-01-24 15:48                   ` Keith Busch
2018-01-24 19:39                     ` Sagi Grimberg
2018-01-24 20:38                       ` Keith Busch
