* [PATCH v3 0/4] Allow dyn MSI-X vector allocation of MANA
@ 2025-05-09 10:12 Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 1/4] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
` (3 more replies)
0 siblings, 4 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-09 10:12 UTC (permalink / raw)
Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
Kevin Tian, Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, netdev, linux-rdma, Paul Rosswurm, Shradha Gupta
In this patchset, we enable the MANA driver to allocate MSI-X vectors
in PCI dynamically.

The first patch exports pci_msix_prepare_desc() from the PCI core, so
that descriptors for dynamically added MSI-X vectors can be prepared
correctly.

The second patch adds support for dynamic vector allocation in the
pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
flag and using the pci_msix_prepare_desc() function exported in the
first patch.

The third patch is a preparation patch for the mana changes to support
dynamic IRQ allocation. It changes irq_setup() to allow skipping the
first CPU sibling group, for cases where certain IRQs are already
affinitized to it.

The fourth patch changes the MANA driver to allocate MSI-X vectors
dynamically. If dynamic allocation is not supported, the driver falls
back to the older behavior.
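
For context, the end-to-end flow the series enables looks roughly like
the sketch below (simplified and illustrative only; error unwinding and
locking are omitted, and hw_max stands in for the limit the device
reports once the HW channel is up):

    /* Two-phase MSI-X allocation sketch, assuming <linux/pci.h> and
     * <linux/msi.h>
     */
    static int example_two_phase_alloc(struct pci_dev *pdev, int hw_max)
    {
            struct msi_map map;
            int nvec, i;

            /* Phase 1: with dynamic support, one vector is enough to
             * bring up the HW channel; otherwise allocate everything
             * up front, as before.
             */
            if (pci_msix_can_alloc_dyn(pdev))
                    nvec = pci_alloc_irq_vectors(pdev, 1, 1, PCI_IRQ_MSIX);
            else
                    nvec = pci_alloc_irq_vectors(pdev, 2, hw_max, PCI_IRQ_MSIX);
            if (nvec < 0)
                    return nvec;

            if (!pci_msix_can_alloc_dyn(pdev))
                    return 0;

            /* Phase 2: once the device reports its limit, add the
             * remaining vectors one at a time at explicit indices.
             */
            for (i = 1; i < hw_max; i++) {
                    map = pci_msix_alloc_irq_at(pdev, i, NULL);
                    if (!map.virq)
                            return map.index;       /* negative errno */
            }
            return 0;
    }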
---
Changes in v3
* split the 3rd patch into preparation patch around irq_setup() and
changes in mana driver to allow dynamic IRQ allocation
* Add arm64 support for dynamic MSI-X allocation in pci_hyperv
controller
---
Changes in v2
* split the first patch into two (exporting the prepare_desc
func and using the function and flag in pci-hyperv)
* replace 'pci vectors' by 'MSI-X vectors'
* Change the cover letter description to align with changes made
---
Shradha Gupta (4):
PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X alloc
PCI: hv: Allow dynamic MSI-X vector allocation
net: mana: Allow irq_setup() to skip cpus for affinity
net: mana: Allocate MSI-X vectors dynamically
.../net/ethernet/microsoft/mana/gdma_main.c | 264 +++++++++++++++---
drivers/pci/controller/pci-hyperv.c | 7 +-
drivers/pci/msi/irqdomain.c | 5 +-
include/linux/msi.h | 2 +
include/net/mana/gdma.h | 8 +-
5 files changed, 235 insertions(+), 51 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 20+ messages in thread
* [PATCH v3 1/4] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations
2025-05-09 10:12 [PATCH v3 0/4] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
@ 2025-05-09 10:13 ` Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 2/4] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
` (2 subsequent siblings)
3 siblings, 0 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-09 10:13 UTC (permalink / raw)
To: Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
Thomas Gleixner, Bjorn Helgaas
Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
Yury Norov, Kevin Tian, Long Li, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, netdev, linux-rdma, Paul Rosswurm, Shradha Gupta
To support dynamic MSI-X vector allocation in PCI controllers, enabling
the MSI_FLAG_PCI_MSIX_ALLOC_DYN flag is not enough; msix_prepare_msi_desc()
is also needed to prepare the MSI descriptors.
Export pci_msix_prepare_desc() to allow PCI controllers to support dynamic
MSI-X vector allocation.
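
As an illustration, a controller driver consuming this export would
wire it up roughly as below (a minimal sketch; the foo_* names are
placeholders, and this mirrors what the next patch does for pci-hyperv):

    static struct msi_domain_ops foo_msi_ops = {
            .msi_prepare    = pci_msi_prepare,
            /* prepare descriptors for dynamically added MSI-X vectors */
            .prepare_desc   = pci_msix_prepare_desc,
    };

    /* ...and the MSI domain must also advertise dynamic MSI-X support: */
    foo_msi_info.flags |= MSI_FLAG_PCI_MSIX | MSI_FLAG_PCI_MSIX_ALLOC_DYN;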
Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
Changes in v3
* Improved the patch description by removing abbreviations
---
drivers/pci/msi/irqdomain.c | 5 +++--
include/linux/msi.h | 2 ++
2 files changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
index d7ba8795d60f..43129aa6d6c7 100644
--- a/drivers/pci/msi/irqdomain.c
+++ b/drivers/pci/msi/irqdomain.c
@@ -222,13 +222,14 @@ static void pci_irq_unmask_msix(struct irq_data *data)
pci_msix_unmask(irq_data_get_msi_desc(data));
}
-static void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
- struct msi_desc *desc)
+void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+ struct msi_desc *desc)
{
/* Don't fiddle with preallocated MSI descriptors */
if (!desc->pci.mask_base)
msix_prepare_msi_desc(to_pci_dev(desc->dev), desc);
}
+EXPORT_SYMBOL_GPL(pci_msix_prepare_desc);
static const struct msi_domain_template pci_msix_template = {
.chip = {
diff --git a/include/linux/msi.h b/include/linux/msi.h
index 86e42742fd0f..d5864d5e75c2 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -691,6 +691,8 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
struct irq_domain *parent);
u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev);
struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev);
+void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+ struct msi_desc *desc);
#else /* CONFIG_PCI_MSI */
static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
{
--
2.34.1
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [PATCH v3 2/4] PCI: hv: Allow dynamic MSI-X vector allocation
2025-05-09 10:12 [PATCH v3 0/4] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 1/4] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
@ 2025-05-09 10:13 ` Shradha Gupta
2025-05-12 7:00 ` Manivannan Sadhasivam
2025-05-09 10:13 ` [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
3 siblings, 1 reply; 20+ messages in thread
From: Shradha Gupta @ 2025-05-09 10:13 UTC (permalink / raw)
To: Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi, Dexuan Cui,
Wei Liu, Haiyang Zhang, K. Y. Srinivasan
Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
Kevin Tian, Long Li, Thomas Gleixner, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
Erni Sri Satya Vennela, Peter Zijlstra, netdev, linux-rdma,
Paul Rosswurm, Shradha Gupta
Allow dynamic MSI-X vector allocation for the pci_hyperv PCI controller
by adding support for the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN and using
pci_msix_prepare_desc() to prepare the MSI-X descriptors.

Feature support is added for both x86 and arm64.
Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
---
Changes in v3:
* Add arm64 support
---
Changes in v2:
* split the patch to keep changes in PCI and pci_hyperv controller
separate
* replace strings "pci vectors" by "MSI-X vectors"
---
drivers/pci/controller/pci-hyperv.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index ac27bda5ba26..8c8882cb0ad2 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -598,7 +598,8 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
return cfg->vector;
}
-#define hv_msi_prepare pci_msi_prepare
+#define hv_msi_prepare pci_msi_prepare
+#define hv_msix_prepare_desc pci_msix_prepare_desc
/**
* hv_arch_irq_unmask() - "Unmask" the IRQ by setting its current
@@ -727,6 +728,7 @@ static void hv_arch_irq_unmask(struct irq_data *data)
#define FLOW_HANDLER NULL
#define FLOW_NAME NULL
#define hv_msi_prepare NULL
+#define hv_msix_prepare_desc pci_msix_prepare_desc
struct hv_pci_chip_data {
DECLARE_BITMAP(spi_map, HV_PCI_MSI_SPI_NR);
@@ -2063,6 +2065,7 @@ static struct irq_chip hv_msi_irq_chip = {
static struct msi_domain_ops hv_msi_ops = {
.msi_prepare = hv_msi_prepare,
.msi_free = hv_msi_free,
+ .prepare_desc = hv_msix_prepare_desc,
};
/**
@@ -2084,7 +2087,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
hbus->msi_info.ops = &hv_msi_ops;
hbus->msi_info.flags = (MSI_FLAG_USE_DEF_DOM_OPS |
MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_MULTI_PCI_MSI |
- MSI_FLAG_PCI_MSIX);
+ MSI_FLAG_PCI_MSIX | MSI_FLAG_PCI_MSIX_ALLOC_DYN);
hbus->msi_info.handler = FLOW_HANDLER;
hbus->msi_info.handler_name = FLOW_NAME;
hbus->msi_info.data = hbus;
--
2.34.1
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-09 10:12 [PATCH v3 0/4] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 1/4] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 2/4] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
@ 2025-05-09 10:13 ` Shradha Gupta
2025-05-09 15:49 ` Yury Norov
2025-05-14 4:53 ` Michael Kelley
2025-05-09 10:13 ` [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
3 siblings, 2 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-09 10:13 UTC (permalink / raw)
To: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
Erni Sri Satya Vennela, Peter Zijlstra
Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
Kevin Tian, Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
Shradha Gupta
In order to prepare the MANA driver to allocate the MSI-X IRQs
dynamically, we need to prepare the irq_setup() to allow skipping
affinitizing IRQs to first CPU sibling group.
This would be for cases when number of IRQs is less than or equal
to number of online CPUs. In such cases for dynamically added IRQs
the first CPU sibling group would already be affinitized with HWC IRQ
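
For illustration, the two call patterns end up looking like this
(a sketch; the real call sites are added in the next patch):

    /* initial setup: nothing to skip */
    err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);

    /* dynamically added IRQs: the first CPU sibling group already
     * serves the HWC IRQ, so skip it when IRQs <= online CPUs
     */
    err = irq_setup(irqs, nvec, gc->numa_node,
                    gc->num_msix_usable <= num_online_cpus());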
Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
---
drivers/net/ethernet/microsoft/mana/gdma_main.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 4ffaf7588885..2de42ce43373 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1288,7 +1288,8 @@ void mana_gd_free_res_map(struct gdma_resource *r)
r->size = 0;
}
-static int irq_setup(unsigned int *irqs, unsigned int len, int node)
+static int irq_setup(unsigned int *irqs, unsigned int len, int node,
+ bool skip_first_cpu)
{
const struct cpumask *next, *prev = cpu_none_mask;
cpumask_var_t cpus __free(free_cpumask_var);
@@ -1303,9 +1304,20 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
while (weight > 0) {
cpumask_andnot(cpus, next, prev);
for_each_cpu(cpu, cpus) {
+ /*
+ * if the CPU sibling set is to be skipped we
+ * just move on to the next CPUs without len--
+ */
+ if (unlikely(skip_first_cpu)) {
+ skip_first_cpu = false;
+ goto next_cpumask;
+ }
+
if (len-- == 0)
goto done;
+
irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
+next_cpumask:
cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
--weight;
}
@@ -1403,7 +1415,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
}
}
- err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node);
+ err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
if (err)
goto free_irq;
--
2.34.1
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically
2025-05-09 10:12 [PATCH v3 0/4] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
` (2 preceding siblings ...)
2025-05-09 10:13 ` [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
@ 2025-05-09 10:13 ` Shradha Gupta
2025-05-14 5:04 ` Michael Kelley
2025-05-21 16:27 ` Michael Kelley
3 siblings, 2 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-09 10:13 UTC (permalink / raw)
To: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
Erni Sri Satya Vennela, Peter Zijlstra
Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
Kevin Tian, Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
Shradha Gupta
Currently, the MANA driver allocates MSI-X vectors statically, based on
the MANA_MAX_NUM_QUEUES and num_online_cpus() values, and in some cases
ends up allocating more vectors than it needs. This is because, at that
point, we do not yet have a HW channel and do not know how many IRQs
should be allocated.

To avoid this, we allocate 1 MSI-X vector during the creation of the HWC
and, after getting the value supported by hardware, dynamically add the
remaining MSI-X vectors.
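
Since the number of vectors can now grow after probe, the per-IRQ
contexts also move from a fixed array to an xarray, and lookups follow
this pattern (taken from the changes below):

    struct gdma_irq_context *gic;

    gic = xa_load(&gc->irq_contexts, msi_index);
    if (!gic)       /* index was never populated */
            return -EINVAL;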
Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
---
Changes in v3:
* implemented irq_contexts as xarrays rather than list
* split the patch to create a preparation patch around irq_setup()
* add log when IRQ allocation/setup for remaining IRQs fails
---
Changes in v2:
* Use string 'MSI-X vectors' instead of 'pci vectors'
* make skip-cpu a bool instead of int
* rearrange the comment around the skip_cpu variable appropriately
* update the capability bit for driver indicating dynamic IRQ allocation
* enforced max line length to 80
* enforced RCT convention
* initialized gic to NULL, for when there is a possibility of gic
not being populated correctly
---
.../net/ethernet/microsoft/mana/gdma_main.c | 248 +++++++++++++++---
include/net/mana/gdma.h | 8 +-
2 files changed, 211 insertions(+), 45 deletions(-)
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 2de42ce43373..f07cebffc30d 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -6,6 +6,8 @@
#include <linux/pci.h>
#include <linux/utsname.h>
#include <linux/version.h>
+#include <linux/msi.h>
+#include <linux/irqdomain.h>
#include <net/mana/mana.h>
@@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
return err ? err : -EPROTO;
}
- if (gc->num_msix_usable > resp.max_msix)
- gc->num_msix_usable = resp.max_msix;
+ if (!pci_msix_can_alloc_dyn(pdev)) {
+ if (gc->num_msix_usable > resp.max_msix)
+ gc->num_msix_usable = resp.max_msix;
+ } else {
+ /* If dynamic allocation is enabled we have already allocated
+ * hwc msi
+ */
+ gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
+ }
if (gc->num_msix_usable <= 1)
return -ENOSPC;
@@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
}
queue->eq.msix_index = msi_index;
- gic = &gc->irq_contexts[msi_index];
+ gic = xa_load(&gc->irq_contexts, msi_index);
+ if (!gic)
+ return -EINVAL;
spin_lock_irqsave(&gic->lock, flags);
list_add_rcu(&queue->entry, &gic->eq_list);
@@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
if (WARN_ON(msix_index >= gc->num_msix_usable))
return;
- gic = &gc->irq_contexts[msix_index];
+ gic = xa_load(&gc->irq_contexts, msix_index);
+ if (!gic)
+ return;
+
spin_lock_irqsave(&gic->lock, flags);
list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
if (queue == eq) {
@@ -1329,29 +1343,96 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
return 0;
}
-static int mana_gd_setup_irqs(struct pci_dev *pdev)
+static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
{
struct gdma_context *gc = pci_get_drvdata(pdev);
- unsigned int max_queues_per_port;
struct gdma_irq_context *gic;
- unsigned int max_irqs, cpu;
- int start_irq_index = 1;
- int nvec, *irqs, irq;
+ bool skip_first_cpu = false;
int err, i = 0, j;
+ int *irqs, irq;
cpus_read_lock();
- max_queues_per_port = num_online_cpus();
- if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
- max_queues_per_port = MANA_MAX_NUM_QUEUES;
- /* Need 1 interrupt for the Hardware communication Channel (HWC) */
- max_irqs = max_queues_per_port + 1;
+ irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
+ if (!irqs) {
+ err = -ENOMEM;
+ goto free_irq_vector;
+ }
- nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
- if (nvec < 0) {
- cpus_read_unlock();
- return nvec;
+ for (i = 0; i < nvec; i++) {
+ gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
+ if (!gic) {
+ err = -ENOMEM;
+ goto free_irq;
+ }
+ gic->handler = mana_gd_process_eq_events;
+ INIT_LIST_HEAD(&gic->eq_list);
+ spin_lock_init(&gic->lock);
+
+ snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
+ i, pci_name(pdev));
+
+ /* one pci vector is already allocated for HWC */
+ irqs[i] = pci_irq_vector(pdev, i + 1);
+ if (irqs[i] < 0) {
+ err = irqs[i];
+ goto free_current_gic;
+ }
+
+ err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
+ if (err)
+ goto free_current_gic;
+
+ xa_store(&gc->irq_contexts, i + 1, gic, GFP_KERNEL);
}
+
+ /*
+ * When calling irq_setup() for dynamically added IRQs, if number of
+ * CPUs is more than or equal to allocated MSI-X, we need to skip the
+ * first CPU sibling group since they are already affinitized to HWC IRQ
+ */
+ if (gc->num_msix_usable <= num_online_cpus())
+ skip_first_cpu = true;
+
+ err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
+ if (err)
+ goto free_irq;
+
+ cpus_read_unlock();
+ kfree(irqs);
+ return 0;
+
+free_current_gic:
+ kfree(gic);
+free_irq:
+ for (j = i; j >= 0; j--) {
+ irq = pci_irq_vector(pdev, j);
+ gic = xa_load(&gc->irq_contexts, j);
+ if (!gic)
+ continue;
+
+ irq_update_affinity_hint(irq, NULL);
+ free_irq(irq, gic);
+ xa_erase(&gc->irq_contexts, j);
+ kfree(gic);
+ }
+ kfree(irqs);
+free_irq_vector:
+ cpus_read_unlock();
+ return err;
+}
+
+static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
+{
+ struct gdma_context *gc = pci_get_drvdata(pdev);
+ struct gdma_irq_context *gic;
+ int start_irq_index = 1;
+ unsigned int cpu;
+ int *irqs, irq;
+ int err, i = 0, j;
+
+ cpus_read_lock();
+
if (nvec <= num_online_cpus())
start_irq_index = 0;
@@ -1361,15 +1442,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
goto free_irq_vector;
}
- gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
- GFP_KERNEL);
- if (!gc->irq_contexts) {
- err = -ENOMEM;
- goto free_irq_array;
- }
-
for (i = 0; i < nvec; i++) {
- gic = &gc->irq_contexts[i];
+ gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
+ if (!gic) {
+ err = -ENOMEM;
+ goto free_irq;
+ }
+
gic->handler = mana_gd_process_eq_events;
INIT_LIST_HEAD(&gic->eq_list);
spin_lock_init(&gic->lock);
@@ -1384,13 +1463,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
irq = pci_irq_vector(pdev, i);
if (irq < 0) {
err = irq;
- goto free_irq;
+ goto free_current_gic;
}
if (!i) {
err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
if (err)
- goto free_irq;
+ goto free_current_gic;
/* If number of IRQ is one extra than number of online CPUs,
* then we need to assign IRQ0 (hwc irq) and IRQ1 to
@@ -1408,39 +1487,110 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
}
} else {
irqs[i - start_irq_index] = irq;
- err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
- gic->name, gic);
+ err = request_irq(irqs[i - start_irq_index],
+ mana_gd_intr, 0, gic->name, gic);
if (err)
- goto free_irq;
+ goto free_current_gic;
}
+
+ xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
}
err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
if (err)
goto free_irq;
- gc->max_num_msix = nvec;
- gc->num_msix_usable = nvec;
cpus_read_unlock();
kfree(irqs);
return 0;
+free_current_gic:
+ kfree(gic);
free_irq:
for (j = i - 1; j >= 0; j--) {
irq = pci_irq_vector(pdev, j);
- gic = &gc->irq_contexts[j];
+ gic = xa_load(&gc->irq_contexts, j);
+ if (!gic)
+ continue;
irq_update_affinity_hint(irq, NULL);
free_irq(irq, gic);
+ xa_erase(&gc->irq_contexts, j);
+ kfree(gic);
}
- kfree(gc->irq_contexts);
- gc->irq_contexts = NULL;
-free_irq_array:
kfree(irqs);
free_irq_vector:
+ xa_destroy(&gc->irq_contexts);
cpus_read_unlock();
- pci_free_irq_vectors(pdev);
+ return err;
+}
+
+static int mana_gd_setup_hwc_irqs(struct pci_dev *pdev)
+{
+ struct gdma_context *gc = pci_get_drvdata(pdev);
+ unsigned int max_irqs, min_irqs;
+ int max_queues_per_port;
+ int nvec, err;
+
+ if (pci_msix_can_alloc_dyn(pdev)) {
+ max_irqs = 1;
+ min_irqs = 1;
+ } else {
+ max_queues_per_port = num_online_cpus();
+ if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
+ max_queues_per_port = MANA_MAX_NUM_QUEUES;
+ /* Need 1 interrupt for HWC */
+ max_irqs = max_queues_per_port + 1;
+ min_irqs = 2;
+ }
+
+ nvec = pci_alloc_irq_vectors(pdev, min_irqs, max_irqs, PCI_IRQ_MSIX);
+ if (nvec < 0)
+ return nvec;
+
+ err = mana_gd_setup_irqs(pdev, nvec);
+ if (err) {
+ pci_free_irq_vectors(pdev);
+ return err;
+ }
+
+ gc->num_msix_usable = nvec;
+ gc->max_num_msix = nvec;
+
+ return err;
+}
+
+static int mana_gd_setup_remaining_irqs(struct pci_dev *pdev)
+{
+ struct gdma_context *gc = pci_get_drvdata(pdev);
+ int max_irqs, i, err = 0;
+ struct msi_map irq_map;
+
+ if (!pci_msix_can_alloc_dyn(pdev))
+ /* remaining IRQs are already allocated with the HWC IRQ */
+ return 0;
+
+ /* allocate only remaining IRQs */
+ max_irqs = gc->num_msix_usable - 1;
+
+ for (i = 1; i <= max_irqs; i++) {
+ irq_map = pci_msix_alloc_irq_at(pdev, i, NULL);
+ if (!irq_map.virq) {
+ err = irq_map.index;
+ /* caller will handle cleaning up all allocated
+ * irqs, after HWC is destroyed
+ */
+ return err;
+ }
+ }
+
+ err = mana_gd_setup_dyn_irqs(pdev, max_irqs);
+ if (err)
+ return err;
+
+ gc->max_num_msix = gc->max_num_msix + max_irqs;
+
return err;
}
@@ -1458,19 +1608,22 @@ static void mana_gd_remove_irqs(struct pci_dev *pdev)
if (irq < 0)
continue;
- gic = &gc->irq_contexts[i];
+ gic = xa_load(&gc->irq_contexts, i);
+ if (!gic)
+ continue;
/* Need to clear the hint before free_irq */
irq_update_affinity_hint(irq, NULL);
free_irq(irq, gic);
+ xa_erase(&gc->irq_contexts, i);
+ kfree(gic);
}
pci_free_irq_vectors(pdev);
gc->max_num_msix = 0;
gc->num_msix_usable = 0;
- kfree(gc->irq_contexts);
- gc->irq_contexts = NULL;
+ xa_destroy(&gc->irq_contexts);
}
static int mana_gd_setup(struct pci_dev *pdev)
@@ -1481,9 +1634,10 @@ static int mana_gd_setup(struct pci_dev *pdev)
mana_gd_init_registers(pdev);
mana_smc_init(&gc->shm_channel, gc->dev, gc->shm_base);
- err = mana_gd_setup_irqs(pdev);
+ err = mana_gd_setup_hwc_irqs(pdev);
if (err) {
- dev_err(gc->dev, "Failed to setup IRQs: %d\n", err);
+ dev_err(gc->dev, "Failed to setup IRQs for HWC creation: %d\n",
+ err);
return err;
}
@@ -1499,6 +1653,12 @@ static int mana_gd_setup(struct pci_dev *pdev)
if (err)
goto destroy_hwc;
+ err = mana_gd_setup_remaining_irqs(pdev);
+ if (err) {
+ dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err);
+ goto destroy_hwc;
+ }
+
err = mana_gd_detect_devices(pdev);
if (err)
goto destroy_hwc;
@@ -1575,6 +1735,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
gc->is_pf = mana_is_pf(pdev->device);
gc->bar0_va = bar0_va;
gc->dev = &pdev->dev;
+ xa_init(&gc->irq_contexts);
if (gc->is_pf)
gc->mana_pci_debugfs = debugfs_create_dir("0", mana_debugfs_root);
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 228603bf03f2..f20d1d1ea5e8 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -373,7 +373,7 @@ struct gdma_context {
unsigned int max_num_queues;
unsigned int max_num_msix;
unsigned int num_msix_usable;
- struct gdma_irq_context *irq_contexts;
+ struct xarray irq_contexts;
/* L2 MTU */
u16 adapter_mtu;
@@ -558,12 +558,16 @@ enum {
/* Driver can handle holes (zeros) in the device list */
#define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11)
+/* Driver supports dynamic MSI-X vector allocation */
+#define GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT BIT(13)
+
#define GDMA_DRV_CAP_FLAGS1 \
(GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \
GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \
- GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP)
+ GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP | \
+ GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT)
#define GDMA_DRV_CAP_FLAGS2 0
--
2.34.1
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-09 10:13 ` [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
@ 2025-05-09 15:49 ` Yury Norov
2025-05-12 5:49 ` Shradha Gupta
2025-05-14 4:53 ` Michael Kelley
1 sibling, 1 reply; 20+ messages in thread
From: Yury Norov @ 2025-05-09 15:49 UTC (permalink / raw)
To: Shradha Gupta
Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
Erni Sri Satya Vennela, Peter Zijlstra, linux-hyperv, linux-pci,
linux-kernel, Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi, netdev,
linux-rdma, Paul Rosswurm, Shradha Gupta
On Fri, May 09, 2025 at 03:13:45AM -0700, Shradha Gupta wrote:
> In order to prepare the MANA driver to allocate the MSI-X IRQs
> dynamically, we need to prepare the irq_setup() to allow skipping
> affinitizing IRQs to first CPU sibling group.
>
> This would be for cases when number of IRQs is less than or equal
> to number of online CPUs. In such cases for dynamically added IRQs
> the first CPU sibling group would already be affinitized with HWC IRQ
>
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
> drivers/net/ethernet/microsoft/mana/gdma_main.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 4ffaf7588885..2de42ce43373 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -1288,7 +1288,8 @@ void mana_gd_free_res_map(struct gdma_resource *r)
> r->size = 0;
> }
>
> -static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> +static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> + bool skip_first_cpu)
> {
> const struct cpumask *next, *prev = cpu_none_mask;
> cpumask_var_t cpus __free(free_cpumask_var);
> @@ -1303,9 +1304,20 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> while (weight > 0) {
> cpumask_andnot(cpus, next, prev);
> for_each_cpu(cpu, cpus) {
> + /*
> + * if the CPU sibling set is to be skipped we
> + * just move on to the next CPUs without len--
> + */
> + if (unlikely(skip_first_cpu)) {
> + skip_first_cpu = false;
> + goto next_cpumask;
> + }
> +
> if (len-- == 0)
> goto done;
> +
> irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> +next_cpumask:
> cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> --weight;
> }
> @@ -1403,7 +1415,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> }
> }
>
> - err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node);
> + err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
Why is the 2nd parameter parenthesized?
> if (err)
> goto free_irq;
>
> --
> 2.34.1
Reviewed-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-09 15:49 ` Yury Norov
@ 2025-05-12 5:49 ` Shradha Gupta
0 siblings, 0 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-12 5:49 UTC (permalink / raw)
To: Yury Norov
Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
Erni Sri Satya Vennela, Peter Zijlstra, linux-hyperv, linux-pci,
linux-kernel, Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi, netdev, linux-rdma,
Paul Rosswurm, Shradha Gupta
On Fri, May 09, 2025 at 11:49:56AM -0400, Yury Norov wrote:
> On Fri, May 09, 2025 at 03:13:45AM -0700, Shradha Gupta wrote:
> > In order to prepare the MANA driver to allocate the MSI-X IRQs
> > dynamically, we need to prepare the irq_setup() to allow skipping
> > affinitizing IRQs to first CPU sibling group.
> >
> > This would be for cases when number of IRQs is less than or equal
> > to number of online CPUs. In such cases for dynamically added IRQs
> > the first CPU sibling group would already be affinitized with HWC IRQ
> >
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> > drivers/net/ethernet/microsoft/mana/gdma_main.c | 16 ++++++++++++++--
> > 1 file changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > index 4ffaf7588885..2de42ce43373 100644
> > --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > @@ -1288,7 +1288,8 @@ void mana_gd_free_res_map(struct gdma_resource *r)
> > r->size = 0;
> > }
> >
> > -static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > +static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> > + bool skip_first_cpu)
> > {
> > const struct cpumask *next, *prev = cpu_none_mask;
> > cpumask_var_t cpus __free(free_cpumask_var);
> > @@ -1303,9 +1304,20 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > while (weight > 0) {
> > cpumask_andnot(cpus, next, prev);
> > for_each_cpu(cpu, cpus) {
> > + /*
> > + * if the CPU sibling set is to be skipped we
> > + * just move on to the next CPUs without len--
> > + */
> > + if (unlikely(skip_first_cpu)) {
> > + skip_first_cpu = false;
> > + goto next_cpumask;
> > + }
> > +
> > if (len-- == 0)
> > goto done;
> > +
> > irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> > +next_cpumask:
> > cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> > --weight;
> > }
> > @@ -1403,7 +1415,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> > }
> > }
> >
> > - err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node);
> > + err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
>
> Why is the 2nd parameter parenthesized?
right, it is not needed. will correct it. Thanks Yury.
>
> > if (err)
> > goto free_irq;
> >
> > --
> > 2.34.1
>
>
> Reviewed-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 2/4] PCI: hv: Allow dynamic MSI-X vector allocation
2025-05-09 10:13 ` [PATCH v3 2/4] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
@ 2025-05-12 7:00 ` Manivannan Sadhasivam
2025-05-12 7:38 ` Shradha Gupta
0 siblings, 1 reply; 20+ messages in thread
From: Manivannan Sadhasivam @ 2025-05-12 7:00 UTC (permalink / raw)
To: Shradha Gupta
Cc: Bjorn Helgaas, Rob Herring, Krzysztof Wilczyński,
Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, linux-hyperv, linux-pci, linux-kernel,
Nipun Gupta, Yury Norov, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Konstantin Taranov, Simon Horman, Leon Romanovsky,
Maxim Levitsky, Erni Sri Satya Vennela, Peter Zijlstra, netdev,
linux-rdma, Paul Rosswurm, Shradha Gupta
On Fri, May 09, 2025 at 03:13:22AM -0700, Shradha Gupta wrote:
> Allow dynamic MSI-X vector allocation for the pci_hyperv PCI controller
> by adding support for the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN and using
> pci_msix_prepare_desc() to prepare the MSI-X descriptors.
>
> Feature support is added for both x86 and arm64.
>
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
> Changes in v3:
> * Add arm64 support
> ---
> Changes in v2:
> * split the patch to keep changes in PCI and pci_hyperv controller
> separate
> * replace strings "pci vectors" by "MSI-X vectors"
> ---
> drivers/pci/controller/pci-hyperv.c | 7 +++++--
> 1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index ac27bda5ba26..8c8882cb0ad2 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -598,7 +598,8 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
> return cfg->vector;
> }
>
> -#define hv_msi_prepare pci_msi_prepare
> +#define hv_msi_prepare pci_msi_prepare
> +#define hv_msix_prepare_desc pci_msix_prepare_desc
Please do not use a custom macro unless its definition changes based on some
conditional. In this case, you should use pci_msix_prepare_desc directly for
the prepare_desc() callback.
- Mani
--
மணிவண்ணன் சதாசிவம்
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 2/4] PCI: hv: Allow dynamic MSI-X vector allocation
2025-05-12 7:00 ` Manivannan Sadhasivam
@ 2025-05-12 7:38 ` Shradha Gupta
0 siblings, 0 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-12 7:38 UTC (permalink / raw)
To: Manivannan Sadhasivam
Cc: Bjorn Helgaas, Rob Herring, Krzysztof Wilczyński,
Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, linux-hyperv, linux-pci, linux-kernel,
Nipun Gupta, Yury Norov, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Konstantin Taranov, Simon Horman, Leon Romanovsky,
Maxim Levitsky, Erni Sri Satya Vennela, Peter Zijlstra, netdev,
linux-rdma, Paul Rosswurm, Shradha Gupta
On Mon, May 12, 2025 at 12:30:04PM +0530, Manivannan Sadhasivam wrote:
> On Fri, May 09, 2025 at 03:13:22AM -0700, Shradha Gupta wrote:
> > Allow dynamic MSI-X vector allocation for the pci_hyperv PCI controller
> > by adding support for the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN and using
> > pci_msix_prepare_desc() to prepare the MSI-X descriptors.
> >
> > Feature support is added for both x86 and arm64.
> >
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> > Changes in v3:
> > * Add arm64 support
> > ---
> > Changes in v2:
> > * split the patch to keep changes in PCI and pci_hyperv controller
> > separate
> > * replace strings "pci vectors" by "MSI-X vectors"
> > ---
> > drivers/pci/controller/pci-hyperv.c | 7 +++++--
> > 1 file changed, 5 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> > index ac27bda5ba26..8c8882cb0ad2 100644
> > --- a/drivers/pci/controller/pci-hyperv.c
> > +++ b/drivers/pci/controller/pci-hyperv.c
> > @@ -598,7 +598,8 @@ static unsigned int hv_msi_get_int_vector(struct irq_data *data)
> > return cfg->vector;
> > }
> >
> > -#define hv_msi_prepare pci_msi_prepare
> > +#define hv_msi_prepare pci_msi_prepare
> > +#define hv_msix_prepare_desc pci_msix_prepare_desc
>
> Please do not use a custom macro unless its definition changes based on some
> conditional. In this case, you should use pci_msix_prepare_desc directly for
> the prepare_desc() callback.
>
> - Mani
>
> --
> மணிவண்ணன் சதாசிவம்
Thanks for catching this Mani, I agree. I will fix this.
regards,
Shradha.
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-09 10:13 ` [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
2025-05-09 15:49 ` Yury Norov
@ 2025-05-14 4:53 ` Michael Kelley
2025-05-14 16:55 ` Yury Norov
1 sibling, 1 reply; 20+ messages in thread
From: Michael Kelley @ 2025-05-14 4:53 UTC (permalink / raw)
To: Shradha Gupta, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra
Cc: linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
linux-kernel@vger.kernel.org, Nipun Gupta, Yury Norov,
Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, netdev@vger.kernel.org,
linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
From: Shradha Gupta <shradhagupta@linux.microsoft.com> Sent: Friday, May 9, 2025 3:14 AM
>
> In order to prepare the MANA driver to allocate the MSI-X IRQs
> dynamically, we need to prepare the irq_setup() to allow skipping
s/prepare the irq_setup()/enhance irq_setup()/
> affinitizing IRQs to first CPU sibling group.
s/to first/to the first/
>
> This would be for cases when number of IRQs is less than or equal
s/when number/when the number/
> to number of online CPUs. In such cases for dynamically added IRQs
s/to number/to the number/
> the first CPU sibling group would already be affinitized with HWC IRQ
Add a period at the end of the sentence.
>
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
> drivers/net/ethernet/microsoft/mana/gdma_main.c | 16 ++++++++++++++--
> 1 file changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 4ffaf7588885..2de42ce43373 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -1288,7 +1288,8 @@ void mana_gd_free_res_map(struct gdma_resource *r)
> r->size = 0;
> }
>
> -static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> +static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> + bool skip_first_cpu)
> {
> const struct cpumask *next, *prev = cpu_none_mask;
> cpumask_var_t cpus __free(free_cpumask_var);
> @@ -1303,9 +1304,20 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> while (weight > 0) {
> cpumask_andnot(cpus, next, prev);
> for_each_cpu(cpu, cpus) {
> + /*
> + * if the CPU sibling set is to be skipped we
> + * just move on to the next CPUs without len--
> + */
> + if (unlikely(skip_first_cpu)) {
> + skip_first_cpu = false;
> + goto next_cpumask;
> + }
> +
> if (len-- == 0)
> goto done;
> +
> irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> +next_cpumask:
> cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> --weight;
> }
With a little bit of reordering of the code, you could avoid the need for the "next_cpumask"
label and goto statement. "continue" is usually cleaner than a "goto". Here's what I'm thinking:
for_each_cpu(cpu, cpus) {
	cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
	--weight;

	if (unlikely(skip_first_cpu)) {
		skip_first_cpu = false;
		continue;
	}

	if (len-- == 0)
		goto done;

	irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
}
I wish there were some comments in irq_setup() explaining the overall intention of
the algorithm. I can see how the goal is to first assign CPUs that are local to the current
NUMA node, and then expand outward to CPUs that are further away. And you want
to *not* assign both siblings in a hyper-threaded core. But I can't figure out what
"weight" is trying to accomplish. Maybe this was discussed when the code first
went in, but I can't remember now. :-(
Michael
> @@ -1403,7 +1415,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> }
> }
>
> - err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node);
> + err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
> if (err)
> goto free_irq;
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically
2025-05-09 10:13 ` [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
@ 2025-05-14 5:04 ` Michael Kelley
2025-05-14 17:07 ` Yury Norov
2025-05-21 16:27 ` Michael Kelley
1 sibling, 1 reply; 20+ messages in thread
From: Michael Kelley @ 2025-05-14 5:04 UTC (permalink / raw)
To: Shradha Gupta, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra
Cc: linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
linux-kernel@vger.kernel.org, Nipun Gupta, Yury Norov,
Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, netdev@vger.kernel.org,
linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
From: Shradha Gupta <shradhagupta@linux.microsoft.com> Sent: Friday, May 9, 2025 3:14 AM
>
> Currently, the MANA driver allocates MSI-X vectors statically, based on
> the MANA_MAX_NUM_QUEUES and num_online_cpus() values, and in some cases
> ends up allocating more vectors than it needs. This is because, at that
> point, we do not yet have a HW channel and do not know how many IRQs
> should be allocated.
>
> To avoid this, we allocate 1 MSI-X vector during the creation of the HWC
> and, after getting the value supported by hardware, dynamically add the
> remaining MSI-X vectors.
After this patch is applied, there are two functions for setting up IRQs:
1. mana_gd_setup_dyn_irqs()
2. mana_gd_setup_irqs()
#1 is about 78 lines of code and comments, while #2 is about 103 lines of
code and comments. But the two functions have a lot of commonality,
and that amount of commonality raises a red flag for me.
Have you looked at parameterizing things so a single function can serve
both purposes? I haven't worked through all the details, but at first
glance it looks very feasible, and without introducing unreasonable
messiness. Saving 70 to 80 lines of fairly duplicative code is worth a bit
of effort.
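
One possible shape, just to illustrate the idea (untested, and the
parameter names are made up):

    /*
     * Hypothetical combined helper: the static path would call it with
     * first_vec = 0 (HWC included), and the dynamic path with
     * first_vec = 1 (HWC already requested), passing skip_first_cpu as
     * computed in patch 3.
     */
    static int mana_gd_request_irqs(struct pci_dev *pdev, int nvec,
                                    int first_vec, bool skip_first_cpu);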
I have some other comments on the code. But if those two functions can
be combined, I'd rather re-review the result before adding comments that
may become irrelevant due to the restructuring.
Michael
>
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
> Changes in v3:
> * implemented irq_contexts as xarrays rather than list
> * split the patch to create a preparation patch around irq_setup()
> * add log when IRQ allocation/setup for remaining IRQs fails
> ---
> Changes in v2:
> * Use string 'MSI-X vectors' instead of 'pci vectors'
> * make skip-cpu a bool instead of int
> * rearrange the comment around the skip_cpu variable appropriately
> * update the capability bit for driver indicating dynamic IRQ allocation
> * enforced max line length to 80
> * enforced RCT convention
> * initialized gic to NULL, for when there is a possibility of gic
> not being populated correctly
> ---
> .../net/ethernet/microsoft/mana/gdma_main.c | 248 +++++++++++++++---
> include/net/mana/gdma.h | 8 +-
> 2 files changed, 211 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 2de42ce43373..f07cebffc30d 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -6,6 +6,8 @@
> #include <linux/pci.h>
> #include <linux/utsname.h>
> #include <linux/version.h>
> +#include <linux/msi.h>
> +#include <linux/irqdomain.h>
>
> #include <net/mana/mana.h>
>
> @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
> return err ? err : -EPROTO;
> }
>
> - if (gc->num_msix_usable > resp.max_msix)
> - gc->num_msix_usable = resp.max_msix;
> + if (!pci_msix_can_alloc_dyn(pdev)) {
> + if (gc->num_msix_usable > resp.max_msix)
> + gc->num_msix_usable = resp.max_msix;
> + } else {
> + /* If dynamic allocation is enabled we have already allocated
> + * hwc msi
> + */
> + gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
> + }
>
> if (gc->num_msix_usable <= 1)
> return -ENOSPC;
> @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
> }
>
> queue->eq.msix_index = msi_index;
> - gic = &gc->irq_contexts[msi_index];
> + gic = xa_load(&gc->irq_contexts, msi_index);
> + if (!gic)
> + return -EINVAL;
>
> spin_lock_irqsave(&gic->lock, flags);
> list_add_rcu(&queue->entry, &gic->eq_list);
> @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue
> *queue)
> if (WARN_ON(msix_index >= gc->num_msix_usable))
> return;
>
> - gic = &gc->irq_contexts[msix_index];
> + gic = xa_load(&gc->irq_contexts, msix_index);
> + if (!gic)
> + return;
> +
> spin_lock_irqsave(&gic->lock, flags);
> list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
> if (queue == eq) {
> @@ -1329,29 +1343,96 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int
> node,
> return 0;
> }
>
> -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
> {
> struct gdma_context *gc = pci_get_drvdata(pdev);
> - unsigned int max_queues_per_port;
> struct gdma_irq_context *gic;
> - unsigned int max_irqs, cpu;
> - int start_irq_index = 1;
> - int nvec, *irqs, irq;
> + bool skip_first_cpu = false;
> int err, i = 0, j;
> + int *irqs, irq;
>
> cpus_read_lock();
> - max_queues_per_port = num_online_cpus();
> - if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> - max_queues_per_port = MANA_MAX_NUM_QUEUES;
>
> - /* Need 1 interrupt for the Hardware communication Channel (HWC) */
> - max_irqs = max_queues_per_port + 1;
> + irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> + if (!irqs) {
> + err = -ENOMEM;
> + goto free_irq_vector;
> + }
>
> - nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
> - if (nvec < 0) {
> - cpus_read_unlock();
> - return nvec;
> + for (i = 0; i < nvec; i++) {
> + gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
> + if (!gic) {
> + err = -ENOMEM;
> + goto free_irq;
> + }
> + gic->handler = mana_gd_process_eq_events;
> + INIT_LIST_HEAD(&gic->eq_list);
> + spin_lock_init(&gic->lock);
> +
> + snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> + i, pci_name(pdev));
> +
> + /* one pci vector is already allocated for HWC */
> + irqs[i] = pci_irq_vector(pdev, i + 1);
> + if (irqs[i] < 0) {
> + err = irqs[i];
> + goto free_current_gic;
> + }
> +
> + err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
> + if (err)
> + goto free_current_gic;
> +
> + xa_store(&gc->irq_contexts, i + 1, gic, GFP_KERNEL);
> }
> +
> + /*
> + * When calling irq_setup() for dynamically added IRQs, if number of
> + * CPUs is more than or equal to allocated MSI-X, we need to skip the
> + * first CPU sibling group since they are already affinitized to HWC IRQ
> + */
> + if (gc->num_msix_usable <= num_online_cpus())
> + skip_first_cpu = true;
> +
> + err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
> + if (err)
> + goto free_irq;
> +
> + cpus_read_unlock();
> + kfree(irqs);
> + return 0;
> +
> +free_current_gic:
> + kfree(gic);
> +free_irq:
> + for (j = i; j >= 0; j--) {
> + irq = pci_irq_vector(pdev, j);
> + gic = xa_load(&gc->irq_contexts, j);
> + if (!gic)
> + continue;
> +
> + irq_update_affinity_hint(irq, NULL);
> + free_irq(irq, gic);
> + xa_erase(&gc->irq_contexts, j);
> + kfree(gic);
> + }
> + kfree(irqs);
> +free_irq_vector:
> + cpus_read_unlock();
> + return err;
> +}
> +
> +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + struct gdma_irq_context *gic;
> + int start_irq_index = 1;
> + unsigned int cpu;
> + int *irqs, irq;
> + int err, i = 0, j;
> +
> + cpus_read_lock();
> +
> if (nvec <= num_online_cpus())
> start_irq_index = 0;
>
> @@ -1361,15 +1442,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> goto free_irq_vector;
> }
>
> - gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
> - GFP_KERNEL);
> - if (!gc->irq_contexts) {
> - err = -ENOMEM;
> - goto free_irq_array;
> - }
> -
> for (i = 0; i < nvec; i++) {
> - gic = &gc->irq_contexts[i];
> + gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
> + if (!gic) {
> + err = -ENOMEM;
> + goto free_irq;
> + }
> +
> gic->handler = mana_gd_process_eq_events;
> INIT_LIST_HEAD(&gic->eq_list);
> spin_lock_init(&gic->lock);
> @@ -1384,13 +1463,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> irq = pci_irq_vector(pdev, i);
> if (irq < 0) {
> err = irq;
> - goto free_irq;
> + goto free_current_gic;
> }
>
> if (!i) {
> err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
> if (err)
> - goto free_irq;
> + goto free_current_gic;
>
> /* If number of IRQ is one extra than number of online CPUs,
> * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> @@ -1408,39 +1487,110 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> }
> } else {
> irqs[i - start_irq_index] = irq;
> - err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
> - gic->name, gic);
> + err = request_irq(irqs[i - start_irq_index],
> + mana_gd_intr, 0, gic->name, gic);
> if (err)
> - goto free_irq;
> + goto free_current_gic;
> }
> +
> + xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> }
>
> err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
> if (err)
> goto free_irq;
>
> - gc->max_num_msix = nvec;
> - gc->num_msix_usable = nvec;
> cpus_read_unlock();
> kfree(irqs);
> return 0;
>
> +free_current_gic:
> + kfree(gic);
> free_irq:
> for (j = i - 1; j >= 0; j--) {
> irq = pci_irq_vector(pdev, j);
> - gic = &gc->irq_contexts[j];
> + gic = xa_load(&gc->irq_contexts, j);
> + if (!gic)
> + continue;
>
> irq_update_affinity_hint(irq, NULL);
> free_irq(irq, gic);
> + xa_erase(&gc->irq_contexts, j);
> + kfree(gic);
> }
>
> - kfree(gc->irq_contexts);
> - gc->irq_contexts = NULL;
> -free_irq_array:
> kfree(irqs);
> free_irq_vector:
> + xa_destroy(&gc->irq_contexts);
> cpus_read_unlock();
> - pci_free_irq_vectors(pdev);
> + return err;
> +}
> +
> +static int mana_gd_setup_hwc_irqs(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + unsigned int max_irqs, min_irqs;
> + int max_queues_per_port;
> + int nvec, err;
> +
> + if (pci_msix_can_alloc_dyn(pdev)) {
> + max_irqs = 1;
> + min_irqs = 1;
> + } else {
> + max_queues_per_port = num_online_cpus();
> + if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> + max_queues_per_port = MANA_MAX_NUM_QUEUES;
> + /* Need 1 interrupt for HWC */
> + max_irqs = max_queues_per_port + 1;
> + min_irqs = 2;
> + }
> +
> + nvec = pci_alloc_irq_vectors(pdev, min_irqs, max_irqs, PCI_IRQ_MSIX);
> + if (nvec < 0)
> + return nvec;
> +
> + err = mana_gd_setup_irqs(pdev, nvec);
> + if (err) {
> + pci_free_irq_vectors(pdev);
> + return err;
> + }
> +
> + gc->num_msix_usable = nvec;
> + gc->max_num_msix = nvec;
> +
> + return err;
> +}
> +
> +static int mana_gd_setup_remaining_irqs(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + int max_irqs, i, err = 0;
> + struct msi_map irq_map;
> +
> + if (!pci_msix_can_alloc_dyn(pdev))
> > + /* remaining IRQs are already allocated with the HWC IRQ */
> + return 0;
> +
> > + /* allocate only remaining IRQs */
> + max_irqs = gc->num_msix_usable - 1;
> +
> + for (i = 1; i <= max_irqs; i++) {
> + irq_map = pci_msix_alloc_irq_at(pdev, i, NULL);
> + if (!irq_map.virq) {
> + err = irq_map.index;
> + /* caller will handle cleaning up all allocated
> + * irqs, after HWC is destroyed
> + */
> + return err;
> + }
> + }
> +
> + err = mana_gd_setup_dyn_irqs(pdev, max_irqs);
> + if (err)
> + return err;
> +
> + gc->max_num_msix = gc->max_num_msix + max_irqs;
> +
> return err;
> }
>
> @@ -1458,19 +1608,22 @@ static void mana_gd_remove_irqs(struct pci_dev *pdev)
> if (irq < 0)
> continue;
>
> - gic = &gc->irq_contexts[i];
> + gic = xa_load(&gc->irq_contexts, i);
> + if (!gic)
> + continue;
>
> /* Need to clear the hint before free_irq */
> irq_update_affinity_hint(irq, NULL);
> free_irq(irq, gic);
> + xa_erase(&gc->irq_contexts, i);
> + kfree(gic);
> }
>
> pci_free_irq_vectors(pdev);
>
> gc->max_num_msix = 0;
> gc->num_msix_usable = 0;
> - kfree(gc->irq_contexts);
> - gc->irq_contexts = NULL;
> + xa_destroy(&gc->irq_contexts);
> }
>
> static int mana_gd_setup(struct pci_dev *pdev)
> @@ -1481,9 +1634,10 @@ static int mana_gd_setup(struct pci_dev *pdev)
> mana_gd_init_registers(pdev);
> mana_smc_init(&gc->shm_channel, gc->dev, gc->shm_base);
>
> - err = mana_gd_setup_irqs(pdev);
> + err = mana_gd_setup_hwc_irqs(pdev);
> if (err) {
> - dev_err(gc->dev, "Failed to setup IRQs: %d\n", err);
> + dev_err(gc->dev, "Failed to setup IRQs for HWC creation: %d\n",
> + err);
> return err;
> }
>
> @@ -1499,6 +1653,12 @@ static int mana_gd_setup(struct pci_dev *pdev)
> if (err)
> goto destroy_hwc;
>
> + err = mana_gd_setup_remaining_irqs(pdev);
> + if (err) {
> + dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err);
> + goto destroy_hwc;
> + }
> +
> err = mana_gd_detect_devices(pdev);
> if (err)
> goto destroy_hwc;
> @@ -1575,6 +1735,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct
> pci_device_id *ent)
> gc->is_pf = mana_is_pf(pdev->device);
> gc->bar0_va = bar0_va;
> gc->dev = &pdev->dev;
> + xa_init(&gc->irq_contexts);
>
> if (gc->is_pf)
> gc->mana_pci_debugfs = debugfs_create_dir("0", mana_debugfs_root);
> diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
> index 228603bf03f2..f20d1d1ea5e8 100644
> --- a/include/net/mana/gdma.h
> +++ b/include/net/mana/gdma.h
> @@ -373,7 +373,7 @@ struct gdma_context {
> unsigned int max_num_queues;
> unsigned int max_num_msix;
> unsigned int num_msix_usable;
> - struct gdma_irq_context *irq_contexts;
> + struct xarray irq_contexts;
>
> /* L2 MTU */
> u16 adapter_mtu;
> @@ -558,12 +558,16 @@ enum {
> /* Driver can handle holes (zeros) in the device list */
> #define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11)
>
> +/* Driver supports dynamic MSI-X vector allocation */
> +#define GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT BIT(13)
> +
> #define GDMA_DRV_CAP_FLAGS1 \
> (GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
> GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
> GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \
> GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \
> - GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP)
> + GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP | \
> + GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT)
>
> #define GDMA_DRV_CAP_FLAGS2 0
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-14 4:53 ` Michael Kelley
@ 2025-05-14 16:55 ` Yury Norov
2025-05-14 17:26 ` Michael Kelley
0 siblings, 1 reply; 20+ messages in thread
From: Yury Norov @ 2025-05-14 16:55 UTC (permalink / raw)
To: Michael Kelley
Cc: Shradha Gupta, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, linux-hyperv@vger.kernel.org,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm,
Shradha Gupta
On Wed, May 14, 2025 at 04:53:34AM +0000, Michael Kelley wrote:
> > -static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > +static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> > + bool skip_first_cpu)
> > {
> > const struct cpumask *next, *prev = cpu_none_mask;
> > cpumask_var_t cpus __free(free_cpumask_var);
> > @@ -1303,9 +1304,20 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > while (weight > 0) {
> > cpumask_andnot(cpus, next, prev);
> > for_each_cpu(cpu, cpus) {
> > + /*
> > + * if the CPU sibling set is to be skipped we
> > + * just move on to the next CPUs without len--
> > + */
> > + if (unlikely(skip_first_cpu)) {
> > + skip_first_cpu = false;
> > + goto next_cpumask;
> > + }
> > +
> > if (len-- == 0)
> > goto done;
> > +
> > irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> > +next_cpumask:
> > cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> > --weight;
> > }
>
> With a little bit of reordering of the code, you could avoid the need for the "next_cpumask"
> label and goto statement. "continue" is usually cleaner than a "goto". Here's what I'm thinking:
>
> for_each_cpu(cpu, cpus) {
> cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> --weight;
cpumask_andnot() is O(N), and before it was conditional on 'len == 0',
so we didn't do that on the very last step. Your version has to do that.
Don't know how important that is for real workloads. Shradha maybe can
measure it...
>
> if (unlikely(skip_first_cpu)) {
> skip_first_cpu = false;
> continue;
> }
>
> if (len-- == 0)
> goto done;
>
> irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> }
>
> I wish there were some comments in irq_setup() explaining the overall intention of
> the algorithm. I can see how the goal is to first assign CPUs that are local to the current
> NUMA node, and then expand outward to CPUs that are further away. And you want
> to *not* assign both siblings in a hyper-threaded core.
I wrote this function, so let me step in.
The intention is described in the corresponding commit message:
Souradeep investigated that the driver performs faster if IRQs are
spread on CPUs with the following heuristics:
1. No more than one IRQ per CPU, if possible;
2. NUMA locality is the second priority;
3. Sibling dislocality is the last priority.
Let's consider this topology:
Node 0 1
Core 0 1 2 3
CPU 0 1 2 3 4 5 6 7
The most performant IRQ distribution based on the above topology
and heuristics may look like this:
IRQ Nodes Cores CPUs
0 1 0 0-1
1 1 1 2-3
2 1 0 0-1
3 1 1 2-3
4 2 2 4-5
5 2 3 6-7
6 2 2 4-5
7 2 3 6-7
> But I can't figure out what
> "weight" is trying to accomplish. Maybe this was discussed when the code first
> went in, but I can't remember now. :-(
The weight here implements the heuristic discovered by Souradeep:
NUMA locality is preferred over sibling dislocality.
The outer for_each() loop resets the weight to the actual number of
CPUs in the hop. Then the inner for_each() loop decrements it by the
number of sibling groups (cores) while assigning the first IRQ to each
group.
Now, because NUMA locality is more important, we walk the same set of
siblings again and assign the 2nd IRQ; that is implemented by the
middle while() loop. We keep doing this until the number of IRQs
assigned on this hop equals the number of CPUs in the hop
(weight == 0). Then we switch to the next hop and do the same thing.
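For reference, the current function looks roughly like this before the
skip_first_cpu change in this series (reconstructed from the diffs in
this thread; check the tree for the authoritative version):
static int irq_setup(unsigned int *irqs, unsigned int len, int node)
{
	const struct cpumask *next, *prev = cpu_none_mask;
	cpumask_var_t cpus __free(free_cpumask_var);
	int cpu, weight;
	if (!alloc_cpumask_var(&cpus, GFP_KERNEL))
		return -ENOMEM;
	rcu_read_lock();
	/* outer loop: walk NUMA hops, nearest first */
	for_each_numa_hop_mask(next, node) {
		weight = cpumask_weight_andnot(next, prev);
		/* medium loop: revisit this hop until its CPUs are used up */
		while (weight > 0) {
			cpumask_andnot(cpus, next, prev);
			/* inner loop: one IRQ per core (sibling group) */
			for_each_cpu(cpu, cpus) {
				if (len-- == 0)
					goto done;
				irq_set_affinity_and_hint(*irqs++,
						topology_sibling_cpumask(cpu));
				/* drop this core's siblings from the pass */
				cpumask_andnot(cpus, cpus,
					       topology_sibling_cpumask(cpu));
				--weight;
			}
		}
		prev = next;
	}
done:
	rcu_read_unlock();
	return 0;
}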
Hope that helps.
Thanks,
Yury
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically
2025-05-14 5:04 ` Michael Kelley
@ 2025-05-14 17:07 ` Yury Norov
2025-05-15 5:10 ` Shradha Gupta
0 siblings, 1 reply; 20+ messages in thread
From: Yury Norov @ 2025-05-14 17:07 UTC (permalink / raw)
To: Michael Kelley
Cc: Shradha Gupta, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, linux-hyperv@vger.kernel.org,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm,
Shradha Gupta
On Wed, May 14, 2025 at 05:04:03AM +0000, Michael Kelley wrote:
> From: Shradha Gupta <shradhagupta@linux.microsoft.com> Sent: Friday, May 9, 2025 3:14 AM
> >
> > Currently, the MANA driver allocates MSI-X vectors statically based on
> > MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> > up allocating more vectors than it needs. This is because, by this time
> > we do not have a HW channel and do not know how many IRQs should be
> > allocated.
> >
> > To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> > after getting the value supported by hardware, dynamically add the
> > remaining MSI-X vectors.
>
> After this patch is applied, there are two functions for setting up IRQs:
> 1. mana_gd_setup_dyn_irqs()
> 2. mana_gd_setup_irqs()
>
> #1 is about 78 lines of code and comments, while #2 is about 103 lines of
> code and comments. But the two functions have a lot of commonality,
> and that amount of commonality raises a red flag for me.
>
> Have you looked at parameterizing things so a single function can serve
> both purposes? I haven't worked through all the details, but at first
> glance it looks very feasible, and without introducing unreasonable
> messiness. Saving 70 to 80 lines of fairly duplicative code is worth a bit
> of effort.
>
> I have some other comments on the code. But if those two functions can
> be combined, I'd rather re-review the result before adding comments that
> may become irrelevant due to the restructuring.
>
> Michael
On the previous iteration I already mentioned that this patch is too big,
doesn't do exactly one thing, and should be split to become a reviewable
change. Shradha split out the irq_setup() piece, and your review proves
that splitting improves reviewability.
The rest of the change is still a mess. I agree that the functions you
mention likely duplicate each other. But overall, this patch bomb is
above my ability to review. Fortunately, I don't have to.
Thanks,
Yury
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-14 16:55 ` Yury Norov
@ 2025-05-14 17:26 ` Michael Kelley
2025-05-14 17:58 ` Yury Norov
2025-05-15 4:49 ` Shradha Gupta
0 siblings, 2 replies; 20+ messages in thread
From: Michael Kelley @ 2025-05-14 17:26 UTC (permalink / raw)
To: Yury Norov
Cc: Shradha Gupta, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, linux-hyperv@vger.kernel.org,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm,
Shradha Gupta
From: Yury Norov <yury.norov@gmail.com> Sent: Wednesday, May 14, 2025 9:55 AM
>
> On Wed, May 14, 2025 at 04:53:34AM +0000, Michael Kelley wrote:
> > > -static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > > +static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> > > + bool skip_first_cpu)
> > > {
> > > const struct cpumask *next, *prev = cpu_none_mask;
> > > cpumask_var_t cpus __free(free_cpumask_var);
> > > @@ -1303,9 +1304,20 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > > while (weight > 0) {
> > > cpumask_andnot(cpus, next, prev);
> > > for_each_cpu(cpu, cpus) {
> > > + /*
> > > + * if the CPU sibling set is to be skipped we
> > > + * just move on to the next CPUs without len--
> > > + */
> > > + if (unlikely(skip_first_cpu)) {
> > > + skip_first_cpu = false;
> > > + goto next_cpumask;
> > > + }
> > > +
> > > if (len-- == 0)
> > > goto done;
> > > +
> > > irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> > > +next_cpumask:
> > > cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> > > --weight;
> > > }
> >
> > With a little bit of reordering of the code, you could avoid the need for the "next_cpumask"
> > label and goto statement. "continue" is usually cleaner than a "goto". Here's what I'm thinking:
> >
> > for_each_cpu(cpu, cpus) {
> > cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> > --weight;
>
> cpumask_andnot() is O(N), and before it was conditional on 'len == 0',
> so we didn't do that on the very last step. Your version has to do that.
> Don't know how important that is for real workloads. Shradha maybe can
> measure it...
Yes, there's one extra invocation of cpumask_andnot(). But if the
VM has a small number of CPUs, that extra invocation is negligible.
If the VM has a large number of CPUs, we're already executing
cpumask_andnot() many times, so one extra time is also negligible.
And this whole thing is done only at device initialization time, so
it's not a hot path.
>
> >
> > if (unlikely(skip_first_cpu)) {
> > skip_first_cpu = false;
> > continue;
> > }
> >
> > if (len-- == 0)
> > goto done;
> >
> > irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> > }
> >
> > I wish there were some comments in irq_setup() explaining the overall intention of
> > the algorithm. I can see how the goal is to first assign CPUs that are local to the current
> > NUMA node, and then expand outward to CPUs that are further away. And you want
> > to *not* assign both siblings in a hyper-threaded core.
>
> I wrote this function, so let me step in.
>
> The intention is described in the corresponding commit message:
>
> Souradeep investigated that the driver performs faster if IRQs are
> spread on CPUs with the following heuristics:
>
> 1. No more than one IRQ per CPU, if possible;
> 2. NUMA locality is the second priority;
> 3. Sibling dislocality is the last priority.
>
> Let's consider this topology:
>
> Node 0 1
> Core 0 1 2 3
> CPU 0 1 2 3 4 5 6 7
>
> The most performant IRQ distribution based on the above topology
> and heuristics may look like this:
>
> IRQ Nodes Cores CPUs
> 0 1 0 0-1
> 1 1 1 2-3
> 2 1 0 0-1
> 3 1 1 2-3
> 4 2 2 4-5
> 5 2 3 6-7
> 6 2 2 4-5
> 7 2 3 6-7
>
> > But I can't figure out what
> > "weight" is trying to accomplish. Maybe this was discussed when the code first
> > went in, but I can't remember now. :-(
>
> The weight here implements the heuristic discovered by Souradeep:
> NUMA locality is preferred over sibling dislocality.
>
> The outer for_each() loop resets the weight to the actual number of
> CPUs in the hop. Then the inner for_each() loop decrements it by the
> number of sibling groups (cores) while assigning the first IRQ to each
> group.
>
> Now, because NUMA locality is more important, we walk the same set of
> siblings again and assign the 2nd IRQ; that is implemented by the
> middle while() loop. We keep doing this until the number of IRQs
> assigned on this hop equals the number of CPUs in the hop
> (weight == 0). Then we switch to the next hop and do the same thing.
>
> Hope that helps.
Yes, that helps! So the key to understanding "weight" is that
NUMA locality is preferred over sibling dislocality.
This is a great summary! All or most of it should go as a
comment describing the function and what it is trying to do.
Michael
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-14 17:26 ` Michael Kelley
@ 2025-05-14 17:58 ` Yury Norov
2025-05-15 4:51 ` Shradha Gupta
2025-05-15 4:49 ` Shradha Gupta
1 sibling, 1 reply; 20+ messages in thread
From: Yury Norov @ 2025-05-14 17:58 UTC (permalink / raw)
To: Michael Kelley
Cc: Shradha Gupta, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, linux-hyperv@vger.kernel.org,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm,
Shradha Gupta
On Wed, May 14, 2025 at 05:26:45PM +0000, Michael Kelley wrote:
> > Hope that helps.
>
> Yes, that helps! So the key to understanding "weight" is that
> NUMA locality is preferred over sibling dislocality.
>
> This is a great summary! All or most of it should go as a
> comment describing the function and what it is trying to do.
OK, please consider applying:
From abdf5cc6dabd7f433b1d1e66db86333a33a2cd15 Mon Sep 17 00:00:00 2001
From: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Date: Wed, 14 May 2025 13:45:26 -0400
Subject: [PATCH] net: mana: explain irq_setup() algorithm
Commit 91bfe210e196 ("net: mana: add a function to spread IRQs per CPUs")
added the irq_setup() function that distributes IRQs on CPUs according
to a tricky heuristic. The corresponding commit message explains the
heuristic.
Duplicate it in the source code to make it available to readers without
digging through git history. Also, add a more detailed explanation of
how the heuristic is implemented.
Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
---
.../net/ethernet/microsoft/mana/gdma_main.c | 41 +++++++++++++++++++
1 file changed, 41 insertions(+)
diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 4ffaf7588885..f9e8d4d1ba3a 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1288,6 +1288,47 @@ void mana_gd_free_res_map(struct gdma_resource *r)
r->size = 0;
}
+/*
+ * Spread on CPUs with the following heuristics:
+ *
+ * 1. No more than one IRQ per CPU, if possible;
+ * 2. NUMA locality is the second priority;
+ * 3. Sibling dislocality is the last priority.
+ *
+ * Let's consider this topology:
+ *
+ * Node 0 1
+ * Core 0 1 2 3
+ * CPU 0 1 2 3 4 5 6 7
+ *
+ * The most performant IRQ distribution based on the above topology
+ * and heuristics may look like this:
+ *
+ * IRQ Nodes Cores CPUs
+ * 0 1 0 0-1
+ * 1 1 1 2-3
+ * 2 1 0 0-1
+ * 3 1 1 2-3
+ * 4 2 2 4-5
+ * 5 2 3 6-7
+ * 6 2 2 4-5
+ * 7 2 3 6-7
+ *
+ * The heuristic is implemented as follows.
+ *
+ * The outer for_each() loop resets the 'weight' to the actual number
+ * of CPUs in the hop. Then the inner for_each() loop decrements it by
+ * the number of sibling groups (cores) while assigning the first set
+ * of IRQs to each group. IRQs 0 and 1 above are distributed this way.
+ *
+ * Now, because NUMA locality is more important, we walk the same set
+ * of siblings again and assign the 2nd set of IRQs (2 and 3); that is
+ * implemented by the middle while() loop. We keep doing this until the
+ * number of IRQs assigned on this hop equals the number of CPUs in the
+ * hop (weight == 0). Then we switch to the next hop and do the same
+ * thing.
+ */
+
static int irq_setup(unsigned int *irqs, unsigned int len, int node)
{
const struct cpumask *next, *prev = cpu_none_mask;
--
2.43.0
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-14 17:26 ` Michael Kelley
2025-05-14 17:58 ` Yury Norov
@ 2025-05-15 4:49 ` Shradha Gupta
1 sibling, 0 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-15 4:49 UTC (permalink / raw)
To: Michael Kelley
Cc: Yury Norov, Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan,
Andrew Lunn, David S. Miller, Eric Dumazet, Jakub Kicinski,
Paolo Abeni, Konstantin Taranov, Simon Horman, Leon Romanovsky,
Maxim Levitsky, Erni Sri Satya Vennela, Peter Zijlstra,
linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
linux-kernel@vger.kernel.org, Nipun Gupta, Jason Gunthorpe,
Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian, Long Li,
Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, netdev@vger.kernel.org,
linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
On Wed, May 14, 2025 at 05:26:45PM +0000, Michael Kelley wrote:
> From: Yury Norov <yury.norov@gmail.com> Sent: Wednesday, May 14, 2025 9:55 AM
> >
> > On Wed, May 14, 2025 at 04:53:34AM +0000, Michael Kelley wrote:
> > > > -static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > > > +static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> > > > + bool skip_first_cpu)
> > > > {
> > > > const struct cpumask *next, *prev = cpu_none_mask;
> > > > cpumask_var_t cpus __free(free_cpumask_var);
> > > > @@ -1303,9 +1304,20 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> > > > while (weight > 0) {
> > > > cpumask_andnot(cpus, next, prev);
> > > > for_each_cpu(cpu, cpus) {
> > > > + /*
> > > > + * if the CPU sibling set is to be skipped we
> > > > + * just move on to the next CPUs without len--
> > > > + */
> > > > + if (unlikely(skip_first_cpu)) {
> > > > + skip_first_cpu = false;
> > > > + goto next_cpumask;
> > > > + }
> > > > +
> > > > if (len-- == 0)
> > > > goto done;
> > > > +
> > > > irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> > > > +next_cpumask:
> > > > cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> > > > --weight;
> > > > }
> > >
> > > With a little bit of reordering of the code, you could avoid the need for the "next_cpumask"
> > > label and goto statement. "continue" is usually cleaner than a "goto". Here's what I'm thinking:
> > >
> > > for_each_cpu(cpu, cpus) {
> > > cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
> > > --weight;
> >
> > cpumask_andnot() is O(N), and before it was conditional on 'len == 0',
> > so we didn't do that on the very last step. Your version has to do that.
> > Don't know how important that is for real workloads. Shradha maybe can
> > measure it...
>
> Yes, there's one extra invocation of cpumask_andnot(). But if the
> VM has a small number of CPUs, that extra invocation is negligible.
> If the VM has a large number of CPUs, we're already executing
> cpumask_andnot() many times, so one extra time is also negligible.
> And this whole thing is done only at device initialization time, so
> it's not a hot path.
>
Hi Michael, Yury,
That's right, the overhead is negligible. Tested with some common
workloads. I will change this in the next version.
Shradha.
> >
> > >
> > > if (unlikely(skip_first_cpu)) {
> > > skip_first_cpu = false;
> > > continue;
> > > }
> > >
> > > if (len-- == 0)
> > > goto done;
> > >
> > > irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
> > > }
> > >
> > > I wish there were some comments in irq_setup() explaining the overall intention of
> > > the algorithm. I can see how the goal is to first assign CPUs that are local to the current
> > > NUMA node, and then expand outward to CPUs that are further away. And you want
> > > to *not* assign both siblings in a hyper-threaded core.
> >
> > I wrote this function, so let me step in.
> >
> > The intention is described in the corresponding commit message:
> >
> > Souradeep investigated that the driver performs faster if IRQs are
> > spread on CPUs with the following heuristics:
> >
> > 1. No more than one IRQ per CPU, if possible;
> > 2. NUMA locality is the second priority;
> > 3. Sibling dislocality is the last priority.
> >
> > Let's consider this topology:
> >
> > Node 0 1
> > Core 0 1 2 3
> > CPU 0 1 2 3 4 5 6 7
> >
> > The most performant IRQ distribution based on the above topology
> > and heuristics may look like this:
> >
> > IRQ Nodes Cores CPUs
> > 0 1 0 0-1
> > 1 1 1 2-3
> > 2 1 0 0-1
> > 3 1 1 2-3
> > 4 2 2 4-5
> > 5 2 3 6-7
> > 6 2 2 4-5
> > 7 2 3 6-7
> >
> > > But I can't figure out what
> > > "weight" is trying to accomplish. Maybe this was discussed when the code first
> > > went in, but I can't remember now. :-(
> >
> > The weight here implements the heuristic discovered by Souradeep:
> > NUMA locality is preferred over sibling dislocality.
> >
> > The outer for_each() loop resets the weight to the actual number of
> > CPUs in the hop. Then the inner for_each() loop decrements it by the
> > number of sibling groups (cores) while assigning the first IRQ to each
> > group.
> >
> > Now, because NUMA locality is more important, we walk the same set of
> > siblings again and assign the 2nd IRQ; that is implemented by the
> > middle while() loop. We keep doing this until the number of IRQs
> > assigned on this hop equals the number of CPUs in the hop
> > (weight == 0). Then we switch to the next hop and do the same thing.
> >
> > Hope that helps.
>
> Yes, that helps! So the key to understanding "weight" is that
> NUMA locality is preferred over sibling dislocality.
>
> This is a great summary! All or most of it should go as a
> comment describing the function and what it is trying to do.
>
> Michael
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity
2025-05-14 17:58 ` Yury Norov
@ 2025-05-15 4:51 ` Shradha Gupta
0 siblings, 0 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-15 4:51 UTC (permalink / raw)
To: Yury Norov
Cc: Michael Kelley, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, linux-hyperv@vger.kernel.org,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm,
Shradha Gupta
On Wed, May 14, 2025 at 01:58:40PM -0400, Yury Norov wrote:
> On Wed, May 14, 2025 at 05:26:45PM +0000, Michael Kelley wrote:
> > > Hope that helps.
> >
> > Yes, that helps! So the key to understanding "weight" is that
> > NUMA locality is preferred over sibling dislocality.
> >
> > This is a great summary! All or most of it should go as a
> > comment describing the function and what it is trying to do.
>
> OK, please consider applying:
>
> >From abdf5cc6dabd7f433b1d1e66db86333a33a2cd15 Mon Sep 17 00:00:00 2001
> From: Yury Norov [NVIDIA] <yury.norov@gmail.com>
> Date: Wed, 14 May 2025 13:45:26 -0400
> Subject: [PATCH] net: mana: explain irq_setup() algorithm
>
> Commit 91bfe210e196 ("net: mana: add a function to spread IRQs per CPUs")
> added the irq_setup() function that distributes IRQs on CPUs according
> to a tricky heuristic. The corresponding commit message explains the
> heuristic.
>
> Duplicate it in the source code to make it available to readers without
> digging through git history. Also, add a more detailed explanation of
> how the heuristic is implemented.
>
> Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
> ---
> .../net/ethernet/microsoft/mana/gdma_main.c | 41 +++++++++++++++++++
> 1 file changed, 41 insertions(+)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 4ffaf7588885..f9e8d4d1ba3a 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -1288,6 +1288,47 @@ void mana_gd_free_res_map(struct gdma_resource *r)
> r->size = 0;
> }
>
> +/*
> + * Spread on CPUs with the following heuristics:
> + *
> + * 1. No more than one IRQ per CPU, if possible;
> + * 2. NUMA locality is the second priority;
> + * 3. Sibling dislocality is the last priority.
> + *
> + * Let's consider this topology:
> + *
> + * Node 0 1
> + * Core 0 1 2 3
> + * CPU 0 1 2 3 4 5 6 7
> + *
> + * The most performant IRQ distribution based on the above topology
> + * and heuristics may look like this:
> + *
> + * IRQ Nodes Cores CPUs
> + * 0 1 0 0-1
> + * 1 1 1 2-3
> + * 2 1 0 0-1
> + * 3 1 1 2-3
> + * 4 2 2 4-5
> + * 5 2 3 6-7
> + * 6 2 2 4-5
> + * 7 2 3 6-7
> + *
> + * The heuristic is implemented as follows.
> + *
> + * The outer for_each() loop resets the 'weight' to the actual number
> + * of CPUs in the hop. Then the inner for_each() loop decrements it by
> + * the number of sibling groups (cores) while assigning the first set
> + * of IRQs to each group. IRQs 0 and 1 above are distributed this way.
> + *
> + * Now, because NUMA locality is more important, we walk the same set
> + * of siblings again and assign the 2nd set of IRQs (2 and 3); that is
> + * implemented by the middle while() loop. We keep doing this until the
> + * number of IRQs assigned on this hop equals the number of CPUs in the
> + * hop (weight == 0). Then we switch to the next hop and do the same
> + * thing.
> + */
> +
> static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> {
> const struct cpumask *next, *prev = cpu_none_mask;
Thank you Yury,
I will include this patch in the patchset with the next version.
> --
> 2.43.0
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically
2025-05-14 17:07 ` Yury Norov
@ 2025-05-15 5:10 ` Shradha Gupta
0 siblings, 0 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-15 5:10 UTC (permalink / raw)
To: Yury Norov
Cc: Michael Kelley, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra, linux-hyperv@vger.kernel.org,
linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org,
Nipun Gupta, Jason Gunthorpe, Jonathan Cameron,
Anna-Maria Behnsen, Kevin Tian, Long Li, Thomas Gleixner,
Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
Krzysztof Wilczyński, Lorenzo Pieralisi,
netdev@vger.kernel.org, linux-rdma@vger.kernel.org, Paul Rosswurm,
Shradha Gupta
On Wed, May 14, 2025 at 01:07:47PM -0400, Yury Norov wrote:
> On Wed, May 14, 2025 at 05:04:03AM +0000, Michael Kelley wrote:
> > From: Shradha Gupta <shradhagupta@linux.microsoft.com> Sent: Friday, May 9, 2025 3:14 AM
> > >
> > > Currently, the MANA driver allocates MSI-X vectors statically based on
> > > MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> > > up allocating more vectors than it needs. This is because, by this time
> > > we do not have a HW channel and do not know how many IRQs should be
> > > allocated.
> > >
> > > To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> > > after getting the value supported by hardware, dynamically add the
> > > remaining MSI-X vectors.
> >
> > After this patch is applied, there are two functions for setting up IRQs:
> > 1. mana_gd_setup_dyn_irqs()
> > 2. mana_gd_setup_irqs()
> >
> > #1 is about 78 lines of code and comments, while #2 is about 103 lines of
> > code and comments. But the two functions have a lot of commonality,
> > and that amount of commonality raises a red flag for me.
> >
> > Have you looked at parameterizing things so a single function can serve
> > both purposes? I haven't worked through all the details, but at first
> > glance it looks very feasible, and without introducing unreasonable
> > messiness. Saving 70 to 80 lines of fairly duplicative code is worth a bit
> > of effort.
> >
> > I have some other comments on the code. But if those two functions can
> > be combined, I'd rather re-review the result before adding comments that
> > may become irrelevant due to the restructuring.
> >
> > Michael
>
> On the previous iteration I already mentioned that this patch is too big,
> doesn't do exactly one thing, and should be split to become a reviewable
> change. Shradha split out the irq_setup() piece, and your review proves
> that splitting improves reviewability.
>
> The rest of the change is still a mess. I agree that the functions you
> mention likely duplicate each other. But overall, this patch bomb is
> above my ability to review. Fortunately, I don't have to.
>
> Thanks,
> Yury
Hi Michael, Yury,
My intention in keeping all these changes to the MANA initialization code
together in one patch was to avoid multiple patches touching the same
functions throughout the patchset. I felt that would become error-prone
and hamper reviewability.
Regarding combining mana_gd_setup_dyn_irqs() and mana_gd_setup_irqs():
we initially went with a single function taking multiple parameters to
prevent code duplication. But it made the function very hard to read and
caused a lot of confusion while reviewing it.
On the surface it would look like just two conditions to be dealt with
in the function, but it actually has to handle many more cases, like:
1. static allocation of all IRQs (IRQs < num_vCPUs)
2. static allocation of all IRQs (IRQs > num_vCPUs)
3. static allocation of 1 HWC IRQ
4. dynamic allocation of remaining IRQs (IRQs < num_vCPUs)
5. dynamic allocation of remaining IRQs (IRQs > num_vCPUs)
Therefore, to keep the code more readable, we decided to separate the
functionalities.
Thanks,
Shradha.
^ permalink raw reply [flat|nested] 20+ messages in thread
* RE: [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically
2025-05-09 10:13 ` [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
2025-05-14 5:04 ` Michael Kelley
@ 2025-05-21 16:27 ` Michael Kelley
2025-05-22 9:17 ` Shradha Gupta
1 sibling, 1 reply; 20+ messages in thread
From: Michael Kelley @ 2025-05-21 16:27 UTC (permalink / raw)
To: Shradha Gupta, Dexuan Cui, Wei Liu, Haiyang Zhang,
K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
Peter Zijlstra
Cc: linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
linux-kernel@vger.kernel.org, Nipun Gupta, Yury Norov,
Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, netdev@vger.kernel.org,
linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
From: Shradha Gupta <shradhagupta@linux.microsoft.com> Sent: Friday, May 9, 2025 3:14 AM
>
> Currently, the MANA driver allocates MSI-X vectors statically based on
> MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> up allocating more vectors than it needs. This is because, by this time
> we do not have a HW channel and do not know how many IRQs should be
> allocated.
>
> To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> after getting the value supported by hardware, dynamically add the
> remaining MSI-X vectors.
>
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
> Changes in v3:
> * implemented irq_contexts as xarrays rather than list
> * split the patch to create a preparation patch around irq_setup()
> * add log when IRQ allocation/setup for remaining IRQs fails
> ---
> Changes in v2:
> * Use string 'MSI-X vectors' instead of 'pci vectors'
> * make skip-cpu a bool instead of int
> * rearrange the comment around skip_cpu variable appropriately
> * update the capability bit for driver indicating dynamic IRQ allocation
> * enforced max line length to 80
> * enforced RCT convention
> * initialized gic to NULL, for when there is a possibility of gic
> not being populated correctly
> ---
> .../net/ethernet/microsoft/mana/gdma_main.c | 248 +++++++++++++++---
> include/net/mana/gdma.h | 8 +-
> 2 files changed, 211 insertions(+), 45 deletions(-)
>
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 2de42ce43373..f07cebffc30d 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -6,6 +6,8 @@
> #include <linux/pci.h>
> #include <linux/utsname.h>
> #include <linux/version.h>
> +#include <linux/msi.h>
> +#include <linux/irqdomain.h>
>
> #include <net/mana/mana.h>
>
> @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
> return err ? err : -EPROTO;
> }
>
> - if (gc->num_msix_usable > resp.max_msix)
> - gc->num_msix_usable = resp.max_msix;
> + if (!pci_msix_can_alloc_dyn(pdev)) {
> + if (gc->num_msix_usable > resp.max_msix)
> + gc->num_msix_usable = resp.max_msix;
> + } else {
> + /* If dynamic allocation is enabled we have already allocated
> + * hwc msi
> + */
> + gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
> + }
>
> if (gc->num_msix_usable <= 1)
> return -ENOSPC;
> @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
> }
>
> queue->eq.msix_index = msi_index;
> - gic = &gc->irq_contexts[msi_index];
> + gic = xa_load(&gc->irq_contexts, msi_index);
> + if (!gic)
> + return -EINVAL;
>
> spin_lock_irqsave(&gic->lock, flags);
> list_add_rcu(&queue->entry, &gic->eq_list);
> @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
> if (WARN_ON(msix_index >= gc->num_msix_usable))
> return;
>
> - gic = &gc->irq_contexts[msix_index];
> + gic = xa_load(&gc->irq_contexts, msix_index);
> + if (!gic)
> + return;
If xa_load() doesn't return a valid gic, it seems like that would warrant a
WARN_ON(), like the above case where the msix_index is out of range.
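e.g., something like:
	gic = xa_load(&gc->irq_contexts, msix_index);
	if (WARN_ON(!gic))
		return;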
> +
> spin_lock_irqsave(&gic->lock, flags);
> list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
> if (queue == eq) {
> @@ -1329,29 +1343,96 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> return 0;
> }
>
> -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
> {
> struct gdma_context *gc = pci_get_drvdata(pdev);
> - unsigned int max_queues_per_port;
> struct gdma_irq_context *gic;
> - unsigned int max_irqs, cpu;
> - int start_irq_index = 1;
> - int nvec, *irqs, irq;
> + bool skip_first_cpu = false;
> int err, i = 0, j;
Initializing "i" to 0 is superfluous. The "for" loop below does it.
> + int *irqs, irq;
>
> cpus_read_lock();
> - max_queues_per_port = num_online_cpus();
> - if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> - max_queues_per_port = MANA_MAX_NUM_QUEUES;
>
> - /* Need 1 interrupt for the Hardware communication Channel (HWC) */
> - max_irqs = max_queues_per_port + 1;
> + irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> + if (!irqs) {
> + err = -ENOMEM;
> + goto free_irq_vector;
> + }
>
> - nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
> - if (nvec < 0) {
> - cpus_read_unlock();
> - return nvec;
> + for (i = 0; i < nvec; i++) {
> + gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
kcalloc() with a constant 1 first argument is a bit unusual. Just use kzalloc() since
there's no array here?
> + if (!gic) {
> + err = -ENOMEM;
> + goto free_irq;
> + }
> + gic->handler = mana_gd_process_eq_events;
> + INIT_LIST_HEAD(&gic->eq_list);
> + spin_lock_init(&gic->lock);
> +
> + snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> + i, pci_name(pdev));
> +
> + /* one pci vector is already allocated for HWC */
> + irqs[i] = pci_irq_vector(pdev, i + 1);
> + if (irqs[i] < 0) {
> + err = irqs[i];
> + goto free_current_gic;
> + }
> +
> + err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
> + if (err)
> + goto free_current_gic;
> +
> + xa_store(&gc->irq_contexts, i + 1, gic, GFP_KERNEL);
> }
> +
> + /*
> + * When calling irq_setup() for dynamically added IRQs, if number of
> + * CPUs is more than or equal to allocated MSI-X, we need to skip the
> + * first CPU sibling group since they are already affinitized to HWC IRQ
> + */
> + if (gc->num_msix_usable <= num_online_cpus())
> + skip_first_cpu = true;
> +
> + err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
> + if (err)
> + goto free_irq;
> +
> + cpus_read_unlock();
> + kfree(irqs);
> + return 0;
> +
> +free_current_gic:
> + kfree(gic);
> +free_irq:
In the error case, this label is reached with "i" in two possible
states. Case 1: It might be the index of the entry that failed due to
the "goto free_current_gic" statements. Case 2: It might be the
index of one entry past all the successfully requested irqs, when the
failure occurs on irq_setup() and the code does "goto free_irq".
> + for (j = i; j >= 0; j--) {
So the "for" loop starts with "j" set to an index that doesn't
exist (in Case 2 above), or an index that is only partially
complete (Case 1 above).
And actually, local variable "j" isn't needed for this loop.
It could just count down using "i".
> + irq = pci_irq_vector(pdev, j);
This seems to be looking up the wrong irq vector. In the main
loop earlier, the index to pci_irq_vector() is "i + 1" but there's
no "+ 1" here.
> + gic = xa_load(&gc->irq_contexts, j);
> + if (!gic)
> + continue;
So evidently it is expected that this xa_load() will fail
the first time through this "j" loop. In Case 1, the xa_store()
was never done, and in Case 2, the index starts out one
too big.
> +
> + irq_update_affinity_hint(irq, NULL);
> + free_irq(irq, gic);
> + xa_erase(&gc->irq_contexts, j);
> + kfree(gic);
> + }
Except for the wrong index to pci_irq_vector(), I think this works,
but it's a bit bizarre. More natural would be to initialize "j" to
"i - 1" so that the first iteration through the loop isn't degenerate.
In that case, all the calls to xa_load() should succeed, and you
might put a WARN_ON() if there's a failure.
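Something along these lines (untested sketch; note the "+ 1" to mirror
the vector index used in the setup loop above):
	while (--i >= 0) {
		irq = pci_irq_vector(pdev, i + 1);
		gic = xa_load(&gc->irq_contexts, i + 1);
		if (WARN_ON(!gic))
			continue;
		irq_update_affinity_hint(irq, NULL);
		free_irq(irq, gic);
		xa_erase(&gc->irq_contexts, i + 1);
		kfree(gic);
	}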
> + kfree(irqs);
> +free_irq_vector:
> + cpus_read_unlock();
> + return err;
> +}
> +
> +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + struct gdma_irq_context *gic;
> + int start_irq_index = 1;
> + unsigned int cpu;
> + int *irqs, irq;
> + int err, i = 0, j;
Initializing "i" to 0 is superfluous.
> +
> + cpus_read_lock();
> +
> if (nvec <= num_online_cpus())
> start_irq_index = 0;
>
> @@ -1361,15 +1442,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> goto free_irq_vector;
> }
>
> - gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
> - GFP_KERNEL);
> - if (!gc->irq_contexts) {
> - err = -ENOMEM;
> - goto free_irq_array;
> - }
> -
> for (i = 0; i < nvec; i++) {
> - gic = &gc->irq_contexts[i];
> + gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
kcalloc() with a constant 1 first argument is a bit unusual. Just use kzalloc() since
there's no array here?
> + if (!gic) {
> + err = -ENOMEM;
> + goto free_irq;
> + }
> +
> gic->handler = mana_gd_process_eq_events;
> INIT_LIST_HEAD(&gic->eq_list);
> spin_lock_init(&gic->lock);
> @@ -1384,13 +1463,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> irq = pci_irq_vector(pdev, i);
> if (irq < 0) {
> err = irq;
> - goto free_irq;
> + goto free_current_gic;
> }
>
> if (!i) {
> err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
> if (err)
> - goto free_irq;
> + goto free_current_gic;
>
> /* If number of IRQ is one extra than number of online CPUs,
> * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> @@ -1408,39 +1487,110 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> }
> } else {
> irqs[i - start_irq_index] = irq;
> - err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
> - gic->name, gic);
> + err = request_irq(irqs[i - start_irq_index],
> + mana_gd_intr, 0, gic->name, gic);
> if (err)
> - goto free_irq;
> + goto free_current_gic;
> }
> +
> + xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> }
FWIW, I think all this logic around "start_irq_index" could be simplified,
though I haven't worked through the details. If it would simplify the code,
it would be fine to allocate the "irqs" array with one extra entry that is unused
if IRQ0 and IRQ1 use the same CPUs.
>
> err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
> if (err)
> goto free_irq;
>
> - gc->max_num_msix = nvec;
> - gc->num_msix_usable = nvec;
> cpus_read_unlock();
> kfree(irqs);
> return 0;
>
> +free_current_gic:
> + kfree(gic);
> free_irq:
> for (j = i - 1; j >= 0; j--) {
In this case j is initialized to i - 1, which is what I expected in
the previous case.
Again, this loop could just countdown using "i" instead of
introducing "j".
> irq = pci_irq_vector(pdev, j);
> - gic = &gc->irq_contexts[j];
> + gic = xa_load(&gc->irq_contexts, j);
> + if (!gic)
> + continue;
Failure to get a valid gic should never happen, so perhaps a WARN_ON()
is appropriate.
>
> irq_update_affinity_hint(irq, NULL);
> free_irq(irq, gic);
> + xa_erase(&gc->irq_contexts, j);
> + kfree(gic);
> }
>
> - kfree(gc->irq_contexts);
> - gc->irq_contexts = NULL;
> -free_irq_array:
> kfree(irqs);
> free_irq_vector:
> + xa_destroy(&gc->irq_contexts);
This seems like the wrong place to be doing xa_destroy(). It leads
to inconsistencies. For example, if mana_gd_setup_hwc_irqs()
fails, it may have failed before calling mana_gd_setup_irqs(), in
which case the xa_destroy() is not done. Or if the failure occurred here
in mana_gd_setup_irqs(), then xa_destroy() will have been done.
Ideally, the xa_destroy() for error cases could be done in
mana_gd_probe() where the xa_init() is done so that the calls match
up, and of course in mana_gd_remove() when the device goes away
entirely.
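Roughly like this (untested sketch; "destroy_xa" is a made-up label name
and the rest of the probe setup/unwind is elided):
	static int mana_gd_probe(struct pci_dev *pdev,
				 const struct pci_device_id *ent)
	{
		/* ... existing setup ... */
		xa_init(&gc->irq_contexts);
		err = mana_gd_setup(pdev);
		if (err)
			goto destroy_xa;
		return 0;
	destroy_xa:
		xa_destroy(&gc->irq_contexts);	/* pairs with the xa_init() above */
		/* ... remaining unwind ... */
		return err;
	}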
> cpus_read_unlock();
> - pci_free_irq_vectors(pdev);
> + return err;
> +}
> +
> +static int mana_gd_setup_hwc_irqs(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + unsigned int max_irqs, min_irqs;
> + int max_queues_per_port;
> + int nvec, err;
> +
> + if (pci_msix_can_alloc_dyn(pdev)) {
> + max_irqs = 1;
> + min_irqs = 1;
> + } else {
> + max_queues_per_port = num_online_cpus();
> + if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> + max_queues_per_port = MANA_MAX_NUM_QUEUES;
> + /* Need 1 interrupt for HWC */
> + max_irqs = max_queues_per_port + 1;
This code is simply being copied from existing code, but it would be nicer to
code it as:
max_irqs = min(num_online_cpus(), MANA_MAX_NUM_QUEUES) + 1;
Explicitly using the "min" function reduces the cognitive effort to parse
the "if" statement and figure out that it is picking the minimum of the two
values. And the local variable max_queues_per_port can be dropped, but
keep the comment :-)
> + min_irqs = 2;
> + }
> +
> + nvec = pci_alloc_irq_vectors(pdev, min_irqs, max_irqs, PCI_IRQ_MSIX);
> + if (nvec < 0)
> + return nvec;
> +
> + err = mana_gd_setup_irqs(pdev, nvec);
> + if (err) {
> + pci_free_irq_vectors(pdev);
> + return err;
> + }
> +
> + gc->num_msix_usable = nvec;
> + gc->max_num_msix = nvec;
> +
> + return err;
"err" should always be zero at this point, so could do "return 0"
> +}
> +
> +static int mana_gd_setup_remaining_irqs(struct pci_dev *pdev)
> +{
> + struct gdma_context *gc = pci_get_drvdata(pdev);
> + int max_irqs, i, err = 0;
> + struct msi_map irq_map;
> +
> + if (!pci_msix_can_alloc_dyn(pdev))
> +		/* remaining IRQs are already allocated with HWC IRQ */
> + return 0;
> +
> +	/* allocate only remaining IRQs */
> + max_irqs = gc->num_msix_usable - 1;
> +
> + for (i = 1; i <= max_irqs; i++) {
> + irq_map = pci_msix_alloc_irq_at(pdev, i, NULL);
> + if (!irq_map.virq) {
> + err = irq_map.index;
> + /* caller will handle cleaning up all allocated
> + * irqs, after HWC is destroyed
> + */
> + return err;
> + }
> + }
> +
> + err = mana_gd_setup_dyn_irqs(pdev, max_irqs);
> + if (err)
> + return err;
> +
> + gc->max_num_msix = gc->max_num_msix + max_irqs;
> +
> return err;
Again, err must always be zero here.
> }
>
> @@ -1458,19 +1608,22 @@ static void mana_gd_remove_irqs(struct pci_dev *pdev)
> if (irq < 0)
> continue;
>
> - gic = &gc->irq_contexts[i];
> + gic = xa_load(&gc->irq_contexts, i);
> + if (!gic)
> + continue;
>
> /* Need to clear the hint before free_irq */
> irq_update_affinity_hint(irq, NULL);
> free_irq(irq, gic);
> + xa_erase(&gc->irq_contexts, i);
> + kfree(gic);
> }
>
> pci_free_irq_vectors(pdev);
>
> gc->max_num_msix = 0;
> gc->num_msix_usable = 0;
> - kfree(gc->irq_contexts);
> - gc->irq_contexts = NULL;
> + xa_destroy(&gc->irq_contexts);
As noted above, I'm doubtful about this being the right place to do
xa_destroy().
> }
>
> static int mana_gd_setup(struct pci_dev *pdev)
> @@ -1481,9 +1634,10 @@ static int mana_gd_setup(struct pci_dev *pdev)
> mana_gd_init_registers(pdev);
> mana_smc_init(&gc->shm_channel, gc->dev, gc->shm_base);
>
> - err = mana_gd_setup_irqs(pdev);
> + err = mana_gd_setup_hwc_irqs(pdev);
> if (err) {
> - dev_err(gc->dev, "Failed to setup IRQs: %d\n", err);
> + dev_err(gc->dev, "Failed to setup IRQs for HWC creation: %d\n",
> + err);
> return err;
> }
>
> @@ -1499,6 +1653,12 @@ static int mana_gd_setup(struct pci_dev *pdev)
> if (err)
> goto destroy_hwc;
>
> + err = mana_gd_setup_remaining_irqs(pdev);
> + if (err) {
> +		dev_err(gc->dev, "Failed to setup remaining IRQs: %d\n", err);
> + goto destroy_hwc;
> + }
> +
> err = mana_gd_detect_devices(pdev);
> if (err)
> goto destroy_hwc;
> @@ -1575,6 +1735,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct
> pci_device_id *ent)
> gc->is_pf = mana_is_pf(pdev->device);
> gc->bar0_va = bar0_va;
> gc->dev = &pdev->dev;
> + xa_init(&gc->irq_contexts);
>
> if (gc->is_pf)
> gc->mana_pci_debugfs = debugfs_create_dir("0", mana_debugfs_root);
> diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
> index 228603bf03f2..f20d1d1ea5e8 100644
> --- a/include/net/mana/gdma.h
> +++ b/include/net/mana/gdma.h
> @@ -373,7 +373,7 @@ struct gdma_context {
> unsigned int max_num_queues;
> unsigned int max_num_msix;
> unsigned int num_msix_usable;
> - struct gdma_irq_context *irq_contexts;
> + struct xarray irq_contexts;
>
> /* L2 MTU */
> u16 adapter_mtu;
> @@ -558,12 +558,16 @@ enum {
> /* Driver can handle holes (zeros) in the device list */
> #define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11)
>
> +/* Driver supports dynamic MSI-X vector allocation */
> +#define GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT BIT(13)
> +
> #define GDMA_DRV_CAP_FLAGS1 \
> (GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
> GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
> GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \
> GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \
> - GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP)
> + GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP | \
> + GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT)
>
> #define GDMA_DRV_CAP_FLAGS2 0
>
> --
> 2.34.1
>
^ permalink raw reply [flat|nested] 20+ messages in thread
* Re: [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically
2025-05-21 16:27 ` Michael Kelley
@ 2025-05-22 9:17 ` Shradha Gupta
0 siblings, 0 replies; 20+ messages in thread
From: Shradha Gupta @ 2025-05-22 9:17 UTC (permalink / raw)
To: Michael Kelley
Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
Erni Sri Satya Vennela, Peter Zijlstra,
linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org,
linux-kernel@vger.kernel.org, Nipun Gupta, Yury Norov,
Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
Manivannan Sadhasivam, Krzysztof Wilczyński,
Lorenzo Pieralisi, netdev@vger.kernel.org,
linux-rdma@vger.kernel.org, Paul Rosswurm, Shradha Gupta
On Wed, May 21, 2025 at 04:27:04PM +0000, Michael Kelley wrote:
> From: Shradha Gupta <shradhagupta@linux.microsoft.com> Sent: Friday, May 9, 2025 3:14 AM
> >
> > Currently, the MANA driver allocates MSI-X vectors statically based on
> > MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> > up allocating more vectors than it needs. This is because, by this time
> > we do not have a HW channel and do not know how many IRQs should be
> > allocated.
> >
> > To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> > after getting the value supported by hardware, dynamically add the
> > remaining MSI-X vectors.
> >
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> > Changes in v3:
> > * implemented irq_contexts as xarrays rather than list
> > * split the patch to create a preparation patch around irq_setup()
> > * add log when IRQ allocation/setup for remaining IRQs fails
> > ---
> > Changes in v2:
> > * Use string 'MSI-X vectors' instead of 'pci vectors'
> > * make skip-cpu a bool instead of int
> > * rearrange the comment around skip_cpu variable appropriately
> > * update the capability bit for driver indicating dynamic IRQ allocation
> > * enforced max line length to 80
> > * enforced RCT convention
> > * initialized gic to NULL, for when there is a possibility of gic
> > not being populated correctly
> > ---
> > .../net/ethernet/microsoft/mana/gdma_main.c | 248 +++++++++++++++---
> > include/net/mana/gdma.h | 8 +-
> > 2 files changed, 211 insertions(+), 45 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > index 2de42ce43373..f07cebffc30d 100644
> > --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > @@ -6,6 +6,8 @@
> > #include <linux/pci.h>
> > #include <linux/utsname.h>
> > #include <linux/version.h>
> > +#include <linux/msi.h>
> > +#include <linux/irqdomain.h>
> >
> > #include <net/mana/mana.h>
> >
> > @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
> > return err ? err : -EPROTO;
> > }
> >
> > - if (gc->num_msix_usable > resp.max_msix)
> > - gc->num_msix_usable = resp.max_msix;
> > + if (!pci_msix_can_alloc_dyn(pdev)) {
> > + if (gc->num_msix_usable > resp.max_msix)
> > + gc->num_msix_usable = resp.max_msix;
> > + } else {
> > + /* If dynamic allocation is enabled we have already allocated
> > + * hwc msi
> > + */
> > + gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
> > + }
> >
> > if (gc->num_msix_usable <= 1)
> > return -ENOSPC;
> > @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
> > }
> >
> > queue->eq.msix_index = msi_index;
> > - gic = &gc->irq_contexts[msi_index];
> > + gic = xa_load(&gc->irq_contexts, msi_index);
> > + if (!gic)
> > + return -EINVAL;
> >
> > spin_lock_irqsave(&gic->lock, flags);
> > list_add_rcu(&queue->entry, &gic->eq_list);
> > @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
> > if (WARN_ON(msix_index >= gc->num_msix_usable))
> > return;
> >
> > - gic = &gc->irq_contexts[msix_index];
> > + gic = xa_load(&gc->irq_contexts, msix_index);
> > + if (!gic)
> > + return;
>
> If xa_load() doesn't return a valid gic, it seems like that would warrant a
> WARN_ON(), like the above case where the msix_index is out of range.
>
That makes sense, I will add this change
> > +
> > spin_lock_irqsave(&gic->lock, flags);
> > list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
> > if (queue == eq) {
> > @@ -1329,29 +1343,96 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> > return 0;
> > }
> >
> > -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> > +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
> > {
> > struct gdma_context *gc = pci_get_drvdata(pdev);
> > - unsigned int max_queues_per_port;
> > struct gdma_irq_context *gic;
> > - unsigned int max_irqs, cpu;
> > - int start_irq_index = 1;
> > - int nvec, *irqs, irq;
> > + bool skip_first_cpu = false;
> > int err, i = 0, j;
>
> Initializing "i" to 0 is superfluous. The "for" loop below does it.
>
noted
> > + int *irqs, irq;
> >
> > cpus_read_lock();
> > - max_queues_per_port = num_online_cpus();
> > - if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> > - max_queues_per_port = MANA_MAX_NUM_QUEUES;
> >
> > - /* Need 1 interrupt for the Hardware communication Channel (HWC) */
> > - max_irqs = max_queues_per_port + 1;
> > + irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> > + if (!irqs) {
> > + err = -ENOMEM;
> > + goto free_irq_vector;
> > + }
> >
> > - nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
> > - if (nvec < 0) {
> > - cpus_read_unlock();
> > - return nvec;
> > + for (i = 0; i < nvec; i++) {
> > + gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
>
> kcalloc() with a constant 1 first argument is a bit unusual. Just use kzalloc() since
> there's no array here?
>
sure, will make this change
> > + if (!gic) {
> > + err = -ENOMEM;
> > + goto free_irq;
> > + }
> > + gic->handler = mana_gd_process_eq_events;
> > + INIT_LIST_HEAD(&gic->eq_list);
> > + spin_lock_init(&gic->lock);
> > +
> > + snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> > + i, pci_name(pdev));
> > +
> > + /* one pci vector is already allocated for HWC */
> > + irqs[i] = pci_irq_vector(pdev, i + 1);
> > + if (irqs[i] < 0) {
> > + err = irqs[i];
> > + goto free_current_gic;
> > + }
> > +
> > + err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
> > + if (err)
> > + goto free_current_gic;
> > +
> > + xa_store(&gc->irq_contexts, i + 1, gic, GFP_KERNEL);
> > }
> > +
> > + /*
> > + * When calling irq_setup() for dynamically added IRQs, if number of
> > + * CPUs is more than or equal to allocated MSI-X, we need to skip the
> > + * first CPU sibling group since they are already affinitized to HWC IRQ
> > + */
> > + if (gc->num_msix_usable <= num_online_cpus())
> > + skip_first_cpu = true;
> > +
> > + err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
> > + if (err)
> > + goto free_irq;
> > +
> > + cpus_read_unlock();
> > + kfree(irqs);
> > + return 0;
> > +
> > +free_current_gic:
> > + kfree(gic);
> > +free_irq:
>
> In the error case, this label is reached with "i" in two possible
> states. Case 1: It might be the index of the entry that failed due to
> the "goto free_current_gic" statements. Case 2: It might be the
> index of one entry past all the successfully requested irqs, when the
> failure occurs on irq_setup() and the code does "goto free_irq".
>
> > + for (j = i; j >= 0; j--) {
>
> So the "for" loop starts with "j" set to an index that doesn't
> exist (in Case 2 above), or an index that is only partially
> complete (Case 1 above).
>
> And actually, local variable "j" isn't needed for this loop.
> It could just count down using "i".
>
> > + irq = pci_irq_vector(pdev, j);
>
> This seems to be looking up the wrong irq vector. In the main
> loop earlier, the index to pci_irq_vector() is "i + 1" but there's
> no "+ 1" here.
>
> > + gic = xa_load(&gc->irq_contexts, j);
> > + if (!gic)
> > + continue;
>
> So evidently it is expected that this xa_load() will fail
> the first time through this "j" loop. In Case 1, the xa_store()
> was never done, and in Case 2, the index starts out one
> too big.
>
> > +
> > + irq_update_affinity_hint(irq, NULL);
> > + free_irq(irq, gic);
> > + xa_erase(&gc->irq_contexts, j);
> > + kfree(gic);
> > + }
>
> Except for the wrong index to pci_irq_vector(), I think this works,
> but it's a bit bizarre. More natural would be to initialize "j" to
> "i - 1" so that the first iteration through the loop isn't degenerate.
> In that case, all the calls to xa_load() should succeed, and you
> might put a WARN_ON() if there's a failure.
>
Okay, I think I get your point. Let me try to make this error handling
more intuitive and straightforward.
> > + kfree(irqs);
> > +free_irq_vector:
> > + cpus_read_unlock();
> > + return err;
> > +}
> > +
> > +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> > +{
> > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > + struct gdma_irq_context *gic;
> > + int start_irq_index = 1;
> > + unsigned int cpu;
> > + int *irqs, irq;
> > + int err, i = 0, j;
>
> Initializing "i" to 0 is superfluous.
>
Noted
> > +
> > + cpus_read_lock();
> > +
> > if (nvec <= num_online_cpus())
> > start_irq_index = 0;
> >
> > @@ -1361,15 +1442,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> > goto free_irq_vector;
> > }
> >
> > - gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
> > - GFP_KERNEL);
> > - if (!gc->irq_contexts) {
> > - err = -ENOMEM;
> > - goto free_irq_array;
> > - }
> > -
> > for (i = 0; i < nvec; i++) {
> > - gic = &gc->irq_contexts[i];
> > + gic = kcalloc(1, sizeof(struct gdma_irq_context), GFP_KERNEL);
>
> kcalloc() with a constant 1 first argument is a bit unusual. Just use kzalloc() since
> there's no array here?
>
noted
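i.e., something like:

	gic = kzalloc(sizeof(*gic), GFP_KERNEL);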
> > + if (!gic) {
> > + err = -ENOMEM;
> > + goto free_irq;
> > + }
> > +
> > gic->handler = mana_gd_process_eq_events;
> > INIT_LIST_HEAD(&gic->eq_list);
> > spin_lock_init(&gic->lock);
> > @@ -1384,13 +1463,13 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> > irq = pci_irq_vector(pdev, i);
> > if (irq < 0) {
> > err = irq;
> > - goto free_irq;
> > + goto free_current_gic;
> > }
> >
> > if (!i) {
> > err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
> > if (err)
> > - goto free_irq;
> > + goto free_current_gic;
> >
> > /* If number of IRQ is one extra than number of online CPUs,
> > * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> > @@ -1408,39 +1487,110 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> > }
> > } else {
> > irqs[i - start_irq_index] = irq;
> > - err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
> > - gic->name, gic);
> > + err = request_irq(irqs[i - start_irq_index],
> > + mana_gd_intr, 0, gic->name, gic);
> > if (err)
> > - goto free_irq;
> > + goto free_current_gic;
> > }
> > +
> > + xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> > }
>
> FWIW, I think all this logic around "start_irq_index" could be simplified,
> though I haven't worked through the details. If it would simplify the code,
> it would be fine to allocate the "irqs" array with one extra entry that is unused
> if IRQ0 and IRQ1 use the same CPUs.
>
Sure, let me try that
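A rough, untested sketch of that idea, always sizing the array for
nvec entries and leaving entry 0 unused when the HWC IRQ keeps its
own CPU:

	irqs = kcalloc(nvec, sizeof(int), GFP_KERNEL);
	if (!irqs) {
		err = -ENOMEM;
		goto free_irq_vector;
	}
	...
	irqs[i] = irq;	/* no "- start_irq_index" adjustment needed */
	...
	err = irq_setup(&irqs[start_irq_index], nvec - start_irq_index,
			gc->numa_node, false);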
> >
> > err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node, false);
> > if (err)
> > goto free_irq;
> >
> > - gc->max_num_msix = nvec;
> > - gc->num_msix_usable = nvec;
> > cpus_read_unlock();
> > kfree(irqs);
> > return 0;
> >
> > +free_current_gic:
> > + kfree(gic);
> > free_irq:
> > for (j = i - 1; j >= 0; j--) {
>
> In this case j is initialized to i - 1, which is what I expected in
> the previous case.
>
> Again, this loop could just countdown using "i" instead of
> introducing "j".
>
> > irq = pci_irq_vector(pdev, j);
> > - gic = &gc->irq_contexts[j];
> > + gic = xa_load(&gc->irq_contexts, j);
> > + if (!gic)
> > + continue;
>
> Failure to get a valid gic should never happen, so perhaps a WARN_ON()
> is appropriate.
>
noted
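e.g., something like this in the unwind loop (sketch):

		gic = xa_load(&gc->irq_contexts, j);
		if (WARN_ON(!gic))
			continue;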
> >
> > irq_update_affinity_hint(irq, NULL);
> > free_irq(irq, gic);
> > + xa_erase(&gc->irq_contexts, j);
> > + kfree(gic);
> > }
> >
> > - kfree(gc->irq_contexts);
> > - gc->irq_contexts = NULL;
> > -free_irq_array:
> > kfree(irqs);
> > free_irq_vector:
> > + xa_destroy(&gc->irq_contexts);
>
> This seems like the wrong place to be doing xa_destroy(). It leads
> to inconsistencies. For example, if mana_gd_setup_hwc_irqs()
> fails, it may have failed before calling mana_gd_setup_irqs(), in
> which case the xa_destroy() is not done. Or if the failure occurred here
> in mana_gd_setup_irqs(), then xa_destroy() will have been done.
> Ideally, the xa_destroy() for error cases could be done in
> mana_gd_probe() where the xa_init() is done so that the calls match
> up, and of course in mana_gd_remove() when the device goes away
> entirely.
>
I agree, I'll fix this
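Something like this pairing, then (sketch, with xa_destroy() moved
out of the setup/teardown helpers):

	/* mana_gd_probe(), as in this patch already */
	xa_init(&gc->irq_contexts);

	/* probe error path and mana_gd_remove() */
	xa_destroy(&gc->irq_contexts);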
> > cpus_read_unlock();
> > - pci_free_irq_vectors(pdev);
> > + return err;
> > +}
> > +
> > +static int mana_gd_setup_hwc_irqs(struct pci_dev *pdev)
> > +{
> > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > + unsigned int max_irqs, min_irqs;
> > + int max_queues_per_port;
> > + int nvec, err;
> > +
> > + if (pci_msix_can_alloc_dyn(pdev)) {
> > + max_irqs = 1;
> > + min_irqs = 1;
> > + } else {
> > + max_queues_per_port = num_online_cpus();
> > + if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> > + max_queues_per_port = MANA_MAX_NUM_QUEUES;
> > + /* Need 1 interrupt for HWC */
> > + max_irqs = max_queues_per_port + 1;
>
> This code is simply being copied from existing code, but it would be nicer to
> code it as:
>
> max_irqs = min(num_online_cpus(), MANA_MAX_NUM_QUEUES) + 1;
>
> Explicitly using the "min" function reduces the cognitive effort to parse
> the "if" statement and figure out that it is picking the minimum of the two
> values. And the local variable max_queues_per_port can be dropped, but
> keep the comment :-)
>
noted
> > + min_irqs = 2;
> > + }
> > +
> > + nvec = pci_alloc_irq_vectors(pdev, min_irqs, max_irqs, PCI_IRQ_MSIX);
> > + if (nvec < 0)
> > + return nvec;
> > +
> > + err = mana_gd_setup_irqs(pdev, nvec);
> > + if (err) {
> > + pci_free_irq_vectors(pdev);
> > + return err;
> > + }
> > +
> > + gc->num_msix_usable = nvec;
> > + gc->max_num_msix = nvec;
> > +
> > + return err;
>
> "err" should always be zero at this point, so could do "return 0"
>
noted
> > +}
> > +
> > +static int mana_gd_setup_remaining_irqs(struct pci_dev *pdev)
> > +{
> > + struct gdma_context *gc = pci_get_drvdata(pdev);
> > + int max_irqs, i, err = 0;
> > + struct msi_map irq_map;
> > +
> > + if (!pci_msix_can_alloc_dyn(pdev))
> > + /* remaining IRQs are already allocated with the HWC IRQ */
> > + return 0;
> > +
> > + /* allocate only the remaining IRQs */
> > + max_irqs = gc->num_msix_usable - 1;
> > +
> > + for (i = 1; i <= max_irqs; i++) {
> > + irq_map = pci_msix_alloc_irq_at(pdev, i, NULL);
> > + if (!irq_map.virq) {
> > + err = irq_map.index;
> > + /* caller will handle cleaning up all allocated
> > + * irqs, after HWC is destroyed
> > + */
> > + return err;
> > + }
> > + }
> > +
> > + err = mana_gd_setup_dyn_irqs(pdev, max_irqs);
> > + if (err)
> > + return err;
> > +
> > + gc->max_num_msix = gc->max_num_msix + max_irqs;
> > +
> > return err;
>
> Again, err must always be zero here.
>
> > }
> >
> > @@ -1458,19 +1608,22 @@ static void mana_gd_remove_irqs(struct pci_dev *pdev)
> > if (irq < 0)
> > continue;
> >
> > - gic = &gc->irq_contexts[i];
> > + gic = xa_load(&gc->irq_contexts, i);
> > + if (!gic)
> > + continue;
> >
> > /* Need to clear the hint before free_irq */
> > irq_update_affinity_hint(irq, NULL);
> > free_irq(irq, gic);
> > + xa_erase(&gc->irq_contexts, i);
> > + kfree(gic);
> > }
> >
> > pci_free_irq_vectors(pdev);
> >
> > gc->max_num_msix = 0;
> > gc->num_msix_usable = 0;
> > - kfree(gc->irq_contexts);
> > - gc->irq_contexts = NULL;
> > + xa_destroy(&gc->irq_contexts);
>
> As noted above, I'm doubtful about this being the right place to do
> xa_destroy().
>
> > }
> >
> > static int mana_gd_setup(struct pci_dev *pdev)
> > @@ -1481,9 +1634,10 @@ static int mana_gd_setup(struct pci_dev *pdev)
> > mana_gd_init_registers(pdev);
> > mana_smc_init(&gc->shm_channel, gc->dev, gc->shm_base);
> >
> > - err = mana_gd_setup_irqs(pdev);
> > + err = mana_gd_setup_hwc_irqs(pdev);
> > if (err) {
> > - dev_err(gc->dev, "Failed to setup IRQs: %d\n", err);
> > + dev_err(gc->dev, "Failed to setup IRQs for HWC creation: %d\n",
> > + err);
> > return err;
> > }
> >
> > @@ -1499,6 +1653,12 @@ static int mana_gd_setup(struct pci_dev *pdev)
> > if (err)
> > goto destroy_hwc;
> >
> > + err = mana_gd_setup_remaining_irqs(pdev);
> > + if (err) {
> > + dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err);
> > + goto destroy_hwc;
> > + }
> > +
> > err = mana_gd_detect_devices(pdev);
> > if (err)
> > goto destroy_hwc;
> > @@ -1575,6 +1735,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct
> > pci_device_id *ent)
> > gc->is_pf = mana_is_pf(pdev->device);
> > gc->bar0_va = bar0_va;
> > gc->dev = &pdev->dev;
> > + xa_init(&gc->irq_contexts);
> >
> > if (gc->is_pf)
> > gc->mana_pci_debugfs = debugfs_create_dir("0", mana_debugfs_root);
> > diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
> > index 228603bf03f2..f20d1d1ea5e8 100644
> > --- a/include/net/mana/gdma.h
> > +++ b/include/net/mana/gdma.h
> > @@ -373,7 +373,7 @@ struct gdma_context {
> > unsigned int max_num_queues;
> > unsigned int max_num_msix;
> > unsigned int num_msix_usable;
> > - struct gdma_irq_context *irq_contexts;
> > + struct xarray irq_contexts;
> >
> > /* L2 MTU */
> > u16 adapter_mtu;
> > @@ -558,12 +558,16 @@ enum {
> > /* Driver can handle holes (zeros) in the device list */
> > #define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11)
> >
> > +/* Driver supports dynamic MSI-X vector allocation */
> > +#define GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT BIT(13)
> > +
> > #define GDMA_DRV_CAP_FLAGS1 \
> > (GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
> > GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
> > GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \
> > GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \
> > - GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP)
> > + GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP | \
> > + GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT)
> >
> > #define GDMA_DRV_CAP_FLAGS2 0
> >
> > --
> > 2.34.1
> >
>
Thanks for all the comments, Michael. I will incorporate them all
in the next version.
Thread overview: 20+ messages
2025-05-09 10:12 [PATCH v3 0/4] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 1/4] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 2/4] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
2025-05-12 7:00 ` Manivannan Sadhasivam
2025-05-12 7:38 ` Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 3/4] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
2025-05-09 15:49 ` Yury Norov
2025-05-12 5:49 ` Shradha Gupta
2025-05-14 4:53 ` Michael Kelley
2025-05-14 16:55 ` Yury Norov
2025-05-14 17:26 ` Michael Kelley
2025-05-14 17:58 ` Yury Norov
2025-05-15 4:51 ` Shradha Gupta
2025-05-15 4:49 ` Shradha Gupta
2025-05-09 10:13 ` [PATCH v3 4/4] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
2025-05-14 5:04 ` Michael Kelley
2025-05-14 17:07 ` Yury Norov
2025-05-15 5:10 ` Shradha Gupta
2025-05-21 16:27 ` Michael Kelley
2025-05-22 9:17 ` Shradha Gupta