linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA
@ 2025-05-27 15:57 Shradha Gupta
  2025-05-27 15:57 ` [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-05-27 15:57 UTC (permalink / raw)
  Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
	Kevin Tian, Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
	Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
	Peter Zijlstra, netdev, linux-rdma, Paul Rosswurm, Shradha Gupta

In this patchset we enable the MANA driver to allocate MSI-X vectors
in PCI dynamically.

The first patch exports pci_msix_prepare_desc() from the PCI core so
that descriptors for dynamically added MSI-X vectors can be prepared
correctly.

The second patch adds support for dynamic vector allocation in the
pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
flag and using the pci_msix_prepare_desc() helper exported in the first
patch.

The third patch adds a detailed description of irq_setup() to help
readers understand the function's design.

The fourth patch is a preparation patch for the MANA changes supporting
dynamic IRQ allocation. It modifies irq_setup() to allow skipping the
first sibling CPU set, in case certain IRQs are already affinitized
to it.

The fifth patch contains the MANA driver changes to allocate MSI-X
vectors dynamically. If dynamic allocation is not supported, the driver
falls back to the older behavior.
---
 Changes in v4
 * add a patch describing the functionality of irq_setup() through a 
   comment
 * In irq_setup(), avoid using a label next_cpumask:
 * modify the changes in MANA patch about restructuring the error
   handling path in mana_gd_setup_dyn_irqs()
 * modify the mana_gd_setup_irqs() to simplify handling around
   start_irq_index
 * add warning if an invalid gic is returned
 * place the xa_destroy() cleanup in mana_gd_remove
---
 Changes in v3
 * split the 3rd patch into preparation patch around irq_setup() and
   changes in mana driver to allow dynamic IRQ allocation
 * Add arm64 support for dynamic MSI-X allocation in pci_hyperv
   controller
---
 Changes in v2
 * split the first patch into two (exporting the prepare_desc
   function and using the function and flag in pci-hyperv)
 * replace 'pci vectors' by 'MSI-X vectors'
 * Change the cover letter description to align with changes made
---

Shradha Gupta (5):
  PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations
  PCI: hv: Allow dynamic MSI-X vector allocation
  net: mana: explain irq_setup() algorithm
  net: mana: Allow irq_setup() to skip cpus for affinity
  net: mana: Allocate MSI-X vectors dynamically

 .../net/ethernet/microsoft/mana/gdma_main.c   | 356 ++++++++++++++----
 drivers/pci/controller/pci-hyperv.c           |   5 +-
 drivers/pci/msi/irqdomain.c                   |   5 +-
 include/linux/msi.h                           |   2 +
 include/net/mana/gdma.h                       |   8 +-
 5 files changed, 293 insertions(+), 83 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations
  2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
@ 2025-05-27 15:57 ` Shradha Gupta
  2025-05-29  3:46   ` Saurabh Singh Sengar
  2025-05-27 15:58 ` [PATCH v4 2/5] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Shradha Gupta @ 2025-05-27 15:57 UTC (permalink / raw)
  To: Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
	Thomas Gleixner, Bjorn Helgaas, Michael Kelley
  Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Yury Norov, Kevin Tian, Long Li, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
	Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
	Peter Zijlstra, netdev, linux-rdma, Paul Rosswurm, Shradha Gupta

To support dynamic MSI-X vector allocation in PCI controllers, enabling
the MSI_FLAG_PCI_MSIX_ALLOC_DYN flag is not enough; msix_prepare_msi_desc()
is also needed to prepare the MSI descriptors.

Export pci_msix_prepare_desc() to allow PCI controllers to support dynamic
MSI-X vector allocation.

Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
---
 Changes in v3
 * Improved the patch description by removing abbreviations
---
 drivers/pci/msi/irqdomain.c | 5 +++--
 include/linux/msi.h         | 2 ++
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
index d7ba8795d60f..43129aa6d6c7 100644
--- a/drivers/pci/msi/irqdomain.c
+++ b/drivers/pci/msi/irqdomain.c
@@ -222,13 +222,14 @@ static void pci_irq_unmask_msix(struct irq_data *data)
 	pci_msix_unmask(irq_data_get_msi_desc(data));
 }
 
-static void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
-				  struct msi_desc *desc)
+void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+			   struct msi_desc *desc)
 {
 	/* Don't fiddle with preallocated MSI descriptors */
 	if (!desc->pci.mask_base)
 		msix_prepare_msi_desc(to_pci_dev(desc->dev), desc);
 }
+EXPORT_SYMBOL_GPL(pci_msix_prepare_desc);
 
 static const struct msi_domain_template pci_msix_template = {
 	.chip = {
diff --git a/include/linux/msi.h b/include/linux/msi.h
index 86e42742fd0f..d5864d5e75c2 100644
--- a/include/linux/msi.h
+++ b/include/linux/msi.h
@@ -691,6 +691,8 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
 					     struct irq_domain *parent);
 u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev);
 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev);
+void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
+			   struct msi_desc *desc);
 #else /* CONFIG_PCI_MSI */
 static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
 {
-- 
2.34.1



* [PATCH v4 2/5] PCI: hv: Allow dynamic MSI-X vector allocation
  2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
  2025-05-27 15:57 ` [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
@ 2025-05-27 15:58 ` Shradha Gupta
  2025-05-29  3:46   ` Saurabh Singh Sengar
  2025-05-27 15:58 ` [PATCH v4 3/5] net: mana: explain irq_setup() algorithm Shradha Gupta
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Shradha Gupta @ 2025-05-27 15:58 UTC (permalink / raw)
  To: Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
	Krzysztof Wilczyński, Lorenzo Pieralisi, Dexuan Cui,
	Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Michael Kelley
  Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
	Kevin Tian, Long Li, Thomas Gleixner, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, netdev, linux-rdma,
	Paul Rosswurm, Shradha Gupta

Allow dynamic MSI-X vector allocation for pci_hyperv PCI controller
by adding support for the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN and using
pci_msix_prepare_desc() to prepare the MSI-X descriptors.

Feature support is added for both x86 and ARM64.

Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
---
 Changes in v4:
 * use the same prepare_desc() callback for arm and x86
---
 Changes in v3:
 * Add arm64 support
---
 Changes in v2:
 * split the patch to keep changes in PCI and pci_hyperv controller
   separate
 * replace strings "pci vectors" by "MSI-X vectors"
---
 drivers/pci/controller/pci-hyperv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index ac27bda5ba26..0c790f35ad0e 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -2063,6 +2063,7 @@ static struct irq_chip hv_msi_irq_chip = {
 static struct msi_domain_ops hv_msi_ops = {
 	.msi_prepare	= hv_msi_prepare,
 	.msi_free	= hv_msi_free,
+	.prepare_desc	= pci_msix_prepare_desc,
 };
 
 /**
@@ -2084,7 +2085,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
 	hbus->msi_info.ops = &hv_msi_ops;
 	hbus->msi_info.flags = (MSI_FLAG_USE_DEF_DOM_OPS |
 		MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_MULTI_PCI_MSI |
-		MSI_FLAG_PCI_MSIX);
+		MSI_FLAG_PCI_MSIX | MSI_FLAG_PCI_MSIX_ALLOC_DYN);
 	hbus->msi_info.handler = FLOW_HANDLER;
 	hbus->msi_info.handler_name = FLOW_NAME;
 	hbus->msi_info.data = hbus;
-- 
2.34.1



* [PATCH v4 3/5] net: mana: explain irq_setup() algorithm
  2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
  2025-05-27 15:57 ` [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
  2025-05-27 15:58 ` [PATCH v4 2/5] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
@ 2025-05-27 15:58 ` Shradha Gupta
  2025-05-27 19:10   ` Yury Norov
  2025-05-27 15:58 ` [PATCH v4 4/5] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Shradha Gupta @ 2025-05-27 15:58 UTC (permalink / raw)
  To: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley
  Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
	Kevin Tian, Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

Commit 91bfe210e196 ("net: mana: add a function to spread IRQs per CPUs")
added the irq_setup() function that distributes IRQs on CPUs according
to a tricky heuristic. The corresponding commit message explains the
heuristic.

Duplicate it in the source code to make it available to readers without
digging through git history. Also, add a more detailed explanation of
how the heuristic is implemented.

Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
---
 .../net/ethernet/microsoft/mana/gdma_main.c   | 41 +++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 4ffaf7588885..f9e8d4d1ba3a 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1288,6 +1288,47 @@ void mana_gd_free_res_map(struct gdma_resource *r)
 	r->size = 0;
 }
 
+/*
+ * Spread on CPUs with the following heuristics:
+ *
+ * 1. No more than one IRQ per CPU, if possible;
+ * 2. NUMA locality is the second priority;
+ * 3. Sibling dislocality is the last priority.
+ *
+ * Let's consider this topology:
+ *
+ * Node            0               1
+ * Core        0       1       2       3
+ * CPU       0   1   2   3   4   5   6   7
+ *
+ * The most performant IRQ distribution based on the above topology
+ * and heuristics may look like this:
+ *
+ * IRQ     Nodes   Cores   CPUs
+ * 0       0       0       0-1
+ * 1       0       1       2-3
+ * 2       0       0       0-1
+ * 3       0       1       2-3
+ * 4       1       2       4-5
+ * 5       1       3       6-7
+ * 6       1       2       4-5
+ * 7       1       3       6-7
+ *
+ * The heuristic is implemented as follows.
+ *
+ * The outer for_each() loop resets the 'weight' to the actual number
+ * of CPUs in the hop. Then the inner for_each() loop decrements it by
+ * the number of sibling groups (cores) while assigning the first set
+ * of IRQs to each group. IRQs 0 and 1 above are distributed this way.
+ *
+ * Now, because NUMA locality is more important, we should walk the
+ * same set of siblings and assign the 2nd set of IRQs (2 and 3); this
+ * is implemented by the middle while() loop. We do this until the
+ * number of IRQs assigned on this hop equals the number of CPUs in the
+ * hop (weight == 0). Then we switch to the next hop and do the same
+ * thing.
+ */
+
 static int irq_setup(unsigned int *irqs, unsigned int len, int node)
 {
 	const struct cpumask *next, *prev = cpu_none_mask;
-- 
2.34.1



* [PATCH v4 4/5] net: mana: Allow irq_setup() to skip cpus for affinity
  2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
                   ` (2 preceding siblings ...)
  2025-05-27 15:58 ` [PATCH v4 3/5] net: mana: explain irq_setup() algorithm Shradha Gupta
@ 2025-05-27 15:58 ` Shradha Gupta
  2025-05-27 15:59 ` [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-05-27 15:58 UTC (permalink / raw)
  To: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley
  Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
	Kevin Tian, Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

In order to prepare the MANA driver to allocate MSI-X IRQs dynamically,
we need to enhance irq_setup() to allow skipping the affinitization of
IRQs to the first CPU sibling group.

This is needed when the number of IRQs is less than or equal to the
number of online CPUs: in that case, for dynamically added IRQs, the
first CPU sibling group is already affinitized to the HWC IRQ.

Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
---
 Changes in v4
 * fix commit description
 * avoided using next_cpumask: label in the irq_setup()
---
 drivers/net/ethernet/microsoft/mana/gdma_main.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index f9e8d4d1ba3a..763a548c4a2b 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -1329,7 +1329,8 @@ void mana_gd_free_res_map(struct gdma_resource *r)
  * do the same thing.
  */
 
-static int irq_setup(unsigned int *irqs, unsigned int len, int node)
+static int irq_setup(unsigned int *irqs, unsigned int len, int node,
+		     bool skip_first_cpu)
 {
 	const struct cpumask *next, *prev = cpu_none_mask;
 	cpumask_var_t cpus __free(free_cpumask_var);
@@ -1344,11 +1345,18 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
 		while (weight > 0) {
 			cpumask_andnot(cpus, next, prev);
 			for_each_cpu(cpu, cpus) {
+				cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
+				--weight;
+
+				if (unlikely(skip_first_cpu)) {
+					skip_first_cpu = false;
+					continue;
+				}
+
 				if (len-- == 0)
 					goto done;
+
 				irq_set_affinity_and_hint(*irqs++, topology_sibling_cpumask(cpu));
-				cpumask_andnot(cpus, cpus, topology_sibling_cpumask(cpu));
-				--weight;
 			}
 		}
 		prev = next;
@@ -1444,7 +1452,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
 		}
 	}
 
-	err = irq_setup(irqs, (nvec - start_irq_index), gc->numa_node);
+	err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);
 	if (err)
 		goto free_irq;
 
-- 
2.34.1



* [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically
  2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
                   ` (3 preceding siblings ...)
  2025-05-27 15:58 ` [PATCH v4 4/5] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
@ 2025-05-27 15:59 ` Shradha Gupta
  2025-05-28  8:16   ` Saurabh Singh Sengar
                     ` (2 more replies)
  2025-05-28 18:55 ` [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Simon Horman
  2025-06-01 14:53 ` Zhu Yanjun
  6 siblings, 3 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-05-27 15:59 UTC (permalink / raw)
  To: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley
  Cc: Shradha Gupta, linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Yury Norov, Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
	Kevin Tian, Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

Currently, the MANA driver allocates MSI-X vectors statically, based on
MANA_MAX_NUM_QUEUES and num_online_cpus(), and in some cases ends up
allocating more vectors than it needs. This is because, at that point,
the HW channel does not yet exist, so the driver cannot know how many
IRQs should be allocated.

To avoid this, allocate one MSI-X vector during the creation of the HWC
and, after querying the value supported by the hardware, dynamically add
the remaining MSI-X vectors.

Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
---
 Changes in v4:
 * added BUG_ON at appropriate places
 * moved xa_destroy to mana_gd_remove()
 * rearranged the cleanup logic in mana_gd_setup_dyn_irqs()
 * simplified processing around start_irq_index in mana_gd_setup_irqs()
 * return 0 instead of return err as appropriate
---
 Changes in v3:
 * implemented irq_contexts as xarrays rather than list
 * split the patch to create a preparation patch around irq_setup()
 * add log when IRQ allocation/setup for remaining IRQs fails
---
 Changes in v2:
 * Use string 'MSI-X vectors' instead of 'pci vectors'
 * make skip-cpu a bool instead of int
 * rearrange the comment around the skip_cpu variable appropriately
 * update the capability bit for driver indicating dynamic IRQ allocation
 * enforced max line length to 80
 * enforced RCT convention
 * initialized gic to NULL, for when there is a possibility of gic
   not being populated correctly
---
 .../net/ethernet/microsoft/mana/gdma_main.c   | 306 +++++++++++++-----
 include/net/mana/gdma.h                       |   8 +-
 2 files changed, 235 insertions(+), 79 deletions(-)

diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
index 763a548c4a2b..98ebecbec9a7 100644
--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
+++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
@@ -6,6 +6,8 @@
 #include <linux/pci.h>
 #include <linux/utsname.h>
 #include <linux/version.h>
+#include <linux/msi.h>
+#include <linux/irqdomain.h>
 
 #include <net/mana/mana.h>
 
@@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
 		return err ? err : -EPROTO;
 	}
 
-	if (gc->num_msix_usable > resp.max_msix)
-		gc->num_msix_usable = resp.max_msix;
+	if (!pci_msix_can_alloc_dyn(pdev)) {
+		if (gc->num_msix_usable > resp.max_msix)
+			gc->num_msix_usable = resp.max_msix;
+	} else {
+	} else {
+		/* If dynamic allocation is enabled, the MSI-X vector for
+		 * the HWC has already been allocated.
+		 */
+		gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
+	}
 
 	if (gc->num_msix_usable <= 1)
 		return -ENOSPC;
@@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
 	}
 
 	queue->eq.msix_index = msi_index;
-	gic = &gc->irq_contexts[msi_index];
+	gic = xa_load(&gc->irq_contexts, msi_index);
+	if (WARN_ON(!gic))
+		return -EINVAL;
 
 	spin_lock_irqsave(&gic->lock, flags);
 	list_add_rcu(&queue->entry, &gic->eq_list);
@@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
 	if (WARN_ON(msix_index >= gc->num_msix_usable))
 		return;
 
-	gic = &gc->irq_contexts[msix_index];
+	gic = xa_load(&gc->irq_contexts, msix_index);
+	if (WARN_ON(!gic))
+		return;
+
 	spin_lock_irqsave(&gic->lock, flags);
 	list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
 		if (queue == eq) {
@@ -1366,47 +1380,113 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
 	return 0;
 }
 
-static int mana_gd_setup_irqs(struct pci_dev *pdev)
+static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
 {
 	struct gdma_context *gc = pci_get_drvdata(pdev);
-	unsigned int max_queues_per_port;
 	struct gdma_irq_context *gic;
-	unsigned int max_irqs, cpu;
-	int start_irq_index = 1;
-	int nvec, *irqs, irq;
-	int err, i = 0, j;
+	bool skip_first_cpu = false;
+	int *irqs, irq, err, i;
 
 	cpus_read_lock();
-	max_queues_per_port = num_online_cpus();
-	if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
-		max_queues_per_port = MANA_MAX_NUM_QUEUES;
 
-	/* Need 1 interrupt for the Hardware communication Channel (HWC) */
-	max_irqs = max_queues_per_port + 1;
-
-	nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
-	if (nvec < 0) {
-		cpus_read_unlock();
-		return nvec;
-	}
-	if (nvec <= num_online_cpus())
-		start_irq_index = 0;
-
-	irqs = kmalloc_array((nvec - start_irq_index), sizeof(int), GFP_KERNEL);
+	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
 	if (!irqs) {
 		err = -ENOMEM;
 		goto free_irq_vector;
 	}
 
-	gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
-				   GFP_KERNEL);
-	if (!gc->irq_contexts) {
+	/*
+	 * PCI IRQ vectors are processed starting at index 1, as the
+	 * vector at index 0 has already been set up for the HWC.
+	 * However, the irqs array is populated starting at index 0,
+	 * so it can be passed to irq_setup() directly.
+	 */
+	for (i = 1; i <= nvec; i++) {
+		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
+		if (!gic) {
+			err = -ENOMEM;
+			goto free_irq;
+		}
+		gic->handler = mana_gd_process_eq_events;
+		INIT_LIST_HEAD(&gic->eq_list);
+		spin_lock_init(&gic->lock);
+
+		snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
+			 i - 1, pci_name(pdev));
+
+		/* one pci vector is already allocated for HWC */
+		irqs[i - 1] = pci_irq_vector(pdev, i);
+		if (irqs[i - 1] < 0) {
+			err = irqs[i - 1];
+			goto free_current_gic;
+		}
+
+		err = request_irq(irqs[i - 1], mana_gd_intr, 0, gic->name, gic);
+		if (err)
+			goto free_current_gic;
+
+		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
+	}
+
+	/*
+	 * When calling irq_setup() for dynamically added IRQs, if the
+	 * number of online CPUs is greater than or equal to the number of
+	 * allocated MSI-X vectors, skip the first CPU sibling group since
+	 * it is already affinitized to the HWC IRQ.
+	 */
+	if (gc->num_msix_usable <= num_online_cpus())
+		skip_first_cpu = true;
+
+	err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
+	if (err)
+		goto free_irq;
+
+	cpus_read_unlock();
+	kfree(irqs);
+	return 0;
+
+free_current_gic:
+	kfree(gic);
+free_irq:
+	for (i -= 1; i > 0; i--) {
+		irq = pci_irq_vector(pdev, i);
+		gic = xa_load(&gc->irq_contexts, i);
+		if (WARN_ON(!gic))
+			continue;
+
+		irq_update_affinity_hint(irq, NULL);
+		free_irq(irq, gic);
+		xa_erase(&gc->irq_contexts, i);
+		kfree(gic);
+	}
+	kfree(irqs);
+free_irq_vector:
+	cpus_read_unlock();
+	return err;
+}
+
+static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
+{
+	struct gdma_context *gc = pci_get_drvdata(pdev);
+	struct gdma_irq_context *gic;
+	int *irqs, *start_irqs, irq;
+	unsigned int cpu;
+	int err, i;
+
+	cpus_read_lock();
+
+	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
+	if (!irqs) {
 		err = -ENOMEM;
-		goto free_irq_array;
+		goto free_irq_vector;
 	}
 
 	for (i = 0; i < nvec; i++) {
-		gic = &gc->irq_contexts[i];
+		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
+		if (!gic) {
+			err = -ENOMEM;
+			goto free_irq;
+		}
+
 		gic->handler = mana_gd_process_eq_events;
 		INIT_LIST_HEAD(&gic->eq_list);
 		spin_lock_init(&gic->lock);
@@ -1418,69 +1498,128 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
 			snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
 				 i - 1, pci_name(pdev));
 
-		irq = pci_irq_vector(pdev, i);
-		if (irq < 0) {
-			err = irq;
-			goto free_irq;
+		irqs[i] = pci_irq_vector(pdev, i);
+		if (irqs[i] < 0) {
+			err = irqs[i];
+			goto free_current_gic;
 		}
 
-		if (!i) {
-			err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
-			if (err)
-				goto free_irq;
-
-			/* If number of IRQ is one extra than number of online CPUs,
-			 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
-			 * same CPU.
-			 * Else we will use different CPUs for IRQ0 and IRQ1.
-			 * Also we are using cpumask_local_spread instead of
-			 * cpumask_first for the node, because the node can be
-			 * mem only.
-			 */
-			if (start_irq_index) {
-				cpu = cpumask_local_spread(i, gc->numa_node);
-				irq_set_affinity_and_hint(irq, cpumask_of(cpu));
-			} else {
-				irqs[start_irq_index] = irq;
-			}
-		} else {
-			irqs[i - start_irq_index] = irq;
-			err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
-					  gic->name, gic);
-			if (err)
-				goto free_irq;
-		}
+		err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
+		if (err)
+			goto free_current_gic;
+
+		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
 	}
 
-	err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);
+	/* If the number of IRQs is one more than the number of online
+	 * CPUs, then IRQ0 (the HWC IRQ) and IRQ1 need to be assigned
+	 * to the same CPU.
+	 * Otherwise, different CPUs are used for IRQ0 and IRQ1.
+	 * Also, we use cpumask_local_spread instead of cpumask_first
+	 * for the node, because the node can be memory-only.
+	 */
+	start_irqs = irqs;
+	if (nvec > num_online_cpus()) {
+		cpu = cpumask_local_spread(0, gc->numa_node);
+		irq_set_affinity_and_hint(irqs[0], cpumask_of(cpu));
+		irqs++;
+		nvec -= 1;
+	}
+
+	err = irq_setup(irqs, nvec, gc->numa_node, false);
 	if (err)
 		goto free_irq;
 
-	gc->max_num_msix = nvec;
-	gc->num_msix_usable = nvec;
 	cpus_read_unlock();
-	kfree(irqs);
+	kfree(start_irqs);
 	return 0;
 
+free_current_gic:
+	kfree(gic);
 free_irq:
-	for (j = i - 1; j >= 0; j--) {
-		irq = pci_irq_vector(pdev, j);
-		gic = &gc->irq_contexts[j];
+	for (i -= 1; i >= 0; i--) {
+		irq = pci_irq_vector(pdev, i);
+		gic = xa_load(&gc->irq_contexts, i);
+		if (WARN_ON(!gic))
+			continue;
 
 		irq_update_affinity_hint(irq, NULL);
 		free_irq(irq, gic);
+		xa_erase(&gc->irq_contexts, i);
+		kfree(gic);
 	}
 
-	kfree(gc->irq_contexts);
-	gc->irq_contexts = NULL;
-free_irq_array:
-	kfree(irqs);
+	kfree(start_irqs);
 free_irq_vector:
 	cpus_read_unlock();
-	pci_free_irq_vectors(pdev);
 	return err;
 }
 
+static int mana_gd_setup_hwc_irqs(struct pci_dev *pdev)
+{
+	struct gdma_context *gc = pci_get_drvdata(pdev);
+	unsigned int max_irqs, min_irqs;
+	int nvec, err;
+
+	if (pci_msix_can_alloc_dyn(pdev)) {
+		max_irqs = 1;
+		min_irqs = 1;
+	} else {
+		/* Need 1 interrupt for HWC */
+		max_irqs = min(num_online_cpus(), MANA_MAX_NUM_QUEUES) + 1;
+		min_irqs = 2;
+	}
+
+	nvec = pci_alloc_irq_vectors(pdev, min_irqs, max_irqs, PCI_IRQ_MSIX);
+	if (nvec < 0)
+		return nvec;
+
+	err = mana_gd_setup_irqs(pdev, nvec);
+	if (err) {
+		pci_free_irq_vectors(pdev);
+		return err;
+	}
+
+	gc->num_msix_usable = nvec;
+	gc->max_num_msix = nvec;
+
+	return 0;
+}
+
+static int mana_gd_setup_remaining_irqs(struct pci_dev *pdev)
+{
+	struct gdma_context *gc = pci_get_drvdata(pdev);
+	struct msi_map irq_map;
+	int max_irqs, i, err;
+
+	if (!pci_msix_can_alloc_dyn(pdev))
+		/* remaining IRQs were already allocated along with the HWC IRQ */
+		return 0;
+
+	/* allocate only the remaining IRQs */
+	max_irqs = gc->num_msix_usable - 1;
+
+	for (i = 1; i <= max_irqs; i++) {
+		irq_map = pci_msix_alloc_irq_at(pdev, i, NULL);
+		if (!irq_map.virq) {
+			err = irq_map.index;
+			/* the caller will clean up all allocated
+			 * IRQs after the HWC is destroyed
+			 */
+			return err;
+		}
+	}
+
+	err = mana_gd_setup_dyn_irqs(pdev, max_irqs);
+	if (err)
+		return err;
+
+	gc->max_num_msix = gc->max_num_msix + max_irqs;
+
+	return 0;
+}
+
 static void mana_gd_remove_irqs(struct pci_dev *pdev)
 {
 	struct gdma_context *gc = pci_get_drvdata(pdev);
@@ -1495,19 +1634,21 @@ static void mana_gd_remove_irqs(struct pci_dev *pdev)
 		if (irq < 0)
 			continue;
 
-		gic = &gc->irq_contexts[i];
+		gic = xa_load(&gc->irq_contexts, i);
+		if (WARN_ON(!gic))
+			continue;
 
 		/* Need to clear the hint before free_irq */
 		irq_update_affinity_hint(irq, NULL);
 		free_irq(irq, gic);
+		xa_erase(&gc->irq_contexts, i);
+		kfree(gic);
 	}
 
 	pci_free_irq_vectors(pdev);
 
 	gc->max_num_msix = 0;
 	gc->num_msix_usable = 0;
-	kfree(gc->irq_contexts);
-	gc->irq_contexts = NULL;
 }
 
 static int mana_gd_setup(struct pci_dev *pdev)
@@ -1518,9 +1659,10 @@ static int mana_gd_setup(struct pci_dev *pdev)
 	mana_gd_init_registers(pdev);
 	mana_smc_init(&gc->shm_channel, gc->dev, gc->shm_base);
 
-	err = mana_gd_setup_irqs(pdev);
+	err = mana_gd_setup_hwc_irqs(pdev);
 	if (err) {
-		dev_err(gc->dev, "Failed to setup IRQs: %d\n", err);
+		dev_err(gc->dev, "Failed to setup IRQs for HWC creation: %d\n",
+			err);
 		return err;
 	}
 
@@ -1536,6 +1678,12 @@ static int mana_gd_setup(struct pci_dev *pdev)
 	if (err)
 		goto destroy_hwc;
 
+	err = mana_gd_setup_remaining_irqs(pdev);
+	if (err) {
+		dev_err(gc->dev, "Failed to setup remaining IRQs: %d", err);
+		goto destroy_hwc;
+	}
+
 	err = mana_gd_detect_devices(pdev);
 	if (err)
 		goto destroy_hwc;
@@ -1612,6 +1760,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	gc->is_pf = mana_is_pf(pdev->device);
 	gc->bar0_va = bar0_va;
 	gc->dev = &pdev->dev;
+	xa_init(&gc->irq_contexts);
 
 	if (gc->is_pf)
 		gc->mana_pci_debugfs = debugfs_create_dir("0", mana_debugfs_root);
@@ -1640,6 +1789,7 @@ static int mana_gd_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 */
 	debugfs_remove_recursive(gc->mana_pci_debugfs);
 	gc->mana_pci_debugfs = NULL;
+	xa_destroy(&gc->irq_contexts);
 	pci_iounmap(pdev, bar0_va);
 free_gc:
 	pci_set_drvdata(pdev, NULL);
@@ -1664,6 +1814,8 @@ static void mana_gd_remove(struct pci_dev *pdev)
 
 	gc->mana_pci_debugfs = NULL;
 
+	xa_destroy(&gc->irq_contexts);
+
 	pci_iounmap(pdev, gc->bar0_va);
 
 	vfree(gc);
diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
index 228603bf03f2..f20d1d1ea5e8 100644
--- a/include/net/mana/gdma.h
+++ b/include/net/mana/gdma.h
@@ -373,7 +373,7 @@ struct gdma_context {
 	unsigned int		max_num_queues;
 	unsigned int		max_num_msix;
 	unsigned int		num_msix_usable;
-	struct gdma_irq_context	*irq_contexts;
+	struct xarray		irq_contexts;
 
 	/* L2 MTU */
 	u16 adapter_mtu;
@@ -558,12 +558,16 @@ enum {
 /* Driver can handle holes (zeros) in the device list */
 #define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11)
 
+/* Driver supports dynamic MSI-X vector allocation */
+#define GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT BIT(13)
+
 #define GDMA_DRV_CAP_FLAGS1 \
 	(GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
 	 GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
 	 GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \
 	 GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \
-	 GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP)
+	 GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP | \
+	 GDMA_DRV_CAP_FLAG_1_DYNAMIC_IRQ_ALLOC_SUPPORT)
 
 #define GDMA_DRV_CAP_FLAGS2 0
 
-- 
2.34.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 3/5] net: mana: explain irq_setup() algorithm
  2025-05-27 15:58 ` [PATCH v4 3/5] net: mana: explain irq_setup() algorithm Shradha Gupta
@ 2025-05-27 19:10   ` Yury Norov
  2025-05-29 13:15     ` Shradha Gupta
  0 siblings, 1 reply; 22+ messages in thread
From: Yury Norov @ 2025-05-27 19:10 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

So now git will think that you're the author of the patch.

If the author and sender are different people, the first line of the
commit message body should state that. In this case, it should be:

From: Yury Norov <yury.norov@gmail.com>

Please consider this example:

https://patchew.org/linux/20250326-fixed-type-genmasks-v8-0-24afed16ca00@wanadoo.fr/20250326-fixed-type-genmasks-v8-6-24afed16ca00@wanadoo.fr/

Thanks,
Yury

On Tue, May 27, 2025 at 08:58:25AM -0700, Shradha Gupta wrote:
> Commit 91bfe210e196 ("net: mana: add a function to spread IRQs per CPUs")
> added the irq_setup() function that distributes IRQs on CPUs according
> to a tricky heuristic. The corresponding commit message explains the
> heuristic.
> 
> Duplicate it in the source code to make it available to readers without
> digging through git history. Also, add a more detailed explanation of
> how the heuristic is implemented.
> 
> Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> ---
>  .../net/ethernet/microsoft/mana/gdma_main.c   | 41 +++++++++++++++++++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 4ffaf7588885..f9e8d4d1ba3a 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -1288,6 +1288,47 @@ void mana_gd_free_res_map(struct gdma_resource *r)
>  	r->size = 0;
>  }
>  
> +/*
> + * Spread on CPUs with the following heuristics:
> + *
> + * 1. No more than one IRQ per CPU, if possible;
> + * 2. NUMA locality is the second priority;
> + * 3. Sibling dislocality is the last priority.
> + *
> + * Let's consider this topology:
> + *
> + * Node            0               1
> + * Core        0       1       2       3
> + * CPU       0   1   2   3   4   5   6   7
> + *
> + * The most performant IRQ distribution based on the above topology
> + * and heuristics may look like this:
> + *
> + * IRQ     Nodes   Cores   CPUs
> + * 0       1       0       0-1
> + * 1       1       1       2-3
> + * 2       1       0       0-1
> + * 3       1       1       2-3
> + * 4       2       2       4-5
> + * 5       2       3       6-7
> + * 6       2       2       4-5
> + * 7       2       3       6-7
> + *
> + * The heuristic is implemented as follows.
> + *
> + * The outer for_each() loop resets 'weight' to the actual number of
> + * CPUs in the hop. Then the inner for_each() loop decrements it by the
> + * number of sibling groups (cores) while assigning the first set of
> + * IRQs to each group. IRQs 0 and 1 above are distributed this way.
> + *
> + * Now, because NUMA locality is more important, we walk the same set
> + * of siblings again and assign the 2nd set of IRQs (2 and 3); this is
> + * implemented by the middle while() loop. We keep doing this until the
> + * number of IRQs assigned on this hop equals the number of CPUs in the
> + * hop (weight == 0). Then we switch to the next hop and do the same
> + * thing.
> + */
> +
>  static int irq_setup(unsigned int *irqs, unsigned int len, int node)
>  {
>  	const struct cpumask *next, *prev = cpu_none_mask;
> -- 
> 2.34.1


* Re: [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically
  2025-05-27 15:59 ` [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
@ 2025-05-28  8:16   ` Saurabh Singh Sengar
  2025-05-29 13:17     ` Shradha Gupta
  2025-05-28 18:52   ` Simon Horman
  2025-05-29  3:45   ` Saurabh Singh Sengar
  2 siblings, 1 reply; 22+ messages in thread
From: Saurabh Singh Sengar @ 2025-05-28  8:16 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

On Tue, May 27, 2025 at 08:59:03AM -0700, Shradha Gupta wrote:
> Currently, the MANA driver allocates MSI-X vectors statically based on
> MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> up allocating more vectors than it needs. This is because, by this time
> we do not have a HW channel and do not know how many IRQs should be
> allocated.
> 
> To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> after getting the value supported by hardware, dynamically add the
> remaining MSI-X vectors.
> 
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
>  Changes in v4:
>  * added BUG_ON at appropriate places
>  * moved xa_destroy to mana_gd_remove()
>  * rearranged the cleanup logic in mana_gd_setup_dyn_irqs()
>  * simplified processing around start_irq_index in mana_gd_setup_irqs()
>  * return 0 instead of return err as appropriate
> ---
>  Changes in v3:
>  * implemented irq_contexts as xarrays rather than list
>  * split the patch to create a preparation patch around irq_setup()
>  * add log when IRQ allocation/setup for remaining IRQs fails
> ---
>  Changes in v2:
>  * Use string 'MSI-X vectors' instead of 'pci vectors'
>  * make skip-cpu a bool instead of int
>  * rearrange the comment around the skip_cpu variable appropriately
>  * update the capability bit for driver indicating dynamic IRQ allocation
>  * enforced max line length to 80
>  * enforced RCT convention
>  * initialized gic to NULL, for when there is a possibility of gic
>    not being populated correctly
> ---
>  .../net/ethernet/microsoft/mana/gdma_main.c   | 306 +++++++++++++-----
>  include/net/mana/gdma.h                       |   8 +-
>  2 files changed, 235 insertions(+), 79 deletions(-)
> 
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 763a548c4a2b..98ebecbec9a7 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -6,6 +6,8 @@
>  #include <linux/pci.h>
>  #include <linux/utsname.h>
>  #include <linux/version.h>
> +#include <linux/msi.h>
> +#include <linux/irqdomain.h>
>  
>  #include <net/mana/mana.h>
>  
> @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
>  		return err ? err : -EPROTO;
>  	}
>  
> -	if (gc->num_msix_usable > resp.max_msix)
> -		gc->num_msix_usable = resp.max_msix;
> +	if (!pci_msix_can_alloc_dyn(pdev)) {
> +		if (gc->num_msix_usable > resp.max_msix)
> +			gc->num_msix_usable = resp.max_msix;
> +	} else {
> +		/* If dynamic allocation is enabled we have already allocated
> +		 * hwc msi
> +		 */
> +		gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
> +	}
>  
>  	if (gc->num_msix_usable <= 1)
>  		return -ENOSPC;
> @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
>  	}
>  
>  	queue->eq.msix_index = msi_index;
> -	gic = &gc->irq_contexts[msi_index];
> +	gic = xa_load(&gc->irq_contexts, msi_index);
> +	if (WARN_ON(!gic))
> +		return -EINVAL;
>  
>  	spin_lock_irqsave(&gic->lock, flags);
>  	list_add_rcu(&queue->entry, &gic->eq_list);
> @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
>  	if (WARN_ON(msix_index >= gc->num_msix_usable))
>  		return;
>  
> -	gic = &gc->irq_contexts[msix_index];
> +	gic = xa_load(&gc->irq_contexts, msix_index);
> +	if (WARN_ON(!gic))
> +		return;
> +
>  	spin_lock_irqsave(&gic->lock, flags);
>  	list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
>  		if (queue == eq) {
> @@ -1366,47 +1380,113 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
>  	return 0;
>  }
>  
> -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
>  {
>  	struct gdma_context *gc = pci_get_drvdata(pdev);
> -	unsigned int max_queues_per_port;
>  	struct gdma_irq_context *gic;
> -	unsigned int max_irqs, cpu;
> -	int start_irq_index = 1;
> -	int nvec, *irqs, irq;
> -	int err, i = 0, j;
> +	bool skip_first_cpu = false;
> +	int *irqs, irq, err, i;
>  
>  	cpus_read_lock();
> -	max_queues_per_port = num_online_cpus();
> -	if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> -		max_queues_per_port = MANA_MAX_NUM_QUEUES;
>  
> -	/* Need 1 interrupt for the Hardware communication Channel (HWC) */
> -	max_irqs = max_queues_per_port + 1;
> -
> -	nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
> -	if (nvec < 0) {
> -		cpus_read_unlock();
> -		return nvec;
> -	}
> -	if (nvec <= num_online_cpus())
> -		start_irq_index = 0;
> -
> -	irqs = kmalloc_array((nvec - start_irq_index), sizeof(int), GFP_KERNEL);
> +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
>  	if (!irqs) {
>  		err = -ENOMEM;
>  		goto free_irq_vector;
>  	}
>  
> -	gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
> -				   GFP_KERNEL);
> -	if (!gc->irq_contexts) {
> +	/*
> +	 * While processing the next pci irq vector, we start with index 1,
> +	 * as IRQ vector at index 0 is already processed for HWC.
> +	 * However, the population of irqs array starts with index 0, to be
> +	 * further used in irq_setup()
> +	 */
> +	for (i = 1; i <= nvec; i++) {
> +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> +		if (!gic) {
> +			err = -ENOMEM;
> +			goto free_irq;
> +		}
> +		gic->handler = mana_gd_process_eq_events;
> +		INIT_LIST_HEAD(&gic->eq_list);
> +		spin_lock_init(&gic->lock);
> +
> +		snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> +			 i - 1, pci_name(pdev));
> +
> +		/* one pci vector is already allocated for HWC */
> +		irqs[i - 1] = pci_irq_vector(pdev, i);
> +		if (irqs[i - 1] < 0) {
> +			err = irqs[i - 1];
> +			goto free_current_gic;
> +		}
> +
> +		err = request_irq(irqs[i - 1], mana_gd_intr, 0, gic->name, gic);
> +		if (err)
> +			goto free_current_gic;
> +
> +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> +	}
> +
> +	/*
> +	 * When calling irq_setup() for dynamically added IRQs, if number of
> +	 * CPUs is more than or equal to allocated MSI-X, we need to skip the
> +	 * first CPU sibling group since they are already affinitized to HWC IRQ
> +	 */
> +	if (gc->num_msix_usable <= num_online_cpus())
> +		skip_first_cpu = true;
> +
> +	err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
> +	if (err)
> +		goto free_irq;
> +
> +	cpus_read_unlock();
> +	kfree(irqs);
> +	return 0;
> +
> +free_current_gic:
> +	kfree(gic);
> +free_irq:
> +	for (i -= 1; i > 0; i--) {
> +		irq = pci_irq_vector(pdev, i);
> +		gic = xa_load(&gc->irq_contexts, i);
> +		if (WARN_ON(!gic))
> +			continue;
> +
> +		irq_update_affinity_hint(irq, NULL);
> +		free_irq(irq, gic);
> +		xa_erase(&gc->irq_contexts, i);
> +		kfree(gic);
> +	}
> +	kfree(irqs);
> +free_irq_vector:
> +	cpus_read_unlock();
> +	return err;
> +}
> +
> +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> +{
> +	struct gdma_context *gc = pci_get_drvdata(pdev);
> +	struct gdma_irq_context *gic;
> +	int *irqs, *start_irqs, irq;
> +	unsigned int cpu;
> +	int err, i;
> +
> +	cpus_read_lock();
> +
> +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> +	if (!irqs) {
>  		err = -ENOMEM;
> -		goto free_irq_array;
> +		goto free_irq_vector;
>  	}
>  
>  	for (i = 0; i < nvec; i++) {
> -		gic = &gc->irq_contexts[i];
> +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> +		if (!gic) {
> +			err = -ENOMEM;
> +			goto free_irq;
> +		}
> +
>  		gic->handler = mana_gd_process_eq_events;
>  		INIT_LIST_HEAD(&gic->eq_list);
>  		spin_lock_init(&gic->lock);
> @@ -1418,69 +1498,128 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
>  			snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
>  				 i - 1, pci_name(pdev));
>  
> -		irq = pci_irq_vector(pdev, i);
> -		if (irq < 0) {
> -			err = irq;
> -			goto free_irq;
> +		irqs[i] = pci_irq_vector(pdev, i);
> +		if (irqs[i] < 0) {
> +			err = irqs[i];
> +			goto free_current_gic;
>  		}
>  
> -		if (!i) {
> -			err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
> -			if (err)
> -				goto free_irq;
> -
> -			/* If number of IRQ is one extra than number of online CPUs,
> -			 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> -			 * same CPU.
> -			 * Else we will use different CPUs for IRQ0 and IRQ1.
> -			 * Also we are using cpumask_local_spread instead of
> -			 * cpumask_first for the node, because the node can be
> -			 * mem only.
> -			 */
> -			if (start_irq_index) {
> -				cpu = cpumask_local_spread(i, gc->numa_node);
> -				irq_set_affinity_and_hint(irq, cpumask_of(cpu));
> -			} else {
> -				irqs[start_irq_index] = irq;
> -			}
> -		} else {
> -			irqs[i - start_irq_index] = irq;
> -			err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
> -					  gic->name, gic);
> -			if (err)
> -				goto free_irq;
> -		}
> +		err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
> +		if (err)
> +			goto free_current_gic;
> +
> +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
>  	}
>  
> -	err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);
> +	/* If number of IRQ is one extra than number of online CPUs,
> +	 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> +	 * same CPU.
> +	 * Else we will use different CPUs for IRQ0 and IRQ1.
> +	 * Also we are using cpumask_local_spread instead of
> +	 * cpumask_first for the node, because the node can be
> +	 * mem only.
> +	 */
> +	start_irqs = irqs;
> +	if (nvec > num_online_cpus()) {
> +		cpu = cpumask_local_spread(0, gc->numa_node);
> +		irq_set_affinity_and_hint(irqs[0], cpumask_of(cpu));
> +		irqs++;
> +		nvec -= 1;
> +	}
> +
> +	err = irq_setup(irqs, nvec, gc->numa_node, false);
>  	if (err)
>  		goto free_irq;
>  
> -	gc->max_num_msix = nvec;
> -	gc->num_msix_usable = nvec;
>  	cpus_read_unlock();
> -	kfree(irqs);
> +	kfree(start_irqs);
>  	return 0;
>  
> +free_current_gic:
> +	kfree(gic);
>  free_irq:
> -	for (j = i - 1; j >= 0; j--) {
> -		irq = pci_irq_vector(pdev, j);
> -		gic = &gc->irq_contexts[j];
> +	for (i -= 1; i >= 0; i--) {
> +		irq = pci_irq_vector(pdev, i);
> +		gic = xa_load(&gc->irq_contexts, i);
> +		if (WARN_ON(!gic))
> +			continue;
>  
>  		irq_update_affinity_hint(irq, NULL);
>  		free_irq(irq, gic);
> +		xa_erase(&gc->irq_contexts, i);
> +		kfree(gic);
>  	}
>  
> -	kfree(gc->irq_contexts);
> -	gc->irq_contexts = NULL;
> -free_irq_array:
> -	kfree(irqs);
> +	kfree(start_irqs);

There is a case where start_irqs can be used here uninitialized.

- Saurabh


* Re: [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically
  2025-05-27 15:59 ` [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
  2025-05-28  8:16   ` Saurabh Singh Sengar
@ 2025-05-28 18:52   ` Simon Horman
  2025-05-29 13:18     ` Shradha Gupta
  2025-05-29  3:45   ` Saurabh Singh Sengar
  2 siblings, 1 reply; 22+ messages in thread
From: Simon Horman @ 2025-05-28 18:52 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

On Tue, May 27, 2025 at 08:59:03AM -0700, Shradha Gupta wrote:
> Currently, the MANA driver allocates MSI-X vectors statically based on
> MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> up allocating more vectors than it needs. This is because, by this time
> we do not have a HW channel and do not know how many IRQs should be
> allocated.
> 
> To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> after getting the value supported by hardware, dynamically add the
> remaining MSI-X vectors.
> 
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>

...

> +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> +{
> +	struct gdma_context *gc = pci_get_drvdata(pdev);
> +	struct gdma_irq_context *gic;
> +	int *irqs, *start_irqs, irq;
> +	unsigned int cpu;
> +	int err, i;
> +
> +	cpus_read_lock();
> +
> +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> +	if (!irqs) {
>  		err = -ENOMEM;
> -		goto free_irq_array;
> +		goto free_irq_vector;
>  	}
>  
>  	for (i = 0; i < nvec; i++) {
> -		gic = &gc->irq_contexts[i];
> +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> +		if (!gic) {
> +			err = -ENOMEM;
> +			goto free_irq;
> +		}
> +
>  		gic->handler = mana_gd_process_eq_events;
>  		INIT_LIST_HEAD(&gic->eq_list);
>  		spin_lock_init(&gic->lock);
> @@ -1418,69 +1498,128 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
>  			snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
>  				 i - 1, pci_name(pdev));
>  
> -		irq = pci_irq_vector(pdev, i);
> -		if (irq < 0) {
> -			err = irq;
> -			goto free_irq;
> +		irqs[i] = pci_irq_vector(pdev, i);
> +		if (irqs[i] < 0) {
> +			err = irqs[i];
> +			goto free_current_gic;
>  		}
>  
> -		if (!i) {
> -			err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
> -			if (err)
> -				goto free_irq;
> -
> -			/* If number of IRQ is one extra than number of online CPUs,
> -			 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> -			 * same CPU.
> -			 * Else we will use different CPUs for IRQ0 and IRQ1.
> -			 * Also we are using cpumask_local_spread instead of
> -			 * cpumask_first for the node, because the node can be
> -			 * mem only.
> -			 */
> -			if (start_irq_index) {
> -				cpu = cpumask_local_spread(i, gc->numa_node);
> -				irq_set_affinity_and_hint(irq, cpumask_of(cpu));
> -			} else {
> -				irqs[start_irq_index] = irq;
> -			}
> -		} else {
> -			irqs[i - start_irq_index] = irq;
> -			err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
> -					  gic->name, gic);
> -			if (err)
> -				goto free_irq;
> -		}
> +		err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
> +		if (err)
> +			goto free_current_gic;

Jumping to free_current_gic will free start_irqs.
However, start_irqs isn't initialised until a few lines below.

Flagged by Smatch.

> +
> +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
>  	}
>  
> -	err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);
> +	/* If number of IRQ is one extra than number of online CPUs,
> +	 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> +	 * same CPU.
> +	 * Else we will use different CPUs for IRQ0 and IRQ1.
> +	 * Also we are using cpumask_local_spread instead of
> +	 * cpumask_first for the node, because the node can be
> +	 * mem only.
> +	 */
> +	start_irqs = irqs;
> +	if (nvec > num_online_cpus()) {
> +		cpu = cpumask_local_spread(0, gc->numa_node);
> +		irq_set_affinity_and_hint(irqs[0], cpumask_of(cpu));
> +		irqs++;
> +		nvec -= 1;
> +	}
> +
> +	err = irq_setup(irqs, nvec, gc->numa_node, false);
>  	if (err)
>  		goto free_irq;
>  
> -	gc->max_num_msix = nvec;
> -	gc->num_msix_usable = nvec;
>  	cpus_read_unlock();
> -	kfree(irqs);
> +	kfree(start_irqs);
>  	return 0;
>  
> +free_current_gic:
> +	kfree(gic);
>  free_irq:
> -	for (j = i - 1; j >= 0; j--) {
> -		irq = pci_irq_vector(pdev, j);
> -		gic = &gc->irq_contexts[j];
> +	for (i -= 1; i >= 0; i--) {
> +		irq = pci_irq_vector(pdev, i);
> +		gic = xa_load(&gc->irq_contexts, i);
> +		if (WARN_ON(!gic))
> +			continue;
>  
>  		irq_update_affinity_hint(irq, NULL);
>  		free_irq(irq, gic);
> +		xa_erase(&gc->irq_contexts, i);
> +		kfree(gic);
>  	}
>  
> -	kfree(gc->irq_contexts);
> -	gc->irq_contexts = NULL;
> -free_irq_array:
> -	kfree(irqs);
> +	kfree(start_irqs);
>  free_irq_vector:
>  	cpus_read_unlock();
> -	pci_free_irq_vectors(pdev);
>  	return err;
>  }

...


* Re: [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA
  2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
                   ` (4 preceding siblings ...)
  2025-05-27 15:59 ` [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
@ 2025-05-28 18:55 ` Simon Horman
  2025-05-29 13:28   ` Shradha Gupta
  2025-06-01 14:53 ` Zhu Yanjun
  6 siblings, 1 reply; 22+ messages in thread
From: Simon Horman @ 2025-05-28 18:55 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Leon Romanovsky,
	Maxim Levitsky, Erni Sri Satya Vennela, Peter Zijlstra, netdev,
	linux-rdma, Paul Rosswurm, Shradha Gupta

On Tue, May 27, 2025 at 08:57:33AM -0700, Shradha Gupta wrote:
> In this patchset we want to enable the MANA driver to be able to
> allocate MSI-X vectors in PCI dynamically.
> 
> The first patch exports pci_msix_prepare_desc() in PCI to be able to
> correctly prepare descriptors for dynamically added MSI-X vectors.
> 
> The second patch adds the support of dynamic vector allocation in
> pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
> flag and using the pci_msix_prepare_desc() exported in first patch.
> 
> The third patch adds a detailed description of the irq_setup(), to
> help understand the function design better.
> 
> The fourth patch is a preparation patch for mana changes to support
> dynamic IRQ allocation. It contains changes in irq_setup() to allow
> skipping first sibling CPU sets, in case certain IRQs are already
> affinitized to them.
> 
> The fifth patch has the changes in MANA driver to be able to allocate
> MSI-X vectors dynamically. If the support does not exist it defaults to
> older behavior.

Hi Shradha,

It's unclear what the target tree for this patch-set is.
But if it is net-next, which seems likely given the code under
drivers/net/, then:

Please include that target in the subject of each patch in the patch-set.

	Subject: [PATCH v5 net-next 0/5] ...

And, moreover, ...

## Form letter - net-next-closed

The merge window for v6.16 has begun and therefore net-next is closed
for new drivers, features, code refactoring and optimizations. We are
currently accepting bug fixes only.

Please repost when net-next reopens after June 8th.

RFC patches sent for review only are obviously welcome at any time.


* Re: [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically
  2025-05-27 15:59 ` [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
  2025-05-28  8:16   ` Saurabh Singh Sengar
  2025-05-28 18:52   ` Simon Horman
@ 2025-05-29  3:45   ` Saurabh Singh Sengar
  2025-05-29 13:20     ` Shradha Gupta
  2 siblings, 1 reply; 22+ messages in thread
From: Saurabh Singh Sengar @ 2025-05-29  3:45 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

On Tue, May 27, 2025 at 08:59:03AM -0700, Shradha Gupta wrote:
> Currently, the MANA driver allocates MSI-X vectors statically based on
> MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> up allocating more vectors than it needs. This is because, by this time
> we do not have a HW channel and do not know how many IRQs should be
> allocated.
> 
> To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> after getting the value supported by hardware, dynamically add the
> remaining MSI-X vectors.
> 
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
>  Changes in v4:
>  * added BUG_ON at appropriate places
>  * moved xa_destroy to mana_gd_remove()
>  * rearranged the cleanup logic in mana_gd_setup_dyn_irqs()
>  * simplified processing around start_irq_index in mana_gd_setup_irqs()
>  * return 0 instead of return err as appropriate
> ---
>  Changes in v3:
>  * implemented irq_contexts as xarrays rather than list
>  * split the patch to create a preparation patch around irq_setup()
>  * add log when IRQ allocation/setup for remaining IRQs fails
> ---
>  Changes in v2:
>  * Use string 'MSI-X vectors' instead of 'pci vectors'
>  * make skip-cpu a bool instead of int
>  * rearrange the comment around the skip_cpu variable appropriately
>  * update the capability bit for driver indicating dynamic IRQ allocation
>  * enforced max line length to 80
>  * enforced RCT convention
>  * initialized gic to NULL, for when there is a possibility of gic
>    not being populated correctly
> ---
>  .../net/ethernet/microsoft/mana/gdma_main.c   | 306 +++++++++++++-----
>  include/net/mana/gdma.h                       |   8 +-
>  2 files changed, 235 insertions(+), 79 deletions(-)
> 
> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> index 763a548c4a2b..98ebecbec9a7 100644
> --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> @@ -6,6 +6,8 @@
>  #include <linux/pci.h>
>  #include <linux/utsname.h>
>  #include <linux/version.h>
> +#include <linux/msi.h>
> +#include <linux/irqdomain.h>
>  
>  #include <net/mana/mana.h>
>  
> @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
>  		return err ? err : -EPROTO;
>  	}
>  
> -	if (gc->num_msix_usable > resp.max_msix)
> -		gc->num_msix_usable = resp.max_msix;
> +	if (!pci_msix_can_alloc_dyn(pdev)) {
> +		if (gc->num_msix_usable > resp.max_msix)
> +			gc->num_msix_usable = resp.max_msix;
> +	} else {
> +		/* If dynamic allocation is enabled we have already allocated
> +		 * hwc msi
> +		 */
> +		gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
> +	}
>  
>  	if (gc->num_msix_usable <= 1)
>  		return -ENOSPC;
> @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
>  	}
>  
>  	queue->eq.msix_index = msi_index;
> -	gic = &gc->irq_contexts[msi_index];
> +	gic = xa_load(&gc->irq_contexts, msi_index);
> +	if (WARN_ON(!gic))
> +		return -EINVAL;
>  
>  	spin_lock_irqsave(&gic->lock, flags);
>  	list_add_rcu(&queue->entry, &gic->eq_list);
> @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
>  	if (WARN_ON(msix_index >= gc->num_msix_usable))
>  		return;
>  
> -	gic = &gc->irq_contexts[msix_index];
> +	gic = xa_load(&gc->irq_contexts, msix_index);
> +	if (WARN_ON(!gic))
> +		return;
> +
>  	spin_lock_irqsave(&gic->lock, flags);
>  	list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
>  		if (queue == eq) {
> @@ -1366,47 +1380,113 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
>  	return 0;
>  }
>  
> -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
>  {
>  	struct gdma_context *gc = pci_get_drvdata(pdev);
> -	unsigned int max_queues_per_port;
>  	struct gdma_irq_context *gic;
> -	unsigned int max_irqs, cpu;
> -	int start_irq_index = 1;
> -	int nvec, *irqs, irq;
> -	int err, i = 0, j;
> +	bool skip_first_cpu = false;
> +	int *irqs, irq, err, i;
>  
>  	cpus_read_lock();

Now that num_online_cpus is moved further down in this new logic,
do we want to reduce the critical section?

I don't think we want kmalloc_array to be protected.


> -	max_queues_per_port = num_online_cpus();
> -	if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> -		max_queues_per_port = MANA_MAX_NUM_QUEUES;
>  
> -	/* Need 1 interrupt for the Hardware communication Channel (HWC) */
> -	max_irqs = max_queues_per_port + 1;
> -
> -	nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
> -	if (nvec < 0) {
> -		cpus_read_unlock();
> -		return nvec;
> -	}
> -	if (nvec <= num_online_cpus())
> -		start_irq_index = 0;
> -
> -	irqs = kmalloc_array((nvec - start_irq_index), sizeof(int), GFP_KERNEL);
> +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
>  	if (!irqs) {
>  		err = -ENOMEM;
>  		goto free_irq_vector;
>  	}
>  
> -	gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
> -				   GFP_KERNEL);
> -	if (!gc->irq_contexts) {
> +	/*
> +	 * While processing the next pci irq vector, we start with index 1,
> +	 * as IRQ vector at index 0 is already processed for HWC.
> +	 * However, the population of irqs array starts with index 0, to be
> +	 * further used in irq_setup()
> +	 */
> +	for (i = 1; i <= nvec; i++) {
> +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> +		if (!gic) {
> +			err = -ENOMEM;
> +			goto free_irq;
> +		}
> +		gic->handler = mana_gd_process_eq_events;
> +		INIT_LIST_HEAD(&gic->eq_list);
> +		spin_lock_init(&gic->lock);
> +
> +		snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> +			 i - 1, pci_name(pdev));
> +
> +		/* one pci vector is already allocated for HWC */
> +		irqs[i - 1] = pci_irq_vector(pdev, i);
> +		if (irqs[i - 1] < 0) {
> +			err = irqs[i - 1];
> +			goto free_current_gic;
> +		}
> +
> +		err = request_irq(irqs[i - 1], mana_gd_intr, 0, gic->name, gic);
> +		if (err)
> +			goto free_current_gic;
> +
> +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> +	}
> +
> +	/*
> +	 * When calling irq_setup() for dynamically added IRQs, if the
> +	 * number of CPUs is greater than or equal to the number of
> +	 * allocated MSI-X vectors, skip the first CPU sibling group,
> +	 * since it is already affinitized to the HWC IRQ.
> +	 */
> +	if (gc->num_msix_usable <= num_online_cpus())
> +		skip_first_cpu = true;
> +
> +	err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
> +	if (err)
> +		goto free_irq;
> +
> +	cpus_read_unlock();
> +	kfree(irqs);
> +	return 0;
> +
> +free_current_gic:
> +	kfree(gic);
> +free_irq:
> +	for (i -= 1; i > 0; i--) {
> +		irq = pci_irq_vector(pdev, i);
> +		gic = xa_load(&gc->irq_contexts, i);
> +		if (WARN_ON(!gic))
> +			continue;
> +
> +		irq_update_affinity_hint(irq, NULL);
> +		free_irq(irq, gic);
> +		xa_erase(&gc->irq_contexts, i);
> +		kfree(gic);
> +	}
> +	kfree(irqs);
> +free_irq_vector:
> +	cpus_read_unlock();
> +	return err;
> +}
> +
> +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> +{
> +	struct gdma_context *gc = pci_get_drvdata(pdev);
> +	struct gdma_irq_context *gic;
> +	int *irqs, *start_irqs, irq;
> +	unsigned int cpu;
> +	int err, i;
> +
> +	cpus_read_lock();

Same here

> +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> +	if (!irqs) {
>  		err = -ENOMEM;
> -		goto free_irq_array;
> +		goto free_irq_vector;
>  	}
>  
>  	for (i = 0; i < nvec; i++) {
> -		gic = &gc->irq_contexts[i];
> +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> +		if (!gic) {
> +			err = -ENOMEM;
> +			goto free_irq;
> +		}
> +
>  		gic->handler = mana_gd_process_eq_events;
>  		INIT_LIST_HEAD(&gic->eq_list);
>  		spin_lock_init(&gic->lock);

<snip>

- Saurabh

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations
  2025-05-27 15:57 ` [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
@ 2025-05-29  3:46   ` Saurabh Singh Sengar
  0 siblings, 0 replies; 22+ messages in thread
From: Saurabh Singh Sengar @ 2025-05-29  3:46 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen,
	Thomas Gleixner, Bjorn Helgaas, Michael Kelley, linux-hyperv,
	linux-pci, linux-kernel, Nipun Gupta, Yury Norov, Kevin Tian,
	Long Li, Rob Herring, Manivannan Sadhasivam,
	Krzysztof Wilczyński, Lorenzo Pieralisi, Dexuan Cui,
	Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, netdev, linux-rdma,
	Paul Rosswurm, Shradha Gupta

On Tue, May 27, 2025 at 08:57:57AM -0700, Shradha Gupta wrote:
> For supporting dynamic MSI-X vector allocation by PCI controllers, enabling
> the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN is not enough, msix_prepare_msi_desc()
> to prepare the MSI descriptor is also needed.
> 
> Export pci_msix_prepare_desc() to allow PCI controllers to support dynamic
> MSI-X vector allocation.
> 
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  Changes in v3
>  * Improved the patch description by removing abbreviations
> ---
>  drivers/pci/msi/irqdomain.c | 5 +++--
>  include/linux/msi.h         | 2 ++
>  2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
> index d7ba8795d60f..43129aa6d6c7 100644
> --- a/drivers/pci/msi/irqdomain.c
> +++ b/drivers/pci/msi/irqdomain.c
> @@ -222,13 +222,14 @@ static void pci_irq_unmask_msix(struct irq_data *data)
>  	pci_msix_unmask(irq_data_get_msi_desc(data));
>  }
>  
> -static void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
> -				  struct msi_desc *desc)
> +void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
> +			   struct msi_desc *desc)
>  {
>  	/* Don't fiddle with preallocated MSI descriptors */
>  	if (!desc->pci.mask_base)
>  		msix_prepare_msi_desc(to_pci_dev(desc->dev), desc);
>  }
> +EXPORT_SYMBOL_GPL(pci_msix_prepare_desc);
>  
>  static const struct msi_domain_template pci_msix_template = {
>  	.chip = {
> diff --git a/include/linux/msi.h b/include/linux/msi.h
> index 86e42742fd0f..d5864d5e75c2 100644
> --- a/include/linux/msi.h
> +++ b/include/linux/msi.h
> @@ -691,6 +691,8 @@ struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode,
>  					     struct irq_domain *parent);
>  u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev);
>  struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev);
> +void pci_msix_prepare_desc(struct irq_domain *domain, msi_alloc_info_t *arg,
> +			   struct msi_desc *desc);
>  #else /* CONFIG_PCI_MSI */
>  static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev)
>  {
> -- 
> 2.34.1
>

Reviewed-by: Saurabh Sengar <ssengar@linux.microsoft.com> 


* Re: [PATCH v4 2/5] PCI: hv: Allow dynamic MSI-X vector allocation
  2025-05-27 15:58 ` [PATCH v4 2/5] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
@ 2025-05-29  3:46   ` Saurabh Singh Sengar
  0 siblings, 0 replies; 22+ messages in thread
From: Saurabh Singh Sengar @ 2025-05-29  3:46 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: Bjorn Helgaas, Rob Herring, Manivannan Sadhasivam,
	Krzysztof Wilczyński, Lorenzo Pieralisi, Dexuan Cui,
	Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Andrew Lunn, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, Konstantin Taranov,
	Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, netdev, linux-rdma,
	Paul Rosswurm, Shradha Gupta

On Tue, May 27, 2025 at 08:58:09AM -0700, Shradha Gupta wrote:
> Allow dynamic MSI-X vector allocation for pci_hyperv PCI controller
> by adding support for the flag MSI_FLAG_PCI_MSIX_ALLOC_DYN and using
> pci_msix_prepare_desc() to prepare the MSI-X descriptors.
> 
> Feature support is added for both x86 and ARM64.
> 
> Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> ---
>  Changes in v4:
>  * use the same prepare_desc() callback for arm and x86
> ---
>  Changes in v3:
>  * Add arm64 support
> ---
>  Changes in v2:
>  * split the patch to keep changes in PCI and pci_hyperv controller
> >    separate
>  * replace strings "pci vectors" by "MSI-X vectors"
> ---
>  drivers/pci/controller/pci-hyperv.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
> index ac27bda5ba26..0c790f35ad0e 100644
> --- a/drivers/pci/controller/pci-hyperv.c
> +++ b/drivers/pci/controller/pci-hyperv.c
> @@ -2063,6 +2063,7 @@ static struct irq_chip hv_msi_irq_chip = {
>  static struct msi_domain_ops hv_msi_ops = {
>  	.msi_prepare	= hv_msi_prepare,
>  	.msi_free	= hv_msi_free,
> +	.prepare_desc	= pci_msix_prepare_desc,
>  };
>  
>  /**
> @@ -2084,7 +2085,7 @@ static int hv_pcie_init_irq_domain(struct hv_pcibus_device *hbus)
>  	hbus->msi_info.ops = &hv_msi_ops;
>  	hbus->msi_info.flags = (MSI_FLAG_USE_DEF_DOM_OPS |
>  		MSI_FLAG_USE_DEF_CHIP_OPS | MSI_FLAG_MULTI_PCI_MSI |
> -		MSI_FLAG_PCI_MSIX);
> +		MSI_FLAG_PCI_MSIX | MSI_FLAG_PCI_MSIX_ALLOC_DYN);
>  	hbus->msi_info.handler = FLOW_HANDLER;
>  	hbus->msi_info.handler_name = FLOW_NAME;
>  	hbus->msi_info.data = hbus;
> -- 
> 2.34.1
> 

Reviewed-by: Saurabh Sengar <ssengar@linux.microsoft.com>


* Re: [PATCH v4 3/5] net: mana: explain irq_setup() algorithm
  2025-05-27 19:10   ` Yury Norov
@ 2025-05-29 13:15     ` Shradha Gupta
  0 siblings, 0 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-05-29 13:15 UTC (permalink / raw)
  To: Yury Norov
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

On Tue, May 27, 2025 at 03:10:15PM -0400, Yury Norov wrote:
> So now git will think that you're the author of the patch.
> 
> If author and sender are different people, the first line in commit
> message body should state that. In this case, it should be:
> 
> From: Yury Norov <yury.norov@gmail.com>
> 
> Please consider this example:
> 
> https://patchew.org/linux/20250326-fixed-type-genmasks-v8-0-24afed16ca00@wanadoo.fr/20250326-fixed-type-genmasks-v8-6-24afed16ca00@wanadoo.fr/
> 
> Thanks,
> Yury
>

Understood, thank you Yury. I'll make this change in the next version.

Regards,
Shradha. 
> On Tue, May 27, 2025 at 08:58:25AM -0700, Shradha Gupta wrote:
> > Commit 91bfe210e196 ("net: mana: add a function to spread IRQs per CPUs")
> > added the irq_setup() function that distributes IRQs on CPUs according
> > to a tricky heuristic. The corresponding commit message explains the
> > heuristic.
> > 
> > Duplicate it in the source code to make it available to readers
> > without digging through git history. Also, add a more detailed
> > explanation of how the heuristic is implemented.
> > 
> > Signed-off-by: Yury Norov [NVIDIA] <yury.norov@gmail.com>
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > ---
> >  .../net/ethernet/microsoft/mana/gdma_main.c   | 41 +++++++++++++++++++
> >  1 file changed, 41 insertions(+)
> > 
> > diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > index 4ffaf7588885..f9e8d4d1ba3a 100644
> > --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > @@ -1288,6 +1288,47 @@ void mana_gd_free_res_map(struct gdma_resource *r)
> >  	r->size = 0;
> >  }
> >  
> > +/*
> > + * Spread on CPUs with the following heuristics:
> > + *
> > + * 1. No more than one IRQ per CPU, if possible;
> > + * 2. NUMA locality is the second priority;
> > + * 3. Sibling dislocality is the last priority.
> > + *
> > + * Let's consider this topology:
> > + *
> > + * Node            0               1
> > + * Core        0       1       2       3
> > + * CPU       0   1   2   3   4   5   6   7
> > + *
> > + * The most performant IRQ distribution based on the above topology
> > + * and heuristics may look like this:
> > + *
> > + * IRQ     Nodes   Cores   CPUs
> > + * 0       0       0       0-1
> > + * 1       0       1       2-3
> > + * 2       0       0       0-1
> > + * 3       0       1       2-3
> > + * 4       1       2       4-5
> > + * 5       1       3       6-7
> > + * 6       1       2       4-5
> > + * 7       1       3       6-7
> > + *
> > + * The heuristic is implemented as follows.
> > + *
> > + * The outer for_each() loop resets the 'weight' to the actual number
> > + * of CPUs in the hop. Then the inner for_each() loop decrements it by
> > + * the number of sibling groups (cores) while assigning the first set
> > + * of IRQs to each group. IRQs 0 and 1 above are distributed this way.
> > + *
> > + * Now, because NUMA locality is more important, we should walk the
> > + * same set of siblings and assign the 2nd set of IRQs (2 and 3); this
> > + * is implemented by the middle while() loop. We do this until the
> > + * number of IRQs assigned on this hop equals the number of CPUs in
> > + * the hop (weight == 0). Then we switch to the next hop and
> > + * do the same thing.
> > + */
> > +
> >  static int irq_setup(unsigned int *irqs, unsigned int len, int node)
> >  {
> >  	const struct cpumask *next, *prev = cpu_none_mask;
> > -- 
> > 2.34.1


* Re: [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically
  2025-05-28  8:16   ` Saurabh Singh Sengar
@ 2025-05-29 13:17     ` Shradha Gupta
  0 siblings, 0 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-05-29 13:17 UTC (permalink / raw)
  To: Saurabh Singh Sengar
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

On Wed, May 28, 2025 at 01:16:38AM -0700, Saurabh Singh Sengar wrote:
> On Tue, May 27, 2025 at 08:59:03AM -0700, Shradha Gupta wrote:
> > Currently, the MANA driver allocates MSI-X vectors statically based on
> > MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> > up allocating more vectors than it needs. This is because, by this time,
> > we do not have a HW channel and do not know how many IRQs should be
> > allocated.
> > 
> > To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> > after getting the value supported by hardware, dynamically add the
> > remaining MSI-X vectors.
> > 
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> >  Changes in v4:
> >  * added BUG_ON at appropriate places
> >  * moved xa_destroy to mana_gd_remove()
> >  * rearranged the cleanup logic in mana_gd_setup_dyn_irqs()
> >  * simplified processing around start_irq_index in mana_gd_setup_irqs()
> >  * return 0 instead of return err as appropriate
> > ---
> >  Changes in v3:
> >  * implemented irq_contexts as xarrays rather than list
> >  * split the patch to create a preparation patch around irq_setup()
> >  * add log when IRQ allocation/setup for remaining IRQs fails
> > ---
> >  Changes in v2:
> >  * Use string 'MSI-X vectors' instead of 'pci vectors'
> >  * make skip-cpu a bool instead of int
> >  * rearrange the comment around the skip_cpu variable appropriately
> >  * update the capability bit for driver indicating dynamic IRQ allocation
> >  * enforced max line length to 80
> >  * enforced RCT convention
> >  * initialized gic to NULL, for when there is a possibility of gic
> >    not being populated correctly
> > ---
> >  .../net/ethernet/microsoft/mana/gdma_main.c   | 306 +++++++++++++-----
> >  include/net/mana/gdma.h                       |   8 +-
> >  2 files changed, 235 insertions(+), 79 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > index 763a548c4a2b..98ebecbec9a7 100644
> > --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > @@ -6,6 +6,8 @@
> >  #include <linux/pci.h>
> >  #include <linux/utsname.h>
> >  #include <linux/version.h>
> > +#include <linux/msi.h>
> > +#include <linux/irqdomain.h>
> >  
> >  #include <net/mana/mana.h>
> >  
> > @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
> >  		return err ? err : -EPROTO;
> >  	}
> >  
> > -	if (gc->num_msix_usable > resp.max_msix)
> > -		gc->num_msix_usable = resp.max_msix;
> > +	if (!pci_msix_can_alloc_dyn(pdev)) {
> > +		if (gc->num_msix_usable > resp.max_msix)
> > +			gc->num_msix_usable = resp.max_msix;
> > +	} else {
> > +		/* If dynamic allocation is enabled, we have already
> > +		 * allocated the HWC MSI-X vector
> > +		 */
> > +		gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
> > +	}
> >  
> >  	if (gc->num_msix_usable <= 1)
> >  		return -ENOSPC;
> > @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
> >  	}
> >  
> >  	queue->eq.msix_index = msi_index;
> > -	gic = &gc->irq_contexts[msi_index];
> > +	gic = xa_load(&gc->irq_contexts, msi_index);
> > +	if (WARN_ON(!gic))
> > +		return -EINVAL;
> >  
> >  	spin_lock_irqsave(&gic->lock, flags);
> >  	list_add_rcu(&queue->entry, &gic->eq_list);
> > @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
> >  	if (WARN_ON(msix_index >= gc->num_msix_usable))
> >  		return;
> >  
> > -	gic = &gc->irq_contexts[msix_index];
> > +	gic = xa_load(&gc->irq_contexts, msix_index);
> > +	if (WARN_ON(!gic))
> > +		return;
> > +
> >  	spin_lock_irqsave(&gic->lock, flags);
> >  	list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
> >  		if (queue == eq) {
> > @@ -1366,47 +1380,113 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> >  	return 0;
> >  }
> >  
> > -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> > +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
> >  {
> >  	struct gdma_context *gc = pci_get_drvdata(pdev);
> > -	unsigned int max_queues_per_port;
> >  	struct gdma_irq_context *gic;
> > -	unsigned int max_irqs, cpu;
> > -	int start_irq_index = 1;
> > -	int nvec, *irqs, irq;
> > -	int err, i = 0, j;
> > +	bool skip_first_cpu = false;
> > +	int *irqs, irq, err, i;
> >  
> >  	cpus_read_lock();
> > -	max_queues_per_port = num_online_cpus();
> > -	if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> > -		max_queues_per_port = MANA_MAX_NUM_QUEUES;
> >  
> > -	/* Need 1 interrupt for the Hardware communication Channel (HWC) */
> > -	max_irqs = max_queues_per_port + 1;
> > -
> > -	nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
> > -	if (nvec < 0) {
> > -		cpus_read_unlock();
> > -		return nvec;
> > -	}
> > -	if (nvec <= num_online_cpus())
> > -		start_irq_index = 0;
> > -
> > -	irqs = kmalloc_array((nvec - start_irq_index), sizeof(int), GFP_KERNEL);
> > +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> >  	if (!irqs) {
> >  		err = -ENOMEM;
> >  		goto free_irq_vector;
> >  	}
> >  
> > -	gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
> > -				   GFP_KERNEL);
> > -	if (!gc->irq_contexts) {
> > +	/*
> > +	 * While processing the next PCI IRQ vector, we start with index 1,
> > +	 * as the IRQ vector at index 0 is already processed for the HWC.
> > +	 * However, the population of the irqs array starts with index 0,
> > +	 * to be used later in irq_setup().
> > +	 */
> > +	for (i = 1; i <= nvec; i++) {
> > +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> > +		if (!gic) {
> > +			err = -ENOMEM;
> > +			goto free_irq;
> > +		}
> > +		gic->handler = mana_gd_process_eq_events;
> > +		INIT_LIST_HEAD(&gic->eq_list);
> > +		spin_lock_init(&gic->lock);
> > +
> > +		snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> > +			 i - 1, pci_name(pdev));
> > +
> > +		/* one PCI vector is already allocated for the HWC */
> > +		irqs[i - 1] = pci_irq_vector(pdev, i);
> > +		if (irqs[i - 1] < 0) {
> > +			err = irqs[i - 1];
> > +			goto free_current_gic;
> > +		}
> > +
> > +		err = request_irq(irqs[i - 1], mana_gd_intr, 0, gic->name, gic);
> > +		if (err)
> > +			goto free_current_gic;
> > +
> > +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> > +	}
> > +
> > +	/*
> > +	 * When calling irq_setup() for dynamically added IRQs, if the
> > +	 * number of CPUs is greater than or equal to the number of
> > +	 * allocated MSI-X vectors, skip the first CPU sibling group,
> > +	 * since it is already affinitized to the HWC IRQ.
> > +	 */
> > +	if (gc->num_msix_usable <= num_online_cpus())
> > +		skip_first_cpu = true;
> > +
> > +	err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
> > +	if (err)
> > +		goto free_irq;
> > +
> > +	cpus_read_unlock();
> > +	kfree(irqs);
> > +	return 0;
> > +
> > +free_current_gic:
> > +	kfree(gic);
> > +free_irq:
> > +	for (i -= 1; i > 0; i--) {
> > +		irq = pci_irq_vector(pdev, i);
> > +		gic = xa_load(&gc->irq_contexts, i);
> > +		if (WARN_ON(!gic))
> > +			continue;
> > +
> > +		irq_update_affinity_hint(irq, NULL);
> > +		free_irq(irq, gic);
> > +		xa_erase(&gc->irq_contexts, i);
> > +		kfree(gic);
> > +	}
> > +	kfree(irqs);
> > +free_irq_vector:
> > +	cpus_read_unlock();
> > +	return err;
> > +}
> > +
> > +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> > +{
> > +	struct gdma_context *gc = pci_get_drvdata(pdev);
> > +	struct gdma_irq_context *gic;
> > +	int *irqs, *start_irqs, irq;
> > +	unsigned int cpu;
> > +	int err, i;
> > +
> > +	cpus_read_lock();
> > +
> > +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> > +	if (!irqs) {
> >  		err = -ENOMEM;
> > -		goto free_irq_array;
> > +		goto free_irq_vector;
> >  	}
> >  
> >  	for (i = 0; i < nvec; i++) {
> > -		gic = &gc->irq_contexts[i];
> > +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> > +		if (!gic) {
> > +			err = -ENOMEM;
> > +			goto free_irq;
> > +		}
> > +
> >  		gic->handler = mana_gd_process_eq_events;
> >  		INIT_LIST_HEAD(&gic->eq_list);
> >  		spin_lock_init(&gic->lock);
> > @@ -1418,69 +1498,128 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> >  			snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> >  				 i - 1, pci_name(pdev));
> >  
> > -		irq = pci_irq_vector(pdev, i);
> > -		if (irq < 0) {
> > -			err = irq;
> > -			goto free_irq;
> > +		irqs[i] = pci_irq_vector(pdev, i);
> > +		if (irqs[i] < 0) {
> > +			err = irqs[i];
> > +			goto free_current_gic;
> >  		}
> >  
> > -		if (!i) {
> > -			err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
> > -			if (err)
> > -				goto free_irq;
> > -
> > -			/* If number of IRQ is one extra than number of online CPUs,
> > -			 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> > -			 * same CPU.
> > -			 * Else we will use different CPUs for IRQ0 and IRQ1.
> > -			 * Also we are using cpumask_local_spread instead of
> > -			 * cpumask_first for the node, because the node can be
> > -			 * mem only.
> > -			 */
> > -			if (start_irq_index) {
> > -				cpu = cpumask_local_spread(i, gc->numa_node);
> > -				irq_set_affinity_and_hint(irq, cpumask_of(cpu));
> > -			} else {
> > -				irqs[start_irq_index] = irq;
> > -			}
> > -		} else {
> > -			irqs[i - start_irq_index] = irq;
> > -			err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
> > -					  gic->name, gic);
> > -			if (err)
> > -				goto free_irq;
> > -		}
> > +		err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
> > +		if (err)
> > +			goto free_current_gic;
> > +
> > +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> >  	}
> >  
> > -	err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);
> > +	/* If the number of IRQs is one more than the number of online
> > +	 * CPUs, then we need to assign IRQ0 (the HWC IRQ) and IRQ1 to
> > +	 * the same CPU.
> > +	 * Otherwise we will use different CPUs for IRQ0 and IRQ1.
> > +	 * Also, we are using cpumask_local_spread instead of
> > +	 * cpumask_first for the node, because the node can be
> > +	 * memory-only.
> > +	 */
> > +	start_irqs = irqs;
> > +	if (nvec > num_online_cpus()) {
> > +		cpu = cpumask_local_spread(0, gc->numa_node);
> > +		irq_set_affinity_and_hint(irqs[0], cpumask_of(cpu));
> > +		irqs++;
> > +		nvec -= 1;
> > +	}
> > +
> > +	err = irq_setup(irqs, nvec, gc->numa_node, false);
> >  	if (err)
> >  		goto free_irq;
> >  
> > -	gc->max_num_msix = nvec;
> > -	gc->num_msix_usable = nvec;
> >  	cpus_read_unlock();
> > -	kfree(irqs);
> > +	kfree(start_irqs);
> >  	return 0;
> >  
> > +free_current_gic:
> > +	kfree(gic);
> >  free_irq:
> > -	for (j = i - 1; j >= 0; j--) {
> > -		irq = pci_irq_vector(pdev, j);
> > -		gic = &gc->irq_contexts[j];
> > +	for (i -= 1; i >= 0; i--) {
> > +		irq = pci_irq_vector(pdev, i);
> > +		gic = xa_load(&gc->irq_contexts, i);
> > +		if (WARN_ON(!gic))
> > +			continue;
> >  
> >  		irq_update_affinity_hint(irq, NULL);
> >  		free_irq(irq, gic);
> > +		xa_erase(&gc->irq_contexts, i);
> > +		kfree(gic);
> >  	}
> >  
> > -	kfree(gc->irq_contexts);
> > -	gc->irq_contexts = NULL;
> > -free_irq_array:
> > -	kfree(irqs);
> > +	kfree(start_irqs);
> 
> There is a case when start_irqs can be used here uninitialized.
> 
> - Saurabh

Thanks Saurabh. I'll get this in the next version


* Re: [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically
  2025-05-28 18:52   ` Simon Horman
@ 2025-05-29 13:18     ` Shradha Gupta
  0 siblings, 0 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-05-29 13:18 UTC (permalink / raw)
  To: Simon Horman
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

On Wed, May 28, 2025 at 07:52:35PM +0100, Simon Horman wrote:
> On Tue, May 27, 2025 at 08:59:03AM -0700, Shradha Gupta wrote:
> > Currently, the MANA driver allocates MSI-X vectors statically based on
> > MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> > up allocating more vectors than it needs. This is because, by this time,
> > we do not have a HW channel and do not know how many IRQs should be
> > allocated.
> > 
> > To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> > after getting the value supported by hardware, dynamically add the
> > remaining MSI-X vectors.
> > 
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> 
> ...
> 
> > +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> > +{
> > +	struct gdma_context *gc = pci_get_drvdata(pdev);
> > +	struct gdma_irq_context *gic;
> > +	int *irqs, *start_irqs, irq;
> > +	unsigned int cpu;
> > +	int err, i;
> > +
> > +	cpus_read_lock();
> > +
> > +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> > +	if (!irqs) {
> >  		err = -ENOMEM;
> > -		goto free_irq_array;
> > +		goto free_irq_vector;
> >  	}
> >  
> >  	for (i = 0; i < nvec; i++) {
> > -		gic = &gc->irq_contexts[i];
> > +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> > +		if (!gic) {
> > +			err = -ENOMEM;
> > +			goto free_irq;
> > +		}
> > +
> >  		gic->handler = mana_gd_process_eq_events;
> >  		INIT_LIST_HEAD(&gic->eq_list);
> >  		spin_lock_init(&gic->lock);
> > @@ -1418,69 +1498,128 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
> >  			snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> >  				 i - 1, pci_name(pdev));
> >  
> > -		irq = pci_irq_vector(pdev, i);
> > -		if (irq < 0) {
> > -			err = irq;
> > -			goto free_irq;
> > +		irqs[i] = pci_irq_vector(pdev, i);
> > +		if (irqs[i] < 0) {
> > +			err = irqs[i];
> > +			goto free_current_gic;
> >  		}
> >  
> > -		if (!i) {
> > -			err = request_irq(irq, mana_gd_intr, 0, gic->name, gic);
> > -			if (err)
> > -				goto free_irq;
> > -
> > -			/* If number of IRQ is one extra than number of online CPUs,
> > -			 * then we need to assign IRQ0 (hwc irq) and IRQ1 to
> > -			 * same CPU.
> > -			 * Else we will use different CPUs for IRQ0 and IRQ1.
> > -			 * Also we are using cpumask_local_spread instead of
> > -			 * cpumask_first for the node, because the node can be
> > -			 * mem only.
> > -			 */
> > -			if (start_irq_index) {
> > -				cpu = cpumask_local_spread(i, gc->numa_node);
> > -				irq_set_affinity_and_hint(irq, cpumask_of(cpu));
> > -			} else {
> > -				irqs[start_irq_index] = irq;
> > -			}
> > -		} else {
> > -			irqs[i - start_irq_index] = irq;
> > -			err = request_irq(irqs[i - start_irq_index], mana_gd_intr, 0,
> > -					  gic->name, gic);
> > -			if (err)
> > -				goto free_irq;
> > -		}
> > +		err = request_irq(irqs[i], mana_gd_intr, 0, gic->name, gic);
> > +		if (err)
> > +			goto free_current_gic;
> 
> Jumping to free_current_gic will free start_irqs.
> However, start_irqs isn't initialised until a few lines below.
> 
> Flagged by Smatch.
> 

Thanks Simon, I'll fix this in the next version.

> > +
> > +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> >  	}
> >  
> > -	err = irq_setup(irqs, nvec - start_irq_index, gc->numa_node, false);
> > +	/* If the number of IRQs is one more than the number of online
> > +	 * CPUs, then we need to assign IRQ0 (the HWC IRQ) and IRQ1 to
> > +	 * the same CPU.
> > +	 * Otherwise we will use different CPUs for IRQ0 and IRQ1.
> > +	 * Also, we are using cpumask_local_spread instead of
> > +	 * cpumask_first for the node, because the node can be
> > +	 * memory-only.
> > +	 */
> > +	start_irqs = irqs;
> > +	if (nvec > num_online_cpus()) {
> > +		cpu = cpumask_local_spread(0, gc->numa_node);
> > +		irq_set_affinity_and_hint(irqs[0], cpumask_of(cpu));
> > +		irqs++;
> > +		nvec -= 1;
> > +	}
> > +
> > +	err = irq_setup(irqs, nvec, gc->numa_node, false);
> >  	if (err)
> >  		goto free_irq;
> >  
> > -	gc->max_num_msix = nvec;
> > -	gc->num_msix_usable = nvec;
> >  	cpus_read_unlock();
> > -	kfree(irqs);
> > +	kfree(start_irqs);
> >  	return 0;
> >  
> > +free_current_gic:
> > +	kfree(gic);
> >  free_irq:
> > -	for (j = i - 1; j >= 0; j--) {
> > -		irq = pci_irq_vector(pdev, j);
> > -		gic = &gc->irq_contexts[j];
> > +	for (i -= 1; i >= 0; i--) {
> > +		irq = pci_irq_vector(pdev, i);
> > +		gic = xa_load(&gc->irq_contexts, i);
> > +		if (WARN_ON(!gic))
> > +			continue;
> >  
> >  		irq_update_affinity_hint(irq, NULL);
> >  		free_irq(irq, gic);
> > +		xa_erase(&gc->irq_contexts, i);
> > +		kfree(gic);
> >  	}
> >  
> > -	kfree(gc->irq_contexts);
> > -	gc->irq_contexts = NULL;
> > -free_irq_array:
> > -	kfree(irqs);
> > +	kfree(start_irqs);
> >  free_irq_vector:
> >  	cpus_read_unlock();
> > -	pci_free_irq_vectors(pdev);
> >  	return err;
> >  }
> 
> ...


* Re: [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically
  2025-05-29  3:45   ` Saurabh Singh Sengar
@ 2025-05-29 13:20     ` Shradha Gupta
  0 siblings, 0 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-05-29 13:20 UTC (permalink / raw)
  To: Saurabh Singh Sengar
  Cc: Dexuan Cui, Wei Liu, Haiyang Zhang, K. Y. Srinivasan, Andrew Lunn,
	David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Konstantin Taranov, Simon Horman, Leon Romanovsky, Maxim Levitsky,
	Erni Sri Satya Vennela, Peter Zijlstra, Michael Kelley,
	linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, netdev, linux-rdma, Paul Rosswurm,
	Shradha Gupta

On Wed, May 28, 2025 at 08:45:20PM -0700, Saurabh Singh Sengar wrote:
> On Tue, May 27, 2025 at 08:59:03AM -0700, Shradha Gupta wrote:
> > Currently, the MANA driver allocates MSI-X vectors statically based on
> > MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> > up allocating more vectors than it needs. This is because, by this time,
> > we do not have a HW channel and do not know how many IRQs should be
> > allocated.
> > 
> > To avoid this, we allocate 1 MSI-X vector during the creation of HWC and
> > after getting the value supported by hardware, dynamically add the
> > remaining MSI-X vectors.
> > 
> > Signed-off-by: Shradha Gupta <shradhagupta@linux.microsoft.com>
> > Reviewed-by: Haiyang Zhang <haiyangz@microsoft.com>
> > ---
> >  Changes in v4:
> >  * added BUG_ON at appropriate places
> >  * moved xa_destroy to mana_gd_remove()
> >  * rearranged the cleanup logic in mana_gd_setup_dyn_irqs()
> >  * simplified processing around start_irq_index in mana_gd_setup_irqs()
> >  * return 0 instead of return err as appropriate
> > ---
> >  Changes in v3:
> >  * implemented irq_contexts as xarrays rather than list
> >  * split the patch to create a preparation patch around irq_setup()
> >  * add log when IRQ allocation/setup for remaining IRQs fails
> > ---
> >  Changes in v2:
> >  * Use string 'MSI-X vectors' instead of 'pci vectors'
> >  * make skip-cpu a bool instead of int
> >  * rearrange the comment around the skip_cpu variable appropriately
> >  * update the capability bit for driver indicating dynamic IRQ allocation
> >  * enforced max line length to 80
> >  * enforced RCT convention
> >  * initialized gic to NULL, for when there is a possibility of gic
> >    not being populated correctly
> > ---
> >  .../net/ethernet/microsoft/mana/gdma_main.c   | 306 +++++++++++++-----
> >  include/net/mana/gdma.h                       |   8 +-
> >  2 files changed, 235 insertions(+), 79 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > index 763a548c4a2b..98ebecbec9a7 100644
> > --- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > +++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
> > @@ -6,6 +6,8 @@
> >  #include <linux/pci.h>
> >  #include <linux/utsname.h>
> >  #include <linux/version.h>
> > +#include <linux/msi.h>
> > +#include <linux/irqdomain.h>
> >  
> >  #include <net/mana/mana.h>
> >  
> > @@ -80,8 +82,15 @@ static int mana_gd_query_max_resources(struct pci_dev *pdev)
> >  		return err ? err : -EPROTO;
> >  	}
> >  
> > -	if (gc->num_msix_usable > resp.max_msix)
> > -		gc->num_msix_usable = resp.max_msix;
> > +	if (!pci_msix_can_alloc_dyn(pdev)) {
> > +		if (gc->num_msix_usable > resp.max_msix)
> > +			gc->num_msix_usable = resp.max_msix;
> > +	} else {
> > +		/* If dynamic allocation is enabled we have already allocated
> > +		 * hwc msi
> > +		 */
> > +		gc->num_msix_usable = min(resp.max_msix, num_online_cpus() + 1);
> > +	}
> >  
> >  	if (gc->num_msix_usable <= 1)
> >  		return -ENOSPC;
> > @@ -482,7 +491,9 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
> >  	}
> >  
> >  	queue->eq.msix_index = msi_index;
> > -	gic = &gc->irq_contexts[msi_index];
> > +	gic = xa_load(&gc->irq_contexts, msi_index);
> > +	if (WARN_ON(!gic))
> > +		return -EINVAL;
> >  
> >  	spin_lock_irqsave(&gic->lock, flags);
> >  	list_add_rcu(&queue->entry, &gic->eq_list);
> > @@ -507,7 +518,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
> >  	if (WARN_ON(msix_index >= gc->num_msix_usable))
> >  		return;
> >  
> > -	gic = &gc->irq_contexts[msix_index];
> > +	gic = xa_load(&gc->irq_contexts, msix_index);
> > +	if (WARN_ON(!gic))
> > +		return;
> > +
> >  	spin_lock_irqsave(&gic->lock, flags);
> >  	list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
> >  		if (queue == eq) {
> > @@ -1366,47 +1380,113 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node,
> >  	return 0;
> >  }
> >  
> > -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> > +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
> >  {
> >  	struct gdma_context *gc = pci_get_drvdata(pdev);
> > -	unsigned int max_queues_per_port;
> >  	struct gdma_irq_context *gic;
> > -	unsigned int max_irqs, cpu;
> > -	int start_irq_index = 1;
> > -	int nvec, *irqs, irq;
> > -	int err, i = 0, j;
> > +	bool skip_first_cpu = false;
> > +	int *irqs, irq, err, i;
> >  
> >  	cpus_read_lock();
> 
> Now that num_online_cpus is moved further down in this new logic,
> do we want to reduce the critical section?
> 
> I don't think we want kmalloc_array to be protected.
> 
> 
> > -	max_queues_per_port = num_online_cpus();
> > -	if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> > -		max_queues_per_port = MANA_MAX_NUM_QUEUES;
> >  
> > -	/* Need 1 interrupt for the Hardware communication Channel (HWC) */
> > -	max_irqs = max_queues_per_port + 1;
> > -
> > -	nvec = pci_alloc_irq_vectors(pdev, 2, max_irqs, PCI_IRQ_MSIX);
> > -	if (nvec < 0) {
> > -		cpus_read_unlock();
> > -		return nvec;
> > -	}
> > -	if (nvec <= num_online_cpus())
> > -		start_irq_index = 0;
> > -
> > -	irqs = kmalloc_array((nvec - start_irq_index), sizeof(int), GFP_KERNEL);
> > +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> >  	if (!irqs) {
> >  		err = -ENOMEM;
> >  		goto free_irq_vector;
> >  	}
> >  
> > -	gc->irq_contexts = kcalloc(nvec, sizeof(struct gdma_irq_context),
> > -				   GFP_KERNEL);
> > -	if (!gc->irq_contexts) {
> > +	/*
> > +	 * While processing the next pci irq vector, we start with index 1,
> > +	 * as IRQ vector at index 0 is already processed for HWC.
> > +	 * However, the population of irqs array starts with index 0, to be
> > +	 * further used in irq_setup()
> > +	 */
> > +	for (i = 1; i <= nvec; i++) {
> > +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> > +		if (!gic) {
> > +			err = -ENOMEM;
> > +			goto free_irq;
> > +		}
> > +		gic->handler = mana_gd_process_eq_events;
> > +		INIT_LIST_HEAD(&gic->eq_list);
> > +		spin_lock_init(&gic->lock);
> > +
> > +		snprintf(gic->name, MANA_IRQ_NAME_SZ, "mana_q%d@pci:%s",
> > +			 i - 1, pci_name(pdev));
> > +
> > +		/* one pci vector is already allocated for HWC */
> > +		irqs[i - 1] = pci_irq_vector(pdev, i);
> > +		if (irqs[i - 1] < 0) {
> > +			err = irqs[i - 1];
> > +			goto free_current_gic;
> > +		}
> > +
> > +		err = request_irq(irqs[i - 1], mana_gd_intr, 0, gic->name, gic);
> > +		if (err)
> > +			goto free_current_gic;
> > +
> > +		xa_store(&gc->irq_contexts, i, gic, GFP_KERNEL);
> > +	}
> > +
> > +	/*
> > +	 * When calling irq_setup() for dynamically added IRQs, if number of
> > +	 * CPUs is more than or equal to allocated MSI-X, we need to skip the
> > +	 * first CPU sibling group since they are already affinitized to HWC IRQ
> > +	 */
> > +	if (gc->num_msix_usable <= num_online_cpus())
> > +		skip_first_cpu = true;
> > +
> > +	err = irq_setup(irqs, nvec, gc->numa_node, skip_first_cpu);
> > +	if (err)
> > +		goto free_irq;
> > +
> > +	cpus_read_unlock();
> > +	kfree(irqs);
> > +	return 0;
> > +
> > +free_current_gic:
> > +	kfree(gic);
> > +free_irq:
> > +	for (i -= 1; i > 0; i--) {
> > +		irq = pci_irq_vector(pdev, i);
> > +		gic = xa_load(&gc->irq_contexts, i);
> > +		if (WARN_ON(!gic))
> > +			continue;
> > +
> > +		irq_update_affinity_hint(irq, NULL);
> > +		free_irq(irq, gic);
> > +		xa_erase(&gc->irq_contexts, i);
> > +		kfree(gic);
> > +	}
> > +	kfree(irqs);
> > +free_irq_vector:
> > +	cpus_read_unlock();
> > +	return err;
> > +}
> > +
> > +static int mana_gd_setup_irqs(struct pci_dev *pdev, int nvec)
> > +{
> > +	struct gdma_context *gc = pci_get_drvdata(pdev);
> > +	struct gdma_irq_context *gic;
> > +	int *irqs, *start_irqs, irq;
> > +	unsigned int cpu;
> > +	int err, i;
> > +
> > +	cpus_read_lock();
> 
> Same here
> 
> > +	irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> > +	if (!irqs) {
> >  		err = -ENOMEM;
> > -		goto free_irq_array;
> > +		goto free_irq_vector;
> >  	}
> >  
> >  	for (i = 0; i < nvec; i++) {
> > -		gic = &gc->irq_contexts[i];
> > +		gic = kzalloc(sizeof(*gic), GFP_KERNEL);
> > +		if (!gic) {
> > +			err = -ENOMEM;
> > +			goto free_irq;
> > +		}
> > +
> >  		gic->handler = mana_gd_process_eq_events;
> >  		INIT_LIST_HEAD(&gic->eq_list);
> >  		spin_lock_init(&gic->lock);
> 
> <snip>
> 
> - Saurabh

Thanks Saurabh, will get these too
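
[Editor's note] The first-sibling-group skip discussed in the quoted
hunks — spread the new IRQs across CPU sibling groups, leaving out the
first group when the HWC IRQ is already affinitized to it — can be
modeled with a small sketch. This is a hypothetical simplification for
illustration, not the kernel's actual irq_setup():

```python
def assign_irqs(irqs, sibling_groups, skip_first=False):
    """Model of the irq_setup() walk: map each IRQ to a CPU sibling
    group, round-robin.  With skip_first=True the first group is left
    out, mirroring the quoted case where num_msix_usable is at most
    num_online_cpus() and the HWC IRQ already owns the first group."""
    groups = sibling_groups[1:] if skip_first else sibling_groups
    # Wrap around when there are more IRQs than remaining groups.
    return {irq: groups[i % len(groups)] for i, irq in enumerate(irqs)}
```

For example, with three two-CPU sibling groups and skip_first=True, all
IRQs land on the second and third groups only.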


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA
  2025-05-28 18:55 ` [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Simon Horman
@ 2025-05-29 13:28   ` Shradha Gupta
  2025-05-30 18:07     ` Simon Horman
  0 siblings, 1 reply; 22+ messages in thread
From: Shradha Gupta @ 2025-05-29 13:28 UTC (permalink / raw)
  To: Simon Horman
  Cc: linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Leon Romanovsky,
	Maxim Levitsky, Erni Sri Satya Vennela, Peter Zijlstra, netdev,
	linux-rdma, Paul Rosswurm, Shradha Gupta

On Wed, May 28, 2025 at 07:55:08PM +0100, Simon Horman wrote:
> On Tue, May 27, 2025 at 08:57:33AM -0700, Shradha Gupta wrote:
> > In this patchset we want to enable the MANA driver to be able to
> > allocate MSI-X vectors in PCI dynamically.
> > 
> > The first patch exports pci_msix_prepare_desc() in PCI to be able to
> > correctly prepare descriptors for dynamically added MSI-X vectors.
> > 
> > The second patch adds the support of dynamic vector allocation in
> > pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
> > flag and using the pci_msix_prepare_desc() exported in first patch.
> > 
> > The third patch adds a detailed description of the irq_setup(), to
> > help understand the function design better.
> > 
> > The fourth patch is a preparation patch for mana changes to support
> > dynamic IRQ allocation. It contains changes in irq_setup() to allow
> > skipping first sibling CPU sets, in case certain IRQs are already
> > affinitized to them.
> > 
> > The fifth patch has the changes in MANA driver to be able to allocate
> > MSI-X vectors dynamically. If the support does not exist it defaults to
> > older behavior.
> 
> Hi Shradha,
> 
> It's unclear what the target tree for this patch-set is.
> But if it is net-next, which seems likely given the code under
> drivers/net/, then:
> 
> Please include that target in the subject of each patch in the patch-set.
> 
> 	Subject: [PATCH v5 net-next 0/5] ...
> 
> And, moreover, ...
> 
> ## Form letter - net-next-closed
> 
> The merge window for v6.16 has begun and therefore net-next is closed
> for new drivers, features, code refactoring and optimizations. We are
> currently accepting bug fixes only.
> 
> Please repost when net-next reopens after June 8th.
> 
> RFC patches sent for review only are obviously welcome at any time.

Thank you Simon.

While posting this patchset I was a bit confused about what should be
the target tree. That's why in the cover letter of the V1 for this
series, I had requested more clarity on the same (since there are patches
from PCI and net-next both).

In such cases how do we decide which tree to target?

Also, noted about the next merge window for net-next :-)

Regards,
Shradha.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA
  2025-05-29 13:28   ` Shradha Gupta
@ 2025-05-30 18:07     ` Simon Horman
  2025-06-03  4:15       ` Shradha Gupta
  0 siblings, 1 reply; 22+ messages in thread
From: Simon Horman @ 2025-05-30 18:07 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Leon Romanovsky,
	Maxim Levitsky, Erni Sri Satya Vennela, Peter Zijlstra, netdev,
	linux-rdma, Paul Rosswurm, Shradha Gupta

On Thu, May 29, 2025 at 06:28:45AM -0700, Shradha Gupta wrote:
> On Wed, May 28, 2025 at 07:55:08PM +0100, Simon Horman wrote:
> > On Tue, May 27, 2025 at 08:57:33AM -0700, Shradha Gupta wrote:
> > > In this patchset we want to enable the MANA driver to be able to
> > > allocate MSI-X vectors in PCI dynamically.
> > > 
> > > The first patch exports pci_msix_prepare_desc() in PCI to be able to
> > > correctly prepare descriptors for dynamically added MSI-X vectors.
> > > 
> > > The second patch adds the support of dynamic vector allocation in
> > > pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
> > > flag and using the pci_msix_prepare_desc() exported in first patch.
> > > 
> > > The third patch adds a detailed description of the irq_setup(), to
> > > help understand the function design better.
> > > 
> > > The fourth patch is a preparation patch for mana changes to support
> > > dynamic IRQ allocation. It contains changes in irq_setup() to allow
> > > skipping first sibling CPU sets, in case certain IRQs are already
> > > affinitized to them.
> > > 
> > > The fifth patch has the changes in MANA driver to be able to allocate
> > > MSI-X vectors dynamically. If the support does not exist it defaults to
> > > older behavior.
> > 
> > Hi Shradha,
> > 
> > It's unclear what the target tree for this patch-set is.
> > But if it is net-next, which seems likely given the code under
> > drivers/net/, then:
> > 
> > Please include that target in the subject of each patch in the patch-set.
> > 
> > 	Subject: [PATCH v5 net-next 0/5] ...
> > 
> > And, moreover, ...
> > 
> > ## Form letter - net-next-closed
> > 
> > The merge window for v6.16 has begun and therefore net-next is closed
> > for new drivers, features, code refactoring and optimizations. We are
> > currently accepting bug fixes only.
> > 
> > Please repost when net-next reopens after June 8th.
> > 
> > RFC patches sent for review only are obviously welcome at any time.
> 
> Thank you Simon.
> 
> While posting this patchset I was a bit confused about what should be
> the target tree. That's why in the cover letter of the V1 for this
> series, I had requested more clarity on the same (since there are patches
> from PCI and net-next both).
> 
> In such cases how do we decide which tree to target?

Yes, that isn't entirely clear to me either.
Hopefully the maintainers can negotiate this.

> 
> Also, noted about the next merge window for net-next :-)
> 
> Regards,
> Shradha.
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA
  2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
                   ` (5 preceding siblings ...)
  2025-05-28 18:55 ` [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Simon Horman
@ 2025-06-01 14:53 ` Zhu Yanjun
  2025-06-03  4:17   ` Shradha Gupta
  6 siblings, 1 reply; 22+ messages in thread
From: Zhu Yanjun @ 2025-06-01 14:53 UTC (permalink / raw)
  To: Shradha Gupta
  Cc: linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
	Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
	Peter Zijlstra, netdev, linux-rdma, Paul Rosswurm, Shradha Gupta

On 2025/5/27 17:57, Shradha Gupta wrote:
> In this patchset we want to enable the MANA driver to be able to
> allocate MSI-X vectors in PCI dynamically.
> 
> The first patch exports pci_msix_prepare_desc() in PCI to be able to
> correctly prepare descriptors for dynamically added MSI-X vectors.
> 
> The second patch adds the support of dynamic vector allocation in
> pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
> flag and using the pci_msix_prepare_desc() exported in first patch.
> 
> The third patch adds a detailed description of the irq_setup(), to
> help understand the function design better.
> 
> The fourth patch is a preparation patch for mana changes to support
> dynamic IRQ allocation. It contains changes in irq_setup() to allow
> skipping first sibling CPU sets, in case certain IRQs are already
> affinitized to them.
> 
> The fifth patch has the changes in MANA driver to be able to allocate
> MSI-X vectors dynamically. If the support does not exist it defaults to
> older behavior.
> ---
>   Change in v4
>   * add a patch describing the functionality of irq_setup() through a
>     comment
>   * In irq_setup(), avoid using a label next_cpumask:
>   * modify the changes in MANA patch about restructuring the error
>     handling path in mana_gd_setup_dyn_irqs()
>   * modify the mana_gd_setup_irqs() to simplify handling around
>     start_irq_index
>   * add warning if an invalid gic is returned
>   * place the xa_destroy() cleanup in mana_gd_remove
> ---
>   Changes in v3
>   * split the 3rd patch into preparation patch around irq_setup() and
>     changes in mana driver to allow dynamic IRQ allocation
>   * Add arm64 support for dynamic MSI-X allocation in pci_hyperv
>     controller
> ---
>   Changes in v2
> >  * split the first patch into two (exporting the prepare_desc
>     func and using the function and flag in pci-hyperv)
>   * replace 'pci vectors' by 'MSI-X vectors'
>   * Change the cover letter description to align with changes made
> ---
> 
> Shradha Gupta (5):
>    PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations
>    PCI: hv: Allow dynamic MSI-X vector allocation
>    net: mana: explain irq_setup() algorithm
>    net: mana: Allow irq_setup() to skip cpus for affinity
>    net: mana: Allocate MSI-X vectors dynamically

In this patchset, the base-commit trailer seems to be missing.

Please see this link:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst?h=v6.15#n868

"
When you open ``outgoing/0000-cover-letter.patch`` for editing, you will
notice that it will have the ``base-commit:`` trailer at the very
bottom, which provides the reviewer and the CI tools enough information
to properly perform ``git am`` without worrying about conflicts::
"

When creating patches:
"
git format-patch --base=main origin/main
"

This will include a base-commit: line in each patch file:

"
base-commit: abcdef1234567890...
"

This is useful when submitting patches to mailing lists or other tooling.

Please follow the submitting-patches.rst to add base-commit.

Best Regards,
Zhu Yanjun

> 
>   .../net/ethernet/microsoft/mana/gdma_main.c   | 356 ++++++++++++++----
>   drivers/pci/controller/pci-hyperv.c           |   5 +-
>   drivers/pci/msi/irqdomain.c                   |   5 +-
>   include/linux/msi.h                           |   2 +
>   include/net/mana/gdma.h                       |   8 +-
>   5 files changed, 293 insertions(+), 83 deletions(-)
> 


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA
  2025-05-30 18:07     ` Simon Horman
@ 2025-06-03  4:15       ` Shradha Gupta
  0 siblings, 0 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-06-03  4:15 UTC (permalink / raw)
  To: Simon Horman
  Cc: linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Leon Romanovsky,
	Maxim Levitsky, Erni Sri Satya Vennela, Peter Zijlstra, netdev,
	linux-rdma, Paul Rosswurm, Shradha Gupta

On Fri, May 30, 2025 at 07:07:32PM +0100, Simon Horman wrote:
> On Thu, May 29, 2025 at 06:28:45AM -0700, Shradha Gupta wrote:
> > On Wed, May 28, 2025 at 07:55:08PM +0100, Simon Horman wrote:
> > > On Tue, May 27, 2025 at 08:57:33AM -0700, Shradha Gupta wrote:
> > > > In this patchset we want to enable the MANA driver to be able to
> > > > allocate MSI-X vectors in PCI dynamically.
> > > > 
> > > > The first patch exports pci_msix_prepare_desc() in PCI to be able to
> > > > correctly prepare descriptors for dynamically added MSI-X vectors.
> > > > 
> > > > The second patch adds the support of dynamic vector allocation in
> > > > pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
> > > > flag and using the pci_msix_prepare_desc() exported in first patch.
> > > > 
> > > > The third patch adds a detailed description of the irq_setup(), to
> > > > help understand the function design better.
> > > > 
> > > > The fourth patch is a preparation patch for mana changes to support
> > > > dynamic IRQ allocation. It contains changes in irq_setup() to allow
> > > > skipping first sibling CPU sets, in case certain IRQs are already
> > > > affinitized to them.
> > > > 
> > > > The fifth patch has the changes in MANA driver to be able to allocate
> > > > MSI-X vectors dynamically. If the support does not exist it defaults to
> > > > older behavior.
> > > 
> > > Hi Shradha,
> > > 
> > > It's unclear what the target tree for this patch-set is.
> > > But if it is net-next, which seems likely given the code under
> > > drivers/net/, then:
> > > 
> > > Please include that target in the subject of each patch in the patch-set.
> > > 
> > > 	Subject: [PATCH v5 net-next 0/5] ...
> > > 
> > > And, moreover, ...
> > > 
> > > ## Form letter - net-next-closed
> > > 
> > > The merge window for v6.16 has begun and therefore net-next is closed
> > > for new drivers, features, code refactoring and optimizations. We are
> > > currently accepting bug fixes only.
> > > 
> > > Please repost when net-next reopens after June 8th.
> > > 
> > > RFC patches sent for review only are obviously welcome at any time.
> > 
> > Thank you Simon.
> > 
> > While posting this patchset I was a bit confused about what should be
> > the target tree. That's why in the cover letter of the V1 for this
> > series, I had requested more clarity on the same (since there are patches
> > from PCI and net-next both).
> > 
> > In such cases how do we decide which tree to target?
> 
> Yes, that isn't entirely clear to me either.
> Hopefully the maintainers can negotiate this.
>

Thanks Simon. Also, since the target tree is not entirely clear, can I
still send out an updated version with the suggested changes?
 
> > 
> > Also, noted about the next merge window for net-next :-)
> > 
> > Regards,
> > Shradha.
> > 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA
  2025-06-01 14:53 ` Zhu Yanjun
@ 2025-06-03  4:17   ` Shradha Gupta
  0 siblings, 0 replies; 22+ messages in thread
From: Shradha Gupta @ 2025-06-03  4:17 UTC (permalink / raw)
  To: Zhu Yanjun
  Cc: linux-hyperv, linux-pci, linux-kernel, Nipun Gupta, Yury Norov,
	Jason Gunthorpe, Jonathan Cameron, Anna-Maria Behnsen, Kevin Tian,
	Long Li, Thomas Gleixner, Bjorn Helgaas, Rob Herring,
	Manivannan Sadhasivam, Krzysztof Wilczyński,
	Lorenzo Pieralisi, Dexuan Cui, Wei Liu, Haiyang Zhang,
	K. Y. Srinivasan, Andrew Lunn, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Konstantin Taranov, Simon Horman,
	Leon Romanovsky, Maxim Levitsky, Erni Sri Satya Vennela,
	Peter Zijlstra, netdev, linux-rdma, Paul Rosswurm, Shradha Gupta

On Sun, Jun 01, 2025 at 04:53:31PM +0200, Zhu Yanjun wrote:
> On 2025/5/27 17:57, Shradha Gupta wrote:
> >In this patchset we want to enable the MANA driver to be able to
> >allocate MSI-X vectors in PCI dynamically.
> >
> >The first patch exports pci_msix_prepare_desc() in PCI to be able to
> >correctly prepare descriptors for dynamically added MSI-X vectors.
> >
> >The second patch adds the support of dynamic vector allocation in
> >pci-hyperv PCI controller by enabling the MSI_FLAG_PCI_MSIX_ALLOC_DYN
> >flag and using the pci_msix_prepare_desc() exported in first patch.
> >
> >The third patch adds a detailed description of the irq_setup(), to
> >help understand the function design better.
> >
> >The fourth patch is a preparation patch for mana changes to support
> >dynamic IRQ allocation. It contains changes in irq_setup() to allow
> >skipping first sibling CPU sets, in case certain IRQs are already
> >affinitized to them.
> >
> >The fifth patch has the changes in MANA driver to be able to allocate
> >MSI-X vectors dynamically. If the support does not exist it defaults to
> >older behavior.
> >---
> >  Change in v4
> >  * add a patch describing the functionality of irq_setup() through a
> >    comment
> >  * In irq_setup(), avoid using a label next_cpumask:
> >  * modify the changes in MANA patch about restructuring the error
> >    handling path in mana_gd_setup_dyn_irqs()
> >  * modify the mana_gd_setup_irqs() to simplify handling around
> >    start_irq_index
> >  * add warning if an invalid gic is returned
> >  * place the xa_destroy() cleanup in mana_gd_remove
> >---
> >  Changes in v3
> >  * split the 3rd patch into preparation patch around irq_setup() and
> >    changes in mana driver to allow dynamic IRQ allocation
> >  * Add arm64 support for dynamic MSI-X allocation in pci_hyperv
> >    controller
> >---
> >  Changes in v2
> >  * split the first patch into two (exporting the prepare_desc
> >    func and using the function and flag in pci-hyperv)
> >  * replace 'pci vectors' by 'MSI-X vectors'
> >  * Change the cover letter description to align with changes made
> >---
> >
> >Shradha Gupta (5):
> >   PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations
> >   PCI: hv: Allow dynamic MSI-X vector allocation
> >   net: mana: explain irq_setup() algorithm
> >   net: mana: Allow irq_setup() to skip cpus for affinity
> >   net: mana: Allocate MSI-X vectors dynamically
> 
> In this patchset, base-commit seems missing.
> 
> Please see this link:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/submitting-patches.rst?h=v6.15#n868
> 
> "
> When you open ``outgoing/0000-cover-letter.patch`` for editing, you will
> notice that it will have the ``base-commit:`` trailer at the very
> bottom, which provides the reviewer and the CI tools enough information
> to properly perform ``git am`` without worrying about conflicts::
> "
> 
> When creating patches:
> "
> git format-patch --base=main origin/main
> "
> 
> This will include a base-commit: line in each patch file:
> 
> "
> base-commit: abcdef1234567890...
> "
> 
> This is useful when submitting patches to mailing lists or other tooling.
> 
> Please follow the submitting-patches.rst to add base-commit.
> 
> Best Regards,
> Zhu Yanjun
>

Thank you, I will make the necessary changes in the next version.

Regards,
Shradha. 
> >
> >  .../net/ethernet/microsoft/mana/gdma_main.c   | 356 ++++++++++++++----
> >  drivers/pci/controller/pci-hyperv.c           |   5 +-
> >  drivers/pci/msi/irqdomain.c                   |   5 +-
> >  include/linux/msi.h                           |   2 +
> >  include/net/mana/gdma.h                       |   8 +-
> >  5 files changed, 293 insertions(+), 83 deletions(-)
> >

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2025-06-03  4:17 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-05-27 15:57 [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Shradha Gupta
2025-05-27 15:57 ` [PATCH v4 1/5] PCI/MSI: Export pci_msix_prepare_desc() for dynamic MSI-X allocations Shradha Gupta
2025-05-29  3:46   ` Saurabh Singh Sengar
2025-05-27 15:58 ` [PATCH v4 2/5] PCI: hv: Allow dynamic MSI-X vector allocation Shradha Gupta
2025-05-29  3:46   ` Saurabh Singh Sengar
2025-05-27 15:58 ` [PATCH v4 3/5] net: mana: explain irq_setup() algorithm Shradha Gupta
2025-05-27 19:10   ` Yury Norov
2025-05-29 13:15     ` Shradha Gupta
2025-05-27 15:58 ` [PATCH v4 4/5] net: mana: Allow irq_setup() to skip cpus for affinity Shradha Gupta
2025-05-27 15:59 ` [PATCH v4 5/5] net: mana: Allocate MSI-X vectors dynamically Shradha Gupta
2025-05-28  8:16   ` Saurabh Singh Sengar
2025-05-29 13:17     ` Shradha Gupta
2025-05-28 18:52   ` Simon Horman
2025-05-29 13:18     ` Shradha Gupta
2025-05-29  3:45   ` Saurabh Singh Sengar
2025-05-29 13:20     ` Shradha Gupta
2025-05-28 18:55 ` [PATCH v4 0/5] Allow dyn MSI-X vector allocation of MANA Simon Horman
2025-05-29 13:28   ` Shradha Gupta
2025-05-30 18:07     ` Simon Horman
2025-06-03  4:15       ` Shradha Gupta
2025-06-01 14:53 ` Zhu Yanjun
2025-06-03  4:17   ` Shradha Gupta
