From: Zhu Lingshan <lingshan.zhu@intel.com>
To: mst@redhat.com, jasowang@redhat.com
Cc: netdev@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Zhu Lingshan <lingshan.zhu@intel.com>
Subject: [PATCH V2 2/4] vDPA/ifcvf: implement device MSIX vector allocator
Date: Tue, 25 Jan 2022 17:17:42 +0800
Message-ID: <20220125091744.115996-3-lingshan.zhu@intel.com>
In-Reply-To: <20220125091744.115996-1-lingshan.zhu@intel.com>

This commit implements an MSIX vector allocation helper
for the vqs and the config interrupt.

Signed-off-by: Zhu Lingshan <lingshan.zhu@intel.com>
---
 drivers/vdpa/ifcvf/ifcvf_main.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)
diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
index d1a6b5ab543c..7e2af2d2aaf5 100644
--- a/drivers/vdpa/ifcvf/ifcvf_main.c
+++ b/drivers/vdpa/ifcvf/ifcvf_main.c
@@ -58,14 +58,40 @@ static void ifcvf_free_irq(struct ifcvf_adapter *adapter, int queues)
 	ifcvf_free_irq_vectors(pdev);
 }
 
+static int ifcvf_alloc_vectors(struct ifcvf_adapter *adapter)
+{
+	struct pci_dev *pdev = adapter->pdev;
+	struct ifcvf_hw *vf = &adapter->vf;
+	u16 max_intr;
+	int ret;
+
+	/* all queues and config interrupt */
+	max_intr = vf->nr_vring + 1;
+	ret = pci_alloc_irq_vectors(pdev, 1, max_intr,
+				    PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
+	if (ret < 0) {
+		IFCVF_ERR(pdev, "Failed to alloc IRQ vectors\n");
+		return ret;
+	}
+
+	if (ret < max_intr)
+		IFCVF_INFO(pdev,
+			   "Requested %u vectors, but only %d were allocated; expect reduced performance\n",
+			   max_intr, ret);
+
+	return ret;
+}
+
 static int ifcvf_request_irq(struct ifcvf_adapter *adapter)
 {
 	struct pci_dev *pdev = adapter->pdev;
 	struct ifcvf_hw *vf = &adapter->vf;
 	int vector, i, ret, irq;
-	u16 max_intr;
+	u16 max_intr;
+	int nvectors;
+
+	nvectors = ifcvf_alloc_vectors(adapter);
+	if (nvectors <= 0)
+		return nvectors;
 
-	/* all queues and config interrupt  */
 	max_intr = vf->nr_vring + 1;
 
 	ret = pci_alloc_irq_vectors(pdev, max_intr,
-- 
2.27.0
Thread overview: 13+ messages
2022-01-25  9:17 [PATCH V2 0/4] vDPA/ifcvf: implement shared IRQ feature Zhu Lingshan
2022-01-25  9:17 ` [PATCH V2 1/4] vDPA/ifcvf: implement IO read/write helpers in the header file Zhu Lingshan
2022-01-25  9:17 ` Zhu Lingshan [this message]
2022-01-25 19:36   ` [PATCH V2 2/4] vDPA/ifcvf: implement device MSIX vector allocator Michael S. Tsirkin
2022-01-26 12:18     ` Zhu, Lingshan
2022-01-25  9:17 ` [PATCH V2 3/4] vhost_vdpa: don't setup irq offloading when irq_num < 0 Zhu Lingshan
2022-01-25 19:17   ` kernel test robot
2022-01-25 19:30   ` Michael S. Tsirkin
2022-01-26 12:18     ` Zhu, Lingshan
2022-01-25  9:17 ` [PATCH V2 4/4] vDPA/ifcvf: implement shared IRQ feature Zhu Lingshan
2022-01-25 19:42   ` Michael S. Tsirkin
2022-01-26 12:22     ` Zhu, Lingshan
2022-01-25 19:32 ` [PATCH V2 0/4] " Michael S. Tsirkin