From: "Michael S. Tsirkin" <mst@redhat.com>
To: "gavin.liu" <gavin.liu@jaguarmicro.com>
Cc: jasowang@redhat.com, xuanzhuo@linux.alibaba.com,
virtualization@lists.linux.dev, angus.chen@jaguarmicro.com,
yuxue.liu@jaguarmicro.com
Subject: Re: [PATCH v2] vp_vdpa: fix the method of calculating vectors
Date: Mon, 8 Apr 2024 04:02:54 -0400
Message-ID: <20240408035346-mutt-send-email-mst@kernel.org>
In-Reply-To: <20240318130008.1928-1-gavin.liu@jaguarmicro.com>
On Mon, Mar 18, 2024 at 09:00:08PM +0800, gavin.liu wrote:
> From: Yuxue Liu <yuxue.liu@jaguarmicro.com>
>
> When there is a ctlq and it doesn't require interrupt
> callbacks, the original method of calculating vectors
> wastes hardware MSI or MSI-X resources as well as system
> IRQ resources. Referencing the per_vq_vectors mode in the
> vp_find_vqs_msix function, calculate the required number
> of vectors based on whether the callback is set.
>
> Signed-off-by: Yuxue Liu <yuxue.liu@jaguarmicro.com>
Overall, the patch makes sense. But you need to clean it up.
Also, given that previous versions were broken,
please document which configurations you tested, and how.
> ---
>
> V1 -> V2: fix IRQ allocation to scan all queues.
> V1: https://lore.kernel.org/all/20240318030121.1873-1-gavin.liu@jaguarmicro.com/
> ---
> drivers/vdpa/virtio_pci/vp_vdpa.c | 24 ++++++++++++++++++------
> 1 file changed, 18 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> index 281287fae89f..87329d4358ce 100644
> --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> @@ -160,8 +160,15 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> struct pci_dev *pdev = mdev->pci_dev;
> int i, ret, irq;
> int queues = vp_vdpa->queues;
> - int vectors = queues + 1;
> + int allocated_vectors, vectors = 0;
> + u16 msix_vec;
The names are messed up.
What we allocate should be called allocated_vectors.
So rename vectors -> allocated_vectors.
The current vector used for each vq can be called e.g. msix_vec.
And it is pointless to make it u16 here IMHO - just int will do.
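I.e. something along these lines (untested, just to illustrate the
naming I have in mind):

	int allocated_vectors = 0, msix_vec;

	for (i = 0; i < queues; i++) {
		if (vp_vdpa->vring[i].cb.callback)
			allocated_vectors++;
	}
	allocated_vectors++; /* config interrupt */

	ret = pci_alloc_irq_vectors(pdev, allocated_vectors, allocated_vectors,
				    PCI_IRQ_MSIX);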
>
> + for (i = 0; i < queues; i++) {
> + if (vp_vdpa->vring[i].cb.callback != NULL)
I don't like != NULL style - just
if (vp_vdpa->vring[i].cb.callback)
will do.
> + vectors++;
> + }
> + /*By default, config interrupts request a single vector*/
Bad coding style - kernel comments want a space after /* and before */.
And what does "by default" mean here? The code always reserves exactly
one vector for the config interrupt, so just say that.
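E.g. something like this (exact wording up to you):

	/* The config interrupt always gets its own vector. */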
> + vectors = vectors + 1;
> ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
> if (ret != vectors) {
> dev_err(&pdev->dev,
> @@ -169,13 +176,17 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> vectors, ret);
> return ret;
> }
> -
Why drop this blank line?
> vp_vdpa->vectors = vectors;
>
> for (i = 0; i < queues; i++) {
> + if (vp_vdpa->vring[i].cb.callback == NULL)
> + continue;
> + else
> + msix_vec = allocated_vectors++;
So just replace allocated_vectors with msix_vec,
incrementing it at the end of the loop,
and you will not need the allocated_vectors counter here at all
(see the sketch after the quoted loop below).
> +
Same here.
if (!vp_vdpa->vring[i].cb.callback)
Also there's no need for else after continue.
> snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
> "vp-vdpa[%s]-%d\n", pci_name(pdev), i);
> - irq = pci_irq_vector(pdev, i);
> + irq = pci_irq_vector(pdev, msix_vec);
> ret = devm_request_irq(&pdev->dev, irq,
> vp_vdpa_vq_handler,
> 0, vp_vdpa->vring[i].msix_name,
> @@ -185,13 +196,14 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> "vp_vdpa: fail to request irq for vq %d\n", i);
> goto err;
> }
> - vp_modern_queue_vector(mdev, i, i);
> + vp_modern_queue_vector(mdev, i, msix_vec);
> vp_vdpa->vring[i].irq = irq;
> }
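I.e. the whole loop becomes something like this (completely untested,
just to show the structure I mean):

	msix_vec = 0;
	for (i = 0; i < queues; i++) {
		if (!vp_vdpa->vring[i].cb.callback)
			continue;

		snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
			 "vp-vdpa[%s]-%d\n", pci_name(pdev), i);
		irq = pci_irq_vector(pdev, msix_vec);
		ret = devm_request_irq(&pdev->dev, irq,
				       vp_vdpa_vq_handler,
				       0, vp_vdpa->vring[i].msix_name,
				       &vp_vdpa->vring[i]);
		if (ret) {
			dev_err(&pdev->dev,
				"vp_vdpa: fail to request irq for vq %d\n", i);
			goto err;
		}
		vp_modern_queue_vector(mdev, i, msix_vec);
		vp_vdpa->vring[i].irq = irq;
		/* advance to the next allocated vector */
		msix_vec++;
	}

After the loop msix_vec then points exactly at the config vector, so
the msix_vec = allocated_vectors assignment below is not needed either.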
>
> + msix_vec = allocated_vectors;
> snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
> pci_name(pdev));
> - irq = pci_irq_vector(pdev, queues);
> + irq = pci_irq_vector(pdev, msix_vec);
> ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
> vp_vdpa->msix_name, vp_vdpa);
> if (ret) {
> @@ -199,7 +211,7 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> "vp_vdpa: fail to request irq for vq %d\n", i);
> goto err;
> }
> - vp_modern_config_vector(mdev, queues);
> + vp_modern_config_vector(mdev, msix_vec);
> vp_vdpa->config_irq = irq;
>
> return 0;
> --
> 2.43.0
Thread overview: 15+ messages
2024-03-18 13:00 [PATCH v2] vp_vdpa: fix the method of calculating vectors gavin.liu
2024-04-08 8:02 ` Michael S. Tsirkin [this message]
2024-04-09 1:49 ` [PATCH v3] " lyx634449800
2024-04-09 3:53 ` Jason Wang
2024-04-09 5:40 ` Michael S. Tsirkin
2024-04-09 8:58 ` [PATCH v4] vp_vdpa: don't allocate unused msix vectors lyx634449800
2024-04-09 9:26 ` Michael S. Tsirkin
2024-04-09 9:56 ` Heng Qi
2024-04-10 3:30 ` [PATCH v5] " lyx634449800
2024-04-22 12:08 ` Michael S. Tsirkin
2024-04-23 1:39 ` Re: " Gavin Liu
2024-04-23 8:35 ` Michael S. Tsirkin
2024-04-23 8:42 ` Angus Chen
2024-04-23 9:25 ` Re: " Gavin Liu
2024-04-25 22:09 ` Michael S. Tsirkin