virtualization.lists.linux-foundation.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Gavin Liu <gavin.liu@jaguarmicro.com>
Cc: "jasowang@redhat.com" <jasowang@redhat.com>,
	Angus Chen <angus.chen@jaguarmicro.com>,
	"virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xuanzhuo@linux.alibaba.com" <xuanzhuo@linux.alibaba.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Heng Qi <hengqi@linux.alibaba.com>
Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
Date: Tue, 23 Apr 2024 04:35:18 -0400	[thread overview]
Message-ID: <20240423043424-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <SEYPR06MB6756E87AE40D93704A0614A9EC112@SEYPR06MB6756.apcprd06.prod.outlook.com>

On Tue, Apr 23, 2024 at 01:39:17AM +0000, Gavin Liu wrote:
> On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > From: Yuxue Liu <yuxue.liu@jaguarmicro.com>
> >
> > When there is a ctlq and it doesn't require interrupt callbacks, the
> > original method of calculating vectors wastes hardware MSI or MSI-X
> > resources as well as system IRQ resources.
> >
> > When conducting performance testing using testpmd in the guest OS, it
> > was found that the performance was lower compared to directly using
> > vfio-pci to pass through the device.
> >
> > In scenarios where the virtio device in the guest OS does not utilize
> > interrupts, the vdpa driver still configures the hardware's MSI-X
> > vectors. Therefore, the hardware still sends interrupts to the host OS.
> 
> > I just have a question on this part. Why does the hardware send interrupts at all? Doesn't the guest driver disable them?
>                
>    1: Assuming the guest OS's virtio device is using PMD mode, QEMU sets the call fd to -1.
>    2: On the host side, vhost_vdpa then sets vp_vdpa->vring[i].cb.callback to an invalid (NULL) value.
>    3: Before this modification, the vp_vdpa_request_irq function did not check whether
>       vp_vdpa->vring[i].cb.callback was valid. Instead, it enabled the hardware's MSI-X
>       interrupts based on the number of queues of the device.
> 
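The vector counting that steps 1-3 describe can be sketched as follows. This is a simplified stand-in for the driver's structures (the `struct cb` type here is hypothetical, not the real vp_vdpa definitions), showing only the counting logic the patch introduces: one MSI-X vector per queue that actually has a callback, plus one for the config interrupt.

```c
#include <stddef.h>

/* Hypothetical, simplified stand-in for the driver's per-vring state. */
struct cb {
	void (*callback)(void *priv);
};

/* After the patch, vp_vdpa_request_irq() allocates one MSI-X vector per
 * queue that has a callback, plus one for the config interrupt. A queue
 * whose call fd was set to -1 ends up with a NULL callback and is
 * skipped entirely. */
static int count_vectors(const struct cb *vring, int queues)
{
	int vectors = 1; /* the config interrupt always needs one */
	int i;

	for (i = 0; i < queues; i++)
		if (vring[i].callback)
			vectors++;
	return vectors;
}
```

With all callbacks NULL (virtio PMD mode in the guest), this collapses to a single vector for config, matching the "After modification" /proc/interrupts output later in the thread.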

So MSI-X is enabled, but why would it trigger? The virtio PMD in poll mode
presumably suppresses interrupts, after all.
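For reference, the suppression being referred to works roughly like this under the virtio spec's split ring: the driver sets VRING_AVAIL_F_NO_INTERRUPT in the available ring's flags, and the device is then allowed (though not strictly required, absent VIRTIO_RING_F_EVENT_IDX) to skip the interrupt. A minimal model of the spec behavior, not the vp_vdpa code:

```c
#include <stdbool.h>
#include <stdint.h>

/* From the virtio spec, split virtqueue: driver-side interrupt
 * suppression hint in the available ring's flags field. */
#define VRING_AVAIL_F_NO_INTERRUPT 1

struct avail_ring {
	uint16_t flags;
};

/* A spec-conforming device checks the driver's flag before raising an
 * interrupt; the flag is a hint, so a device MAY still interrupt, but
 * a well-behaved one will not. */
static bool device_may_interrupt(const struct avail_ring *avail)
{
	return !(avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
}
```

Hence the question: with the PMD setting this flag, the device should stay quiet regardless of how many vectors the host allocated.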

> 
> 
> ----- Original Message -----
> From: Michael S. Tsirkin mst@redhat.com
> Sent: April 22, 2024 20:09
> To: Gavin Liu gavin.liu@jaguarmicro.com
> Cc: jasowang@redhat.com; Angus Chen angus.chen@jaguarmicro.com; virtualization@lists.linux.dev; xuanzhuo@linux.alibaba.com; linux-kernel@vger.kernel.org; Heng Qi hengqi@linux.alibaba.com
> Subject: Re: [PATCH v5] vp_vdpa: don't allocate unused msix vectors
> 
> 
> 
> 
> 
> On Wed, Apr 10, 2024 at 11:30:20AM +0800, lyx634449800 wrote:
> > From: Yuxue Liu <yuxue.liu@jaguarmicro.com>
> >
> > When there is a ctlq and it doesn't require interrupt callbacks, the
> > original method of calculating vectors wastes hardware MSI or MSI-X
> > resources as well as system IRQ resources.
> >
> > When conducting performance testing using testpmd in the guest OS, it
> > was found that the performance was lower compared to directly using
> > vfio-pci to pass through the device.
> >
> > In scenarios where the virtio device in the guest OS does not utilize
> > interrupts, the vdpa driver still configures the hardware's MSI-X
> > vectors. Therefore, the hardware still sends interrupts to the host OS.
> 
> I just have a question on this part. Why does the hardware send interrupts at all? Doesn't the guest driver disable them?
> 
> > Because of this unnecessary
> > activity by the hardware, hardware performance decreases, and the
> > performance of the host OS is affected as well.
> >
> > Before modification:(interrupt mode)
> >  32:  0   0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> >  33:  0   0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> >  34:  0   0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> >  35:  0   0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> >
> > After modification:(interrupt mode)
> >  32:  0  0  1  7   PCI-MSI 32768-edge  vp-vdpa[0000:00:02.0]-0
> >  33: 36  0  3  0   PCI-MSI 32769-edge  vp-vdpa[0000:00:02.0]-1
> >  34:  0  0  0  0   PCI-MSI 32770-edge  vp-vdpa[0000:00:02.0]-config
> >
> > Before modification:(virtio pmd mode for guest os)
> >  32:  0   0  0  0 PCI-MSI 32768-edge    vp-vdpa[0000:00:02.0]-0
> >  33:  0   0  0  0 PCI-MSI 32769-edge    vp-vdpa[0000:00:02.0]-1
> >  34:  0   0  0  0 PCI-MSI 32770-edge    vp-vdpa[0000:00:02.0]-2
> >  35:  0   0  0  0 PCI-MSI 32771-edge    vp-vdpa[0000:00:02.0]-config
> >
> > After modification:(virtio pmd mode for guest os)
> >  32: 0  0  0   0   PCI-MSI 32768-edge   vp-vdpa[0000:00:02.0]-config
> >
> > To verify the use of the virtio PMD mode in the guest operating 
> > system, the following patch needs to be applied to QEMU:
> > https://lore.kernel.org/all/20240408073311.2049-1-yuxue.liu@jaguarmicro.com
> >
> > Signed-off-by: Yuxue Liu <yuxue.liu@jaguarmicro.com>
> > Acked-by: Jason Wang <jasowang@redhat.com>
> > Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
> > ---
> > V5: modify the description of the printout when an exception occurs
> > V4: update the title and assign values to uninitialized variables
> > V3: delete unused variables and add validation records
> > V2: fix when allocating IRQs, scan all queues
> >
> >  drivers/vdpa/virtio_pci/vp_vdpa.c | 22 ++++++++++++++++------
> >  1 file changed, 16 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > index df5f4a3bccb5..8de0224e9ec2 100644
> > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > @@ -160,7 +160,13 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> >       struct pci_dev *pdev = mdev->pci_dev;
> >       int i, ret, irq;
> >       int queues = vp_vdpa->queues;
> > -     int vectors = queues + 1;
> > +     int vectors = 1;
> > +     int msix_vec = 0;
> > +
> > +     for (i = 0; i < queues; i++) {
> > +             if (vp_vdpa->vring[i].cb.callback)
> > +                     vectors++;
> > +     }
> >
> >       ret = pci_alloc_irq_vectors(pdev, vectors, vectors, PCI_IRQ_MSIX);
> >       if (ret != vectors) {
> > @@ -173,9 +179,12 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> >       vp_vdpa->vectors = vectors;
> >
> >       for (i = 0; i < queues; i++) {
> > +             if (!vp_vdpa->vring[i].cb.callback)
> > +                     continue;
> > +
> >               snprintf(vp_vdpa->vring[i].msix_name, VP_VDPA_NAME_SIZE,
> >                       "vp-vdpa[%s]-%d\n", pci_name(pdev), i);
> > -             irq = pci_irq_vector(pdev, i);
> > +             irq = pci_irq_vector(pdev, msix_vec);
> >               ret = devm_request_irq(&pdev->dev, irq,
> >                                      vp_vdpa_vq_handler,
> >                                      0, vp_vdpa->vring[i].msix_name, 
> > @@ -185,21 +194,22 @@ static int vp_vdpa_request_irq(struct vp_vdpa *vp_vdpa)
> >                       dev_err(&pdev->dev,
> >                               "vp_vdpa: fail to request irq for vq %d\n", i);
> >                       goto err;
> >               }
> > -             vp_modern_queue_vector(mdev, i, i);
> > +             vp_modern_queue_vector(mdev, i, msix_vec);
> >               vp_vdpa->vring[i].irq = irq;
> > +             msix_vec++;
> >       }
> >
> >       snprintf(vp_vdpa->msix_name, VP_VDPA_NAME_SIZE, "vp-vdpa[%s]-config\n",
> >                pci_name(pdev));
> > -     irq = pci_irq_vector(pdev, queues);
> > +     irq = pci_irq_vector(pdev, msix_vec);
> >       ret = devm_request_irq(&pdev->dev, irq, vp_vdpa_config_handler, 0,
> >                              vp_vdpa->msix_name, vp_vdpa);
> >       if (ret) {
> >               dev_err(&pdev->dev,
> > -                     "vp_vdpa: fail to request irq for vq %d\n", i);
> > +                     "vp_vdpa: fail to request irq for config: %d\n", 
> > + ret);
> >                       goto err;
> >       }
> > -     vp_modern_config_vector(mdev, queues);
> > +     vp_modern_config_vector(mdev, msix_vec);
> >       vp_vdpa->config_irq = irq;
> >
> >       return 0;
> > --
> > 2.43.0
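Putting the hunks above together, the vector assignment after the patch behaves like the model below. Types and names here are simplified stand-ins for illustration, not the driver's real definitions: queues with callbacks get consecutive MSI-X vectors starting at 0, skipped queues get none, and the config interrupt takes the next free vector (which is what the new msix_vec counter in the patch tracks).

```c
#include <stddef.h>

#define NO_VECTOR (-1)

/* Hypothetical, simplified stand-in for the driver's per-vring state. */
struct cb {
	void (*callback)(void *priv);
};

/* Returns the vector index used for the config interrupt and fills
 * queue_vec[i] with each queue's assigned vector, or NO_VECTOR for
 * queues without a callback, mirroring the patched loop. */
static int assign_vectors(const struct cb *vring, int queues, int *queue_vec)
{
	int msix_vec = 0;
	int i;

	for (i = 0; i < queues; i++)
		queue_vec[i] = vring[i].callback ? msix_vec++ : NO_VECTOR;
	return msix_vec; /* config vector, as in vp_modern_config_vector() */
}
```

This matches the "After modification" interrupt-mode listing earlier in the thread: two data vectors (0 and 1) followed by the config vector (2), instead of one vector per queue unconditionally.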
> 



Thread overview: 15+ messages
2024-03-18 13:00 [PATCH v2] vp_vdpa: fix the method of calculating vectors gavin.liu
2024-04-08  8:02 ` Michael S. Tsirkin
2024-04-09  1:49   ` [PATCH v3] " lyx634449800
2024-04-09  3:53     ` Jason Wang
2024-04-09  5:40     ` Michael S. Tsirkin
2024-04-09  8:58       ` [PATCH v4] vp_vdpa: don't allocate unused msix vectors lyx634449800
2024-04-09  9:26         ` Michael S. Tsirkin
2024-04-09  9:56         ` Heng Qi
2024-04-10  3:30           ` [PATCH v5] " lyx634449800
2024-04-22 12:08             ` Michael S. Tsirkin
2024-04-23  1:39               ` Re: " Gavin Liu
2024-04-23  8:35                 ` Michael S. Tsirkin [this message]
2024-04-23  8:42                   ` Angus Chen
2024-04-23  9:25                     ` Re: " Gavin Liu
2024-04-25 22:09                     ` Michael S. Tsirkin
