From: Jing Liu <jing2.liu@intel.com>
To: qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, pbonzini@redhat.com,
kevin.tian@intel.com, reinette.chatre@intel.com,
jing2.liu@intel.com, jing2.liu@linux.intel.com
Subject: [PATCH v1 2/4] vfio/pci: enable vector on dynamic MSI-X allocation
Date: Tue, 22 Aug 2023 03:29:25 -0400
Message-ID: <20230822072927.224803-3-jing2.liu@intel.com>
In-Reply-To: <20230822072927.224803-1-jing2.liu@intel.com>
The vector_use callback is used to enable a vector that is unmasked in
the guest. The kernel used to support only static MSI-X allocation: when
allocating a new interrupt on such kernels, QEMU first disables all
previously allocated vectors and then re-allocates them all, including
the new one. The nr_vectors field of VFIOPCIDevice indicates that all
vectors from 0 to nr_vectors - 1 are allocated (and may be enabled), and
is used to loop over all possibly used vectors when, e.g., disabling
MSI-X interrupts.
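For reference, the pre-existing static-mode flow is roughly the
following sketch (simplified from the code this patch modifies; error
handling abbreviated):

    /* Static mode (sketch): growing the vector count means tearing down
     * every vector and re-enabling them all, including the new one. */
    if (vdev->nr_vectors < nr + 1) {
        vdev->nr_vectors = nr + 1;
        vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
        ret = vfio_enable_vectors(vdev, true);
        if (ret) {
            error_report("vfio: failed to enable vectors, %d", ret);
        }
    }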
Extend the vector_use function to support dynamic MSI-X allocation when
the host supports the capability. QEMU can then allocate and enable a
new interrupt individually, without affecting the other vectors or
losing interrupts at runtime.
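With dynamic allocation, enabling one vector reduces to a single
per-vector trigger, along the lines of this sketch (condensed from the
hunk below):

    /* Dynamic mode (sketch): signal only the newly used vector 'nr';
     * previously enabled vectors are left untouched. */
    fd = vector->virq >= 0 ?
         event_notifier_get_fd(&vector->kvm_interrupt) :
         event_notifier_get_fd(&vector->interrupt);
    if (vfio_set_irq_signaling(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX, nr,
                               VFIO_IRQ_SET_ACTION_TRIGGER, fd, &err)) {
        error_reportf_err(err, VFIO_MSG_PREFIX, vdev->vbasedev.name);
    }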
Keep using nr_vectors as the upper bound of the enabled vectors in
dynamic MSI-X allocation mode, since looping over all msix_entries_nr
entries would be inefficient and unnecessary.
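As an illustration (existing behavior, not changed by this patch),
teardown paths such as vfio_msix_disable() already bound their loops by
nr_vectors rather than by the full MSI-X table size:

    /* Sketch: only vectors below nr_vectors can ever have been used, so
     * there is no need to scan every entry of the MSI-X table. */
    for (i = 0; i < vdev->nr_vectors; i++) {
        vfio_msix_vector_release(&vdev->pdev, i);
    }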
Signed-off-by: Jing Liu <jing2.liu@intel.com>
Tested-by: Reinette Chatre <reinette.chatre@intel.com>
---
Changes since RFC v1:
- Test vdev->msix->noresize to identify the allocation mode. (Alex)
- Move defer_kvm_irq_routing test out and update nr_vectors in a
common place before vfio_enable_vectors(). (Alex)
- Revise the comments. (Alex)
---
hw/vfio/pci.c | 44 +++++++++++++++++++++++++++-----------------
1 file changed, 27 insertions(+), 17 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 8a3b34f3c196..31f36d68bb19 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -470,6 +470,7 @@ static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
VFIOPCIDevice *vdev = VFIO_PCI(pdev);
VFIOMSIVector *vector;
int ret;
+ int old_nr_vecs = vdev->nr_vectors;
trace_vfio_msix_vector_do_use(vdev->vbasedev.name, nr);
@@ -512,33 +513,42 @@ static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
}
/*
- * We don't want to have the host allocate all possible MSI vectors
- * for a device if they're not in use, so we shutdown and incrementally
- * increase them as needed.
+ * When dynamic allocation is not supported, we don't want to have the
+ * host allocate all possible MSI vectors for a device if they're not
+ * in use, so we shut down and incrementally increase them as needed.
+ * nr_vectors represents the total number of vectors allocated.
+ *
+ * When dynamic allocation is supported, let the host only allocate
+ * and enable a vector when it is in use in the guest. nr_vectors
+ * represents the upper bound of vectors being enabled (but not all
+ * of the range is allocated or enabled).
*/
if (vdev->nr_vectors < nr + 1) {
vdev->nr_vectors = nr + 1;
- if (!vdev->defer_kvm_irq_routing) {
+ }
+
+ if (!vdev->defer_kvm_irq_routing) {
+ if (vdev->msix->noresize && (old_nr_vecs < nr + 1)) {
vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
ret = vfio_enable_vectors(vdev, true);
if (ret) {
error_report("vfio: failed to enable vectors, %d", ret);
}
- }
- } else {
- Error *err = NULL;
- int32_t fd;
-
- if (vector->virq >= 0) {
- fd = event_notifier_get_fd(&vector->kvm_interrupt);
} else {
- fd = event_notifier_get_fd(&vector->interrupt);
- }
+ Error *err = NULL;
+ int32_t fd;
- if (vfio_set_irq_signaling(&vdev->vbasedev,
- VFIO_PCI_MSIX_IRQ_INDEX, nr,
- VFIO_IRQ_SET_ACTION_TRIGGER, fd, &err)) {
- error_reportf_err(err, VFIO_MSG_PREFIX, vdev->vbasedev.name);
+ if (vector->virq >= 0) {
+ fd = event_notifier_get_fd(&vector->kvm_interrupt);
+ } else {
+ fd = event_notifier_get_fd(&vector->interrupt);
+ }
+
+ if (vfio_set_irq_signaling(&vdev->vbasedev,
+ VFIO_PCI_MSIX_IRQ_INDEX, nr,
+ VFIO_IRQ_SET_ACTION_TRIGGER, fd, &err)) {
+ error_reportf_err(err, VFIO_MSG_PREFIX, vdev->vbasedev.name);
+ }
}
}
--
2.27.0