From: "Longpeng(Mike)" <longpeng2@huawei.com>
To: <alex.williamson@redhat.com>, <philmd@redhat.com>,
<pbonzini@redhat.com>, <marcel.apfelbaum@gmail.com>,
<mst@redhat.com>
Cc: chenjiashang@huawei.com,
"Longpeng\(Mike\)" <longpeng2@huawei.com>,
arei.gonglei@huawei.com, qemu-devel@nongnu.org
Subject: [PATCH v3 9/9] vfio: defer to commit kvm irq routing when enable msi/msix
Date: Tue, 21 Sep 2021 07:02:02 +0800
Message-ID: <20210920230202.1439-10-longpeng2@huawei.com>
In-Reply-To: <20210920230202.1439-1-longpeng2@huawei.com>
In the migration resume phase, all unmasked MSI-X vectors need to be
set up when the VF state is loaded. However, this setup takes longer
when the VM has more VFs and each VF has more unmasked vectors.
The hot spot is kvm_irqchip_commit_routes: each invocation scans and
updates all irqfds that are already assigned, so more vectors mean
more time spent processing them.
vfio_pci_load_config
  vfio_msix_enable
    msix_set_vector_notifiers
      for (vector = 0; vector < dev->msix_entries_nr; vector++) {
        vfio_msix_vector_do_use
          vfio_add_kvm_msi_virq
            kvm_irqchip_commit_routes <-- expensive
      }
We can reduce the cost by committing only once, outside the loop. The
routes are cached in kvm_state, so we commit them first and then bind
an irqfd for each vector.
The test VM has 128 vCPUs and 8 VFs (each with 65 vectors). We
measured the cost of vfio_msix_enable for each VF; 90+% of the cost
can be eliminated.
  VF      Count of irqfds[*]    Original    With this patch
  1st     65                    8ms         2ms
  2nd     130                   15ms        2ms
  3rd     195                   22ms        2ms
  4th     260                   24ms        3ms
  5th     325                   36ms        2ms
  6th     390                   44ms        3ms
  7th     455                   51ms        3ms
  8th     520                   58ms        4ms
  Total                         258ms       21ms
[*] Count of irqfds: the number of irqfds already assigned, all of
which must be processed in that round.
The same optimization can be applied to MSI as well.
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
---
hw/vfio/pci.c | 36 ++++++++++++++++++++++++++++--------
1 file changed, 28 insertions(+), 8 deletions(-)
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 2de1cc5425..b26129bddf 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -513,11 +513,13 @@ static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
          * increase them as needed.
          */
         if (vdev->nr_vectors < nr + 1) {
-            vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
             vdev->nr_vectors = nr + 1;
-            ret = vfio_enable_vectors(vdev, true);
-            if (ret) {
-                error_report("vfio: failed to enable vectors, %d", ret);
+            if (!vdev->defer_kvm_irq_routing) {
+                vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
+                ret = vfio_enable_vectors(vdev, true);
+                if (ret) {
+                    error_report("vfio: failed to enable vectors, %d", ret);
+                }
             }
         } else {
             Error *err = NULL;
@@ -579,8 +581,7 @@ static void vfio_msix_vector_release(PCIDevice *pdev, unsigned int nr)
     }
 }
 
-/* TODO: invoked when enclabe msi/msix vectors */
-static __attribute__((unused)) void vfio_commit_kvm_msi_virq(VFIOPCIDevice *vdev)
+static void vfio_commit_kvm_msi_virq(VFIOPCIDevice *vdev)
 {
     int i;
     VFIOMSIVector *vector;
@@ -610,6 +611,9 @@ static __attribute__((unused)) void vfio_commit_kvm_msi_virq(VFIOPCIDevice *vdev
 
 static void vfio_msix_enable(VFIOPCIDevice *vdev)
 {
+    PCIDevice *pdev = &vdev->pdev;
+    int ret;
+
     vfio_disable_interrupts(vdev);
 
     vdev->msi_vectors = g_new0(VFIOMSIVector, vdev->msix->entries);
@@ -632,11 +636,22 @@ static void vfio_msix_enable(VFIOPCIDevice *vdev)
     vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
     vfio_msix_vector_release(&vdev->pdev, 0);
 
-    if (msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
-                                  vfio_msix_vector_release, NULL)) {
+    vdev->defer_kvm_irq_routing = true;
+
+    ret = msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
+                                    vfio_msix_vector_release, NULL);
+    if (ret < 0) {
         error_report("vfio: msix_set_vector_notifiers failed");
+    } else if (!pdev->msix_function_masked) {
+        vfio_commit_kvm_msi_virq(vdev);
+        vfio_disable_irqindex(&vdev->vbasedev, VFIO_PCI_MSIX_IRQ_INDEX);
+        ret = vfio_enable_vectors(vdev, true);
+        if (ret) {
+            error_report("vfio: failed to enable vectors, %d", ret);
+        }
     }
 
+    vdev->defer_kvm_irq_routing = false;
     trace_vfio_msix_enable(vdev->vbasedev.name);
 }
 
@@ -645,6 +660,7 @@ static void vfio_msi_enable(VFIOPCIDevice *vdev)
     int ret, i;
 
     vfio_disable_interrupts(vdev);
+    vdev->defer_kvm_irq_routing = true;
 
     vdev->nr_vectors = msi_nr_vectors_allocated(&vdev->pdev);
 retry:
@@ -671,6 +687,8 @@ retry:
         vfio_add_kvm_msi_virq(vdev, vector, i, false);
     }
 
+    vfio_commit_kvm_msi_virq(vdev);
+
     /* Set interrupt type prior to possible interrupts */
     vdev->interrupt = VFIO_INT_MSI;
 
@@ -697,9 +715,11 @@ retry:
          */
         error_report("vfio: Error: Failed to enable MSI");
 
+        vdev->defer_kvm_irq_routing = false;
         return;
     }
 
+    vdev->defer_kvm_irq_routing = false;
     trace_vfio_msi_enable(vdev->vbasedev.name, vdev->nr_vectors);
 }
 
--
2.23.0