* [PATCH v1 0/3] virtio-pci: optimize set_guest_notifier
@ 2023-02-28 9:39 Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 1/3] virtio-pci: submit msi route changes in batch Longpeng(Mike) via
` (2 more replies)
0 siblings, 3 replies; 16+ messages in thread
From: Longpeng(Mike) via @ 2023-02-28 9:39 UTC (permalink / raw)
To: mst, jasowang, pbonzini
Cc: arei.gonglei, yechuan, eperezma, alex.williamson, mtosatti, clg,
qemu-devel, Longpeng
From: Longpeng <longpeng2@huawei.com>
This patchset optimizes the time-consuming operation in virtio_pci_set_guest_notifier;
especially for vhost-vdpa migration, the time spent on set_guest_notifier can
be reduced by 87% in some cases.
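In rough terms, the change looks like this (an illustrative sketch only, simplified
from the patches below; error handling and the per-vector irqfd bookkeeping are
omitted, and nvectors just stands in for the device's vector count):

    /* Before: one route-change transaction, and one commit, per vector. */
    for (vector = 0; vector < nvectors; vector++) {
        KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
        int virq = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
        kvm_irqchip_commit_route_changes(&c); /* expensive: full routing update */
        /* ... remember virq for this vector ... */
    }

    /* After: add all routes in one transaction, then commit the batch once. */
    KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
    for (vector = 0; vector < nvectors; vector++) {
        int virq = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
        /* ... remember virq for this vector ... */
    }
    kvm_irqchip_commit_route_changes(&c); /* one commit covers all vectors */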
Longpeng (Mike) (3):
virtio-pci: submit msi route changes in batch
kvm-irqchip: use KVMRouteChange API to update msi route
virtio-pci: defer to commit kvm irq routing when enable msi/msix
accel/kvm/kvm-all.c | 10 +--
accel/stubs/kvm-stub.c | 2 +-
hw/intc/ioapic.c | 5 +-
hw/misc/ivshmem.c | 6 +-
hw/vfio/pci.c | 5 +-
hw/virtio/virtio-pci.c | 140 ++++++++++++++++++++++++++++++++-----
include/hw/virtio/virtio.h | 1 +
include/sysemu/kvm.h | 2 +-
target/i386/kvm/kvm.c | 6 +-
9 files changed, 145 insertions(+), 32 deletions(-)
--
2.23.0
* [PATCH v1 1/3] virtio-pci: submit msi route changes in batch
2023-02-28 9:39 [PATCH v1 0/3] virtio-pci: optimize set_guest_notifier Longpeng(Mike) via
@ 2023-02-28 9:39 ` Longpeng(Mike) via
2023-02-28 10:17 ` Michael S. Tsirkin
2023-02-28 10:18 ` Michael S. Tsirkin
2023-02-28 9:39 ` [PATCH v1 2/3] kvm-irqchip: use KVMRouteChange API to update msi route Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix Longpeng(Mike) via
2 siblings, 2 replies; 16+ messages in thread
From: Longpeng(Mike) via @ 2023-02-28 9:39 UTC (permalink / raw)
To: mst, jasowang, pbonzini
Cc: arei.gonglei, yechuan, eperezma, alex.williamson, mtosatti, clg,
qemu-devel, Longpeng
From: Longpeng <longpeng2@huawei.com>
The kvm_irqchip_commit_routes() is a time-intensive operation, it needs
scan and update all irqfds that are already assigned during each invocation,
so more vectors means need more time to process them. For virtio-pci, we
can just submit once when enabling vectors of a virtio-pci device.
This can reduce the downtime when migrating a VM with vhost-vdpa devices.
Signed-off-by: Longpeng <longpeng2@huawei.com>
---
hw/virtio/virtio-pci.c | 24 +++++++++++++++++++++---
1 file changed, 21 insertions(+), 3 deletions(-)
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 247325c193..22e76e3902 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -49,6 +49,19 @@
* configuration space */
#define VIRTIO_PCI_CONFIG_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
+/* Protected by the BQL */
+static KVMRouteChange virtio_pci_route_change;
+
+static inline void virtio_pci_begin_route_changes(void)
+{
+ virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
+}
+
+static inline void virtio_pci_commit_route_changes(void)
+{
+ kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
+}
+
static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
VirtIOPCIProxy *dev);
static void virtio_pci_reset(DeviceState *qdev);
@@ -790,12 +803,11 @@ static int kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
int ret;
if (irqfd->users == 0) {
- KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
- ret = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
+ ret = kvm_irqchip_add_msi_route(&virtio_pci_route_change, vector,
+ &proxy->pci_dev);
if (ret < 0) {
return ret;
}
- kvm_irqchip_commit_route_changes(&c);
irqfd->virq = ret;
}
irqfd->users++;
@@ -903,12 +915,18 @@ static int kvm_virtio_pci_vector_vq_use(VirtIOPCIProxy *proxy, int nvqs)
int ret = 0;
VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+ virtio_pci_begin_route_changes();
+
for (queue_no = 0; queue_no < nvqs; queue_no++) {
if (!virtio_queue_get_num(vdev, queue_no)) {
+ virtio_pci_commit_route_changes();
return -1;
}
ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
}
+
+ virtio_pci_commit_route_changes();
+
return ret;
}
--
2.23.0
* [PATCH v1 2/3] kvm-irqchip: use KVMRouteChange API to update msi route
2023-02-28 9:39 [PATCH v1 0/3] virtio-pci: optimize set_guest_notifier Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 1/3] virtio-pci: submit msi route changes in batch Longpeng(Mike) via
@ 2023-02-28 9:39 ` Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix Longpeng(Mike) via
2 siblings, 0 replies; 16+ messages in thread
From: Longpeng(Mike) via @ 2023-02-28 9:39 UTC (permalink / raw)
To: mst, jasowang, pbonzini
Cc: arei.gonglei, yechuan, eperezma, alex.williamson, mtosatti, clg,
qemu-devel, Longpeng
From: Longpeng <longpeng2@huawei.com>
The KVMRouteChange API was added by commit 9568690868e ("kvm-irqchip:
introduce new API to support route change"). We can also apply it to
kvm_irqchip_update_msi_route(); there are no functional changes, and
we can optimize the virtio-pci core based on this change in the next
patch.
Signed-off-by: Longpeng <longpeng2@huawei.com>
---
accel/kvm/kvm-all.c | 10 ++++++----
accel/stubs/kvm-stub.c | 2 +-
hw/intc/ioapic.c | 5 +++--
hw/misc/ivshmem.c | 6 ++++--
hw/vfio/pci.c | 5 +++--
hw/virtio/virtio-pci.c | 7 +++++--
include/sysemu/kvm.h | 2 +-
target/i386/kvm/kvm.c | 6 ++++--
8 files changed, 27 insertions(+), 16 deletions(-)
diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 9b26582655..1ed0dc4c9d 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1820,10 +1820,11 @@ static void kvm_add_routing_entry(KVMState *s,
set_gsi(s, entry->gsi);
}
-static int kvm_update_routing_entry(KVMState *s,
+static int kvm_update_routing_entry(KVMRouteChange *c,
struct kvm_irq_routing_entry *new_entry)
{
struct kvm_irq_routing_entry *entry;
+ KVMState *s = c->s;
int n;
for (n = 0; n < s->irq_routes->nr; n++) {
@@ -1837,6 +1838,7 @@ static int kvm_update_routing_entry(KVMState *s,
}
*entry = *new_entry;
+ c->changes++;
return 0;
}
@@ -2046,7 +2048,7 @@ int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev)
return virq;
}
-int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
+int kvm_irqchip_update_msi_route(KVMRouteChange *c, int virq, MSIMessage msg,
PCIDevice *dev)
{
struct kvm_irq_routing_entry kroute = {};
@@ -2075,7 +2077,7 @@ int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
trace_kvm_irqchip_update_msi_route(virq);
- return kvm_update_routing_entry(s, &kroute);
+ return kvm_update_routing_entry(c, &kroute);
}
static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
@@ -2221,7 +2223,7 @@ static int kvm_irqchip_assign_irqfd(KVMState *s, EventNotifier *event,
abort();
}
-int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg)
+int kvm_irqchip_update_msi_route(KVMRouteChange *c, int virq, MSIMessage msg)
{
return -ENOSYS;
}
diff --git a/accel/stubs/kvm-stub.c b/accel/stubs/kvm-stub.c
index 5d2dd8f351..5bcf98b9ab 100644
--- a/accel/stubs/kvm-stub.c
+++ b/accel/stubs/kvm-stub.c
@@ -69,7 +69,7 @@ void kvm_irqchip_release_virq(KVMState *s, int virq)
{
}
-int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
+int kvm_irqchip_update_msi_route(KVMRouteChange *c, int virq, MSIMessage msg,
PCIDevice *dev)
{
return -ENOSYS;
diff --git a/hw/intc/ioapic.c b/hw/intc/ioapic.c
index 264262959d..07b9cf7705 100644
--- a/hw/intc/ioapic.c
+++ b/hw/intc/ioapic.c
@@ -195,6 +195,7 @@ static void ioapic_update_kvm_routes(IOAPICCommonState *s)
int i;
if (kvm_irqchip_is_split()) {
+ KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
for (i = 0; i < IOAPIC_NUM_PINS; i++) {
MSIMessage msg;
struct ioapic_entry_info info;
@@ -202,10 +203,10 @@ static void ioapic_update_kvm_routes(IOAPICCommonState *s)
if (!info.masked) {
msg.address = info.addr;
msg.data = info.data;
- kvm_irqchip_update_msi_route(kvm_state, i, msg, NULL);
+ kvm_irqchip_update_msi_route(&c, i, msg, NULL);
}
}
- kvm_irqchip_commit_routes(kvm_state);
+ kvm_irqchip_commit_route_changes(&c);
}
#endif
}
diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
index d66d912172..0e9427be42 100644
--- a/hw/misc/ivshmem.c
+++ b/hw/misc/ivshmem.c
@@ -278,6 +278,7 @@ static int ivshmem_vector_unmask(PCIDevice *dev, unsigned vector,
IVShmemState *s = IVSHMEM_COMMON(dev);
EventNotifier *n = &s->peers[s->vm_id].eventfds[vector];
MSIVector *v = &s->msi_vectors[vector];
+ KVMRouteChange c;
int ret;
IVSHMEM_DPRINTF("vector unmask %p %d\n", dev, vector);
@@ -287,11 +288,12 @@ static int ivshmem_vector_unmask(PCIDevice *dev, unsigned vector,
}
assert(!v->unmasked);
- ret = kvm_irqchip_update_msi_route(kvm_state, v->virq, msg, dev);
+ c = kvm_irqchip_begin_route_changes(kvm_state);
+ ret = kvm_irqchip_update_msi_route(&c, v->virq, msg, dev);
if (ret < 0) {
return ret;
}
- kvm_irqchip_commit_routes(kvm_state);
+ kvm_irqchip_commit_route_changes(&c);
ret = kvm_irqchip_add_irqfd_notifier_gsi(kvm_state, n, NULL, v->virq);
if (ret < 0) {
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 939dcc3d4a..fb69cc9965 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -460,8 +460,9 @@ static void vfio_remove_kvm_msi_virq(VFIOMSIVector *vector)
static void vfio_update_kvm_msi_virq(VFIOMSIVector *vector, MSIMessage msg,
PCIDevice *pdev)
{
- kvm_irqchip_update_msi_route(kvm_state, vector->virq, msg, pdev);
- kvm_irqchip_commit_routes(kvm_state);
+ KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
+ kvm_irqchip_update_msi_route(&c, vector->virq, msg, pdev);
+ kvm_irqchip_commit_route_changes(&c);
}
static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 22e76e3902..5fd02b7cb8 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -990,12 +990,15 @@ static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
if (proxy->vector_irqfd) {
irqfd = &proxy->vector_irqfd[vector];
if (irqfd->msg.data != msg.data || irqfd->msg.address != msg.address) {
- ret = kvm_irqchip_update_msi_route(kvm_state, irqfd->virq, msg,
+ virtio_pci_begin_route_changes();
+ ret = kvm_irqchip_update_msi_route(&virtio_pci_route_change,
+ irqfd->virq, msg,
&proxy->pci_dev);
if (ret < 0) {
+ virtio_pci_commit_route_changes();
return ret;
}
- kvm_irqchip_commit_routes(kvm_state);
+ virtio_pci_commit_route_changes();
}
}
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index e9a97eda8c..4c52a39efa 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -494,7 +494,7 @@ void kvm_init_cpu_signals(CPUState *cpu);
* @return: virq (>=0) when success, errno (<0) when failed.
*/
int kvm_irqchip_add_msi_route(KVMRouteChange *c, int vector, PCIDevice *dev);
-int kvm_irqchip_update_msi_route(KVMState *s, int virq, MSIMessage msg,
+int kvm_irqchip_update_msi_route(KVMRouteChange *c, int virq, MSIMessage msg,
PCIDevice *dev);
void kvm_irqchip_commit_routes(KVMState *s);
diff --git a/target/i386/kvm/kvm.c b/target/i386/kvm/kvm.c
index 5870301991..ceb61badb4 100644
--- a/target/i386/kvm/kvm.c
+++ b/target/i386/kvm/kvm.c
@@ -5532,9 +5532,11 @@ static void kvm_update_msi_routes_all(void *private, bool global,
{
int cnt = 0, vector;
MSIRouteEntry *entry;
+ KVMRouteChange c;
MSIMessage msg;
PCIDevice *dev;
+ c = kvm_irqchip_begin_route_changes(kvm_state);
/* TODO: explicit route update */
QLIST_FOREACH(entry, &msi_route_list, list) {
cnt++;
@@ -5551,9 +5553,9 @@ static void kvm_update_msi_routes_all(void *private, bool global,
*/
continue;
}
- kvm_irqchip_update_msi_route(kvm_state, entry->virq, msg, dev);
+ kvm_irqchip_update_msi_route(&c, entry->virq, msg, dev);
}
- kvm_irqchip_commit_routes(kvm_state);
+ kvm_irqchip_commit_route_changes(&c);
trace_kvm_x86_update_msi_routes(cnt);
}
--
2.23.0
* [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix
2023-02-28 9:39 [PATCH v1 0/3] virtio-pci: optimize set_guest_notifier Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 1/3] virtio-pci: submit msi route changes in batch Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 2/3] kvm-irqchip: use KVMRouteChange API to update msi route Longpeng(Mike) via
@ 2023-02-28 9:39 ` Longpeng(Mike) via
2023-02-28 10:40 ` Michael S. Tsirkin
2 siblings, 1 reply; 16+ messages in thread
From: Longpeng(Mike) via @ 2023-02-28 9:39 UTC (permalink / raw)
To: mst, jasowang, pbonzini
Cc: arei.gonglei, yechuan, eperezma, alex.williamson, mtosatti, clg,
qemu-devel, Longpeng
From: Longpeng <longpeng2@huawei.com>
All unmasked vectors will be setup in msix_set_vector_notifiers(), which
is a time-consuming operation because each vector need to be submit to
KVM once. It's even worse if the VM has several devices and each devices
has dozens of vectors.
We can defer and commit the vectors in batch, just like the commit dc580d51f7
("vfio: defer to commit kvm irq routing when enable msi/msix"),
The can reduce 80% of the time spending on virtio_pci_set_guest_notifiers().
Signed-off-by: Longpeng <longpeng2@huawei.com>
---
hw/virtio/virtio-pci.c | 113 ++++++++++++++++++++++++++++++++-----
include/hw/virtio/virtio.h | 1 +
2 files changed, 99 insertions(+), 15 deletions(-)
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 5fd02b7cb8..13f9c31009 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -51,15 +51,22 @@
/* Protected by the BQL */
static KVMRouteChange virtio_pci_route_change;
+static unsigned virtio_pci_route_change_depth;
static inline void virtio_pci_begin_route_changes(void)
{
- virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
+ if (!virtio_pci_route_change_depth) {
+ virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
+ }
+ virtio_pci_route_change_depth++;
}
static inline void virtio_pci_commit_route_changes(void)
{
- kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
+ virtio_pci_route_change_depth--;
+ if (!virtio_pci_route_change_depth) {
+ kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
+ }
}
static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
@@ -976,6 +983,88 @@ static void kvm_virtio_pci_vector_config_release(VirtIOPCIProxy *proxy)
kvm_virtio_pci_vector_release_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
}
+static int virtio_pci_vector_do_unmask(VirtIOPCIProxy *proxy,
+ unsigned int queue_no,
+ unsigned int vector,
+ EventNotifier *n)
+{
+ VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+ VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
+ int ret = 0;
+
+ /*
+ * If guest supports masking, irqfd is already setup, unmask it.
+ * Otherwise, set it up now.
+ */
+ if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
+ k->guest_notifier_mask(vdev, queue_no, false);
+ /* Test after unmasking to avoid losing events. */
+ if (k->guest_notifier_pending &&
+ k->guest_notifier_pending(vdev, queue_no)) {
+ event_notifier_set(n);
+ }
+ } else {
+ ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
+ }
+
+ return ret;
+}
+
+static void virtio_pci_prepare_kvm_msi_virq_batch(VirtIOPCIProxy *proxy)
+{
+ VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+
+ assert(!vdev->defer_kvm_irq_routing);
+ vdev->defer_kvm_irq_routing = true;
+ virtio_pci_begin_route_changes();
+}
+
+static void virtio_pci_commit_kvm_msi_virq_batch(VirtIOPCIProxy *proxy)
+{
+ VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
+ PCIDevice *dev = &proxy->pci_dev;
+ VirtQueue *vq;
+ EventNotifier *n;
+ int vector, index;
+ int ret;
+
+ assert(vdev->defer_kvm_irq_routing);
+ virtio_pci_commit_route_changes();
+ vdev->defer_kvm_irq_routing = false;
+
+ if (!msix_enabled(dev)) {
+ return;
+ }
+
+ /* Unmask all unmasked vectors */
+ for (vector = 0; vector < dev->msix_entries_nr; vector++) {
+ if (msix_is_masked(dev, vector)) {
+ continue;
+ }
+
+ vq = virtio_vector_first_queue(vdev, vector);
+ while (vq) {
+ index = virtio_get_queue_index(vq);
+ if (!virtio_queue_get_num(vdev, index)) {
+ break;
+ }
+ if (index < proxy->nvqs_with_notifiers) {
+ n = virtio_queue_get_guest_notifier(vq);
+ ret = virtio_pci_vector_do_unmask(proxy, index, vector, n);
+ assert(ret >= 0);
+ }
+ vq = virtio_vector_next_queue(vq);
+ }
+
+ if (vector == vdev->config_vector) {
+ n = virtio_config_get_guest_notifier(vdev);
+ ret = virtio_pci_vector_do_unmask(proxy, VIRTIO_CONFIG_IRQ_IDX,
+ vector, n);
+ assert(ret >= 0);
+ }
+ }
+}
+
static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
unsigned int queue_no,
unsigned int vector,
@@ -983,7 +1072,6 @@ static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
EventNotifier *n)
{
VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
- VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
VirtIOIRQFD *irqfd;
int ret = 0;
@@ -1002,19 +1090,10 @@ static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
}
}
- /* If guest supports masking, irqfd is already setup, unmask it.
- * Otherwise, set it up now.
- */
- if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
- k->guest_notifier_mask(vdev, queue_no, false);
- /* Test after unmasking to avoid losing events. */
- if (k->guest_notifier_pending &&
- k->guest_notifier_pending(vdev, queue_no)) {
- event_notifier_set(n);
- }
- } else {
- ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
+ if (!vdev->defer_kvm_irq_routing) {
+ ret = virtio_pci_vector_do_unmask(proxy, queue_no, vector, n);
}
+
return ret;
}
@@ -1284,12 +1363,16 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
}
}
+ virtio_pci_prepare_kvm_msi_virq_batch(proxy);
+
r = msix_set_vector_notifiers(&proxy->pci_dev, virtio_pci_vector_unmask,
virtio_pci_vector_mask,
virtio_pci_vector_poll);
if (r < 0) {
goto notifiers_error;
}
+
+ virtio_pci_commit_kvm_msi_virq_batch(proxy);
}
return 0;
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index 77c6c55929..9d82831350 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -147,6 +147,7 @@ struct VirtIODevice
bool start_on_kick; /* when virtio 1.0 feature has not been negotiated */
bool disable_legacy_check;
bool vhost_started;
+ bool defer_kvm_irq_routing;
VMChangeStateEntry *vmstate;
char *bus_name;
uint8_t device_endian;
--
2.23.0
* Re: [PATCH v1 1/3] virtio-pci: submit msi route changes in batch
2023-02-28 9:39 ` [PATCH v1 1/3] virtio-pci: submit msi route changes in batch Longpeng(Mike) via
@ 2023-02-28 10:17 ` Michael S. Tsirkin
2023-02-28 11:20 ` longpeng2--- via
2023-02-28 10:18 ` Michael S. Tsirkin
1 sibling, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2023-02-28 10:17 UTC (permalink / raw)
To: Longpeng(Mike)
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 05:39:35PM +0800, Longpeng(Mike) wrote:
> From: Longpeng <longpeng2@huawei.com>
>
> The kvm_irqchip_commit_routes() is a time-intensive operation, it needs
> scan and update all irqfds that are already assigned during each invocation,
> so more vectors means need more time to process them.
I think the real reason is it's the write side of RCU.
> For virtio-pci, we
> can just submit once when enabling vectors of a virtio-pci device.
>
> This can reduce the downtime when migrating a VM with vhost-vdpa devices.
>
> Signed-off-by: Longpeng <longpeng2@huawei.com>
> ---
> hw/virtio/virtio-pci.c | 24 +++++++++++++++++++++---
> 1 file changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 247325c193..22e76e3902 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -49,6 +49,19 @@
> * configuration space */
> #define VIRTIO_PCI_CONFIG_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>
> +/* Protected by the BQL */
> +static KVMRouteChange virtio_pci_route_change;
> +
> +static inline void virtio_pci_begin_route_changes(void)
> +{
> + virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
> +}
> +
> +static inline void virtio_pci_commit_route_changes(void)
> +{
> + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
> +}
> +
> static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
> VirtIOPCIProxy *dev);
> static void virtio_pci_reset(DeviceState *qdev);
> @@ -790,12 +803,11 @@ static int kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
> int ret;
>
> if (irqfd->users == 0) {
> - KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
> - ret = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
> + ret = kvm_irqchip_add_msi_route(&virtio_pci_route_change, vector,
> + &proxy->pci_dev);
> if (ret < 0) {
> return ret;
> }
> - kvm_irqchip_commit_route_changes(&c);
> irqfd->virq = ret;
> }
> irqfd->users++;
> @@ -903,12 +915,18 @@ static int kvm_virtio_pci_vector_vq_use(VirtIOPCIProxy *proxy, int nvqs)
> int ret = 0;
> VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>
> + virtio_pci_begin_route_changes();
> +
> for (queue_no = 0; queue_no < nvqs; queue_no++) {
> if (!virtio_queue_get_num(vdev, queue_no)) {
> + virtio_pci_commit_route_changes();
> return -1;
> }
> ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
> }
> +
> + virtio_pci_commit_route_changes();
> +
> return ret;
> }
>
> --
> 2.23.0
* Re: [PATCH v1 1/3] virtio-pci: submit msi route changes in batch
2023-02-28 9:39 ` [PATCH v1 1/3] virtio-pci: submit msi route changes in batch Longpeng(Mike) via
2023-02-28 10:17 ` Michael S. Tsirkin
@ 2023-02-28 10:18 ` Michael S. Tsirkin
2023-02-28 11:24 ` longpeng2--- via
1 sibling, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2023-02-28 10:18 UTC (permalink / raw)
To: Longpeng(Mike)
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 05:39:35PM +0800, Longpeng(Mike) wrote:
> From: Longpeng <longpeng2@huawei.com>
>
> The kvm_irqchip_commit_routes() is a time-intensive operation, it needs
> scan and update all irqfds that are already assigned during each invocation,
> so more vectors means need more time to process them. For virtio-pci, we
> can just submit once when enabling vectors of a virtio-pci device.
>
> This can reduce the downtime when migrating a VM with vhost-vdpa devices.
can in what sense? does it or does it not? by how much?
> Signed-off-by: Longpeng <longpeng2@huawei.com>
> ---
> hw/virtio/virtio-pci.c | 24 +++++++++++++++++++++---
> 1 file changed, 21 insertions(+), 3 deletions(-)
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 247325c193..22e76e3902 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -49,6 +49,19 @@
> * configuration space */
> #define VIRTIO_PCI_CONFIG_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>
> +/* Protected by the BQL */
> +static KVMRouteChange virtio_pci_route_change;
> +
> +static inline void virtio_pci_begin_route_changes(void)
> +{
> + virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
> +}
> +
> +static inline void virtio_pci_commit_route_changes(void)
> +{
> + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
> +}
> +
> static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
> VirtIOPCIProxy *dev);
> static void virtio_pci_reset(DeviceState *qdev);
> @@ -790,12 +803,11 @@ static int kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
> int ret;
>
> if (irqfd->users == 0) {
> - KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
> - ret = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
> + ret = kvm_irqchip_add_msi_route(&virtio_pci_route_change, vector,
> + &proxy->pci_dev);
> if (ret < 0) {
> return ret;
> }
> - kvm_irqchip_commit_route_changes(&c);
> irqfd->virq = ret;
> }
> irqfd->users++;
> @@ -903,12 +915,18 @@ static int kvm_virtio_pci_vector_vq_use(VirtIOPCIProxy *proxy, int nvqs)
> int ret = 0;
> VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>
> + virtio_pci_begin_route_changes();
> +
> for (queue_no = 0; queue_no < nvqs; queue_no++) {
> if (!virtio_queue_get_num(vdev, queue_no)) {
> + virtio_pci_commit_route_changes();
> return -1;
> }
> ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
> }
> +
> + virtio_pci_commit_route_changes();
> +
> return ret;
> }
>
> --
> 2.23.0
* Re: [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix
2023-02-28 9:39 ` [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix Longpeng(Mike) via
@ 2023-02-28 10:40 ` Michael S. Tsirkin
2023-02-28 11:07 ` Daniel P. Berrangé
2023-02-28 11:10 ` longpeng2--- via
0 siblings, 2 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2023-02-28 10:40 UTC (permalink / raw)
To: Longpeng(Mike)
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 05:39:37PM +0800, Longpeng(Mike) wrote:
> From: Longpeng <longpeng2@huawei.com>
>
> All unmasked vectors will be setup in msix_set_vector_notifiers(), which
> is a time-consuming operation because each vector need to be submit to
> KVM once. It's even worse if the VM has several devices and each devices
> has dozens of vectors.
>
> We can defer and commit the vectors in batch, just like the commit dc580d51f7
> ("vfio: defer to commit kvm irq routing when enable msi/msix"),
>
> The can reduce 80% of the time spending on virtio_pci_set_guest_notifiers().
cover letter also refers to 80%. what about patch 1 then? does it
contribute some of this gain?
> Signed-off-by: Longpeng <longpeng2@huawei.com>
In the age of language models there's no longer any excuse to post
agrammatical commit messages. Please just give your text to one of these
to correct.
I prompted: "please correct grammar in the following text"
and got back:
All unmasked vectors will be set up in
msix_set_vector_notifiers(), which is a time-consuming operation because
each vector needs to be submitted to KVM once. It's even worse if the VM
has several devices and each device has dozens of vectors.
We can defer and commit the vectors in batches, just like the
commit dc580d51f7 ("vfio: defer to commit kvm irq routing when enabling
msi/msix").
This can reduce the time spent on virtio_pci_set_guest_notifiers() by 80%.
> ---
> hw/virtio/virtio-pci.c | 113 ++++++++++++++++++++++++++++++++-----
> include/hw/virtio/virtio.h | 1 +
> 2 files changed, 99 insertions(+), 15 deletions(-)
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 5fd02b7cb8..13f9c31009 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -51,15 +51,22 @@
>
> /* Protected by the BQL */
> static KVMRouteChange virtio_pci_route_change;
> +static unsigned virtio_pci_route_change_depth;
>
> static inline void virtio_pci_begin_route_changes(void)
> {
> - virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
> + if (!virtio_pci_route_change_depth) {
> + virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
> + }
> + virtio_pci_route_change_depth++;
> }
>
> static inline void virtio_pci_commit_route_changes(void)
> {
> - kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
> + virtio_pci_route_change_depth--;
> + if (!virtio_pci_route_change_depth) {
> + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
> + }
> }
>
> static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
> @@ -976,6 +983,88 @@ static void kvm_virtio_pci_vector_config_release(VirtIOPCIProxy *proxy)
> kvm_virtio_pci_vector_release_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
> }
>
> +static int virtio_pci_vector_do_unmask(VirtIOPCIProxy *proxy,
> + unsigned int queue_no,
> + unsigned int vector,
> + EventNotifier *n)
> +{
> + VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
> + VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> + int ret = 0;
> +
> + /*
> + * If guest supports masking, irqfd is already setup, unmask it.
> + * Otherwise, set it up now.
> + */
> + if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
> + k->guest_notifier_mask(vdev, queue_no, false);
> + /* Test after unmasking to avoid losing events. */
> + if (k->guest_notifier_pending &&
> + k->guest_notifier_pending(vdev, queue_no)) {
> + event_notifier_set(n);
> + }
> + } else {
> + ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
> + }
> +
> + return ret;
> +}
> +
> +static void virtio_pci_prepare_kvm_msi_virq_batch(VirtIOPCIProxy *proxy)
> +{
> + VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
> +
> + assert(!vdev->defer_kvm_irq_routing);
> + vdev->defer_kvm_irq_routing = true;
> + virtio_pci_begin_route_changes();
move this out of here please - otherwise it's not clear each begin
is matched by commit. in fact just open code this function.
> +}
> +
> +static void virtio_pci_commit_kvm_msi_virq_batch(VirtIOPCIProxy *proxy)
> +{
> + VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
> + PCIDevice *dev = &proxy->pci_dev;
> + VirtQueue *vq;
> + EventNotifier *n;
> + int vector, index;
> + int ret;
> +
> + assert(vdev->defer_kvm_irq_routing);
> + virtio_pci_commit_route_changes();
> + vdev->defer_kvm_irq_routing = false;
> +
> + if (!msix_enabled(dev)) {
> + return;
> + }
> +
> + /* Unmask all unmasked vectors */
> + for (vector = 0; vector < dev->msix_entries_nr; vector++) {
> + if (msix_is_masked(dev, vector)) {
> + continue;
> + }
> +
> + vq = virtio_vector_first_queue(vdev, vector);
> + while (vq) {
> + index = virtio_get_queue_index(vq);
> + if (!virtio_queue_get_num(vdev, index)) {
> + break;
> + }
> + if (index < proxy->nvqs_with_notifiers) {
> + n = virtio_queue_get_guest_notifier(vq);
> + ret = virtio_pci_vector_do_unmask(proxy, index, vector, n);
> + assert(ret >= 0);
> + }
> + vq = virtio_vector_next_queue(vq);
> + }
> +
> + if (vector == vdev->config_vector) {
> + n = virtio_config_get_guest_notifier(vdev);
> + ret = virtio_pci_vector_do_unmask(proxy, VIRTIO_CONFIG_IRQ_IDX,
> + vector, n);
> + assert(ret >= 0);
> + }
> + }
> +}
> +
> static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
> unsigned int queue_no,
> unsigned int vector,
> @@ -983,7 +1072,6 @@ static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
> EventNotifier *n)
> {
> VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
> - VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
> VirtIOIRQFD *irqfd;
> int ret = 0;
>
> @@ -1002,19 +1090,10 @@ static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
> }
> }
>
> - /* If guest supports masking, irqfd is already setup, unmask it.
> - * Otherwise, set it up now.
> - */
> - if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
> - k->guest_notifier_mask(vdev, queue_no, false);
> - /* Test after unmasking to avoid losing events. */
> - if (k->guest_notifier_pending &&
> - k->guest_notifier_pending(vdev, queue_no)) {
> - event_notifier_set(n);
> - }
> - } else {
> - ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
> + if (!vdev->defer_kvm_irq_routing) {
> + ret = virtio_pci_vector_do_unmask(proxy, queue_no, vector, n);
> }
> +
> return ret;
> }
>
> @@ -1284,12 +1363,16 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
> }
> }
>
> + virtio_pci_prepare_kvm_msi_virq_batch(proxy);
> +
> r = msix_set_vector_notifiers(&proxy->pci_dev, virtio_pci_vector_unmask,
> virtio_pci_vector_mask,
> virtio_pci_vector_poll);
> if (r < 0) {
> goto notifiers_error;
> }
> +
> + virtio_pci_commit_kvm_msi_virq_batch(proxy);
> }
>
> return 0;
> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
> index 77c6c55929..9d82831350 100644
> --- a/include/hw/virtio/virtio.h
> +++ b/include/hw/virtio/virtio.h
> @@ -147,6 +147,7 @@ struct VirtIODevice
> bool start_on_kick; /* when virtio 1.0 feature has not been negotiated */
> bool disable_legacy_check;
> bool vhost_started;
> + bool defer_kvm_irq_routing;
Can't we avoid leaking kvm things all over the place?
What does this flag even mean?
> VMChangeStateEntry *vmstate;
> char *bus_name;
> uint8_t device_endian;
> --
> 2.23.0
* Re: [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix
2023-02-28 10:40 ` Michael S. Tsirkin
@ 2023-02-28 11:07 ` Daniel P. Berrangé
2023-02-28 12:29 ` Michael S. Tsirkin
2023-02-28 11:10 ` longpeng2--- via
1 sibling, 1 reply; 16+ messages in thread
From: Daniel P. Berrangé @ 2023-02-28 11:07 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Longpeng(Mike), jasowang, pbonzini, arei.gonglei, yechuan,
eperezma, alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 05:40:33AM -0500, Michael S. Tsirkin wrote:
> On Tue, Feb 28, 2023 at 05:39:37PM +0800, Longpeng(Mike) wrote:
> > From: Longpeng <longpeng2@huawei.com>
> >
> > All unmasked vectors will be setup in msix_set_vector_notifiers(), which
> > is a time-consuming operation because each vector need to be submit to
> > KVM once. It's even worse if the VM has several devices and each devices
> > has dozens of vectors.
> >
> > We can defer and commit the vectors in batch, just like the commit dc580d51f7
> > ("vfio: defer to commit kvm irq routing when enable msi/msix"),
> >
> > The can reduce 80% of the time spending on virtio_pci_set_guest_notifiers().
>
> cover letter also refers to 80%. what about patch 1 then? does it
> contribute some of this gain?
>
> > Signed-off-by: Longpeng <longpeng2@huawei.com>
>
> In the age of language models there's no longer any excuse to post
> agrammatical commit messages.
IMHO it is not appropriate to criticize the writing of people
who may not have English as a first language.
> Please just give your text to one of these
> to correct.
I'd really rather we don't suggest our contributors feed stuff
through such systems. You might have an example where its output
made sense, but there's plenty of demonstrations of such tools
outputting obvious garbage, or worse stuff that superficially
looks OK but is subtly changing the meaning. The latter is going
to be especially hard to spot for contributors without English
as a first language. IMHO it is better to have explanations
directly written by the contributor so it accurately reflects
their own understanding of the patch, even if not grammatically
perfect.
And that's ignoring the unsettled legal questions around the
licensing of output from these systems. Personally I'd suggest
our coding guidelines should explicitly reject any usage of
so called "AI" systems for QEMU contributions because of legal
uncertainty alone.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
* Re: [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix
2023-02-28 10:40 ` Michael S. Tsirkin
2023-02-28 11:07 ` Daniel P. Berrangé
@ 2023-02-28 11:10 ` longpeng2--- via
2023-02-28 11:36 ` Michael S. Tsirkin
1 sibling, 1 reply; 16+ messages in thread
From: longpeng2--- via @ 2023-02-28 11:10 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On 2023/2/28 18:40, Michael S. Tsirkin wrote:
> On Tue, Feb 28, 2023 at 05:39:37PM +0800, Longpeng(Mike) wrote:
>> From: Longpeng <longpeng2@huawei.com>
>>
>> All unmasked vectors will be setup in msix_set_vector_notifiers(), which
>> is a time-consuming operation because each vector need to be submit to
>> KVM once. It's even worse if the VM has several devices and each devices
>> has dozens of vectors.
>>
>> We can defer and commit the vectors in batch, just like the commit dc580d51f7
>> ("vfio: defer to commit kvm irq routing when enable msi/msix"),
>>
>> The can reduce 80% of the time spending on virtio_pci_set_guest_notifiers().
>
> cover letter also refers to 80%. what about patch 1 then? does it
> contribute some of this gain?
>
Sorry, it's a clerical error; patch 3 actually reduces it by 50%.
In my test, patch 1 reduces it by 37% (160ms->101ms) and patch 3 by
50% (101ms->21ms, 80/160).
>> Signed-off-by: Longpeng <longpeng2@huawei.com>
>
> In the age of language models there's no longer any excuse to post
> agrammatical commit messages. Please just give your text to one of these
> to correct.
>
Oh, I really envy you because I can not use it in my workspace. Thank
you for your correction.
> I prompted: "please correct grammar in the following text"
> and got back:
>
> All unmasked vectors will be set up in
> msix_set_vector_notifiers(), which is a time-consuming operation because
> each vector needs to be submitted to KVM once. It's even worse if the VM
> has several devices and each device has dozens of vectors.
>
> We can defer and commit the vectors in batches, just like the
> commit dc580d51f7 ("vfio: defer to commit kvm irq routing when enabling
> msi/msix").
>
> This can reduce the time spent on virtio_pci_set_guest_notifiers() by 80%.
>
>
>
>
>> ---
>> hw/virtio/virtio-pci.c | 113 ++++++++++++++++++++++++++++++++-----
>> include/hw/virtio/virtio.h | 1 +
>> 2 files changed, 99 insertions(+), 15 deletions(-)
>>
>> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>> index 5fd02b7cb8..13f9c31009 100644
>> --- a/hw/virtio/virtio-pci.c
>> +++ b/hw/virtio/virtio-pci.c
>> @@ -51,15 +51,22 @@
>>
>> /* Protected by the BQL */
>> static KVMRouteChange virtio_pci_route_change;
>> +static unsigned virtio_pci_route_change_depth;
>>
>> static inline void virtio_pci_begin_route_changes(void)
>> {
>> - virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
>> + if (!virtio_pci_route_change_depth) {
>> + virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
>> + }
>> + virtio_pci_route_change_depth++;
>> }
>>
>> static inline void virtio_pci_commit_route_changes(void)
>> {
>> - kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
>> + virtio_pci_route_change_depth--;
>> + if (!virtio_pci_route_change_depth) {
>> + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
>> + }
>> }
>>
>> static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>> @@ -976,6 +983,88 @@ static void kvm_virtio_pci_vector_config_release(VirtIOPCIProxy *proxy)
>> kvm_virtio_pci_vector_release_one(proxy, VIRTIO_CONFIG_IRQ_IDX);
>> }
>>
>> +static int virtio_pci_vector_do_unmask(VirtIOPCIProxy *proxy,
>> + unsigned int queue_no,
>> + unsigned int vector,
>> + EventNotifier *n)
>> +{
>> + VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>> + VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
>> + int ret = 0;
>> +
>> + /*
>> + * If guest supports masking, irqfd is already setup, unmask it.
>> + * Otherwise, set it up now.
>> + */
>> + if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
>> + k->guest_notifier_mask(vdev, queue_no, false);
>> + /* Test after unmasking to avoid losing events. */
>> + if (k->guest_notifier_pending &&
>> + k->guest_notifier_pending(vdev, queue_no)) {
>> + event_notifier_set(n);
>> + }
>> + } else {
>> + ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
>> + }
>> +
>> + return ret;
>> +}
>> +
>> +static void virtio_pci_prepare_kvm_msi_virq_batch(VirtIOPCIProxy *proxy)
>> +{
>> + VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>> +
>> + assert(!vdev->defer_kvm_irq_routing);
>> + vdev->defer_kvm_irq_routing = true;
>> + virtio_pci_begin_route_changes();
>
> move this out of here please - otherwise it's not clear each begin
> is matched by commit. in fact just open code this function.
>
>> +}
>> +
>> +static void virtio_pci_commit_kvm_msi_virq_batch(VirtIOPCIProxy *proxy)
>> +{
>> + VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>> + PCIDevice *dev = &proxy->pci_dev;
>> + VirtQueue *vq;
>> + EventNotifier *n;
>> + int vector, index;
>> + int ret;
>> +
>> + assert(vdev->defer_kvm_irq_routing);
>> + virtio_pci_commit_route_changes();
>> + vdev->defer_kvm_irq_routing = false;
>> +
>> + if (!msix_enabled(dev)) {
>> + return;
>> + }
>> +
>> + /* Unmask all unmasked vectors */
>> + for (vector = 0; vector < dev->msix_entries_nr; vector++) {
>> + if (msix_is_masked(dev, vector)) {
>> + continue;
>> + }
>> +
>> + vq = virtio_vector_first_queue(vdev, vector);
>> + while (vq) {
>> + index = virtio_get_queue_index(vq);
>> + if (!virtio_queue_get_num(vdev, index)) {
>> + break;
>> + }
>> + if (index < proxy->nvqs_with_notifiers) {
>> + n = virtio_queue_get_guest_notifier(vq);
>> + ret = virtio_pci_vector_do_unmask(proxy, index, vector, n);
>> + assert(ret >= 0);
>> + }
>> + vq = virtio_vector_next_queue(vq);
>> + }
>> +
>> + if (vector == vdev->config_vector) {
>> + n = virtio_config_get_guest_notifier(vdev);
>> + ret = virtio_pci_vector_do_unmask(proxy, VIRTIO_CONFIG_IRQ_IDX,
>> + vector, n);
>> + assert(ret >= 0);
>> + }
>> + }
>> +}
>> +
>> static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
>> unsigned int queue_no,
>> unsigned int vector,
>> @@ -983,7 +1072,6 @@ static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
>> EventNotifier *n)
>> {
>> VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>> - VirtioDeviceClass *k = VIRTIO_DEVICE_GET_CLASS(vdev);
>> VirtIOIRQFD *irqfd;
>> int ret = 0;
>>
>> @@ -1002,19 +1090,10 @@ static int virtio_pci_one_vector_unmask(VirtIOPCIProxy *proxy,
>> }
>> }
>>
>> - /* If guest supports masking, irqfd is already setup, unmask it.
>> - * Otherwise, set it up now.
>> - */
>> - if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
>> - k->guest_notifier_mask(vdev, queue_no, false);
>> - /* Test after unmasking to avoid losing events. */
>> - if (k->guest_notifier_pending &&
>> - k->guest_notifier_pending(vdev, queue_no)) {
>> - event_notifier_set(n);
>> - }
>> - } else {
>> - ret = kvm_virtio_pci_irqfd_use(proxy, n, vector);
>> + if (!vdev->defer_kvm_irq_routing) {
>> + ret = virtio_pci_vector_do_unmask(proxy, queue_no, vector, n);
>> }
>> +
>> return ret;
>> }
>>
>> @@ -1284,12 +1363,16 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>> }
>> }
>>
>> + virtio_pci_prepare_kvm_msi_virq_batch(proxy);
>> +
>> r = msix_set_vector_notifiers(&proxy->pci_dev, virtio_pci_vector_unmask,
>> virtio_pci_vector_mask,
>> virtio_pci_vector_poll);
>> if (r < 0) {
>> goto notifiers_error;
>> }
>> +
>> + virtio_pci_commit_kvm_msi_virq_batch(proxy);
>> }
>>
>> return 0;
>> diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
>> index 77c6c55929..9d82831350 100644
>> --- a/include/hw/virtio/virtio.h
>> +++ b/include/hw/virtio/virtio.h
>> @@ -147,6 +147,7 @@ struct VirtIODevice
>> bool start_on_kick; /* when virtio 1.0 feature has not been negotiated */
>> bool disable_legacy_check;
>> bool vhost_started;
>> + bool defer_kvm_irq_routing;
>
> Can't we avoid leaking kvm things all over the place?
> What does this flag even mean?
>
>> VMChangeStateEntry *vmstate;
>> char *bus_name;
>> uint8_t device_endian;
>> --
>> 2.23.0
>
> .
* Re: [PATCH v1 1/3] virtio-pci: submit msi route changes in batch
2023-02-28 10:17 ` Michael S. Tsirkin
@ 2023-02-28 11:20 ` longpeng2--- via
2023-02-28 11:39 ` longpeng2--- via
0 siblings, 1 reply; 16+ messages in thread
From: longpeng2--- via @ 2023-02-28 11:20 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On 2023/2/28 18:17, Michael S. Tsirkin wrote:
> On Tue, Feb 28, 2023 at 05:39:35PM +0800, Longpeng(Mike) wrote:
>> From: Longpeng <longpeng2@huawei.com>
>>
>> The kvm_irqchip_commit_routes() is a time-intensive operation, it needs
>> scan and update all irqfds that are already assigned during each invocation,
>> so more vectors means need more time to process them.
>
> I think the real reason is it's the write side of RCU.
>
Yes, so we can reduce the invocation of it in this way.
I'll send other optimizations in the next step, including irqbypass,
kvm_irqfd, etc.
>> For virtio-pci, we
>> can just submit once when enabling vectors of a virtio-pci device.
>>
>> This can reduce the downtime when migrating a VM with vhost-vdpa devices.
>>
>> Signed-off-by: Longpeng <longpeng2@huawei.com>
>> ---
>> hw/virtio/virtio-pci.c | 24 +++++++++++++++++++++---
>> 1 file changed, 21 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>> index 247325c193..22e76e3902 100644
>> --- a/hw/virtio/virtio-pci.c
>> +++ b/hw/virtio/virtio-pci.c
>> @@ -49,6 +49,19 @@
>> * configuration space */
>> #define VIRTIO_PCI_CONFIG_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>>
>> +/* Protected by the BQL */
>> +static KVMRouteChange virtio_pci_route_change;
>> +
>> +static inline void virtio_pci_begin_route_changes(void)
>> +{
>> + virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
>> +}
>> +
>> +static inline void virtio_pci_commit_route_changes(void)
>> +{
>> + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
>> +}
>> +
>> static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>> VirtIOPCIProxy *dev);
>> static void virtio_pci_reset(DeviceState *qdev);
>> @@ -790,12 +803,11 @@ static int kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
>> int ret;
>>
>> if (irqfd->users == 0) {
>> - KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
>> - ret = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
>> + ret = kvm_irqchip_add_msi_route(&virtio_pci_route_change, vector,
>> + &proxy->pci_dev);
>> if (ret < 0) {
>> return ret;
>> }
>> - kvm_irqchip_commit_route_changes(&c);
>> irqfd->virq = ret;
>> }
>> irqfd->users++;
>> @@ -903,12 +915,18 @@ static int kvm_virtio_pci_vector_vq_use(VirtIOPCIProxy *proxy, int nvqs)
>> int ret = 0;
>> VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>>
>> + virtio_pci_begin_route_changes();
>> +
>> for (queue_no = 0; queue_no < nvqs; queue_no++) {
>> if (!virtio_queue_get_num(vdev, queue_no)) {
>> + virtio_pci_commit_route_changes();
>> return -1;
>> }
>> ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
>> }
>> +
>> + virtio_pci_commit_route_changes();
>> +
>> return ret;
>> }
>>
>> --
>> 2.23.0
>
> .
* Re: [PATCH v1 1/3] virtio-pci: submit msi route changes in batch
2023-02-28 10:18 ` Michael S. Tsirkin
@ 2023-02-28 11:24 ` longpeng2--- via
0 siblings, 0 replies; 16+ messages in thread
From: longpeng2--- via @ 2023-02-28 11:24 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On 2023/2/28 18:18, Michael S. Tsirkin wrote:
> On Tue, Feb 28, 2023 at 05:39:35PM +0800, Longpeng(Mike) wrote:
>> From: Longpeng <longpeng2@huawei.com>
>>
>> The kvm_irqchip_commit_routes() is a time-intensive operation, it needs
>> scan and update all irqfds that are already assigned during each invocation,
>> so more vectors means need more time to process them. For virtio-pci, we
>> can just submit once when enabling vectors of a virtio-pci device.
>>
>> This can reduce the downtime when migrating a VM with vhost-vdpa devices.
>
> can in what sense? does it or does it not? by how much?
>
I've replied in patch 3.
>> Signed-off-by: Longpeng <longpeng2@huawei.com>
>> ---
>> hw/virtio/virtio-pci.c | 24 +++++++++++++++++++++---
>> 1 file changed, 21 insertions(+), 3 deletions(-)
>>
>> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>> index 247325c193..22e76e3902 100644
>> --- a/hw/virtio/virtio-pci.c
>> +++ b/hw/virtio/virtio-pci.c
>> @@ -49,6 +49,19 @@
>> * configuration space */
>> #define VIRTIO_PCI_CONFIG_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>>
>> +/* Protected by the BQL */
>> +static KVMRouteChange virtio_pci_route_change;
>> +
>> +static inline void virtio_pci_begin_route_changes(void)
>> +{
>> + virtio_pci_route_change = kvm_irqchip_begin_route_changes(kvm_state);
>> +}
>> +
>> +static inline void virtio_pci_commit_route_changes(void)
>> +{
>> + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
>> +}
>> +
>> static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>> VirtIOPCIProxy *dev);
>> static void virtio_pci_reset(DeviceState *qdev);
>> @@ -790,12 +803,11 @@ static int kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
>> int ret;
>>
>> if (irqfd->users == 0) {
>> - KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
>> - ret = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
>> + ret = kvm_irqchip_add_msi_route(&virtio_pci_route_change, vector,
>> + &proxy->pci_dev);
>> if (ret < 0) {
>> return ret;
>> }
>> - kvm_irqchip_commit_route_changes(&c);
>> irqfd->virq = ret;
>> }
>> irqfd->users++;
>> @@ -903,12 +915,18 @@ static int kvm_virtio_pci_vector_vq_use(VirtIOPCIProxy *proxy, int nvqs)
>> int ret = 0;
>> VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>>
>> + virtio_pci_begin_route_changes();
>> +
>> for (queue_no = 0; queue_no < nvqs; queue_no++) {
>> if (!virtio_queue_get_num(vdev, queue_no)) {
>> + virtio_pci_commit_route_changes();
>> return -1;
>> }
>> ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
>> }
>> +
>> + virtio_pci_commit_route_changes();
>> +
>> return ret;
>> }
>>
>> --
>> 2.23.0
>
> .
* Re: [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix
2023-02-28 11:10 ` longpeng2--- via
@ 2023-02-28 11:36 ` Michael S. Tsirkin
0 siblings, 0 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2023-02-28 11:36 UTC (permalink / raw)
To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 07:10:36PM +0800, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:
> Oh, I really envy you because I can not use it in my workspace. Thank you
> for your correction.
:(
I keep seeing these adverts for grammarly, try this maybe :)
--
MST
* Re: [PATCH v1 1/3] virtio-pci: submit msi route changes in batch
2023-02-28 11:20 ` longpeng2--- via
@ 2023-02-28 11:39 ` longpeng2--- via
2023-02-28 12:09 ` Michael S. Tsirkin
0 siblings, 1 reply; 16+ messages in thread
From: longpeng2--- via @ 2023-02-28 11:39 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On 2023/2/28 19:20, Longpeng (Mike, Cloud Infrastructure Service Product
Dept.) wrote:
>
>
> On 2023/2/28 18:17, Michael S. Tsirkin wrote:
>> On Tue, Feb 28, 2023 at 05:39:35PM +0800, Longpeng(Mike) wrote:
>>> From: Longpeng <longpeng2@huawei.com>
>>>
>>> The kvm_irqchip_commit_routes() is a time-intensive operation, it needs
>>> scan and update all irqfds that are already assigned during each
>>> invocation,
>>> so more vectors means need more time to process them.
>>
>> I think the real reason is it's the write side of RCU.
>>
>
> Yes, so we can reduce the invocation of it in this way.
>
> I'll send other optimizations in the next step, including irqbypass,
> kvm_irqfd, etc.
>
Iterating the irqfds list is also time-consuming: each commit iterates all
existing irqfds, so the time complexity is O(n^2) without
this patch.
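As a rough sketch of the cost model (hypothetical helper functions for
illustration only, not QEMU code):

    /*
     * Unbatched: committing after adding route k walks the k irqfds assigned
     * so far, so n per-vector commits cost 1 + 2 + ... + n.
     */
    static int unbatched_cost(int n)
    {
        int cost = 0, k;
        for (k = 1; k <= n; k++) {
            cost += k;   /* commit after adding route k */
        }
        return cost;     /* n * (n + 1) / 2  ->  O(n^2) */
    }

    /* Batched: a single commit walks the n irqfds once  ->  O(n). */
    static int batched_cost(int n)
    {
        return n;
    }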
>>> For virtio-pci, we
>>> can just submit once when enabling vectors of a virtio-pci device.
>>>
>>> This can reduce the downtime when migrating a VM with vhost-vdpa
>>> devices.
>>>
>>> Signed-off-by: Longpeng <longpeng2@huawei.com>
>>> ---
>>> hw/virtio/virtio-pci.c | 24 +++++++++++++++++++++---
>>> 1 file changed, 21 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
>>> index 247325c193..22e76e3902 100644
>>> --- a/hw/virtio/virtio-pci.c
>>> +++ b/hw/virtio/virtio-pci.c
>>> @@ -49,6 +49,19 @@
>>> * configuration space */
>>> #define VIRTIO_PCI_CONFIG_SIZE(dev)
>>> VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>>> +/* Protected by the BQL */
>>> +static KVMRouteChange virtio_pci_route_change;
>>> +
>>> +static inline void virtio_pci_begin_route_changes(void)
>>> +{
>>> + virtio_pci_route_change =
>>> kvm_irqchip_begin_route_changes(kvm_state);
>>> +}
>>> +
>>> +static inline void virtio_pci_commit_route_changes(void)
>>> +{
>>> + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
>>> +}
>>> +
>>> static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
>>> VirtIOPCIProxy *dev);
>>> static void virtio_pci_reset(DeviceState *qdev);
>>> @@ -790,12 +803,11 @@ static int
>>> kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
>>> int ret;
>>> if (irqfd->users == 0) {
>>> - KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
>>> - ret = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
>>> + ret = kvm_irqchip_add_msi_route(&virtio_pci_route_change,
>>> vector,
>>> + &proxy->pci_dev);
>>> if (ret < 0) {
>>> return ret;
>>> }
>>> - kvm_irqchip_commit_route_changes(&c);
>>> irqfd->virq = ret;
>>> }
>>> irqfd->users++;
>>> @@ -903,12 +915,18 @@ static int
>>> kvm_virtio_pci_vector_vq_use(VirtIOPCIProxy *proxy, int nvqs)
>>> int ret = 0;
>>> VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
>>> + virtio_pci_begin_route_changes();
>>> +
>>> for (queue_no = 0; queue_no < nvqs; queue_no++) {
>>> if (!virtio_queue_get_num(vdev, queue_no)) {
>>> + virtio_pci_commit_route_changes();
>>> return -1;
>>> }
>>> ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
>>> }
>>> +
>>> + virtio_pci_commit_route_changes();
>>> +
>>> return ret;
>>> }
>>> --
>>> 2.23.0
>>
>> .
* Re: [PATCH v1 1/3] virtio-pci: submit msi route changes in batch
2023-02-28 11:39 ` longpeng2--- via
@ 2023-02-28 12:09 ` Michael S. Tsirkin
0 siblings, 0 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2023-02-28 12:09 UTC (permalink / raw)
To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
Cc: jasowang, pbonzini, arei.gonglei, yechuan, eperezma,
alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 07:39:06PM +0800, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:
>
>
> On 2023/2/28 19:20, Longpeng (Mike, Cloud Infrastructure Service Product
> Dept.) wrote:
> >
> >
> > On 2023/2/28 18:17, Michael S. Tsirkin wrote:
> > > On Tue, Feb 28, 2023 at 05:39:35PM +0800, Longpeng(Mike) wrote:
> > > > From: Longpeng <longpeng2@huawei.com>
> > > >
> > > > The kvm_irqchip_commit_routes() is a time-intensive operation, it needs
> > > > scan and update all irqfds that are already assigned during each
> > > > invocation,
> > > > so more vectors means need more time to process them.
> > >
> > > I think the real reason is it's the write side of RCU.
> > >
> >
> > Yes, so we can reduce the invocation of it in this way.
> >
> > I'll send other optimizations in the next step, including irqbypass,
> > kvm_irqfd, etc.
> >
>
> Iterating the irqfds list is also time-consuming: each commit iterates all
> existing irqfds, so the time complexity is O(n^2) without
> this patch.
Sounds good, pls include this in the commit log.
> > > > For virtio-pci, we
> > > > can just submit once when enabling vectors of a virtio-pci device.
> > > >
> > > > This can reduce the downtime when migrating a VM with vhost-vdpa
> > > > devices.
> > > >
> > > > Signed-off-by: Longpeng <longpeng2@huawei.com>
> > > > ---
> > > > hw/virtio/virtio-pci.c | 24 +++++++++++++++++++++---
> > > > 1 file changed, 21 insertions(+), 3 deletions(-)
> > > >
> > > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> > > > index 247325c193..22e76e3902 100644
> > > > --- a/hw/virtio/virtio-pci.c
> > > > +++ b/hw/virtio/virtio-pci.c
> > > > @@ -49,6 +49,19 @@
> > > > * configuration space */
> > > > #define VIRTIO_PCI_CONFIG_SIZE(dev)
> > > > VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
> > > > +/* Protected by the BQL */
> > > > +static KVMRouteChange virtio_pci_route_change;
> > > > +
> > > > +static inline void virtio_pci_begin_route_changes(void)
> > > > +{
> > > > + virtio_pci_route_change =
> > > > kvm_irqchip_begin_route_changes(kvm_state);
> > > > +}
> > > > +
> > > > +static inline void virtio_pci_commit_route_changes(void)
> > > > +{
> > > > + kvm_irqchip_commit_route_changes(&virtio_pci_route_change);
> > > > +}
> > > > +
> > > > static void virtio_pci_bus_new(VirtioBusState *bus, size_t bus_size,
> > > > VirtIOPCIProxy *dev);
> > > > static void virtio_pci_reset(DeviceState *qdev);
> > > > @@ -790,12 +803,11 @@ static int
> > > > kvm_virtio_pci_vq_vector_use(VirtIOPCIProxy *proxy,
> > > > int ret;
> > > > if (irqfd->users == 0) {
> > > > - KVMRouteChange c = kvm_irqchip_begin_route_changes(kvm_state);
> > > > - ret = kvm_irqchip_add_msi_route(&c, vector, &proxy->pci_dev);
> > > > + ret =
> > > > kvm_irqchip_add_msi_route(&virtio_pci_route_change, vector,
> > > > + &proxy->pci_dev);
> > > > if (ret < 0) {
> > > > return ret;
> > > > }
> > > > - kvm_irqchip_commit_route_changes(&c);
> > > > irqfd->virq = ret;
> > > > }
> > > > irqfd->users++;
> > > > @@ -903,12 +915,18 @@ static int
> > > > kvm_virtio_pci_vector_vq_use(VirtIOPCIProxy *proxy, int nvqs)
> > > > int ret = 0;
> > > > VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
> > > > + virtio_pci_begin_route_changes();
> > > > +
> > > > for (queue_no = 0; queue_no < nvqs; queue_no++) {
> > > > if (!virtio_queue_get_num(vdev, queue_no)) {
> > > > + virtio_pci_commit_route_changes();
> > > > return -1;
> > > > }
> > > > ret = kvm_virtio_pci_vector_use_one(proxy, queue_no);
> > > > }
> > > > +
> > > > + virtio_pci_commit_route_changes();
> > > > +
> > > > return ret;
> > > > }
> > > > --
> > > > 2.23.0
> > >
> > > .
* Re: [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix
2023-02-28 11:07 ` Daniel P. Berrangé
@ 2023-02-28 12:29 ` Michael S. Tsirkin
2023-02-28 13:05 ` Daniel P. Berrangé
0 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2023-02-28 12:29 UTC (permalink / raw)
To: Daniel P. Berrangé
Cc: Longpeng(Mike), jasowang, pbonzini, arei.gonglei, yechuan,
eperezma, alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 11:07:21AM +0000, Daniel P. Berrangé wrote:
> IMHO it is not appropriate to criticize the writing of people
> who may not have English as a first language.
Sorry if I offended anyone. I do want change log messages to be clear
and unambiguous though since they are a permanent record. Me rewriting
them for contributors does not seem to scale. I was hoping a grammar
checker will help but if not I don't know what to suggest then.
--
MST
* Re: [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix
2023-02-28 12:29 ` Michael S. Tsirkin
@ 2023-02-28 13:05 ` Daniel P. Berrangé
0 siblings, 0 replies; 16+ messages in thread
From: Daniel P. Berrangé @ 2023-02-28 13:05 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Longpeng(Mike), jasowang, pbonzini, arei.gonglei, yechuan,
eperezma, alex.williamson, mtosatti, clg, qemu-devel
On Tue, Feb 28, 2023 at 07:29:42AM -0500, Michael S. Tsirkin wrote:
> On Tue, Feb 28, 2023 at 11:07:21AM +0000, Daniel P. Berrangé wrote:
> > IMHO it is not appropriate to criticize the writing of people
> > who may not have English as a first language.
>
> Sorry if I offended anyone. I do want change log messages to be clear
> and unambiguous though since they are a permanent record. Me rewriting
> them for contributors does not seem to scale. I was hoping a grammar
> checker will help but if not I don't know what to suggest then.
Agreed that having the maintainer frequently rewriting them isn't
scalable in general, but that's not the common case I think/hope.
If a commit message truly isn't easy enough to understand, it is
reasonable to ask the contributor to clarify it and post a v2,
giving them some hints where appropriate.
If it is just sub-optimal grammar that doesn't massively impact
understanding, then I'm inclined to just accept patches as is,
or do very minor copy editing for obvious / simple issues.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Thread overview: 16+ messages
2023-02-28 9:39 [PATCH v1 0/3] virtio-pci: optimize set_guest_notifier Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 1/3] virtio-pci: submit msi route changes in batch Longpeng(Mike) via
2023-02-28 10:17 ` Michael S. Tsirkin
2023-02-28 11:20 ` longpeng2--- via
2023-02-28 11:39 ` longpeng2--- via
2023-02-28 12:09 ` Michael S. Tsirkin
2023-02-28 10:18 ` Michael S. Tsirkin
2023-02-28 11:24 ` longpeng2--- via
2023-02-28 9:39 ` [PATCH v1 2/3] kvm-irqchip: use KVMRouteChange API to update msi route Longpeng(Mike) via
2023-02-28 9:39 ` [PATCH v1 3/3] virtio-pci: defer to commit kvm irq routing when enable msi/msix Longpeng(Mike) via
2023-02-28 10:40 ` Michael S. Tsirkin
2023-02-28 11:07 ` Daniel P. Berrangé
2023-02-28 12:29 ` Michael S. Tsirkin
2023-02-28 13:05 ` Daniel P. Berrangé
2023-02-28 11:10 ` longpeng2--- via
2023-02-28 11:36 ` Michael S. Tsirkin