* [PATCH V2 0/2] preserve pending interrupts during cpr
@ 2025-07-16 18:06 Steve Sistare
2025-07-16 18:06 ` [PATCH V2 1/2] vfio/pci: augment set_handler Steve Sistare
2025-07-16 18:06 ` [PATCH V2 2/2] vfio/pci: preserve pending interrupts Steve Sistare
0 siblings, 2 replies; 10+ messages in thread
From: Steve Sistare @ 2025-07-16 18:06 UTC (permalink / raw)
To: qemu-devel
Cc: Cedric Le Goater, Zhenzhong Duan, Alex Williamson, Steve Sistare
Close a race condition that causes cpr-transfer to lose VFIO
interrupts. See commit messages for details.
Steve Sistare (2):
vfio/pci: augment set_handler
vfio/pci: preserve pending interrupts
 hw/vfio/cpr.c              | 93 +++++++++++++++++++++++++++++++++++++-
 hw/vfio/pci.c              | 15 +++++-
 hw/vfio/pci.h              |  4 +-
 include/hw/vfio/vfio-cpr.h |  6 +++
 4 files changed, 114 insertions(+), 4 deletions(-)
--
2.39.3
* [PATCH V2 1/2] vfio/pci: augment set_handler
2025-07-16 18:06 [PATCH V2 0/2] preserve pending interrupts during cpr Steve Sistare
@ 2025-07-16 18:06 ` Steve Sistare
2025-07-16 18:06 ` [PATCH V2 2/2] vfio/pci: preserve pending interrupts Steve Sistare
1 sibling, 0 replies; 10+ messages in thread
From: Steve Sistare @ 2025-07-16 18:06 UTC (permalink / raw)
To: qemu-devel
Cc: Cedric Le Goater, Zhenzhong Duan, Alex Williamson, Steve Sistare
Extend vfio_pci_msi_set_handler() so it can set or clear the handler.
Add a similar accessor for INTx. No functional change.
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
---
 hw/vfio/cpr.c |  2 +-
 hw/vfio/pci.c | 13 +++++++++++--
 hw/vfio/pci.h |  3 ++-
 3 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/hw/vfio/cpr.c b/hw/vfio/cpr.c
index af0f12a7ad..2a244fc4b6 100644
--- a/hw/vfio/cpr.c
+++ b/hw/vfio/cpr.c
@@ -70,7 +70,7 @@ static void vfio_cpr_claim_vectors(VFIOPCIDevice *vdev, int nr_vectors,
         fd = vfio_cpr_load_vector_fd(vdev, "interrupt", i);
         if (fd >= 0) {
             vfio_pci_vector_init(vdev, i);
-            vfio_pci_msi_set_handler(vdev, i);
+            vfio_pci_msi_set_handler(vdev, i, true);
         }
 
         if (vfio_cpr_load_vector_fd(vdev, "kvm_interrupt", i) >= 0) {
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 1093b28df7..8b471c054a 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -415,6 +415,14 @@ bool vfio_pci_intx_enable(VFIOPCIDevice *vdev, Error **errp)
     return vfio_intx_enable(vdev, errp);
 }
 
+void vfio_pci_intx_set_handler(VFIOPCIDevice *vdev, bool enable)
+{
+    int fd = event_notifier_get_fd(&vdev->intx.interrupt);
+    IOHandler *handler = (enable ? vfio_intx_interrupt : NULL);
+
+    qemu_set_fd_handler(fd, handler, NULL, vdev);
+}
+
 /*
  * MSI/X
  */
@@ -453,12 +461,13 @@ static void vfio_msi_interrupt(void *opaque)
     notify(&vdev->pdev, nr);
 }
 
-void vfio_pci_msi_set_handler(VFIOPCIDevice *vdev, int nr)
+void vfio_pci_msi_set_handler(VFIOPCIDevice *vdev, int nr, bool enable)
 {
     VFIOMSIVector *vector = &vdev->msi_vectors[nr];
     int fd = event_notifier_get_fd(&vector->interrupt);
+    IOHandler *handler = (enable ? vfio_msi_interrupt : NULL);
 
-    qemu_set_fd_handler(fd, vfio_msi_interrupt, NULL, vector);
+    qemu_set_fd_handler(fd, handler, NULL, vector);
 }
 
 /*
diff --git a/hw/vfio/pci.h b/hw/vfio/pci.h
index 495fae737d..80c8fcfa07 100644
--- a/hw/vfio/pci.h
+++ b/hw/vfio/pci.h
@@ -218,8 +218,9 @@ void vfio_pci_add_kvm_msi_virq(VFIOPCIDevice *vdev, VFIOMSIVector *vector,
 void vfio_pci_prepare_kvm_msi_virq_batch(VFIOPCIDevice *vdev);
 void vfio_pci_commit_kvm_msi_virq_batch(VFIOPCIDevice *vdev);
 bool vfio_pci_intx_enable(VFIOPCIDevice *vdev, Error **errp);
+void vfio_pci_intx_set_handler(VFIOPCIDevice *vdev, bool enable);
 void vfio_pci_msix_set_notifiers(VFIOPCIDevice *vdev);
-void vfio_pci_msi_set_handler(VFIOPCIDevice *vdev, int nr);
+void vfio_pci_msi_set_handler(VFIOPCIDevice *vdev, int nr, bool enable);
 
 uint32_t vfio_pci_read_config(PCIDevice *pdev, uint32_t addr, int len);
 void vfio_pci_write_config(PCIDevice *pdev,
--
2.39.3
* [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-07-16 18:06 [PATCH V2 0/2] preserve pending interrupts during cpr Steve Sistare
2025-07-16 18:06 ` [PATCH V2 1/2] vfio/pci: augment set_handler Steve Sistare
@ 2025-07-16 18:06 ` Steve Sistare
2025-07-17 2:43 ` Duan, Zhenzhong
1 sibling, 1 reply; 10+ messages in thread
From: Steve Sistare @ 2025-07-16 18:06 UTC (permalink / raw)
To: qemu-devel
Cc: Cedric Le Goater, Zhenzhong Duan, Alex Williamson, Steve Sistare
cpr-transfer may lose a VFIO interrupt because the KVM instance is
destroyed and recreated. If an interrupt arrives in the middle, it is
dropped. To fix, stop pending new interrupts during cpr save, and pick
up the pieces. In more detail:
Stop the VCPUs. Call kvm_irqchip_remove_irqfd_notifier_gsi --> KVM_IRQFD to
deassign the irqfd gsi that routes interrupts directly to the VCPU and KVM.
After this call, interrupts fall back to the kernel vfio_msihandler, which
writes to QEMU's kvm_interrupt eventfd. CPR already preserves that
eventfd. When the route is re-established in new QEMU, the kernel tests
the eventfd and injects an interrupt to KVM if necessary.
Deassign INTx in a similar manner. For both MSI and INTx, remove the
eventfd handler so old QEMU does not consume an event.
If an interrupt was already pended to KVM prior to the completion of
kvm_irqchip_remove_irqfd_notifier_gsi, it will be recovered by the
subsequent call to cpu_synchronize_all_states, which pulls KVM interrupt
state to userland prior to saving it in vmstate.
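For reference, a minimal sketch of what the irqfd add/remove helpers named
above boil down to at the KVM ABI level; the function and its arguments are
placeholders for illustration, not the patch's code:

    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch only: vm_fd, irq_fd and gsi are placeholders. */
    static int irqfd_set(int vm_fd, int irq_fd, int gsi, bool assign)
    {
        struct kvm_irqfd irqfd = {
            .fd    = irq_fd,   /* e.g. the preserved kvm_interrupt eventfd */
            .gsi   = gsi,      /* the GSI being routed */
            .flags = assign ? 0 : KVM_IRQFD_FLAG_DEASSIGN,
        };

        /* On assign, KVM also polls the eventfd and injects right away if a
         * count is already pending, which is why an event left in the
         * preserved eventfd is not lost when new QEMU re-establishes the
         * route. */
        return ioctl(vm_fd, KVM_IRQFD, &irqfd);
    }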
Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 hw/vfio/cpr.c              | 91 ++++++++++++++++++++++++++++++++++++++
 hw/vfio/pci.c              |  2 +
 hw/vfio/pci.h              |  1 +
 include/hw/vfio/vfio-cpr.h |  6 +++
 4 files changed, 100 insertions(+)
diff --git a/hw/vfio/cpr.c b/hw/vfio/cpr.c
index 2a244fc4b6..bca74ea20a 100644
--- a/hw/vfio/cpr.c
+++ b/hw/vfio/cpr.c
@@ -198,3 +198,94 @@ void vfio_cpr_add_kvm_notifier(void)
                                     MIG_MODE_CPR_TRANSFER);
     }
 }
+
+static int set_irqfd_notifier_gsi(KVMState *s, EventNotifier *n,
+                                  EventNotifier *rn, int virq, bool enable)
+{
+    if (enable) {
+        return kvm_irqchip_add_irqfd_notifier_gsi(s, n, rn, virq);
+    } else {
+        return kvm_irqchip_remove_irqfd_notifier_gsi(s, n, virq);
+    }
+}
+
+static int vfio_cpr_set_msi_virq(VFIOPCIDevice *vdev, Error **errp, bool enable)
+{
+    const char *op = (enable ? "enable" : "disable");
+    PCIDevice *pdev = &vdev->pdev;
+    int i, nr_vectors, ret = 0;
+
+    if (msix_enabled(pdev)) {
+        nr_vectors = vdev->msix->entries;
+
+    } else if (msi_enabled(pdev)) {
+        nr_vectors = msi_nr_vectors_allocated(pdev);
+
+    } else if (vfio_pci_read_config(pdev, PCI_INTERRUPT_PIN, 1)) {
+        ret = set_irqfd_notifier_gsi(kvm_state, &vdev->intx.interrupt,
+                                     &vdev->intx.unmask, vdev->intx.route.irq,
+                                     enable);
+        if (ret) {
+            error_setg_errno(errp, -ret, "failed to %s INTx irq %d",
+                             op, vdev->intx.route.irq);
+            return ret;
+        }
+        vfio_pci_intx_set_handler(vdev, enable);
+        return ret;
+
+    } else {
+        return 0;
+    }
+
+    for (i = 0; i < nr_vectors; i++) {
+        VFIOMSIVector *vector = &vdev->msi_vectors[i];
+        if (vector->use) {
+            ret = set_irqfd_notifier_gsi(kvm_state, &vector->kvm_interrupt,
+                                         NULL, vector->virq, enable);
+            if (ret) {
+                error_setg_errno(errp, -ret,
+                                 "failed to %s msi vector %d virq %d",
+                                 op, i, vector->virq);
+                return ret;
+            }
+            vfio_pci_msi_set_handler(vdev, i, enable);
+        }
+    }
+
+    return ret;
+}
+
+/*
+ * When CPR starts, detach IRQs from the VFIO device so future interrupts
+ * are posted to kvm_interrupt, which is preserved in new QEMU.  Interrupts
+ * that were already posted to the old KVM instance, but not delivered to the
+ * VCPU, are recovered via KVM_GET_LAPIC and pushed to the new KVM instance
+ * in new QEMU.
+ *
+ * If CPR fails, reattach the IRQs.
+ */
+static int vfio_cpr_pci_notifier(NotifierWithReturn *notifier,
+                                 MigrationEvent *e, Error **errp)
+{
+    VFIOPCIDevice *vdev =
+        container_of(notifier, VFIOPCIDevice, cpr.transfer_notifier);
+
+    if (e->type == MIG_EVENT_PRECOPY_SETUP) {
+        return vfio_cpr_set_msi_virq(vdev, errp, false);
+    } else if (e->type == MIG_EVENT_PRECOPY_FAILED) {
+        return vfio_cpr_set_msi_virq(vdev, errp, true);
+    }
+    return 0;
+}
+
+void vfio_cpr_pci_register_device(VFIOPCIDevice *vdev)
+{
+    migration_add_notifier_mode(&vdev->cpr.transfer_notifier,
+                                vfio_cpr_pci_notifier,
+                                MIG_MODE_CPR_TRANSFER);
+}
+
+void vfio_cpr_pci_unregister_device(VFIOPCIDevice *vdev)
+{
+    migration_remove_notifier(&vdev->cpr.transfer_notifier);
+}
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 8b471c054a..22a4125131 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -2993,6 +2993,7 @@ void vfio_pci_put_device(VFIOPCIDevice *vdev)
 {
     vfio_display_finalize(vdev);
     vfio_bars_finalize(vdev);
+    vfio_cpr_pci_unregister_device(vdev);
    g_free(vdev->emulated_config_bits);
     g_free(vdev->rom);
     /*
@@ -3442,6 +3443,7 @@ static void vfio_pci_realize(PCIDevice *pdev, Error **errp)
     vfio_pci_register_err_notifier(vdev);
     vfio_pci_register_req_notifier(vdev);
     vfio_setup_resetfn_quirk(vdev);
+    vfio_cpr_pci_register_device(vdev);
 
     return;
 
diff --git a/hw/vfio/pci.h b/hw/vfio/pci.h
index 80c8fcfa07..7989b94eb3 100644
--- a/hw/vfio/pci.h
+++ b/hw/vfio/pci.h
@@ -194,6 +194,7 @@ struct VFIOPCIDevice {
     bool skip_vsc_check;
     VFIODisplay *dpy;
     Notifier irqchip_change_notifier;
+    VFIOPCICPR cpr;
 };
 
 /* Use uin32_t for vendor & device so PCI_ANY_ID expands and cannot match hw */
diff --git a/include/hw/vfio/vfio-cpr.h b/include/hw/vfio/vfio-cpr.h
index 80ad20d216..d37daffbc5 100644
--- a/include/hw/vfio/vfio-cpr.h
+++ b/include/hw/vfio/vfio-cpr.h
@@ -38,6 +38,10 @@ typedef struct VFIODeviceCPR {
     uint32_t ioas_id;
 } VFIODeviceCPR;
 
+typedef struct VFIOPCICPR {
+    NotifierWithReturn transfer_notifier;
+} VFIOPCICPR;
+
 bool vfio_legacy_cpr_register_container(struct VFIOContainer *container,
                                         Error **errp);
 void vfio_legacy_cpr_unregister_container(struct VFIOContainer *container);
@@ -77,5 +81,7 @@ extern const VMStateDescription vfio_cpr_pci_vmstate;
 extern const VMStateDescription vmstate_cpr_vfio_devices;
 
 void vfio_cpr_add_kvm_notifier(void);
+void vfio_cpr_pci_register_device(struct VFIOPCIDevice *vdev);
+void vfio_cpr_pci_unregister_device(struct VFIOPCIDevice *vdev);
 
 #endif /* HW_VFIO_VFIO_CPR_H */
--
2.39.3
* RE: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-07-16 18:06 ` [PATCH V2 2/2] vfio/pci: preserve pending interrupts Steve Sistare
@ 2025-07-17 2:43 ` Duan, Zhenzhong
2025-07-18 17:38 ` Steven Sistare
0 siblings, 1 reply; 10+ messages in thread
From: Duan, Zhenzhong @ 2025-07-17 2:43 UTC (permalink / raw)
To: Steve Sistare, qemu-devel@nongnu.org; +Cc: Cedric Le Goater, Alex Williamson
>-----Original Message-----
>From: Steve Sistare <steven.sistare@oracle.com>
>Subject: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
>
>cpr-transfer may lose a VFIO interrupt because the KVM instance is
>destroyed and recreated. If an interrupt arrives in the middle, it is
>dropped. To fix, stop pending new interrupts during cpr save, and pick
>up the pieces. In more detail:
>
>Stop the VCPUs. Call kvm_irqchip_remove_irqfd_notifier_gsi --> KVM_IRQFD to
>deassign the irqfd gsi that routes interrupts directly to the VCPU and KVM.
>After this call, interrupts fall back to the kernel vfio_msihandler, which
>writes to QEMU's kvm_interrupt eventfd. CPR already preserves that
>eventfd. When the route is re-established in new QEMU, the kernel tests
>the eventfd and injects an interrupt to KVM if necessary.
With this patch, producer is detached from the kvm consumer, do we still need to close kvm fd on source QEMU?
Zhenzhong
* Re: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-07-17 2:43 ` Duan, Zhenzhong
@ 2025-07-18 17:38 ` Steven Sistare
2025-07-21 11:18 ` Duan, Zhenzhong
0 siblings, 1 reply; 10+ messages in thread
From: Steven Sistare @ 2025-07-18 17:38 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org; +Cc: Cedric Le Goater, Alex Williamson
On 7/16/2025 10:43 PM, Duan, Zhenzhong wrote:
> With this patch, producer is detached from the kvm consumer, do we still need to close kvm fd on source QEMU?
Good observation! I tested with this patch, without the kvm close patch,
and indeed it works.
However, I would like to keep the kvm close patch, because it has another benefit:
it makes cpr-exec mode faster. In that mode, old QEMU directly exec's new QEMU,
and it is faster because the kernel exec code does not have to traverse and examine
kvm page mappings. That cost is linear with address space size. I use cpr-exec
mode at Oracle, and I plan to submit it for consideration in QEMU 10.2.
- Steve
* RE: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-07-18 17:38 ` Steven Sistare
@ 2025-07-21 11:18 ` Duan, Zhenzhong
2025-07-28 12:38 ` Cédric Le Goater
2025-08-04 14:11 ` Steven Sistare
0 siblings, 2 replies; 10+ messages in thread
From: Duan, Zhenzhong @ 2025-07-21 11:18 UTC (permalink / raw)
To: Steven Sistare, qemu-devel@nongnu.org; +Cc: Cedric Le Goater, Alex Williamson
>-----Original Message-----
>From: Steven Sistare <steven.sistare@oracle.com>
>Subject: Re: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
>
>On 7/16/2025 10:43 PM, Duan, Zhenzhong wrote:
>> With this patch, producer is detached from the kvm consumer, do we still
>> need to close kvm fd on source QEMU?
>
>Good observation! I tested with this patch, without the kvm close patch,
>and indeed it works.
Thanks for confirming.
>
>However, I would like to keep the kvm close patch, because it has another
>benefit: it makes cpr-exec mode faster. In that mode, old QEMU directly
>exec's new QEMU, and it is faster because the kernel exec code does not have
>to traverse and examine kvm page mappings. That cost is linear with address
>space size. I use cpr-exec mode at Oracle, and I plan to submit it for
>consideration in QEMU 10.2.
Sure, but I'd like to get clear on the reason.
What kvm page do you mean, guest memory pages?
When exec, old kvm_fd is closed via close-on-exec implicitly, I don't understand
why faster if kvm_fd is closed explicitly.
Zhenzhong
* Re: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-07-21 11:18 ` Duan, Zhenzhong
@ 2025-07-28 12:38 ` Cédric Le Goater
2025-08-04 13:59 ` Steven Sistare
2025-08-04 14:11 ` Steven Sistare
1 sibling, 1 reply; 10+ messages in thread
From: Cédric Le Goater @ 2025-07-28 12:38 UTC (permalink / raw)
To: Duan, Zhenzhong, Steven Sistare, qemu-devel@nongnu.org; +Cc: Alex Williamson
Steve,
On 7/21/25 13:18, Duan, Zhenzhong wrote:
> Sure, but I'd like to get clear on the reason.
> What kvm page do you mean, guest memory pages?
> When exec, old kvm_fd is closed via close-on-exec implicitly, I don't understand
> why faster if kvm_fd is closed explicitly.
>
I would like to send a vfio PR before -rc1 (tomorrow). Could you please
respond to Zhenzhong's comments when you are back (today I think) ?
Thanks,
C.
* Re: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-07-28 12:38 ` Cédric Le Goater
@ 2025-08-04 13:59 ` Steven Sistare
0 siblings, 0 replies; 10+ messages in thread
From: Steven Sistare @ 2025-08-04 13:59 UTC (permalink / raw)
To: Cédric Le Goater, Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: Alex Williamson
On 7/28/2025 8:38 AM, Cédric Le Goater wrote:
> Steve,
>
> I would like to send a vfio PR before -rc1 (tomorrow). Could you please
> respond to Zhenzhong's comments when you are back (today I think) ?
I am back today. I will respond shortly.
- Steve
* Re: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-07-21 11:18 ` Duan, Zhenzhong
2025-07-28 12:38 ` Cédric Le Goater
@ 2025-08-04 14:11 ` Steven Sistare
2025-08-05 3:41 ` Duan, Zhenzhong
1 sibling, 1 reply; 10+ messages in thread
From: Steven Sistare @ 2025-08-04 14:11 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org; +Cc: Cedric Le Goater, Alex Williamson
On 7/21/2025 7:18 AM, Duan, Zhenzhong wrote:
>
>> However, I would like to keep the kvm close patch, because it has another
>> benefit: it makes cpr-exec mode faster. In that mode, old QEMU directly
>> exec's new QEMU, and it is faster because the kernel exec code does not
>> have to traverse and examine kvm page mappings. That cost is linear with
>> address space size. I use cpr-exec mode at Oracle, and I plan to submit it
>> for consideration in QEMU 10.2.
>
> Sure, but I'd like to get clear on the reason.
> What kvm page do you mean, guest memory pages?
KVM has a slots data structure that it uses to track guest memory pages.
During exec, slots is cleared page-by-page in the path
copy_page_range -> mmu_notifier_invalidate_range_start -> kvm_mmu_notifier_invalidate_range_start
> When exec, old kvm_fd is closed via close-on-exec implicitly, I don't understand
> why faster if kvm_fd is closed explicitly.
The kernel closes close-on-exec fd's after copy_page_range, after the mmu notifier
has done all the per-page work.
- Steve
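For illustration, a rough sketch of the ordering described above, with
placeholder names (the kvm close patch itself is not shown in this thread):

    #include <unistd.h>

    /* Sketch only: vm_fd and argv are placeholders.  Closing the KVM VM fd
     * before exec tears down KVM's memslot tracking up front, so the exec
     * path does not first walk the old address space doing the per-page
     * MMU-notifier work; relying on close-on-exec alone defers the close
     * until after that work has been done. */
    static void cpr_exec(int vm_fd, char **argv)
    {
        close(vm_fd);          /* what the "kvm close" patch arranges */
        execv(argv[0], argv);  /* cpr-exec: old QEMU execs new QEMU */
    }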
* RE: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
2025-08-04 14:11 ` Steven Sistare
@ 2025-08-05 3:41 ` Duan, Zhenzhong
0 siblings, 0 replies; 10+ messages in thread
From: Duan, Zhenzhong @ 2025-08-05 3:41 UTC (permalink / raw)
To: Steven Sistare, qemu-devel@nongnu.org; +Cc: Cedric Le Goater, Alex Williamson
>-----Original Message-----
>From: Steven Sistare <steven.sistare@oracle.com>
>Subject: Re: [PATCH V2 2/2] vfio/pci: preserve pending interrupts
>
>On 7/21/2025 7:18 AM, Duan, Zhenzhong wrote:
>> Sure, but I'd like to get clear on the reason.
>> What kvm page do you mean, guest memory pages?
>
>KVM has a slots data structure that it uses to track guest memory pages.
>During exec, slots is cleared page-by-page in the path
> copy_page_range -> mmu_notifier_invalidate_range_start ->
>kvm_mmu_notifier_invalidate_range_start
Understood, you want to avoid zapping EPT by closing kvm fd.
>
>> When exec, old kvm_fd is closed via close-on-exec implicitly, I don't
>> understand why faster if kvm_fd is closed explicitly.
>
>The kernel closes close-on-exec fd's after copy_page_range, after the mmu
>notifier has done all the per-page work.
Clear, for the whole series:
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Thanks
Zhenzhong