qemu-devel.nongnu.org archive mirror
* [PATCH v2 0/2] VFIO multifd device state transfer patches for QEMU 10.1
@ 2025-07-15 14:37 Maciej S. Szmigiero
  2025-07-15 14:37 ` [PATCH v2 1/2] vfio/migration: Add x-migration-load-config-after-iter VFIO property Maciej S. Szmigiero
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Maciej S. Szmigiero @ 2025-07-15 14:37 UTC (permalink / raw)
  To: Alex Williamson, Cédric Le Goater, Peter Xu, Fabiano Rosas
  Cc: Peter Maydell, Eric Auger, Avihai Horon, qemu-arm, qemu-devel

From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>

This is an updated v2 of the series located at [1], containing the leftover
patches from the VFIO multifd migration patch set that was merged in
QEMU 10.0.

Changes from v1:
* Drop the in-flight VFIO device state buffer *count* limit, leave
  the *size* limit only.

* Add a small target-dependent helper to hw/vfio/helpers.c to avoid doing
  strcmp(target_name(), "aarch64") in hw/vfio/migration-multifd.c.

* Mention that VFIO device config with ARM interlocking enabled is
  loaded as part of the non-iterables as suggested by Avihai.

* Collect Fabiano and Avihai Reviewed-by tags.

[1]: https://lore.kernel.org/qemu-devel/cover.1750787338.git.maciej.szmigiero@oracle.com/

Maciej S. Szmigiero (2):
  vfio/migration: Add x-migration-load-config-after-iter VFIO property
  vfio/migration: Max in-flight VFIO device state buffers size limit

 docs/devel/migration/vfio.rst     |  19 ++++++
 hw/core/machine.c                 |   1 +
 hw/vfio/helpers.c                 |  17 +++++
 hw/vfio/migration-multifd.c       | 100 +++++++++++++++++++++++++++++-
 hw/vfio/migration-multifd.h       |   3 +
 hw/vfio/migration.c               |  10 ++-
 hw/vfio/pci.c                     |  19 ++++++
 hw/vfio/vfio-helpers.h            |   2 +
 hw/vfio/vfio-migration-internal.h |   1 +
 include/hw/vfio/vfio-device.h     |   2 +
 10 files changed, 171 insertions(+), 3 deletions(-)




* [PATCH v2 1/2] vfio/migration: Add x-migration-load-config-after-iter VFIO property
  2025-07-15 14:37 [PATCH v2 0/2] VFIO multifd device state transfer patches for QEMU 10.1 Maciej S. Szmigiero
@ 2025-07-15 14:37 ` Maciej S. Szmigiero
  2025-07-15 14:37 ` [PATCH v2 2/2] vfio/migration: Max in-flight VFIO device state buffers size limit Maciej S. Szmigiero
  2025-07-15 16:18 ` [PATCH v2 0/2] VFIO multifd device state transfer patches for QEMU 10.1 Cédric Le Goater
  2 siblings, 0 replies; 4+ messages in thread
From: Maciej S. Szmigiero @ 2025-07-15 14:37 UTC (permalink / raw)
  To: Alex Williamson, Cédric Le Goater, Peter Xu, Fabiano Rosas
  Cc: Peter Maydell, Eric Auger, Avihai Horon, qemu-arm, qemu-devel

From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>

This property allows configuring whether to start loading the device config
only after all iterables have been loaded, that is, during the non-iterables
loading phase. Such interlocking is required on ARM64 due to this platform's
VFIO dependency on the interrupt controller being loaded first.

The property defaults to AUTO, which means ON for ARM and OFF for other
platforms.
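The ON/OFF/AUTO decision described above can be sketched standalone as follows. This is a simplified mock, not the QEMU code itself: `OnOffAuto` is redeclared locally, and `load_config_after_iter()` mirrors the `vfio_load_config_after_iter()` helper added by the patch below, with the architecture check reduced to a boolean parameter.

```c
#include <assert.h>
#include <stdbool.h>

/* Local stand-in for QEMU's OnOffAuto property type. */
typedef enum {
    ON_OFF_AUTO_AUTO,
    ON_OFF_AUTO_ON,
    ON_OFF_AUTO_OFF,
} OnOffAuto;

/*
 * An explicit ON/OFF setting wins; AUTO defers to what the target
 * architecture wants (true on ARM64, false elsewhere).
 */
static bool load_config_after_iter(OnOffAuto prop, bool arch_wants_it)
{
    if (prop == ON_OFF_AUTO_ON) {
        return true;
    }
    if (prop == ON_OFF_AUTO_OFF) {
        return false;
    }
    assert(prop == ON_OFF_AUTO_AUTO);
    return arch_wants_it;
}
```

With the AUTO default, an ARM64 target therefore resolves to interlocked config loading while an x86 target does not, without the user setting anything.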

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
---
 docs/devel/migration/vfio.rst     |  6 +++
 hw/core/machine.c                 |  1 +
 hw/vfio/helpers.c                 | 17 +++++++
 hw/vfio/migration-multifd.c       | 79 +++++++++++++++++++++++++++++++
 hw/vfio/migration-multifd.h       |  3 ++
 hw/vfio/migration.c               | 10 +++-
 hw/vfio/pci.c                     | 10 ++++
 hw/vfio/vfio-helpers.h            |  2 +
 hw/vfio/vfio-migration-internal.h |  1 +
 include/hw/vfio/vfio-device.h     |  1 +
 10 files changed, 129 insertions(+), 1 deletion(-)

diff --git a/docs/devel/migration/vfio.rst b/docs/devel/migration/vfio.rst
index 2d8e5ca9dd0e..dae3a988307f 100644
--- a/docs/devel/migration/vfio.rst
+++ b/docs/devel/migration/vfio.rst
@@ -247,3 +247,9 @@ The multifd VFIO device state transfer is controlled by
 "x-migration-multifd-transfer" VFIO device property. This property defaults to
 AUTO, which means that VFIO device state transfer via multifd channels is
 attempted in configurations that otherwise support it.
+
+Some host platforms (like ARM64) require that VFIO device config is loaded only
+after all iterables were loaded, during non-iterables loading phase.
+Such interlocking is controlled by "x-migration-load-config-after-iter" VFIO
+device property, which in its default setting (AUTO) does so only on platforms
+that actually require it.
diff --git a/hw/core/machine.c b/hw/core/machine.c
index e869821b2246..16640b700f2e 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -39,6 +39,7 @@
 
 GlobalProperty hw_compat_10_0[] = {
     { "scsi-hd", "dpofua", "off" },
+    { "vfio-pci", "x-migration-load-config-after-iter", "off" },
 };
 const size_t hw_compat_10_0_len = G_N_ELEMENTS(hw_compat_10_0);
 
diff --git a/hw/vfio/helpers.c b/hw/vfio/helpers.c
index 9a5f62154554..23d13e5db5f2 100644
--- a/hw/vfio/helpers.c
+++ b/hw/vfio/helpers.c
@@ -209,3 +209,20 @@ retry:
 
     return info;
 }
+
+bool vfio_arch_wants_loading_config_after_iter(void)
+{
+    /*
+     * Starting the config load only after all iterables were loaded (during
+     * non-iterables loading phase) is required for ARM64 due to this platform
+     * VFIO dependency on interrupt controller being loaded first.
+     *
+     * See commit d329f5032e17 ("vfio: Move the saving of the config space to
+     * the right place in VFIO migration").
+     */
+#if defined(TARGET_ARM)
+    return true;
+#else
+    return false;
+#endif
+}
diff --git a/hw/vfio/migration-multifd.c b/hw/vfio/migration-multifd.c
index 55635486c8f1..e539befaa925 100644
--- a/hw/vfio/migration-multifd.c
+++ b/hw/vfio/migration-multifd.c
@@ -23,6 +23,7 @@
 #include "migration-multifd.h"
 #include "vfio-migration-internal.h"
 #include "trace.h"
+#include "vfio-helpers.h"
 
 #define VFIO_DEVICE_STATE_CONFIG_STATE (1)
 
@@ -35,6 +36,18 @@ typedef struct VFIODeviceStatePacket {
     uint8_t data[0];
 } QEMU_PACKED VFIODeviceStatePacket;
 
+bool vfio_load_config_after_iter(VFIODevice *vbasedev)
+{
+    if (vbasedev->migration_load_config_after_iter == ON_OFF_AUTO_ON) {
+        return true;
+    } else if (vbasedev->migration_load_config_after_iter == ON_OFF_AUTO_OFF) {
+        return false;
+    }
+
+    assert(vbasedev->migration_load_config_after_iter == ON_OFF_AUTO_AUTO);
+    return vfio_arch_wants_loading_config_after_iter();
+}
+
 /* type safety */
 typedef struct VFIOStateBuffers {
     GArray *array;
@@ -50,6 +63,9 @@ typedef struct VFIOMultifd {
     bool load_bufs_thread_running;
     bool load_bufs_thread_want_exit;
 
+    bool load_bufs_iter_done;
+    QemuCond load_bufs_iter_done_cond;
+
     VFIOStateBuffers load_bufs;
     QemuCond load_bufs_buffer_ready_cond;
     QemuCond load_bufs_thread_finished_cond;
@@ -394,6 +410,22 @@ static bool vfio_load_bufs_thread(void *opaque, bool *should_quit, Error **errp)
         multifd->load_buf_idx++;
     }
 
+    if (vfio_load_config_after_iter(vbasedev)) {
+        while (!multifd->load_bufs_iter_done) {
+            qemu_cond_wait(&multifd->load_bufs_iter_done_cond,
+                           &multifd->load_bufs_mutex);
+
+            /*
+             * Need to re-check cancellation immediately after wait in case
+             * cond was signalled by vfio_load_cleanup_load_bufs_thread().
+             */
+            if (vfio_load_bufs_thread_want_exit(multifd, should_quit)) {
+                error_setg(errp, "operation cancelled");
+                goto thread_exit;
+            }
+        }
+    }
+
     if (!vfio_load_bufs_thread_load_config(vbasedev, errp)) {
         goto thread_exit;
     }
@@ -413,6 +445,48 @@ thread_exit:
     return ret;
 }
 
+int vfio_load_state_config_load_ready(VFIODevice *vbasedev)
+{
+    VFIOMigration *migration = vbasedev->migration;
+    VFIOMultifd *multifd = migration->multifd;
+    int ret = 0;
+
+    if (!vfio_multifd_transfer_enabled(vbasedev)) {
+        error_report("%s: got DEV_CONFIG_LOAD_READY outside multifd transfer",
+                     vbasedev->name);
+        return -EINVAL;
+    }
+
+    if (!vfio_load_config_after_iter(vbasedev)) {
+        error_report("%s: got DEV_CONFIG_LOAD_READY but was disabled",
+                     vbasedev->name);
+        return -EINVAL;
+    }
+
+    assert(multifd);
+
+    /* The lock order is load_bufs_mutex -> BQL so unlock BQL here first */
+    bql_unlock();
+    WITH_QEMU_LOCK_GUARD(&multifd->load_bufs_mutex) {
+        if (multifd->load_bufs_iter_done) {
+            /* Can't print error here as we're outside BQL */
+            ret = -EINVAL;
+            break;
+        }
+
+        multifd->load_bufs_iter_done = true;
+        qemu_cond_signal(&multifd->load_bufs_iter_done_cond);
+    }
+    bql_lock();
+
+    if (ret) {
+        error_report("%s: duplicate DEV_CONFIG_LOAD_READY",
+                     vbasedev->name);
+    }
+
+    return ret;
+}
+
 static VFIOMultifd *vfio_multifd_new(void)
 {
     VFIOMultifd *multifd = g_new(VFIOMultifd, 1);
@@ -425,6 +499,9 @@ static VFIOMultifd *vfio_multifd_new(void)
     multifd->load_buf_idx_last = UINT32_MAX;
     qemu_cond_init(&multifd->load_bufs_buffer_ready_cond);
 
+    multifd->load_bufs_iter_done = false;
+    qemu_cond_init(&multifd->load_bufs_iter_done_cond);
+
     multifd->load_bufs_thread_running = false;
     multifd->load_bufs_thread_want_exit = false;
     qemu_cond_init(&multifd->load_bufs_thread_finished_cond);
@@ -448,6 +525,7 @@ static void vfio_load_cleanup_load_bufs_thread(VFIOMultifd *multifd)
             multifd->load_bufs_thread_want_exit = true;
 
             qemu_cond_signal(&multifd->load_bufs_buffer_ready_cond);
+            qemu_cond_signal(&multifd->load_bufs_iter_done_cond);
             qemu_cond_wait(&multifd->load_bufs_thread_finished_cond,
                            &multifd->load_bufs_mutex);
         }
@@ -460,6 +538,7 @@ static void vfio_multifd_free(VFIOMultifd *multifd)
     vfio_load_cleanup_load_bufs_thread(multifd);
 
     qemu_cond_destroy(&multifd->load_bufs_thread_finished_cond);
+    qemu_cond_destroy(&multifd->load_bufs_iter_done_cond);
     vfio_state_buffers_destroy(&multifd->load_bufs);
     qemu_cond_destroy(&multifd->load_bufs_buffer_ready_cond);
     qemu_mutex_destroy(&multifd->load_bufs_mutex);
diff --git a/hw/vfio/migration-multifd.h b/hw/vfio/migration-multifd.h
index ebf22a7997ac..82d2d3a1fd3e 100644
--- a/hw/vfio/migration-multifd.h
+++ b/hw/vfio/migration-multifd.h
@@ -20,9 +20,12 @@ void vfio_multifd_cleanup(VFIODevice *vbasedev);
 bool vfio_multifd_transfer_supported(void);
 bool vfio_multifd_transfer_enabled(VFIODevice *vbasedev);
 
+bool vfio_load_config_after_iter(VFIODevice *vbasedev);
 bool vfio_multifd_load_state_buffer(void *opaque, char *data, size_t data_size,
                                     Error **errp);
 
+int vfio_load_state_config_load_ready(VFIODevice *vbasedev);
+
 void vfio_multifd_emit_dummy_eos(VFIODevice *vbasedev, QEMUFile *f);
 
 bool
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index c329578eec31..4c06e3db936a 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -675,7 +675,11 @@ static void vfio_save_state(QEMUFile *f, void *opaque)
     int ret;
 
     if (vfio_multifd_transfer_enabled(vbasedev)) {
-        vfio_multifd_emit_dummy_eos(vbasedev, f);
+        if (vfio_load_config_after_iter(vbasedev)) {
+            qemu_put_be64(f, VFIO_MIG_FLAG_DEV_CONFIG_LOAD_READY);
+        } else {
+            vfio_multifd_emit_dummy_eos(vbasedev, f);
+        }
         return;
     }
 
@@ -784,6 +788,10 @@ static int vfio_load_state(QEMUFile *f, void *opaque, int version_id)
 
             return ret;
         }
+        case VFIO_MIG_FLAG_DEV_CONFIG_LOAD_READY:
+        {
+            return vfio_load_state_config_load_ready(vbasedev);
+        }
         default:
             error_report("%s: Unknown tag 0x%"PRIx64, vbasedev->name, data);
             return -EINVAL;
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 1093b28df7c3..7010f0af35b6 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -3623,6 +3623,9 @@ static const Property vfio_pci_dev_properties[] = {
                 vbasedev.migration_multifd_transfer,
                 vfio_pci_migration_multifd_transfer_prop, OnOffAuto,
                 .set_default = true, .defval.i = ON_OFF_AUTO_AUTO),
+    DEFINE_PROP_ON_OFF_AUTO("x-migration-load-config-after-iter", VFIOPCIDevice,
+                            vbasedev.migration_load_config_after_iter,
+                            ON_OFF_AUTO_AUTO),
     DEFINE_PROP_BOOL("migration-events", VFIOPCIDevice,
                      vbasedev.migration_events, false),
     DEFINE_PROP_BOOL("x-no-mmap", VFIOPCIDevice, vbasedev.no_mmap, false),
@@ -3797,6 +3800,13 @@ static void vfio_pci_dev_class_init(ObjectClass *klass, const void *data)
                                           "x-migration-multifd-transfer",
                                           "Transfer this device state via "
                                           "multifd channels when live migrating it");
+    object_class_property_set_description(klass, /* 10.1 */
+                                          "x-migration-load-config-after-iter",
+                                          "Start the config load only after "
+                                          "all iterables were loaded (during "
+                                          "non-iterables loading phase) when "
+                                          "doing live migration of device state "
+                                          "via multifd channels");
 }
 
 static const TypeInfo vfio_pci_dev_info = {
diff --git a/hw/vfio/vfio-helpers.h b/hw/vfio/vfio-helpers.h
index 54a327ffbc04..ce317580800a 100644
--- a/hw/vfio/vfio-helpers.h
+++ b/hw/vfio/vfio-helpers.h
@@ -32,4 +32,6 @@ struct vfio_device_info *vfio_get_device_info(int fd);
 int vfio_kvm_device_add_fd(int fd, Error **errp);
 int vfio_kvm_device_del_fd(int fd, Error **errp);
 
+bool vfio_arch_wants_loading_config_after_iter(void);
+
 #endif /* HW_VFIO_VFIO_HELPERS_H */
diff --git a/hw/vfio/vfio-migration-internal.h b/hw/vfio/vfio-migration-internal.h
index a8b456b239df..54141e27e6b2 100644
--- a/hw/vfio/vfio-migration-internal.h
+++ b/hw/vfio/vfio-migration-internal.h
@@ -32,6 +32,7 @@
 #define VFIO_MIG_FLAG_DEV_SETUP_STATE   (0xffffffffef100003ULL)
 #define VFIO_MIG_FLAG_DEV_DATA_STATE    (0xffffffffef100004ULL)
 #define VFIO_MIG_FLAG_DEV_INIT_DATA_SENT (0xffffffffef100005ULL)
+#define VFIO_MIG_FLAG_DEV_CONFIG_LOAD_READY (0xffffffffef100006ULL)
 
 typedef struct VFIODevice VFIODevice;
 typedef struct VFIOMultifd VFIOMultifd;
diff --git a/include/hw/vfio/vfio-device.h b/include/hw/vfio/vfio-device.h
index 1901a35aa902..dac3fdce1539 100644
--- a/include/hw/vfio/vfio-device.h
+++ b/include/hw/vfio/vfio-device.h
@@ -67,6 +67,7 @@ typedef struct VFIODevice {
     bool ram_block_discard_allowed;
     OnOffAuto enable_migration;
     OnOffAuto migration_multifd_transfer;
+    OnOffAuto migration_load_config_after_iter;
     bool migration_events;
     bool use_region_fds;
     VFIODeviceOps *ops;



* [PATCH v2 2/2] vfio/migration: Max in-flight VFIO device state buffers size limit
  2025-07-15 14:37 [PATCH v2 0/2] VFIO multifd device state transfer patches for QEMU 10.1 Maciej S. Szmigiero
  2025-07-15 14:37 ` [PATCH v2 1/2] vfio/migration: Add x-migration-load-config-after-iter VFIO property Maciej S. Szmigiero
@ 2025-07-15 14:37 ` Maciej S. Szmigiero
  2025-07-15 16:18 ` [PATCH v2 0/2] VFIO multifd device state transfer patches for QEMU 10.1 Cédric Le Goater
  2 siblings, 0 replies; 4+ messages in thread
From: Maciej S. Szmigiero @ 2025-07-15 14:37 UTC (permalink / raw)
  To: Alex Williamson, Cédric Le Goater, Peter Xu, Fabiano Rosas
  Cc: Peter Maydell, Eric Auger, Avihai Horon, qemu-arm, qemu-devel

From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>

Allow capping the maximum total size of in-flight VFIO device state buffers
queued at the destination; otherwise a malicious QEMU source could
theoretically cause the target QEMU to allocate unlimited amounts of memory
for buffers-in-flight.

Since this is not expected to be a realistic threat in most VFIO live
migration use cases, and the right value depends on the particular setup,
this limit is disabled by default by setting it to UINT64_MAX.
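The accounting that enforces this cap can be sketched in isolation as follows. This is a simplified stand-in for the logic the patch below adds to hw/vfio/migration-multifd.c; `QueueAccounting` and both function names are invented for this sketch. As in the patch, the counter is charged before the check, and a failed check aborts the whole load, so the stale charge is harmless.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for the per-device queued-buffers accounting. */
typedef struct {
    uint64_t max_queued_size; /* UINT64_MAX by default: limit disabled */
    size_t queued_size;       /* bytes currently queued, not yet loaded */
} QueueAccounting;

/* On receiving a buffer: charge its size, reject if over the cap. */
static bool queue_buffer(QueueAccounting *q, size_t data_size)
{
    q->queued_size += data_size;
    if (q->queued_size > q->max_queued_size) {
        return false; /* would exceed the configured maximum */
    }
    return true;
}

/* After writing wr_ret bytes into the device: release that much charge. */
static void buffer_written(QueueAccounting *q, size_t wr_ret)
{
    assert(q->queued_size >= wr_ret);
    q->queued_size -= wr_ret;
}
```

Charging as buffers arrive and releasing as they are written into the device keeps the counter equal to the memory actually held for out-of-order buffers at any moment.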

Reviewed-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
---
 docs/devel/migration/vfio.rst | 13 +++++++++++++
 hw/vfio/migration-multifd.c   | 21 +++++++++++++++++++--
 hw/vfio/pci.c                 |  9 +++++++++
 include/hw/vfio/vfio-device.h |  1 +
 4 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/docs/devel/migration/vfio.rst b/docs/devel/migration/vfio.rst
index dae3a988307f..0790e5031d8f 100644
--- a/docs/devel/migration/vfio.rst
+++ b/docs/devel/migration/vfio.rst
@@ -248,6 +248,19 @@ The multifd VFIO device state transfer is controlled by
 AUTO, which means that VFIO device state transfer via multifd channels is
 attempted in configurations that otherwise support it.
 
+Since the target QEMU needs to load device state buffers in-order it needs to
+queue incoming buffers until they can be loaded into the device.
+This means that a malicious QEMU source could theoretically cause the target
+QEMU to allocate unlimited amounts of memory for such buffers-in-flight.
+
+The "x-migration-max-queued-buffers-size" property allows capping the total size
+of these VFIO device state buffers queued at the destination.
+
+Because a malicious QEMU source causing OOM on the target is not expected to be
+a realistic threat in most of VFIO live migration use cases and the right value
+depends on the particular setup by default this queued buffers size limit is
+disabled by setting it to UINT64_MAX.
+
 Some host platforms (like ARM64) require that VFIO device config is loaded only
 after all iterables were loaded, during non-iterables loading phase.
 Such interlocking is controlled by "x-migration-load-config-after-iter" VFIO
diff --git a/hw/vfio/migration-multifd.c b/hw/vfio/migration-multifd.c
index e539befaa925..d522671b8d62 100644
--- a/hw/vfio/migration-multifd.c
+++ b/hw/vfio/migration-multifd.c
@@ -72,6 +72,7 @@ typedef struct VFIOMultifd {
     QemuMutex load_bufs_mutex; /* Lock order: this lock -> BQL */
     uint32_t load_buf_idx;
     uint32_t load_buf_idx_last;
+    size_t load_buf_queued_pending_buffers_size;
 } VFIOMultifd;
 
 static void vfio_state_buffer_clear(gpointer data)
@@ -128,6 +129,7 @@ static bool vfio_load_state_buffer_insert(VFIODevice *vbasedev,
     VFIOMigration *migration = vbasedev->migration;
     VFIOMultifd *multifd = migration->multifd;
     VFIOStateBuffer *lb;
+    size_t data_size = packet_total_size - sizeof(*packet);
 
     vfio_state_buffers_assert_init(&multifd->load_bufs);
     if (packet->idx >= vfio_state_buffers_size_get(&multifd->load_bufs)) {
@@ -143,8 +145,19 @@ static bool vfio_load_state_buffer_insert(VFIODevice *vbasedev,
 
     assert(packet->idx >= multifd->load_buf_idx);
 
-    lb->data = g_memdup2(&packet->data, packet_total_size - sizeof(*packet));
-    lb->len = packet_total_size - sizeof(*packet);
+    multifd->load_buf_queued_pending_buffers_size += data_size;
+    if (multifd->load_buf_queued_pending_buffers_size >
+        vbasedev->migration_max_queued_buffers_size) {
+        error_setg(errp,
+                   "%s: queuing state buffer %" PRIu32
+                   " would exceed the size max of %" PRIu64,
+                   vbasedev->name, packet->idx,
+                   vbasedev->migration_max_queued_buffers_size);
+        return false;
+    }
+
+    lb->data = g_memdup2(&packet->data, data_size);
+    lb->len = data_size;
     lb->is_present = true;
 
     return true;
@@ -328,6 +341,9 @@ static bool vfio_load_state_buffer_write(VFIODevice *vbasedev,
         assert(wr_ret <= buf_len);
         buf_len -= wr_ret;
         buf_cur += wr_ret;
+
+        assert(multifd->load_buf_queued_pending_buffers_size >= wr_ret);
+        multifd->load_buf_queued_pending_buffers_size -= wr_ret;
     }
 
     trace_vfio_load_state_device_buffer_load_end(vbasedev->name,
@@ -497,6 +513,7 @@ static VFIOMultifd *vfio_multifd_new(void)
 
     multifd->load_buf_idx = 0;
     multifd->load_buf_idx_last = UINT32_MAX;
+    multifd->load_buf_queued_pending_buffers_size = 0;
     qemu_cond_init(&multifd->load_bufs_buffer_ready_cond);
 
     multifd->load_bufs_iter_done = false;
diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 7010f0af35b6..4a360bd17ed6 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -3626,6 +3626,8 @@ static const Property vfio_pci_dev_properties[] = {
     DEFINE_PROP_ON_OFF_AUTO("x-migration-load-config-after-iter", VFIOPCIDevice,
                             vbasedev.migration_load_config_after_iter,
                             ON_OFF_AUTO_AUTO),
+    DEFINE_PROP_SIZE("x-migration-max-queued-buffers-size", VFIOPCIDevice,
+                     vbasedev.migration_max_queued_buffers_size, UINT64_MAX),
     DEFINE_PROP_BOOL("migration-events", VFIOPCIDevice,
                      vbasedev.migration_events, false),
     DEFINE_PROP_BOOL("x-no-mmap", VFIOPCIDevice, vbasedev.no_mmap, false),
@@ -3807,6 +3809,13 @@ static void vfio_pci_dev_class_init(ObjectClass *klass, const void *data)
                                           "non-iterables loading phase) when "
                                           "doing live migration of device state "
                                           "via multifd channels");
+    object_class_property_set_description(klass, /* 10.1 */
+                                          "x-migration-max-queued-buffers-size",
+                                          "Maximum size of in-flight VFIO "
+                                          "device state buffers queued at the "
+                                          "destination when doing live "
+                                          "migration of device state via "
+                                          "multifd channels");
 }
 
 static const TypeInfo vfio_pci_dev_info = {
diff --git a/include/hw/vfio/vfio-device.h b/include/hw/vfio/vfio-device.h
index dac3fdce1539..6e4d5ccdac6e 100644
--- a/include/hw/vfio/vfio-device.h
+++ b/include/hw/vfio/vfio-device.h
@@ -68,6 +68,7 @@ typedef struct VFIODevice {
     OnOffAuto enable_migration;
     OnOffAuto migration_multifd_transfer;
     OnOffAuto migration_load_config_after_iter;
+    uint64_t migration_max_queued_buffers_size;
     bool migration_events;
     bool use_region_fds;
     VFIODeviceOps *ops;



* Re: [PATCH v2 0/2] VFIO multifd device state transfer patches for QEMU 10.1
  2025-07-15 14:37 [PATCH v2 0/2] VFIO multifd device state transfer patches for QEMU 10.1 Maciej S. Szmigiero
  2025-07-15 14:37 ` [PATCH v2 1/2] vfio/migration: Add x-migration-load-config-after-iter VFIO property Maciej S. Szmigiero
  2025-07-15 14:37 ` [PATCH v2 2/2] vfio/migration: Max in-flight VFIO device state buffers size limit Maciej S. Szmigiero
@ 2025-07-15 16:18 ` Cédric Le Goater
  2 siblings, 0 replies; 4+ messages in thread
From: Cédric Le Goater @ 2025-07-15 16:18 UTC (permalink / raw)
  To: Maciej S. Szmigiero, Alex Williamson, Peter Xu, Fabiano Rosas
  Cc: Peter Maydell, Eric Auger, Avihai Horon, qemu-arm, qemu-devel

On 7/15/25 16:37, Maciej S. Szmigiero wrote:
> From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>
> 
> This is an updated v2 patch series of the v1 series located at [1] of the
> leftover patches of VFIO multifd migration patch set that was merged in
> QEMU 10.0.
> 
> Changes from v1:
> * Drop the in-flight VFIO device state buffer *count* limit, leave
>    the *size* limit only.
> 
> * Add a small target-dependent helper to hw/vfio/helpers.c to avoid doing
>    strcmp(target_name(), "aarch64") in hw/vfio/migration-multifd.c.
> 
> * Mention that VFIO device config with ARM interlocking enabled is
>    loaded as part of the non-iterables as suggested by Avihai.
> 
> * Collect Fabiano and Avihai Reviewed-by tags.
> 
> [1]: https://lore.kernel.org/qemu-devel/cover.1750787338.git.maciej.szmigiero@oracle.com/
> 
> Maciej S. Szmigiero (2):
>    vfio/migration: Add x-migration-load-config-after-iter VFIO property
>    vfio/migration: Max in-flight VFIO device state buffers size limit
> 
>   docs/devel/migration/vfio.rst     |  19 ++++++
>   hw/core/machine.c                 |   1 +
>   hw/vfio/helpers.c                 |  17 +++++
>   hw/vfio/migration-multifd.c       | 100 +++++++++++++++++++++++++++++-
>   hw/vfio/migration-multifd.h       |   3 +
>   hw/vfio/migration.c               |  10 ++-
>   hw/vfio/pci.c                     |  19 ++++++
>   hw/vfio/vfio-helpers.h            |   2 +
>   hw/vfio/vfio-migration-internal.h |   1 +
>   include/hw/vfio/vfio-device.h     |   2 +
>   10 files changed, 171 insertions(+), 3 deletions(-)
> 


Applied to vfio-next.

Thanks,

C.
