* [PATCH v4 01/10] vfio/iommufd: Add framework code to support getting dirty bitmap before unmap
  2025-10-29  9:53 [PATCH v4 00/10] vfio: relax the vIOMMU check Zhenzhong Duan
@ 2025-10-29  9:53 ` Zhenzhong Duan
  2025-10-29  9:53 ` [PATCH v4 02/10] vfio/iommufd: Query dirty bitmap before DMA unmap Zhenzhong Duan
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 17+ messages in thread
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
Currently we support both device and IOMMU dirty tracking; device dirty
tracking is preferred.
Add the framework code in iommufd_cdev_unmap() to choose either device or
IOMMU dirty tracking, just like vfio_legacy_dma_unmap_one().
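The choice boils down to a small predicate. As a simplified sketch (hypothetical helper name, mirroring the control flow in the diff, not the actual QEMU functions):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified decision sketch: when dirty tracking is active and the devices
 * themselves cannot report dirty pages, fall back to the IOMMU-side dirty
 * bitmap; otherwise the per-device dirty tracking path is preferred.
 */
static bool use_iommu_dirty_bitmap(bool tracking_started,
                                   bool devices_support_dirty,
                                   bool iommu_supports_dirty)
{
    return tracking_started && !devices_support_dirty && iommu_supports_dirty;
}
```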
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovannio Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
 hw/vfio/iommufd.c | 34 +++++++++++++++++++++++++++++++---
 1 file changed, 31 insertions(+), 3 deletions(-)
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index bb5775aa71..806ca6ef14 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -61,14 +61,42 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
                               IOMMUTLBEntry *iotlb, bool unmap_all)
 {
     const VFIOIOMMUFDContainer *container = VFIO_IOMMU_IOMMUFD(bcontainer);
+    IOMMUFDBackend *be = container->be;
+    uint32_t ioas_id = container->ioas_id;
+    bool need_dirty_sync = false;
+    Error *local_err = NULL;
+    int ret;
 
     if (unmap_all) {
         size = UINT64_MAX;
     }
 
-    /* TODO: Handle dma_unmap_bitmap with iotlb args (migration) */
-    return iommufd_backend_unmap_dma(container->be,
-                                     container->ioas_id, iova, size);
+    if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
+        if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
+            bcontainer->dirty_pages_supported) {
+            /* TODO: query dirty bitmap before DMA unmap */
+            return iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+        }
+
+        need_dirty_sync = true;
+    }
+
+    ret = iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+    if (ret) {
+        return ret;
+    }
+
+    if (need_dirty_sync) {
+        ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+                                                iotlb->translated_addr,
+                                                &local_err);
+        if (ret) {
+            error_report_err(local_err);
+            return ret;
+        }
+    }
+
+    return 0;
 }
 
 static bool iommufd_cdev_kvm_device_add(VFIODevice *vbasedev, Error **errp)
-- 
2.47.1
* [PATCH v4 02/10] vfio/iommufd: Query dirty bitmap before DMA unmap
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
When an existing mapping is unmapped, there may already be dirty bits
which need to be recorded before the unmap.
If querying the dirty bitmap fails, we still need to do the unmap, or else
a stale mapping remains, which is risky to the guest.
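The resulting error precedence can be modeled as a toy stand-in (not the actual QEMU code, which uses GNU C's `unmap_ret ? : ret` shorthand): the unmap always runs regardless of the query result, and the unmap error wins because a stale mapping is worse than a missed dirty page.

```c
#include <assert.h>

/*
 * Toy model of the unmap error handling: the dirty-bitmap query runs first,
 * the unmap runs unconditionally afterwards, and the unmap error takes
 * precedence over the query error when both fail.
 */
static int pick_error(int unmap_ret, int query_ret)
{
    return unmap_ret ? unmap_ret : query_ret;
}
```

A query failure alone still propagates, so VFIO core can fail the migration rather than silently lose dirty pages.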
Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovannio Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
 hw/vfio/iommufd.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index 806ca6ef14..5f96a41246 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -65,7 +65,7 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
     uint32_t ioas_id = container->ioas_id;
     bool need_dirty_sync = false;
     Error *local_err = NULL;
-    int ret;
+    int ret, unmap_ret;
 
     if (unmap_all) {
         size = UINT64_MAX;
@@ -74,8 +74,21 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
     if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
         if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
             bcontainer->dirty_pages_supported) {
-            /* TODO: query dirty bitmap before DMA unmap */
-            return iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+            ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+                                                    iotlb->translated_addr,
+                                                    &local_err);
+            if (ret) {
+                error_report_err(local_err);
+            }
+            /* Unmap stale mapping even if query dirty bitmap fails */
+            unmap_ret = iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+
+            /*
+             * If dirty tracking fails, return the failure to VFIO core to
+             * fail the migration, or else there will be dirty pages missed
+             * to be migrated.
+             */
+            return unmap_ret ? : ret;
         }
 
         need_dirty_sync = true;
-- 
2.47.1
* [PATCH v4 03/10] vfio/container-legacy: rename vfio_dma_unmap_bitmap() to vfio_legacy_dma_unmap_get_dirty_bitmap()
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
This follows the naming style in container-legacy.c, where low-level
functions carry the vfio_legacy_ prefix.
No functional change.
Suggested-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
---
 hw/vfio/container-legacy.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
index 8e9639603e..b7e3b892b9 100644
--- a/hw/vfio/container-legacy.c
+++ b/hw/vfio/container-legacy.c
@@ -68,9 +68,10 @@ static int vfio_ram_block_discard_disable(VFIOLegacyContainer *container,
     }
 }
 
-static int vfio_dma_unmap_bitmap(const VFIOLegacyContainer *container,
-                                 hwaddr iova, uint64_t size,
-                                 IOMMUTLBEntry *iotlb)
+static int
+vfio_legacy_dma_unmap_get_dirty_bitmap(const VFIOLegacyContainer *container,
+                                       hwaddr iova, uint64_t size,
+                                       IOMMUTLBEntry *iotlb)
 {
     const VFIOContainer *bcontainer = VFIO_IOMMU(container);
     struct vfio_iommu_type1_dma_unmap *unmap;
@@ -141,7 +142,8 @@ static int vfio_legacy_dma_unmap_one(const VFIOLegacyContainer *container,
     if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
         if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
             bcontainer->dirty_pages_supported) {
-            return vfio_dma_unmap_bitmap(container, iova, size, iotlb);
+            return vfio_legacy_dma_unmap_get_dirty_bitmap(container, iova, size,
+                                                          iotlb);
         }
 
         need_dirty_sync = true;
-- 
2.47.1
* [PATCH v4 04/10] vfio: Introduce a helper vfio_device_dirty_pages_disabled()
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
Introduce a helper vfio_device_dirty_pages_disabled() to check whether
device dirty tracking is disabled for a VFIO device. Use it to replace the
existing open-coded checks; it will also be used in subsequent patches.
No functional change intended.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
---
 include/hw/vfio/vfio-device.h | 10 ++++++++++
 hw/vfio/container.c           |  5 +----
 hw/vfio/device.c              |  6 ++++++
 hw/vfio/migration.c           |  3 +--
 4 files changed, 18 insertions(+), 6 deletions(-)
diff --git a/include/hw/vfio/vfio-device.h b/include/hw/vfio/vfio-device.h
index 0fe6c60ba2..a0b8fc2eb6 100644
--- a/include/hw/vfio/vfio-device.h
+++ b/include/hw/vfio/vfio-device.h
@@ -148,6 +148,16 @@ bool vfio_device_irq_set_signaling(VFIODevice *vbasedev, int index, int subindex
 
 void vfio_device_reset_handler(void *opaque);
 bool vfio_device_is_mdev(VFIODevice *vbasedev);
+/**
+ * vfio_device_dirty_pages_disabled: Check whether device dirty tracking is
+ * disabled for a VFIO device
+ *
+ * @vbasedev: The VFIODevice to check
+ *
+ * Return: true if @vbasedev doesn't support device dirty tracking, or if it
+ * is forcibly disabled on the command line; false otherwise.
+ */
+bool vfio_device_dirty_pages_disabled(VFIODevice *vbasedev);
 bool vfio_device_hiod_create_and_realize(VFIODevice *vbasedev,
                                          const char *typename, Error **errp);
 bool vfio_device_attach(char *name, VFIODevice *vbasedev,
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index 9ddec300e3..077eb57304 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -178,10 +178,7 @@ bool vfio_container_devices_dirty_tracking_is_supported(
     VFIODevice *vbasedev;
 
     QLIST_FOREACH(vbasedev, &bcontainer->device_list, container_next) {
-        if (vbasedev->device_dirty_page_tracking == ON_OFF_AUTO_OFF) {
-            return false;
-        }
-        if (!vbasedev->dirty_pages_supported) {
+        if (vfio_device_dirty_pages_disabled(vbasedev)) {
             return false;
         }
     }
diff --git a/hw/vfio/device.c b/hw/vfio/device.c
index 8b63e765ac..5ed3103e72 100644
--- a/hw/vfio/device.c
+++ b/hw/vfio/device.c
@@ -411,6 +411,12 @@ bool vfio_device_is_mdev(VFIODevice *vbasedev)
     return subsys && (strcmp(subsys, "/sys/bus/mdev") == 0);
 }
 
+bool vfio_device_dirty_pages_disabled(VFIODevice *vbasedev)
+{
+    return (!vbasedev->dirty_pages_supported ||
+            vbasedev->device_dirty_page_tracking == ON_OFF_AUTO_OFF);
+}
+
 bool vfio_device_hiod_create_and_realize(VFIODevice *vbasedev,
                                          const char *typename, Error **errp)
 {
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 4c06e3db93..3d819eb87e 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -1183,8 +1183,7 @@ bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
         return !vfio_block_migration(vbasedev, err, errp);
     }
 
-    if ((!vbasedev->dirty_pages_supported ||
-         vbasedev->device_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
+    if (vfio_device_dirty_pages_disabled(vbasedev) &&
         !vbasedev->iommu_dirty_tracking) {
         if (vbasedev->enable_migration == ON_OFF_AUTO_AUTO) {
             error_setg(&err,
-- 
2.47.1
* [PATCH v4 05/10] vfio: Add a backend_flag parameter to vfio_container_query_dirty_bitmap()
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
From: Joao Martins <joao.m.martins@oracle.com>
This new parameter will be used in a following patch; currently 0 is passed.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Giovannio Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
 include/hw/vfio/vfio-container.h |  8 ++++++--
 hw/vfio-user/container.c         |  5 +++--
 hw/vfio/container-legacy.c       |  5 +++--
 hw/vfio/container.c              | 15 +++++++++------
 hw/vfio/iommufd.c                |  7 ++++---
 hw/vfio/listener.c               |  6 +++---
 hw/vfio/trace-events             |  2 +-
 7 files changed, 29 insertions(+), 19 deletions(-)
diff --git a/include/hw/vfio/vfio-container.h b/include/hw/vfio/vfio-container.h
index c4b58d664b..9f6e8cedfc 100644
--- a/include/hw/vfio/vfio-container.h
+++ b/include/hw/vfio/vfio-container.h
@@ -99,7 +99,9 @@ bool vfio_container_devices_dirty_tracking_is_supported(
     const VFIOContainer *bcontainer);
 int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
                                       uint64_t iova, uint64_t size,
-                                      hwaddr translated_addr, Error **errp);
+                                      uint64_t backend_flag,
+                                      hwaddr translated_addr,
+                                      Error **errp);
 
 GList *vfio_container_get_iova_ranges(const VFIOContainer *bcontainer);
 
@@ -253,12 +255,14 @@ struct VFIOIOMMUClass {
      * @vbmap: #VFIOBitmap internal bitmap structure
      * @iova: iova base address
      * @size: size of iova range
+     * @backend_flag: flags for backend, opaque to upper layer container
      * @errp: pointer to Error*, to store an error if it happens.
      *
      * Returns zero to indicate success and negative for error.
      */
     int (*query_dirty_bitmap)(const VFIOContainer *bcontainer,
-                VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp);
+                              VFIOBitmap *vbmap, hwaddr iova, hwaddr size,
+                              uint64_t backend_flag, Error **errp);
     /* PCI specific */
     int (*pci_hot_reset)(VFIODevice *vbasedev, bool single);
 
diff --git a/hw/vfio-user/container.c b/hw/vfio-user/container.c
index e45192fef6..3ce6ea12db 100644
--- a/hw/vfio-user/container.c
+++ b/hw/vfio-user/container.c
@@ -162,8 +162,9 @@ vfio_user_set_dirty_page_tracking(const VFIOContainer *bcontainer,
 }
 
 static int vfio_user_query_dirty_bitmap(const VFIOContainer *bcontainer,
-                                         VFIOBitmap *vbmap, hwaddr iova,
-                                         hwaddr size, Error **errp)
+                                        VFIOBitmap *vbmap, hwaddr iova,
+                                        hwaddr size, uint64_t backend_flag,
+                                        Error **errp)
 {
     error_setg_errno(errp, ENOTSUP, "Not supported");
     return -ENOTSUP;
diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
index b7e3b892b9..dd9c4a6a5a 100644
--- a/hw/vfio/container-legacy.c
+++ b/hw/vfio/container-legacy.c
@@ -154,7 +154,7 @@ static int vfio_legacy_dma_unmap_one(const VFIOLegacyContainer *container,
     }
 
     if (need_dirty_sync) {
-        ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+        ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
                                     iotlb->translated_addr, &local_err);
         if (ret) {
             error_report_err(local_err);
@@ -255,7 +255,8 @@ vfio_legacy_set_dirty_page_tracking(const VFIOContainer *bcontainer,
 }
 
 static int vfio_legacy_query_dirty_bitmap(const VFIOContainer *bcontainer,
-                      VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp)
+                                   VFIOBitmap *vbmap, hwaddr iova, hwaddr size,
+                                   uint64_t backend_flag, Error **errp)
 {
     const VFIOLegacyContainer *container = VFIO_IOMMU_LEGACY(bcontainer);
     struct vfio_iommu_type1_dirty_bitmap *dbitmap;
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index 077eb57304..8879da78c8 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -210,13 +210,13 @@ static int vfio_device_dma_logging_report(VFIODevice *vbasedev, hwaddr iova,
 
 static int vfio_container_iommu_query_dirty_bitmap(
     const VFIOContainer *bcontainer, VFIOBitmap *vbmap, hwaddr iova,
-    hwaddr size, Error **errp)
+    hwaddr size, uint64_t backend_flag, Error **errp)
 {
     VFIOIOMMUClass *vioc = VFIO_IOMMU_GET_CLASS(bcontainer);
 
     g_assert(vioc->query_dirty_bitmap);
     return vioc->query_dirty_bitmap(bcontainer, vbmap, iova, size,
-                                               errp);
+                                    backend_flag, errp);
 }
 
 static int vfio_container_devices_query_dirty_bitmap(
@@ -244,7 +244,9 @@ static int vfio_container_devices_query_dirty_bitmap(
 
 int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
                                       uint64_t iova, uint64_t size,
-                                      hwaddr translated_addr, Error **errp)
+                                      uint64_t backend_flag,
+                                      hwaddr translated_addr,
+                                      Error **errp)
 {
     bool all_device_dirty_tracking =
         vfio_container_devices_dirty_tracking_is_supported(bcontainer);
@@ -271,7 +273,7 @@ int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
                                                         errp);
     } else {
         ret = vfio_container_iommu_query_dirty_bitmap(bcontainer, &vbmap, iova, size,
-                                                     errp);
+                                                      backend_flag, errp);
     }
 
     if (ret) {
@@ -282,8 +284,9 @@ int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
                                                          translated_addr,
                                                          vbmap.pages);
 
-    trace_vfio_container_query_dirty_bitmap(iova, size, vbmap.size,
-                                            translated_addr, dirty_pages);
+    trace_vfio_container_query_dirty_bitmap(iova, size, backend_flag,
+                                            vbmap.size, translated_addr,
+                                            dirty_pages);
 out:
     g_free(vbmap.bitmap);
 
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index 5f96a41246..b59a8639c6 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -74,7 +74,7 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
     if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
         if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
             bcontainer->dirty_pages_supported) {
-            ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+            ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
                                                     iotlb->translated_addr,
                                                     &local_err);
             if (ret) {
@@ -100,7 +100,7 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
     }
 
     if (need_dirty_sync) {
-        ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+        ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
                                                 iotlb->translated_addr,
                                                 &local_err);
         if (ret) {
@@ -216,7 +216,8 @@ err:
 
 static int iommufd_query_dirty_bitmap(const VFIOContainer *bcontainer,
                                       VFIOBitmap *vbmap, hwaddr iova,
-                                      hwaddr size, Error **errp)
+                                      hwaddr size, uint64_t backend_flag,
+                                      Error **errp)
 {
     VFIOIOMMUFDContainer *container = VFIO_IOMMU_IOMMUFD(bcontainer);
     unsigned long page_size = qemu_real_host_page_size();
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 2d7d3a4645..2109101158 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -1083,7 +1083,7 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
     translated_addr = memory_region_get_ram_addr(mr) + xlat;
 
     ret = vfio_container_query_dirty_bitmap(bcontainer, iova, iotlb->addr_mask + 1,
-                                translated_addr, &local_err);
+                                0, translated_addr, &local_err);
     if (ret) {
         error_prepend(&local_err,
                       "vfio_iommu_map_dirty_notify(%p, 0x%"HWADDR_PRIx", "
@@ -1119,7 +1119,7 @@ static int vfio_ram_discard_query_dirty_bitmap(MemoryRegionSection *section,
      * Sync the whole mapped region (spanning multiple individual mappings)
      * in one go.
      */
-    ret = vfio_container_query_dirty_bitmap(vrdl->bcontainer, iova, size,
+    ret = vfio_container_query_dirty_bitmap(vrdl->bcontainer, iova, size, 0,
                                             translated_addr, &local_err);
     if (ret) {
         error_report_err(local_err);
@@ -1204,7 +1204,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *bcontainer,
 
     return vfio_container_query_dirty_bitmap(bcontainer,
                    REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
-                   int128_get64(section->size), translated_addr, errp);
+                   int128_get64(section->size), 0, translated_addr, errp);
 }
 
 static void vfio_listener_log_sync(MemoryListener *listener,
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 1e895448cd..3c62bab764 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -105,7 +105,7 @@ vfio_device_dirty_tracking_start(int nr_ranges, uint64_t min32, uint64_t max32,
 vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
 
 # container.c
-vfio_container_query_dirty_bitmap(uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t translated_addr, uint64_t dirty_pages) "iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" gpa=0x%"PRIx64" dirty_pages=%"PRIu64
+vfio_container_query_dirty_bitmap(uint64_t iova, uint64_t size, uint64_t backend_flag, uint64_t bitmap_size, uint64_t translated_addr, uint64_t dirty_pages) "iova=0x%"PRIx64" size=0x%"PRIx64" backend_flag=0x%"PRIx64" bitmap_size=0x%"PRIx64" gpa=0x%"PRIx64" dirty_pages=%"PRIu64
 
 # container-legacy.c
 vfio_container_disconnect(int fd) "close container->fd=%d"
-- 
2.47.1
* [PATCH v4 06/10] vfio/iommufd: Add IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR flag support
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
Pass IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR when doing the last dirty
bitmap query right before unmap, so that no PTE flushes are done. This
speeds up the query without issue, because the unmap will tear down the
mapping anyway.
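The flag selection reduces to a one-liner; as a self-contained sketch (the flag bit is redefined here to match bit 0 of the uapi enum in <linux/iommufd.h>, and the helper name is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the uapi flag bit (bit 0 in <linux/iommufd.h>), redefined so
 * the sketch compiles standalone. */
#define IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR (1ULL << 0)

/*
 * Pick the query flags: the final query right before an unmap may skip
 * clearing the dirty bits (and thus the PTE flush), since the mapping is
 * about to be torn down anyway.
 */
static uint64_t dirty_query_flags(int before_unmap)
{
    return before_unmap ? IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR : 0;
}
```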
Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovannio Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
 include/system/iommufd.h | 2 +-
 backends/iommufd.c       | 5 +++--
 hw/vfio/iommufd.c        | 5 +++--
 backends/trace-events    | 2 +-
 4 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/include/system/iommufd.h b/include/system/iommufd.h
index a659f36a20..767a8e4cb6 100644
--- a/include/system/iommufd.h
+++ b/include/system/iommufd.h
@@ -64,7 +64,7 @@ bool iommufd_backend_set_dirty_tracking(IOMMUFDBackend *be, uint32_t hwpt_id,
 bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be, uint32_t hwpt_id,
                                       uint64_t iova, ram_addr_t size,
                                       uint64_t page_size, uint64_t *data,
-                                      Error **errp);
+                                      uint64_t flags, Error **errp);
 bool iommufd_backend_invalidate_cache(IOMMUFDBackend *be, uint32_t id,
                                       uint32_t data_type, uint32_t entry_len,
                                       uint32_t *entry_num, void *data,
diff --git a/backends/iommufd.c b/backends/iommufd.c
index fdfb7c9d67..086bd67aea 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -361,7 +361,7 @@ bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be,
                                       uint32_t hwpt_id,
                                       uint64_t iova, ram_addr_t size,
                                       uint64_t page_size, uint64_t *data,
-                                      Error **errp)
+                                      uint64_t flags, Error **errp)
 {
     int ret;
     struct iommu_hwpt_get_dirty_bitmap get_dirty_bitmap = {
@@ -371,11 +371,12 @@ bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be,
         .length = size,
         .page_size = page_size,
         .data = (uintptr_t)data,
+        .flags = flags,
     };
 
     ret = ioctl(be->fd, IOMMU_HWPT_GET_DIRTY_BITMAP, &get_dirty_bitmap);
     trace_iommufd_backend_get_dirty_bitmap(be->fd, hwpt_id, iova, size,
-                                           page_size, ret ? errno : 0);
+                                           flags, page_size, ret ? errno : 0);
     if (ret) {
         error_setg_errno(errp, errno,
                          "IOMMU_HWPT_GET_DIRTY_BITMAP (iova: 0x%"HWADDR_PRIx
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index b59a8639c6..ba5c6b6586 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -74,7 +74,8 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
     if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
         if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
             bcontainer->dirty_pages_supported) {
-            ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
+            ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+                                                    IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR,
                                                     iotlb->translated_addr,
                                                     &local_err);
             if (ret) {
@@ -231,7 +232,7 @@ static int iommufd_query_dirty_bitmap(const VFIOContainer *bcontainer,
         if (!iommufd_backend_get_dirty_bitmap(container->be, hwpt->hwpt_id,
                                               iova, size, page_size,
                                               (uint64_t *)vbmap->bitmap,
-                                              errp)) {
+                                              backend_flag, errp)) {
             return -EINVAL;
         }
     }
diff --git a/backends/trace-events b/backends/trace-events
index 56132d3fd2..e1992ba12f 100644
--- a/backends/trace-events
+++ b/backends/trace-events
@@ -19,5 +19,5 @@ iommufd_backend_alloc_ioas(int iommufd, uint32_t ioas) " iommufd=%d ioas=%d"
 iommufd_backend_alloc_hwpt(int iommufd, uint32_t dev_id, uint32_t pt_id, uint32_t flags, uint32_t hwpt_type, uint32_t len, uint64_t data_ptr, uint32_t out_hwpt_id, int ret) " iommufd=%d dev_id=%u pt_id=%u flags=0x%x hwpt_type=%u len=%u data_ptr=0x%"PRIx64" out_hwpt=%u (%d)"
 iommufd_backend_free_id(int iommufd, uint32_t id, int ret) " iommufd=%d id=%d (%d)"
 iommufd_backend_set_dirty(int iommufd, uint32_t hwpt_id, bool start, int ret) " iommufd=%d hwpt=%u enable=%d (%d)"
-iommufd_backend_get_dirty_bitmap(int iommufd, uint32_t hwpt_id, uint64_t iova, uint64_t size, uint64_t page_size, int ret) " iommufd=%d hwpt=%u iova=0x%"PRIx64" size=0x%"PRIx64" page_size=0x%"PRIx64" (%d)"
+iommufd_backend_get_dirty_bitmap(int iommufd, uint32_t hwpt_id, uint64_t iova, uint64_t size, uint64_t flags, uint64_t page_size, int ret) " iommufd=%d hwpt=%u iova=0x%"PRIx64" size=0x%"PRIx64" flags=0x%"PRIx64" page_size=0x%"PRIx64" (%d)"
 iommufd_backend_invalidate_cache(int iommufd, uint32_t id, uint32_t data_type, uint32_t entry_len, uint32_t entry_num, uint32_t done_num, uint64_t data_ptr, int ret) " iommufd=%d id=%u data_type=%u entry_len=%u entry_num=%u done_num=%u data_ptr=0x%"PRIx64" (%d)"
-- 
2.47.1
* [PATCH v4 07/10] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
If a VFIO device in the guest switches from an IOMMU domain to a blocking
domain, vtd_address_space_unmap() is called to unmap the whole address space.
If that happens during migration, migration fails with the legacy VFIO
backend as below:
Status: failed (vfio_container_dma_unmap(0x561bbbd92d90, 0x100000000000, 0x100000000000) = -7 (Argument list too long))
Because legacy VFIO limits the maximum bitmap size to 256MB, which maps to 8TB
on a 4K page system, the unmap_bitmap ioctl fails when a 16TB sized UNMAP
notification is sent. Normally such large UNMAP notifications come from IOVA
ranges rather than system memory.
Fix it by iterating over the DMAMap list and unmapping each range with an
active mapping when migration is active. If migration is not active,
unmapping the whole address space in one go is optimal.
There is no such limitation with the iommufd backend, but it is still
suboptimal to allocate a large bitmap: e.g., there may be large holes between
IOVA ranges, and allocating a large bitmap and dirty tracking over the holes
is time-consuming, useless work.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
 hw/i386/intel_iommu.c | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index c402643b56..b00fdecaf8 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -37,6 +37,7 @@
 #include "system/system.h"
 #include "hw/i386/apic_internal.h"
 #include "kvm/kvm_i386.h"
+#include "migration/misc.h"
 #include "migration/vmstate.h"
 #include "trace.h"
 
@@ -4695,6 +4696,42 @@ static void vtd_dev_unset_iommu_device(PCIBus *bus, void *opaque, int devfn)
     vtd_iommu_unlock(s);
 }
 
+/*
+ * Unmapping a large range in one go is not optimal during migration: a large
+ * dirty bitmap would need to be allocated while there may be only small
+ * mappings. Instead, iterate over the DMAMap list and unmap each mapped range.
+ */
+static void vtd_address_space_unmap_in_migration(VTDAddressSpace *as,
+                                                 IOMMUNotifier *n)
+{
+    const DMAMap *map;
+    const DMAMap target = {
+        .iova = n->start,
+        .size = n->end,
+    };
+    IOVATree *tree = as->iova_tree;
+
+    /*
+     * A DMAMap is created during IOMMU page table sync; it is either 4KB or
+     * huge page sized and always a power of 2 in size, so the range of a
+     * DMAMap can be used for the UNMAP notification directly.
+     */
+    while ((map = iova_tree_find(tree, &target))) {
+        IOMMUTLBEvent event;
+
+        event.type = IOMMU_NOTIFIER_UNMAP;
+        event.entry.iova = map->iova;
+        event.entry.addr_mask = map->size;
+        event.entry.target_as = &address_space_memory;
+        event.entry.perm = IOMMU_NONE;
+        /* This field is meaningless for unmap */
+        event.entry.translated_addr = 0;
+        memory_region_notify_iommu_one(n, &event);
+
+        iova_tree_remove(tree, *map);
+    }
+}
+
 /* Unmap the whole range in the notifier's scope. */
 static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
 {
@@ -4704,6 +4741,11 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
     IntelIOMMUState *s = as->iommu_state;
     DMAMap map;
 
+    if (migration_is_running()) {
+        vtd_address_space_unmap_in_migration(as, n);
+        return;
+    }
+
     /*
      * Note: all the codes in this function has a assumption that IOVA
      * bits are no more than VTD_MGAW bits (which is restricted by
-- 
2.47.1
* [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking
  2025-10-29  9:53 [PATCH v4 00/10] vfio: relax the vIOMMU check Zhenzhong Duan
                   ` (6 preceding siblings ...)
  2025-10-29  9:53 ` [PATCH v4 07/10] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend Zhenzhong Duan
@ 2025-10-29  9:53 ` Zhenzhong Duan
  2025-10-30  8:21   ` Avihai Horon
  2025-10-29  9:53 ` [PATCH v4 09/10] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure Zhenzhong Duan
  2025-10-29  9:53 ` [PATCH v4 10/10] vfio/migration: Allow live migration with vIOMMU without VFs using device dirty tracking Zhenzhong Duan
  9 siblings, 1 reply; 17+ messages in thread
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
VFIO IOMMU type1 claims to support IOMMU dirty tracking by setting
bcontainer->dirty_pages_supported, but vfio_migration_realize() checks
vbasedev->iommu_dirty_tracking. Propagate bcontainer->dirty_pages_supported
to vbasedev->iommu_dirty_tracking in the legacy backend so that the check
is accurate.
Fixes: 30b916778517 ("vfio/common: Allow disabling device dirty page tracking")
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
 hw/vfio/container-legacy.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
index dd9c4a6a5a..fa726a2733 100644
--- a/hw/vfio/container-legacy.c
+++ b/hw/vfio/container-legacy.c
@@ -845,6 +845,7 @@ static bool vfio_device_get(VFIOGroup *group, const char *name,
                             VFIODevice *vbasedev, Error **errp)
 {
     g_autofree struct vfio_device_info *info = NULL;
+    VFIOContainer *bcontainer = VFIO_IOMMU(group->container);
     int fd;
 
     fd = vfio_cpr_group_get_device_fd(group->fd, name);
@@ -883,7 +884,8 @@ static bool vfio_device_get(VFIOGroup *group, const char *name,
         }
     }
 
-    vfio_device_prepare(vbasedev, VFIO_IOMMU(group->container), info);
+    vfio_device_prepare(vbasedev, bcontainer, info);
+    vbasedev->iommu_dirty_tracking = bcontainer->dirty_pages_supported;
 
     vbasedev->fd = fd;
     vbasedev->group = group;
-- 
2.47.1
* Re: [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking
  2025-10-29  9:53 ` [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking Zhenzhong Duan
@ 2025-10-30  8:21   ` Avihai Horon
  2025-10-30  9:10     ` Duan, Zhenzhong
  0 siblings, 1 reply; 17+ messages in thread
From: Avihai Horon @ 2025-10-30  8:21 UTC (permalink / raw)
  To: Zhenzhong Duan, qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven
On 29/10/2025 11:53, Zhenzhong Duan wrote:
> External email: Use caution opening links or attachments
>
>
> VFIO IOMMU type1 claims to support IOMMU dirty tracking by setting
> bcontainer->dirty_pages_supported, but in vfio_migration_realize()
> vbasedev->iommu_dirty_tracking is checked, we should pass
> bcontainer->dirty_pages_supported to vbasedev->iommu_dirty_tracking
> in legacy backend so that the check is accurate.
>
> Fixes: 30b916778517 ("vfio/common: Allow disabling device dirty page tracking")
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
>   hw/vfio/container-legacy.c | 4 +++-
>   1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
> index dd9c4a6a5a..fa726a2733 100644
> --- a/hw/vfio/container-legacy.c
> +++ b/hw/vfio/container-legacy.c
> @@ -845,6 +845,7 @@ static bool vfio_device_get(VFIOGroup *group, const char *name,
>                               VFIODevice *vbasedev, Error **errp)
>   {
>       g_autofree struct vfio_device_info *info = NULL;
> +    VFIOContainer *bcontainer = VFIO_IOMMU(group->container);
>       int fd;
>
>       fd = vfio_cpr_group_get_device_fd(group->fd, name);
> @@ -883,7 +884,8 @@ static bool vfio_device_get(VFIOGroup *group, const char *name,
>           }
>       }
>
> -    vfio_device_prepare(vbasedev, VFIO_IOMMU(group->container), info);
> +    vfio_device_prepare(vbasedev, bcontainer, info);
> +    vbasedev->iommu_dirty_tracking = bcontainer->dirty_pages_supported;
IIRC, we intentionally don't consider VFIO IOMMU type1 dirty tracking as 
a real dirty tracker, because all it does is mark the whole tracked 
address range as dirty (which is the same as not having dirty tracking 
support at all).
Thus, when vbasedev->iommu_dirty_tracking was introduced, we 
intentionally set it only if IOMMUFD dirty tracking was supported.
So, I don't think this patch is needed.
BTW, do you have a real production use case for migration with VFIO 
IOMMU type1 dirty tracking? I mean, is the scenario you described in 
patch #7 a real use case, or did you just fix it for completeness?
If there is no use case, maybe patch #9 is not really needed?
Thanks.
* RE: [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking
  2025-10-30  8:21   ` Avihai Horon
@ 2025-10-30  9:10     ` Duan, Zhenzhong
  2025-10-30 10:17       ` Joao Martins
  0 siblings, 1 reply; 17+ messages in thread
From: Duan, Zhenzhong @ 2025-10-30  9:10 UTC (permalink / raw)
  To: Avihai Horon, qemu-devel@nongnu.org
  Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
	jasowang@redhat.com, Liu, Yi L, clement.mathieu--drif@eviden.com,
	eric.auger@redhat.com, joao.m.martins@oracle.com, Hao, Xudong,
	Cabiddu, Giovanni, Rohith S R, Gross, Mark, Van De Ven, Arjan
>-----Original Message-----
>From: Avihai Horon <avihaih@nvidia.com>
>Subject: Re: [PATCH v4 08/10] vfio/migration: Fix a check on
>vbasedev->iommu_dirty_tracking
>
>
>On 29/10/2025 11:53, Zhenzhong Duan wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> VFIO IOMMU type1 claims to support IOMMU dirty tracking by setting
>> bcontainer->dirty_pages_supported, but in vfio_migration_realize()
>> vbasedev->iommu_dirty_tracking is checked, we should pass
>> bcontainer->dirty_pages_supported to vbasedev->iommu_dirty_tracking
>> in legacy backend so that the check is accurate.
>>
>> Fixes: 30b916778517 ("vfio/common: Allow disabling device dirty page
>tracking")
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> ---
>>   hw/vfio/container-legacy.c | 4 +++-
>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
>> index dd9c4a6a5a..fa726a2733 100644
>> --- a/hw/vfio/container-legacy.c
>> +++ b/hw/vfio/container-legacy.c
>> @@ -845,6 +845,7 @@ static bool vfio_device_get(VFIOGroup *group,
>const char *name,
>>                               VFIODevice *vbasedev, Error **errp)
>>   {
>>       g_autofree struct vfio_device_info *info = NULL;
>> +    VFIOContainer *bcontainer = VFIO_IOMMU(group->container);
>>       int fd;
>>
>>       fd = vfio_cpr_group_get_device_fd(group->fd, name);
>> @@ -883,7 +884,8 @@ static bool vfio_device_get(VFIOGroup *group,
>const char *name,
>>           }
>>       }
>>
>> -    vfio_device_prepare(vbasedev, VFIO_IOMMU(group->container),
>info);
>> +    vfio_device_prepare(vbasedev, bcontainer, info);
>> +    vbasedev->iommu_dirty_tracking =
>bcontainer->dirty_pages_supported;
>
>IIRC, we intentionally don't consider VFIO IOMMU type1 dirty tracking as
>a real dirty tracker, because all it does is mark the whole tracked
>address range as dirty (which is the same as not having dirty tracking
>support at all).
Yes.
>Thus, when vbasedev->iommu_dirty_tracking was introduced, we
>intentionally set it only if IOMMUFD dirty tracking was supported.
Then we have conflicting settings of vbasedev->iommu_dirty_tracking and bcontainer->dirty_pages_supported.
We had set bcontainer->dirty_pages_supported = true in vfio_get_iommu_info_migration().
bcontainer->dirty_pages_supported is checked in container-scope dirty tracking functions,
e.g., vfio_container_set_dirty_page_tracking() and vfio_container_query_dirty_bitmap(). If we treat the
legacy backend dirty tracking as not supported, we should set it to false; does that work for you?
>
>So, I don't think this patch needed.
Yes, if we set bcontainer->dirty_pages_supported = false, this patch and patch 9 are both not needed.
>
>BTW, do you have a real production use case for migration with VFIO
>IOMMU type1 dirty tracking? I mean, is the scenario you described in
>patch #7 a real use case or you just fixed it for completeness?
>If there is no use case, maybe patch #9 is not really needed?
Patch 7 is a real use case: in the guest, we switch the device from an IOMMU domain to a blocking domain and see the issue.
We need to send accurate unmap notifications with the actual mappings during migration, for both backends.
Patch 9 is specifically for unmap_bitmap(); it's needed as long as we set bcontainer->dirty_pages_supported = true for the legacy backend,
because vfio_legacy_dma_unmap_one() checks bcontainer->dirty_pages_supported.
Thanks
Zhenzhong
* Re: [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking
  2025-10-30  9:10     ` Duan, Zhenzhong
@ 2025-10-30 10:17       ` Joao Martins
  2025-10-30 13:06         ` Duan, Zhenzhong
  0 siblings, 1 reply; 17+ messages in thread
From: Joao Martins @ 2025-10-30 10:17 UTC (permalink / raw)
  To: Duan, Zhenzhong, Avihai Horon
  Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
	jasowang@redhat.com, Liu, Yi L, clement.mathieu--drif@eviden.com,
	eric.auger@redhat.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
	Gross, Mark, Van De Ven, Arjan, qemu-devel@nongnu.org
On 30/10/2025 09:10, Duan, Zhenzhong wrote:
>> From: Avihai Horon <avihaih@nvidia.com>
>> On 29/10/2025 11:53, Zhenzhong Duan wrote:
>>
>> BTW, do you have a real production use case for migration with VFIO
>> IOMMU type1 dirty tracking? I mean, is the scenario you described in
>> patch #7 a real use case or you just fixed it for completeness?
>> If there is no use case, maybe patch #9 is not really needed?
> 
> patch7 is a real use case, in guest, we switch device from IOMMU domain to block domain and see the issue.
> We need to send accurate unmap notification with actual mapping during migration, for both backend.
> 
I think the real question is why you are using the type1 backend at all for
purposes of dirty tracking.
type1 dirty tracking just returns everything in the bitmap as 1s. There's no
actual dirty tracking; we usually call it 'perpetual' dirty tracking because
everything DMA-mapped as writable is always returned as dirty no matter what
you do. It doesn't look at the page table for the unmap get-dirty either.
Its only (known) use has been for testing (in the absence of IOMMU HW +
IOMMUFD).
But reading your statement in a different way: are you saying that you use
*two* backends at the same time? Why would you do that?
* RE: [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking
  2025-10-30 10:17       ` Joao Martins
@ 2025-10-30 13:06         ` Duan, Zhenzhong
  2025-10-30 13:38           ` Joao Martins
  0 siblings, 1 reply; 17+ messages in thread
From: Duan, Zhenzhong @ 2025-10-30 13:06 UTC (permalink / raw)
  To: Joao Martins, Avihai Horon
  Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
	jasowang@redhat.com, Liu, Yi L, clement.mathieu--drif@eviden.com,
	eric.auger@redhat.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
	Gross, Mark, Van De Ven, Arjan, qemu-devel@nongnu.org
>-----Original Message-----
>From: Joao Martins <joao.m.martins@oracle.com>
>Subject: Re: [PATCH v4 08/10] vfio/migration: Fix a check on
>vbasedev->iommu_dirty_tracking
>
>On 30/10/2025 09:10, Duan, Zhenzhong wrote:
>>> From: Avihai Horon <avihaih@nvidia.com>
>>> On 29/10/2025 11:53, Zhenzhong Duan wrote:
>>>
>>> BTW, do you have a real production use case for migration with VFIO
>>> IOMMU type1 dirty tracking? I mean, is the scenario you described in
>>> patch #7 a real use case or you just fixed it for completeness?
>>> If there is no use case, maybe patch #9 is not really needed?
>>
>> patch7 is a real use case, in guest, we switch device from IOMMU domain
>to block domain and see the issue.
>> We need to send accurate unmap notification with actual mapping during
>migration, for both backend.
>>
>
>I think the real question is why you are using type1 backend overall for
>purposes of dirty tracking.
Just because we set bcontainer->dirty_pages_supported = true, dirty tracking
queried through the kernel VFIO type1 interface is used during migration, even though it's 'perpetual'.
>
>type1 dirty tracking just returns everything in the bitmap as 1s. There's no
>actual dirty tracking and we usually call 'perpectual' dirty tracking because
>everything DMA mapped as write is always returned as dirty no matter what
>you
>do. It doesn't look at pagetable neither for the unmap get dirty.
>
>It's only (known) use has been for testing (in the lack of IOMMU HW +
>IOMMUFD)
Do you mean testing live migration, or the legacy VFIO type1 dirty tracking interface?
If you mean the former, we can force-enable it with 'enable-migration=on' and set
bcontainer->dirty_pages_supported = false.
>
>But reading your statement in a different way: you are saying that you use
>*two*
>backends at the same time? Why would you do that?
I mean patch 7 is needed regardless of whether the legacy or the IOMMUFD backend is used.
BTW: we do support a QEMU cmdline with mixed backends, e.g., two devices,
one backed by the legacy backend and the other by IOMMUFD. But that's not what I meant.
Thanks
Zhenzhong
* Re: [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking
  2025-10-30 13:06         ` Duan, Zhenzhong
@ 2025-10-30 13:38           ` Joao Martins
  2025-10-31  2:40             ` Duan, Zhenzhong
  0 siblings, 1 reply; 17+ messages in thread
From: Joao Martins @ 2025-10-30 13:38 UTC (permalink / raw)
  To: Duan, Zhenzhong, Avihai Horon
  Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
	jasowang@redhat.com, Liu, Yi L, clement.mathieu--drif@eviden.com,
	eric.auger@redhat.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
	Gross, Mark, Van De Ven, Arjan, qemu-devel@nongnu.org
On 30/10/2025 13:06, Duan, Zhenzhong wrote:
>> -----Original Message-----
>> From: Joao Martins <joao.m.martins@oracle.com>
>> Subject: Re: [PATCH v4 08/10] vfio/migration: Fix a check on
>> vbasedev->iommu_dirty_tracking
>>
>> On 30/10/2025 09:10, Duan, Zhenzhong wrote:
>>>> From: Avihai Horon <avihaih@nvidia.com>
>>>> On 29/10/2025 11:53, Zhenzhong Duan wrote:
>>>>
>>>> BTW, do you have a real production use case for migration with VFIO
>>>> IOMMU type1 dirty tracking? I mean, is the scenario you described in
>>>> patch #7 a real use case or you just fixed it for completeness?
>>>> If there is no use case, maybe patch #9 is not really needed?
>>>
>>> patch7 is a real use case, in guest, we switch device from IOMMU domain
>> to block domain and see the issue.
>>> We need to send accurate unmap notification with actual mapping during
>> migration, for both backend.
>>>
>>
>> I think the real question is why you are using type1 backend overall for
>> purposes of dirty tracking.
> 
> Just because we had set bcontainer->dirty_pages_supported = true, the dirty tracking
> by query through kernel VFIO type1 interface is used during migration, even though it's 'perpectual'.
> 
Right, but that's what iommu_dirty_tracking is for. iommu_dirty_tracking should
only be set for something that actually uses the IOMMU for dirty tracking. And this
patch does the opposite of that.
See below on the perpetual part.
>>
>> type1 dirty tracking just returns everything in the bitmap as 1s. There's no
>> actual dirty tracking and we usually call 'perpectual' dirty tracking because
>> everything DMA mapped as write is always returned as dirty no matter what
>> you
>> do. It doesn't look at pagetable neither for the unmap get dirty.
>>
>> It's only (known) use has been for testing (in the lack of IOMMU HW +
>> IOMMUFD)
> 
> You mean testing live migration or the legacy VFIO type1 dirty tracking interface?
> If you mean the first, we can force enable it with 'enable-migration=on' and set
> bcontainer->dirty_pages_supported = false.
> 
I'm aware. I meant testing of the dirty_pages_supported code path before IOMMUFD
existed, or in the absence of IOMMU hardware.
>>
>> But reading your statement in a different way: you are saying that you use
>> *two*
>> backends at the same time? Why would you do that?
> 
> I mean patch7 is needed no matter legacy backend or IOMMUFD backend.
> 
> btw: we do support a qemu cmdline with mixed backend, e.g., two devices,
> one backed by legacy backend and the other backed by IOMMUFD. But I'm not mean that.
Patch 7 I don't disagree with. But this patch 8 doesn't make sense; it's not
fixing anything but rather introducing wrong behavior, I think (also, the Fixes:
tag would be wrong if your code is setting iommu_dirty_tracking). It sets
iommu_dirty_tracking on something that does nothing iommu-dirty-tracking
related. For a type1 container it should be false.
iommu_dirty_tracking is purposely there to make the clear distinction that
type1 dirty tracking is *not* actual IOMMU dirty tracking, and that
migration is *only* allowed automatically if you have either device dirty
page tracking or IOMMU dirty tracking, but not type1 legacy dirty tracking ...
unless you force it (with enable-migration=on).
Imagine type1 dirty tracking actually being treated like iommu_dirty_tracking:
it would mean telling users that 1TB guests with VFs that lack device dirty
tracking, and without IOMMU HW, can be migrated *by default*. In practice that
means they transfer 1TB of RAM during stop-copy and see something like 1min
of downtime.
* RE: [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking
  2025-10-30 13:38           ` Joao Martins
@ 2025-10-31  2:40             ` Duan, Zhenzhong
  0 siblings, 0 replies; 17+ messages in thread
From: Duan, Zhenzhong @ 2025-10-31  2:40 UTC (permalink / raw)
  To: Joao Martins, Avihai Horon
  Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
	jasowang@redhat.com, Liu, Yi L, clement.mathieu--drif@eviden.com,
	eric.auger@redhat.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
	Gross, Mark, Van De Ven, Arjan, qemu-devel@nongnu.org
>-----Original Message-----
>From: Joao Martins <joao.m.martins@oracle.com>
>Subject: Re: [PATCH v4 08/10] vfio/migration: Fix a check on
>vbasedev->iommu_dirty_tracking
>
>On 30/10/2025 13:06, Duan, Zhenzhong wrote:
>>> -----Original Message-----
>>> From: Joao Martins <joao.m.martins@oracle.com>
>>> Subject: Re: [PATCH v4 08/10] vfio/migration: Fix a check on
>>> vbasedev->iommu_dirty_tracking
>>>
>>> On 30/10/2025 09:10, Duan, Zhenzhong wrote:
>>>>> From: Avihai Horon <avihaih@nvidia.com>
>>>>> On 29/10/2025 11:53, Zhenzhong Duan wrote:
>>>>>
>>>>> BTW, do you have a real production use case for migration with VFIO
>>>>> IOMMU type1 dirty tracking? I mean, is the scenario you described in
>>>>> patch #7 a real use case or you just fixed it for completeness?
>>>>> If there is no use case, maybe patch #9 is not really needed?
>>>>
>>>> patch7 is a real use case, in guest, we switch device from IOMMU domain
>>> to block domain and see the issue.
>>>> We need to send accurate unmap notification with actual mapping during
>>> migration, for both backend.
>>>>
>>>
>>> I think the real question is why you are using type1 backend overall for
>>> purposes of dirty tracking.
>>
>> Just because we had set bcontainer->dirty_pages_supported = true, the
>dirty tracking
>> by query through kernel VFIO type1 interface is used during migration, even
>though it's 'perpectual'.
>>
>
>Right but that's what iommu_dirty_tracking is for. iommu_dirty_tracking
>should
>only be set for something that actually uses IOMMU for dirty tracking. And
>this
>patch does the opposite of that.
>
>See below on the perpectual part.
>
>>>
>>> type1 dirty tracking just returns everything in the bitmap as 1s. There's no
>>> actual dirty tracking and we usually call 'perpectual' dirty tracking because
>>> everything DMA mapped as write is always returned as dirty no matter
>what
>>> you
>>> do. It doesn't look at pagetable neither for the unmap get dirty.
>>>
>>> It's only (known) use has been for testing (in the lack of IOMMU HW +
>>> IOMMUFD)
>>
>> You mean testing live migration or the legacy VFIO type1 dirty tracking
>interface?
>> If you mean the first, we can force enable it with 'enable-migration=on' and
>set
>> bcontainer->dirty_pages_supported = false.
>>
>
>I'm aware. I meant testing of dirty_pages_supported code path before
>IOMMUFD
>existed, or in the lack of IOMMU hardware.
>
>>>
>>> But reading your statement in a different way: you are saying that you use
>>> *two*
>>> backends at the same time? Why would you do that?
>>
>> I mean patch7 is needed no matter legacy backend or IOMMUFD backend.
>>
>> btw: we do support a qemu cmdline with mixed backend, e.g., two devices,
>> one backed by legacy backend and the other backed by IOMMUFD. But I'm
>not mean that.
>patch 7 I don't disagree. But this path 8 doesn't make sense, and it's not
>fixing anything but rather introducing wrong behavior I think (also the Fixes:
>wouild be wrong if you're code is setting iommu_dirty_tracking). It is setting
>iommu_dirty_tracking on something that does not anything iommu dirty
>tracking
>related. For type1 container it should be false.
>
>iommu dirty tracking is purposedly there to make the distinct difference that
>type1 dirty tracking is *not* actual IOMMU dirty tracking. And that the
>migration *only* goes/is-allowed automatically if either you have device dirty
>page tracking or IOMMU dirty tracking, but not on type1 legacy dirty
>tracking ..
>unless you force it (with enable-migration=on).
>
>Imagine type1 dirty tracking actually getting treated like
>iommu_dirty_tracking:
>it means you are telling users that 1TB guests migration with VFs that don't
>have device dirty tracking without IOMMU HW can be migrated *by default*.
>And in
>pratice that means they transfer 1TB of RAM during stop-copy. And have
>them like
>1min downtime.
Clear, so it's intentional to have dirty_pages_supported = true and
iommu_dirty_tracking = false for the legacy backend; I'll drop this patch.
Thank you all.
BRs,
Zhenzhong
* [PATCH v4 09/10] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure
  2025-10-29  9:53 [PATCH v4 00/10] vfio: relax the vIOMMU check Zhenzhong Duan
                   ` (7 preceding siblings ...)
  2025-10-29  9:53 ` [PATCH v4 08/10] vfio/migration: Fix a check on vbasedev->iommu_dirty_tracking Zhenzhong Duan
@ 2025-10-29  9:53 ` Zhenzhong Duan
  2025-10-29  9:53 ` [PATCH v4 10/10] vfio/migration: Allow live migration with vIOMMU without VFs using device dirty tracking Zhenzhong Duan
  9 siblings, 0 replies; 17+ messages in thread
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
With the default config, the kernel VFIO IOMMU type1 driver limits the dirty
bitmap to 256MB for the unmap_bitmap ioctl, so the maximum guest memory region
is no more than 8TB for the ioctl to succeed.
Be conservative here and limit total guest memory to the maximum value
supported by the unmap_bitmap ioctl, or else add a migration blocker. The
IOMMUFD backend doesn't have such a limit; one can use an IOMMUFD-backed
device if there is a need to migrate such a large VM.
Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
 hw/vfio/iommufd.c   |  5 +++++
 hw/vfio/migration.c | 29 +++++++++++++++++++++++++++++
 2 files changed, 34 insertions(+)
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index ba5c6b6586..8de765c769 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -633,6 +633,11 @@ skip_ioas_alloc:
     QLIST_INIT(&container->hwpt_list);
 
     bcontainer = VFIO_IOMMU(container);
+
+    /* There is no limitation on dirty bitmap size in IOMMUFD */
+    bcontainer->max_dirty_bitmap_size = UINT64_MAX;
+    bcontainer->dirty_pgsizes = qemu_real_host_page_size();
+
     vfio_address_space_insert(space, bcontainer);
 
     if (!iommufd_cdev_attach_container(vbasedev, container, errp)) {
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 3d819eb87e..0a1d8cac2c 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -16,6 +16,7 @@
 #include <sys/ioctl.h>
 
 #include "system/runstate.h"
+#include "hw/boards.h"
 #include "hw/vfio/vfio-device.h"
 #include "hw/vfio/vfio-migration.h"
 #include "migration/misc.h"
@@ -1152,6 +1153,28 @@ static bool vfio_viommu_preset(VFIODevice *vbasedev)
     return vbasedev->bcontainer->space->as != &address_space_memory;
 }
 
+static bool vfio_dirty_tracking_exceed_limit(VFIODevice *vbasedev)
+{
+    VFIOContainer *bcontainer = vbasedev->bcontainer;
+    uint64_t max_size, page_size;
+
+    /*
+     * The VFIO IOMMU type1 driver limits the bitmap size for the
+     * unmap_bitmap ioctl(); calculate the limit and compare it with the
+     * guest memory size to catch dirty tracking failure early.
+     *
+     * This limit is 8TB with the default kernel and QEMU config. We are a
+     * bit conservative here, as the VM memory layout may be nonconsecutive
+     * or the VM can run with vIOMMU enabled, so the limitation could be
+     * relaxed. One can also switch to the IOMMUFD backend if there is a
+     * need to migrate a large VM.
+     */
+    page_size = 1ULL << ctz64(bcontainer->dirty_pgsizes);
+    max_size = bcontainer->max_dirty_bitmap_size * BITS_PER_BYTE * page_size;
+
+    return current_machine->ram_size > max_size;
+}
+
 /*
  * Return true when either migration initialized or blocker registered.
  * Currently only return false when adding blocker fails which will
@@ -1194,6 +1217,12 @@ bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
 
         warn_report("%s: VFIO device doesn't support device and "
                     "IOMMU dirty tracking", vbasedev->name);
+    } else if (vfio_device_dirty_pages_disabled(vbasedev) &&
+               vfio_dirty_tracking_exceed_limit(vbasedev)) {
+        error_setg(&err, "%s: Migration is currently not supported with large "
+                   "memory VM due to dirty tracking limitation in backend",
+                   vbasedev->name);
+        goto add_blocker;
     }
 
     ret = vfio_block_multiple_devices_migration(vbasedev, errp);
-- 
2.47.1
^ permalink raw reply related	[flat|nested] 17+ messages in thread* [PATCH v4 10/10] vfio/migration: Allow live migration with vIOMMU without VFs using device dirty tracking
  2025-10-29  9:53 [PATCH v4 00/10] vfio: relax the vIOMMU check Zhenzhong Duan
                   ` (8 preceding siblings ...)
  2025-10-29  9:53 ` [PATCH v4 09/10] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure Zhenzhong Duan
@ 2025-10-29  9:53 ` Zhenzhong Duan
  9 siblings, 0 replies; 17+ messages in thread
From: Zhenzhong Duan @ 2025-10-29  9:53 UTC (permalink / raw)
  To: qemu-devel
  Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
	eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
	rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan,
	Jason Zeng
Commit e46883204c38 ("vfio/migration: Block migration with vIOMMU")
introduced a migration blocker when vIOMMU is enabled, because we need
to calculate the IOVA ranges for device dirty tracking. But this is
unnecessary for iommu dirty tracking.
Limit the vfio_viommu_preset() check to those devices which use device
dirty tracking. This allows live migration with VFIO devices which use
iommu dirty tracking.
Suggested-by: Jason Zeng <jason.zeng@intel.com>
Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
 hw/vfio/migration.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 0a1d8cac2c..40f337b798 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -1230,7 +1230,8 @@ bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
         goto out_deinit;
     }
 
-    if (vfio_viommu_preset(vbasedev)) {
+    if (!vfio_device_dirty_pages_disabled(vbasedev) &&
+        vfio_viommu_preset(vbasedev)) {
         error_setg(&err, "%s: Migration is currently not supported "
                    "with vIOMMU enabled", vbasedev->name);
         goto add_blocker;
-- 
2.47.1