* [PATCH v5 0/9] vfio: relax the vIOMMU check
@ 2025-11-06 4:20 Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 1/9] vfio/iommufd: Add framework code to support getting dirty bitmap before unmap Zhenzhong Duan
` (9 more replies)
0 siblings, 10 replies; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
Hi,
This series relaxes the vIOMMU check and allows live migration with a
vIOMMU for VFs that don't use device dirty tracking. It's rewritten based
on the first 4 patches of [1] from Joao.
Currently, what blocks us is the lack of a dirty bitmap query with iommufd
before unmap. By adding that query and handling some corner cases, we can
relax the check.
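In code terms, the iommufd unmap path gains a dirty-bitmap query. Here is
a simplified sketch of what patches 1-2 below implement (error reporting
and the device-dirty-tracking branch are omitted; see the diffs for the
real code):

    /* iommufd_cdev_unmap(), simplified */
    if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer) &&
        !vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
        bcontainer->dirty_pages_supported) {
        /* record dirty bits that the unmap would otherwise throw away */
        ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
                                                iotlb->translated_addr,
                                                &local_err);
        /* unmap even if the query failed, to avoid a stale mapping */
        unmap_ret = iommufd_backend_unmap_dma(be, ioas_id, iova, size);
        return unmap_ret ? : ret;
    }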
Based on vfio-next branch:
patch1-2: add dirty bitmap query with iommufd
patch3: a renaming cleanup
patch4-5: unmap_bitmap optimizations
patch6-7: fixes to avoid losing dirty pages
patch8: add a migration blocker if VM memory is too large for unmap_bitmap
patch9: relax vIOMMU check
We used [2] to test; it contains the dom_switch series + this series +
the nesting series. I included the nesting series just to confirm that the
two patches optimizing out dirty tracking for read-only pages work.
We tested VM live migration (running a QAT workload in the VM) with QAT
device passthrough, in the below matrix of configs, each with guest config
'iommu=pt' and 'iommu=nopt':
1.Scalable mode vIOMMU + IOMMUFD cdev mode
2.Scalable mode vIOMMU + legacy VFIO mode
3.legacy mode vIOMMU + IOMMUFD cdev mode
4.legacy mode vIOMMU + legacy VFIO mode
The QAT workload is a user-level app that uses VFIO to control the QAT device.
Thanks
Zhenzhong
[1] https://github.com/jpemartins/qemu/commits/vfio-migration-viommu/
[2] https://github.com/yiliu1765/qemu/tree/liuyi/zhenzhong/iommufd_nesting.v8.DS_LM.wip
Changelog:
v5:
- drop the patch checking iommu_dirty_tracking (Avihai, Joao)
- pass iotlb info to unmap_bitmap when switching out of the system AS
v4:
- bypass memory size check for device dirty tracking as it's unrelated (Avihai)
- split vfio_device_dirty_pages_disabled() helper out as a separate patch
- add a patch to fix a minor error in checking vbasedev->iommu_dirty_tracking
v3:
- return bitmap query failure to fail migration (Avihai)
- refine patch7, set IOMMUFD backend 'dirty_pgsizes' and 'max_dirty_bitmap_size' (Cedric)
- refine patch7, calculate memory limit instead of hardcoding 8TB (Liuyi)
- refine commit log (Cedric, Liuyi)
v2:
- add backend_flag parameter to pass DIRTY_BITMAP_NO_CLEAR (Joao, Cedric)
- add a cleanup patch to rename vfio_dma_unmap_bitmap (Cedric)
- add a blocker if the unmap_bitmap limit check fails (Liuyi)
Joao Martins (1):
vfio: Add a backend_flag parameter to
vfio_container_query_dirty_bitmap()
Zhenzhong Duan (8):
vfio/iommufd: Add framework code to support getting dirty bitmap
before unmap
vfio/iommufd: Query dirty bitmap before DMA unmap
vfio/container-legacy: rename vfio_dma_unmap_bitmap() to
vfio_legacy_dma_unmap_get_dirty_bitmap()
vfio/iommufd: Add IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR flag support
intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend
vfio/listener: Construct iotlb entry when unmap memory address space
vfio/migration: Add migration blocker if VM memory is too large to
cause unmap_bitmap failure
vfio/migration: Allow live migration with vIOMMU without VFs using
device dirty tracking
include/hw/vfio/vfio-container.h | 8 +++--
include/hw/vfio/vfio-device.h | 10 ++++++
include/system/iommufd.h | 2 +-
backends/iommufd.c | 5 +--
hw/i386/intel_iommu.c | 42 +++++++++++++++++++++++++
hw/vfio-user/container.c | 5 +--
hw/vfio/container-legacy.c | 15 +++++----
hw/vfio/container.c | 20 ++++++------
hw/vfio/device.c | 6 ++++
hw/vfio/iommufd.c | 53 +++++++++++++++++++++++++++++---
hw/vfio/listener.c | 21 ++++++++++---
hw/vfio/migration.c | 40 ++++++++++++++++++++++--
backends/trace-events | 2 +-
hw/vfio/trace-events | 2 +-
14 files changed, 194 insertions(+), 37 deletions(-)
--
2.47.1
* [PATCH v5 1/9] vfio/iommufd: Add framework code to support getting dirty bitmap before unmap
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 2/9] vfio/iommufd: Query dirty bitmap before DMA unmap Zhenzhong Duan
` (8 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
Currently we support both device and IOMMU dirty tracking; device dirty
tracking is preferred.
Add the framework code in iommufd_cdev_unmap() to choose either device or
IOMMU dirty tracking, just like vfio_legacy_dma_unmap_one() does.
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
hw/vfio/iommufd.c | 34 +++++++++++++++++++++++++++++++---
1 file changed, 31 insertions(+), 3 deletions(-)
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index bb5775aa71..806ca6ef14 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -61,14 +61,42 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
IOMMUTLBEntry *iotlb, bool unmap_all)
{
const VFIOIOMMUFDContainer *container = VFIO_IOMMU_IOMMUFD(bcontainer);
+ IOMMUFDBackend *be = container->be;
+ uint32_t ioas_id = container->ioas_id;
+ bool need_dirty_sync = false;
+ Error *local_err = NULL;
+ int ret;
if (unmap_all) {
size = UINT64_MAX;
}
- /* TODO: Handle dma_unmap_bitmap with iotlb args (migration) */
- return iommufd_backend_unmap_dma(container->be,
- container->ioas_id, iova, size);
+ if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
+ if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
+ bcontainer->dirty_pages_supported) {
+ /* TODO: query dirty bitmap before DMA unmap */
+ return iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+ }
+
+ need_dirty_sync = true;
+ }
+
+ ret = iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+ if (ret) {
+ return ret;
+ }
+
+ if (need_dirty_sync) {
+ ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+ iotlb->translated_addr,
+ &local_err);
+ if (ret) {
+ error_report_err(local_err);
+ return ret;
+ }
+ }
+
+ return 0;
}
static bool iommufd_cdev_kvm_device_add(VFIODevice *vbasedev, Error **errp)
--
2.47.1
* [PATCH v5 2/9] vfio/iommufd: Query dirty bitmap before DMA unmap
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 1/9] vfio/iommufd: Add framework code to support getting dirty bitmap before unmap Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 3/9] vfio/container-legacy: rename vfio_dma_unmap_bitmap() to vfio_legacy_dma_unmap_get_dirty_bitmap() Zhenzhong Duan
` (7 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
When an existing mapping is unmapped, there may already be dirty bits
which need to be recorded before the unmap.
If the dirty bitmap query fails, we still need to do the unmapping, or
else a stale mapping remains, which is risky to the guest.
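Note the return convention in the diff below: it uses the GCC '?:'
extension, so 'unmap_ret ? : ret' propagates the unmap error when the
unmap fails, and otherwise the (possibly failed) bitmap query result, so
a lost dirty query still fails migration.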
Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
hw/vfio/iommufd.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index 806ca6ef14..5f96a41246 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -65,7 +65,7 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
uint32_t ioas_id = container->ioas_id;
bool need_dirty_sync = false;
Error *local_err = NULL;
- int ret;
+ int ret, unmap_ret;
if (unmap_all) {
size = UINT64_MAX;
@@ -74,8 +74,21 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
bcontainer->dirty_pages_supported) {
- /* TODO: query dirty bitmap before DMA unmap */
- return iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+ ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+ iotlb->translated_addr,
+ &local_err);
+ if (ret) {
+ error_report_err(local_err);
+ }
+ /* Unmap stale mapping even if query dirty bitmap fails */
+ unmap_ret = iommufd_backend_unmap_dma(be, ioas_id, iova, size);
+
+ /*
+ * If dirty tracking fails, return the failure to VFIO core to
+             * fail the migration, or else some dirty pages will be
+             * missed and never migrated.
+ */
+ return unmap_ret ? : ret;
}
need_dirty_sync = true;
--
2.47.1
* [PATCH v5 3/9] vfio/container-legacy: rename vfio_dma_unmap_bitmap() to vfio_legacy_dma_unmap_get_dirty_bitmap()
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 1/9] vfio/iommufd: Add framework code to support getting dirty bitmap before unmap Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 2/9] vfio/iommufd: Query dirty bitmap before DMA unmap Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 4/9] vfio: Add a backend_flag parameter to vfio_container_query_dirty_bitmap() Zhenzhong Duan
` (6 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
This follows the naming style in container-legacy.c of giving low-level
functions a vfio_legacy_ prefix.
No functional change.
Suggested-by: Cédric Le Goater <clg@redhat.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
---
hw/vfio/container-legacy.c | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
index 8e9639603e..b7e3b892b9 100644
--- a/hw/vfio/container-legacy.c
+++ b/hw/vfio/container-legacy.c
@@ -68,9 +68,10 @@ static int vfio_ram_block_discard_disable(VFIOLegacyContainer *container,
}
}
-static int vfio_dma_unmap_bitmap(const VFIOLegacyContainer *container,
- hwaddr iova, uint64_t size,
- IOMMUTLBEntry *iotlb)
+static int
+vfio_legacy_dma_unmap_get_dirty_bitmap(const VFIOLegacyContainer *container,
+ hwaddr iova, uint64_t size,
+ IOMMUTLBEntry *iotlb)
{
const VFIOContainer *bcontainer = VFIO_IOMMU(container);
struct vfio_iommu_type1_dma_unmap *unmap;
@@ -141,7 +142,8 @@ static int vfio_legacy_dma_unmap_one(const VFIOLegacyContainer *container,
if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
bcontainer->dirty_pages_supported) {
- return vfio_dma_unmap_bitmap(container, iova, size, iotlb);
+ return vfio_legacy_dma_unmap_get_dirty_bitmap(container, iova, size,
+ iotlb);
}
need_dirty_sync = true;
--
2.47.1
* [PATCH v5 4/9] vfio: Add a backend_flag parameter to vfio_container_query_dirty_bitmap()
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
` (2 preceding siblings ...)
2025-11-06 4:20 ` [PATCH v5 3/9] vfio/container-legacy: rename vfio_dma_unmap_bitmap() to vfio_legacy_dma_unmap_get_dirty_bitmap() Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 5/9] vfio/iommufd: Add IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR flag support Zhenzhong Duan
` (5 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
From: Joao Martins <joao.m.martins@oracle.com>
This new parameter will be used in a following patch; currently 0 is passed.
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
include/hw/vfio/vfio-container.h | 8 ++++++--
hw/vfio-user/container.c | 5 +++--
hw/vfio/container-legacy.c | 5 +++--
hw/vfio/container.c | 15 +++++++++------
hw/vfio/iommufd.c | 7 ++++---
hw/vfio/listener.c | 6 +++---
hw/vfio/trace-events | 2 +-
7 files changed, 29 insertions(+), 19 deletions(-)
diff --git a/include/hw/vfio/vfio-container.h b/include/hw/vfio/vfio-container.h
index c4b58d664b..9f6e8cedfc 100644
--- a/include/hw/vfio/vfio-container.h
+++ b/include/hw/vfio/vfio-container.h
@@ -99,7 +99,9 @@ bool vfio_container_devices_dirty_tracking_is_supported(
const VFIOContainer *bcontainer);
int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
uint64_t iova, uint64_t size,
- hwaddr translated_addr, Error **errp);
+ uint64_t backend_flag,
+ hwaddr translated_addr,
+ Error **errp);
GList *vfio_container_get_iova_ranges(const VFIOContainer *bcontainer);
@@ -253,12 +255,14 @@ struct VFIOIOMMUClass {
* @vbmap: #VFIOBitmap internal bitmap structure
* @iova: iova base address
* @size: size of iova range
+ * @backend_flag: flags for backend, opaque to upper layer container
* @errp: pointer to Error*, to store an error if it happens.
*
* Returns zero to indicate success and negative for error.
*/
int (*query_dirty_bitmap)(const VFIOContainer *bcontainer,
- VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp);
+ VFIOBitmap *vbmap, hwaddr iova, hwaddr size,
+ uint64_t backend_flag, Error **errp);
/* PCI specific */
int (*pci_hot_reset)(VFIODevice *vbasedev, bool single);
diff --git a/hw/vfio-user/container.c b/hw/vfio-user/container.c
index e45192fef6..3ce6ea12db 100644
--- a/hw/vfio-user/container.c
+++ b/hw/vfio-user/container.c
@@ -162,8 +162,9 @@ vfio_user_set_dirty_page_tracking(const VFIOContainer *bcontainer,
}
static int vfio_user_query_dirty_bitmap(const VFIOContainer *bcontainer,
- VFIOBitmap *vbmap, hwaddr iova,
- hwaddr size, Error **errp)
+ VFIOBitmap *vbmap, hwaddr iova,
+ hwaddr size, uint64_t backend_flag,
+ Error **errp)
{
error_setg_errno(errp, ENOTSUP, "Not supported");
return -ENOTSUP;
diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
index b7e3b892b9..dd9c4a6a5a 100644
--- a/hw/vfio/container-legacy.c
+++ b/hw/vfio/container-legacy.c
@@ -154,7 +154,7 @@ static int vfio_legacy_dma_unmap_one(const VFIOLegacyContainer *container,
}
if (need_dirty_sync) {
- ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+ ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
iotlb->translated_addr, &local_err);
if (ret) {
error_report_err(local_err);
@@ -255,7 +255,8 @@ vfio_legacy_set_dirty_page_tracking(const VFIOContainer *bcontainer,
}
static int vfio_legacy_query_dirty_bitmap(const VFIOContainer *bcontainer,
- VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp)
+ VFIOBitmap *vbmap, hwaddr iova, hwaddr size,
+ uint64_t backend_flag, Error **errp)
{
const VFIOLegacyContainer *container = VFIO_IOMMU_LEGACY(bcontainer);
struct vfio_iommu_type1_dirty_bitmap *dbitmap;
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index 9ddec300e3..7706603c1c 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -213,13 +213,13 @@ static int vfio_device_dma_logging_report(VFIODevice *vbasedev, hwaddr iova,
static int vfio_container_iommu_query_dirty_bitmap(
const VFIOContainer *bcontainer, VFIOBitmap *vbmap, hwaddr iova,
- hwaddr size, Error **errp)
+ hwaddr size, uint64_t backend_flag, Error **errp)
{
VFIOIOMMUClass *vioc = VFIO_IOMMU_GET_CLASS(bcontainer);
g_assert(vioc->query_dirty_bitmap);
return vioc->query_dirty_bitmap(bcontainer, vbmap, iova, size,
- errp);
+ backend_flag, errp);
}
static int vfio_container_devices_query_dirty_bitmap(
@@ -247,7 +247,9 @@ static int vfio_container_devices_query_dirty_bitmap(
int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
uint64_t iova, uint64_t size,
- hwaddr translated_addr, Error **errp)
+ uint64_t backend_flag,
+ hwaddr translated_addr,
+ Error **errp)
{
bool all_device_dirty_tracking =
vfio_container_devices_dirty_tracking_is_supported(bcontainer);
@@ -274,7 +276,7 @@ int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
errp);
} else {
ret = vfio_container_iommu_query_dirty_bitmap(bcontainer, &vbmap, iova, size,
- errp);
+ backend_flag, errp);
}
if (ret) {
@@ -285,8 +287,9 @@ int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
translated_addr,
vbmap.pages);
- trace_vfio_container_query_dirty_bitmap(iova, size, vbmap.size,
- translated_addr, dirty_pages);
+ trace_vfio_container_query_dirty_bitmap(iova, size, backend_flag,
+ vbmap.size, translated_addr,
+ dirty_pages);
out:
g_free(vbmap.bitmap);
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index 5f96a41246..b59a8639c6 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -74,7 +74,7 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
bcontainer->dirty_pages_supported) {
- ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+ ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
iotlb->translated_addr,
&local_err);
if (ret) {
@@ -100,7 +100,7 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
}
if (need_dirty_sync) {
- ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+ ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
iotlb->translated_addr,
&local_err);
if (ret) {
@@ -216,7 +216,8 @@ err:
static int iommufd_query_dirty_bitmap(const VFIOContainer *bcontainer,
VFIOBitmap *vbmap, hwaddr iova,
- hwaddr size, Error **errp)
+ hwaddr size, uint64_t backend_flag,
+ Error **errp)
{
VFIOIOMMUFDContainer *container = VFIO_IOMMU_IOMMUFD(bcontainer);
unsigned long page_size = qemu_real_host_page_size();
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 2d7d3a4645..2109101158 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -1083,7 +1083,7 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
translated_addr = memory_region_get_ram_addr(mr) + xlat;
ret = vfio_container_query_dirty_bitmap(bcontainer, iova, iotlb->addr_mask + 1,
- translated_addr, &local_err);
+ 0, translated_addr, &local_err);
if (ret) {
error_prepend(&local_err,
"vfio_iommu_map_dirty_notify(%p, 0x%"HWADDR_PRIx", "
@@ -1119,7 +1119,7 @@ static int vfio_ram_discard_query_dirty_bitmap(MemoryRegionSection *section,
* Sync the whole mapped region (spanning multiple individual mappings)
* in one go.
*/
- ret = vfio_container_query_dirty_bitmap(vrdl->bcontainer, iova, size,
+ ret = vfio_container_query_dirty_bitmap(vrdl->bcontainer, iova, size, 0,
translated_addr, &local_err);
if (ret) {
error_report_err(local_err);
@@ -1204,7 +1204,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *bcontainer,
return vfio_container_query_dirty_bitmap(bcontainer,
REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
- int128_get64(section->size), translated_addr, errp);
+ int128_get64(section->size), 0, translated_addr, errp);
}
static void vfio_listener_log_sync(MemoryListener *listener,
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 1e895448cd..3c62bab764 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -105,7 +105,7 @@ vfio_device_dirty_tracking_start(int nr_ranges, uint64_t min32, uint64_t max32,
vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
# container.c
-vfio_container_query_dirty_bitmap(uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t translated_addr, uint64_t dirty_pages) "iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" gpa=0x%"PRIx64" dirty_pages=%"PRIu64
+vfio_container_query_dirty_bitmap(uint64_t iova, uint64_t size, uint64_t backend_flag, uint64_t bitmap_size, uint64_t translated_addr, uint64_t dirty_pages) "iova=0x%"PRIx64" size=0x%"PRIx64" backend_flag=0x%"PRIx64" bitmap_size=0x%"PRIx64" gpa=0x%"PRIx64" dirty_pages=%"PRIu64
# container-legacy.c
vfio_container_disconnect(int fd) "close container->fd=%d"
--
2.47.1
* [PATCH v5 5/9] vfio/iommufd: Add IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR flag support
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
` (3 preceding siblings ...)
2025-11-06 4:20 ` [PATCH v5 4/9] vfio: Add a backend_flag parameter to vfio_container_query_dirty_bitmap() Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 6/9] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend Zhenzhong Duan
` (4 subsequent siblings)
9 siblings, 0 replies; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
Pass IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR when doing the last dirty
bitmap query right before unmap, so the kernel skips clearing the dirty
bits it reports and no PTE flushes are needed. This accelerates the query
without issue because the unmap will tear down the mapping anyway.
Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
include/system/iommufd.h | 2 +-
backends/iommufd.c | 5 +++--
hw/vfio/iommufd.c | 5 +++--
backends/trace-events | 2 +-
4 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/include/system/iommufd.h b/include/system/iommufd.h
index a659f36a20..767a8e4cb6 100644
--- a/include/system/iommufd.h
+++ b/include/system/iommufd.h
@@ -64,7 +64,7 @@ bool iommufd_backend_set_dirty_tracking(IOMMUFDBackend *be, uint32_t hwpt_id,
bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be, uint32_t hwpt_id,
uint64_t iova, ram_addr_t size,
uint64_t page_size, uint64_t *data,
- Error **errp);
+ uint64_t flags, Error **errp);
bool iommufd_backend_invalidate_cache(IOMMUFDBackend *be, uint32_t id,
uint32_t data_type, uint32_t entry_len,
uint32_t *entry_num, void *data,
diff --git a/backends/iommufd.c b/backends/iommufd.c
index fdfb7c9d67..086bd67aea 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -361,7 +361,7 @@ bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be,
uint32_t hwpt_id,
uint64_t iova, ram_addr_t size,
uint64_t page_size, uint64_t *data,
- Error **errp)
+ uint64_t flags, Error **errp)
{
int ret;
struct iommu_hwpt_get_dirty_bitmap get_dirty_bitmap = {
@@ -371,11 +371,12 @@ bool iommufd_backend_get_dirty_bitmap(IOMMUFDBackend *be,
.length = size,
.page_size = page_size,
.data = (uintptr_t)data,
+ .flags = flags,
};
ret = ioctl(be->fd, IOMMU_HWPT_GET_DIRTY_BITMAP, &get_dirty_bitmap);
trace_iommufd_backend_get_dirty_bitmap(be->fd, hwpt_id, iova, size,
- page_size, ret ? errno : 0);
+ flags, page_size, ret ? errno : 0);
if (ret) {
error_setg_errno(errp, errno,
"IOMMU_HWPT_GET_DIRTY_BITMAP (iova: 0x%"HWADDR_PRIx
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index b59a8639c6..ba5c6b6586 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -74,7 +74,8 @@ static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
if (iotlb && vfio_container_dirty_tracking_is_started(bcontainer)) {
if (!vfio_container_devices_dirty_tracking_is_supported(bcontainer) &&
bcontainer->dirty_pages_supported) {
- ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size, 0,
+ ret = vfio_container_query_dirty_bitmap(bcontainer, iova, size,
+ IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR,
iotlb->translated_addr,
&local_err);
if (ret) {
@@ -231,7 +232,7 @@ static int iommufd_query_dirty_bitmap(const VFIOContainer *bcontainer,
if (!iommufd_backend_get_dirty_bitmap(container->be, hwpt->hwpt_id,
iova, size, page_size,
(uint64_t *)vbmap->bitmap,
- errp)) {
+ backend_flag, errp)) {
return -EINVAL;
}
}
diff --git a/backends/trace-events b/backends/trace-events
index 56132d3fd2..e1992ba12f 100644
--- a/backends/trace-events
+++ b/backends/trace-events
@@ -19,5 +19,5 @@ iommufd_backend_alloc_ioas(int iommufd, uint32_t ioas) " iommufd=%d ioas=%d"
iommufd_backend_alloc_hwpt(int iommufd, uint32_t dev_id, uint32_t pt_id, uint32_t flags, uint32_t hwpt_type, uint32_t len, uint64_t data_ptr, uint32_t out_hwpt_id, int ret) " iommufd=%d dev_id=%u pt_id=%u flags=0x%x hwpt_type=%u len=%u data_ptr=0x%"PRIx64" out_hwpt=%u (%d)"
iommufd_backend_free_id(int iommufd, uint32_t id, int ret) " iommufd=%d id=%d (%d)"
iommufd_backend_set_dirty(int iommufd, uint32_t hwpt_id, bool start, int ret) " iommufd=%d hwpt=%u enable=%d (%d)"
-iommufd_backend_get_dirty_bitmap(int iommufd, uint32_t hwpt_id, uint64_t iova, uint64_t size, uint64_t page_size, int ret) " iommufd=%d hwpt=%u iova=0x%"PRIx64" size=0x%"PRIx64" page_size=0x%"PRIx64" (%d)"
+iommufd_backend_get_dirty_bitmap(int iommufd, uint32_t hwpt_id, uint64_t iova, uint64_t size, uint64_t flags, uint64_t page_size, int ret) " iommufd=%d hwpt=%u iova=0x%"PRIx64" size=0x%"PRIx64" flags=0x%"PRIx64" page_size=0x%"PRIx64" (%d)"
iommufd_backend_invalidate_cache(int iommufd, uint32_t id, uint32_t data_type, uint32_t entry_len, uint32_t entry_num, uint32_t done_num, uint64_t data_ptr, int ret) " iommufd=%d id=%u data_type=%u entry_len=%u entry_num=%u done_num=%u data_ptr=0x%"PRIx64" (%d)"
--
2.47.1
* [PATCH v5 6/9] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
` (4 preceding siblings ...)
2025-11-06 4:20 ` [PATCH v5 5/9] vfio/iommufd: Add IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR flag support Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-28 3:51 ` Duan, Zhenzhong
2025-11-06 4:20 ` [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space Zhenzhong Duan
` (3 subsequent siblings)
9 siblings, 1 reply; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
If a VFIO device in the guest switches from an IOMMU domain to a blocking
domain, vtd_address_space_unmap() is called to unmap the whole address
space. If that happens during migration, migration fails with the legacy
VFIO backend as below:
Status: failed (vfio_container_dma_unmap(0x561bbbd92d90, 0x100000000000, 0x100000000000) = -7 (Argument list too long))
Because legacy VFIO limits the maximum bitmap size to 256MB, which maps to
8TB on a 4K-page system, the unmap_bitmap ioctl fails when a 16TB-sized
UNMAP notification is sent. Normally such large UNMAP notifications come
from the IOVA range rather than from system memory.
Apart from that, vtd_address_space_unmap() sends the UNMAP notification
with translated_addr = 0, because there is no valid translated_addr when
unmapping a whole IOMMU memory region. This breaks dirty tracking no
matter which VFIO backend is used.
Fix both issues by iterating over the DMAMap list to unmap each range with
an active mapping while migration is active. If migration is not active,
unmapping the whole address space in one go remains optimal.
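As a concrete example with 4KB pages: a single 16TB UNMAP notification
needs a (16TB / 4KB) / 8 = 512MB dirty bitmap, double the 256MB cap,
whereas per-DMAMap notifications keep each bitmap well within the limit.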
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
hw/i386/intel_iommu.c | 42 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index c402643b56..8e98b0b71d 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -37,6 +37,7 @@
#include "system/system.h"
#include "hw/i386/apic_internal.h"
#include "kvm/kvm_i386.h"
+#include "migration/misc.h"
#include "migration/vmstate.h"
#include "trace.h"
@@ -4695,6 +4696,42 @@ static void vtd_dev_unset_iommu_device(PCIBus *bus, void *opaque, int devfn)
vtd_iommu_unlock(s);
}
+/*
+ * Unmapping a large range in one go is not optimal during migration because
+ * a large dirty bitmap needs to be allocated even when there are only small
+ * mappings. Iterate over the DMAMap list to unmap each actively mapped range.
+ */
+static void vtd_address_space_unmap_in_migration(VTDAddressSpace *as,
+ IOMMUNotifier *n)
+{
+ const DMAMap *map;
+ const DMAMap target = {
+ .iova = n->start,
+ .size = n->end,
+ };
+ IOVATree *tree = as->iova_tree;
+
+ /*
+ * DMAMap is created during IOMMU page table sync, it's either 4KB or huge
+ * page size and always a power of 2 in size. So the range of DMAMap could
+ * be used for UNMAP notification directly.
+ */
+ while ((map = iova_tree_find(tree, &target))) {
+ IOMMUTLBEvent event;
+
+ event.type = IOMMU_NOTIFIER_UNMAP;
+ event.entry.iova = map->iova;
+ event.entry.addr_mask = map->size;
+ event.entry.target_as = &address_space_memory;
+ event.entry.perm = IOMMU_NONE;
+        /* This field is needed to set the dirty bitmap */
+ event.entry.translated_addr = map->translated_addr;
+ memory_region_notify_iommu_one(n, &event);
+
+ iova_tree_remove(tree, *map);
+ }
+}
+
/* Unmap the whole range in the notifier's scope. */
static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
{
@@ -4704,6 +4741,11 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
IntelIOMMUState *s = as->iommu_state;
DMAMap map;
+ if (migration_is_running()) {
+ vtd_address_space_unmap_in_migration(as, n);
+ return;
+ }
+
/*
* Note: all the codes in this function has a assumption that IOVA
* bits are no more than VTD_MGAW bits (which is restricted by
--
2.47.1
* [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
` (5 preceding siblings ...)
2025-11-06 4:20 ` [PATCH v5 6/9] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-25 10:04 ` Yi Liu
2025-11-06 4:20 ` [PATCH v5 8/9] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure Zhenzhong Duan
` (2 subsequent siblings)
9 siblings, 1 reply; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
If a VFIO device in the guest switches from a passthrough (PT) domain to
a blocking domain, the whole memory address space is unmapped, but we
passed a NULL iotlb entry to unmap_bitmap, so the bitmap query didn't
happen and we lost dirty pages.
By constructing an iotlb entry with iova = gpa for unmap_bitmap, it can
set the dirty bits correctly.
For an IOMMU address space, we still send a NULL iotlb because VFIO
doesn't know the actual mappings in the guest. It's the vIOMMU's
responsibility to send the actual unmapping notifications, e.g., via
vtd_address_space_unmap_in_migration().
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
---
hw/vfio/listener.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 2109101158..3b48f6796c 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -713,14 +713,27 @@ static void vfio_listener_region_del(MemoryListener *listener,
if (try_unmap) {
bool unmap_all = false;
+ IOMMUTLBEntry entry = {}, *iotlb = NULL;
if (int128_eq(llsize, int128_2_64())) {
assert(!iova);
unmap_all = true;
llsize = int128_zero();
}
+
+ /*
+     * Fake an IOTLB entry for the identity mapping, needed by dirty
+     * tracking. In fact, in unmap_bitmap, only the translated_addr
+     * field is used to set the dirty bitmap.
+ */
+ if (!memory_region_is_iommu(section->mr)) {
+ entry.iova = iova;
+ entry.translated_addr = iova;
+ iotlb = &entry;
+ }
+
ret = vfio_container_dma_unmap(bcontainer, iova, int128_get64(llsize),
- NULL, unmap_all);
+ iotlb, unmap_all);
if (ret) {
error_report("vfio_container_dma_unmap(%p, 0x%"HWADDR_PRIx", "
"0x%"HWADDR_PRIx") = %d (%s)",
--
2.47.1
* [PATCH v5 8/9] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
` (6 preceding siblings ...)
2025-11-06 4:20 ` [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-25 9:20 ` Yi Liu
2025-11-06 4:20 ` [PATCH v5 9/9] vfio/migration: Allow live migration with vIOMMU without VFs using device dirty tracking Zhenzhong Duan
2025-11-20 9:29 ` [PATCH v5 0/9] vfio: relax the vIOMMU check Duan, Zhenzhong
9 siblings, 1 reply; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan
With the default config, the kernel VFIO IOMMU type1 driver limits the
dirty bitmap to 256MB for the unmap_bitmap ioctl, so the maximum guest
memory region is no more than 8TB for the ioctl to succeed.
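(Arithmetic with 4KB pages: a 256MB bitmap holds 256M * 8 = 2^31 bits,
one bit per 4KB page, covering 2^31 * 4KB = 8TB of guest memory. The
vfio_dirty_tracking_exceed_limit() helper below derives the same limit
from dirty_pgsizes and max_dirty_bitmap_size.)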
Be conservative here and limit total guest memory to the max value
supported by the unmap_bitmap ioctl, or else add a migration blocker. The
IOMMUFD backend doesn't have such a limit; one can use it if there is a
need to migrate such a large VM.
Suggested-by: Yi Liu <yi.l.liu@intel.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
---
hw/vfio/migration.c | 34 ++++++++++++++++++++++++++++++++++
1 file changed, 34 insertions(+)
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 4c06e3db93..86e5b7ab55 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -16,6 +16,7 @@
#include <sys/ioctl.h>
#include "system/runstate.h"
+#include "hw/boards.h"
#include "hw/vfio/vfio-device.h"
#include "hw/vfio/vfio-migration.h"
#include "migration/misc.h"
@@ -1152,6 +1153,32 @@ static bool vfio_viommu_preset(VFIODevice *vbasedev)
return vbasedev->bcontainer->space->as != &address_space_memory;
}
+static bool vfio_dirty_tracking_exceed_limit(VFIODevice *vbasedev)
+{
+ VFIOContainer *bcontainer = vbasedev->bcontainer;
+ uint64_t max_size, page_size;
+
+ if (!bcontainer->dirty_pages_supported) {
+ return false;
+ }
+
+ /*
+ * VFIO IOMMU type1 driver has limitation of bitmap size on unmap_bitmap
+ * ioctl(), calculate the limit and compare with guest memory size to
+ * catch dirty tracking failure early.
+ *
+     * This limit is 8TB with the default kernel and QEMU config. We are a
+     * bit conservative here, as the VM memory layout may be non-contiguous
+     * or the VM can run with a vIOMMU, so the limit could be relaxed. One
+     * can also switch to the IOMMUFD backend if there is a need to migrate
+     * a large VM.
+ */
+ page_size = 1 << ctz64(bcontainer->dirty_pgsizes);
+ max_size = bcontainer->max_dirty_bitmap_size * BITS_PER_BYTE * page_size;
+
+ return current_machine->ram_size > max_size;
+}
+
/*
* Return true when either migration initialized or blocker registered.
* Currently only return false when adding blocker fails which will
@@ -1193,6 +1220,13 @@ bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
goto add_blocker;
}
+ if (vfio_dirty_tracking_exceed_limit(vbasedev)) {
+ error_setg(&err, "%s: Migration is currently not supported with "
+ "large memory VM due to dirty tracking limitation in "
+ "backend", vbasedev->name);
+ goto add_blocker;
+ }
+
warn_report("%s: VFIO device doesn't support device and "
"IOMMU dirty tracking", vbasedev->name);
}
--
2.47.1
* [PATCH v5 9/9] vfio/migration: Allow live migration with vIOMMU without VFs using device dirty tracking
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
` (7 preceding siblings ...)
2025-11-06 4:20 ` [PATCH v5 8/9] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure Zhenzhong Duan
@ 2025-11-06 4:20 ` Zhenzhong Duan
2025-11-20 9:29 ` [PATCH v5 0/9] vfio: relax the vIOMMU check Duan, Zhenzhong
9 siblings, 0 replies; 18+ messages in thread
From: Zhenzhong Duan @ 2025-11-06 4:20 UTC (permalink / raw)
To: qemu-devel
Cc: alex, clg, mst, jasowang, yi.l.liu, clement.mathieu--drif,
eric.auger, joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu,
rohith.s.r, mark.gross, arjan.van.de.ven, Zhenzhong Duan,
Jason Zeng
Commit e46883204c38 ("vfio/migration: Block migration with vIOMMU")
introduced a migration blocker when a vIOMMU is enabled, because we need
to calculate the IOVA ranges for device dirty tracking. But this is
unnecessary for IOMMU dirty tracking.
Limit the vfio_viommu_preset() check to those devices which use device
dirty tracking. This allows live migration with VFIO devices that use
IOMMU dirty tracking.
Suggested-by: Jason Zeng <jason.zeng@intel.com>
Co-developed-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
Tested-by: Xudong Hao <xudong.hao@intel.com>
Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Tested-by: Rohith S R <rohith.s.r@intel.com>
---
include/hw/vfio/vfio-device.h | 10 ++++++++++
hw/vfio/container.c | 5 +----
hw/vfio/device.c | 6 ++++++
hw/vfio/migration.c | 6 +++---
4 files changed, 20 insertions(+), 7 deletions(-)
diff --git a/include/hw/vfio/vfio-device.h b/include/hw/vfio/vfio-device.h
index 0fe6c60ba2..a0b8fc2eb6 100644
--- a/include/hw/vfio/vfio-device.h
+++ b/include/hw/vfio/vfio-device.h
@@ -148,6 +148,16 @@ bool vfio_device_irq_set_signaling(VFIODevice *vbasedev, int index, int subindex
void vfio_device_reset_handler(void *opaque);
bool vfio_device_is_mdev(VFIODevice *vbasedev);
+/**
+ * vfio_device_dirty_pages_disabled: Check if device dirty tracking will be
+ * used for a VFIO device
+ *
+ * @vbasedev: The VFIODevice to check
+ *
+ * Return: true if @vbasedev doesn't support device dirty tracking or if
+ * it is forcibly disabled from the command line, otherwise false.
+ */
+bool vfio_device_dirty_pages_disabled(VFIODevice *vbasedev);
bool vfio_device_hiod_create_and_realize(VFIODevice *vbasedev,
const char *typename, Error **errp);
bool vfio_device_attach(char *name, VFIODevice *vbasedev,
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index 7706603c1c..8879da78c8 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -178,10 +178,7 @@ bool vfio_container_devices_dirty_tracking_is_supported(
VFIODevice *vbasedev;
QLIST_FOREACH(vbasedev, &bcontainer->device_list, container_next) {
- if (vbasedev->device_dirty_page_tracking == ON_OFF_AUTO_OFF) {
- return false;
- }
- if (!vbasedev->dirty_pages_supported) {
+ if (vfio_device_dirty_pages_disabled(vbasedev)) {
return false;
}
}
diff --git a/hw/vfio/device.c b/hw/vfio/device.c
index 8b63e765ac..5ed3103e72 100644
--- a/hw/vfio/device.c
+++ b/hw/vfio/device.c
@@ -411,6 +411,12 @@ bool vfio_device_is_mdev(VFIODevice *vbasedev)
return subsys && (strcmp(subsys, "/sys/bus/mdev") == 0);
}
+bool vfio_device_dirty_pages_disabled(VFIODevice *vbasedev)
+{
+ return (!vbasedev->dirty_pages_supported ||
+ vbasedev->device_dirty_page_tracking == ON_OFF_AUTO_OFF);
+}
+
bool vfio_device_hiod_create_and_realize(VFIODevice *vbasedev,
const char *typename, Error **errp)
{
diff --git a/hw/vfio/migration.c b/hw/vfio/migration.c
index 86e5b7ab55..c0b7d3434f 100644
--- a/hw/vfio/migration.c
+++ b/hw/vfio/migration.c
@@ -1210,8 +1210,7 @@ bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
return !vfio_block_migration(vbasedev, err, errp);
}
- if ((!vbasedev->dirty_pages_supported ||
- vbasedev->device_dirty_page_tracking == ON_OFF_AUTO_OFF) &&
+ if (vfio_device_dirty_pages_disabled(vbasedev) &&
!vbasedev->iommu_dirty_tracking) {
if (vbasedev->enable_migration == ON_OFF_AUTO_AUTO) {
error_setg(&err,
@@ -1236,7 +1235,8 @@ bool vfio_migration_realize(VFIODevice *vbasedev, Error **errp)
goto out_deinit;
}
- if (vfio_viommu_preset(vbasedev)) {
+ if (!vfio_device_dirty_pages_disabled(vbasedev) &&
+ vfio_viommu_preset(vbasedev)) {
error_setg(&err, "%s: Migration is currently not supported "
"with vIOMMU enabled", vbasedev->name);
goto add_blocker;
--
2.47.1
* RE: [PATCH v5 0/9] vfio: relax the vIOMMU check
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
` (8 preceding siblings ...)
2025-11-06 4:20 ` [PATCH v5 9/9] vfio/migration: Allow live migration with vIOMMU without VFs using device dirty tracking Zhenzhong Duan
@ 2025-11-20 9:29 ` Duan, Zhenzhong
2025-11-25 7:40 ` Cédric Le Goater
9 siblings, 1 reply; 18+ messages in thread
From: Duan, Zhenzhong @ 2025-11-20 9:29 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
jasowang@redhat.com, Liu, Yi L, clement.mathieu--drif@eviden.com,
eric.auger@redhat.com, joao.m.martins@oracle.com,
avihaih@nvidia.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
Gross, Mark, Van De Ven, Arjan
Hi All,
Kindly ping😊, any more comments?
Thanks
Zhenzhong
>-----Original Message-----
>From: Duan, Zhenzhong <zhenzhong.duan@intel.com>
>Subject: [PATCH v5 0/9] vfio: relax the vIOMMU check
>
>[snip]
* Re: [PATCH v5 0/9] vfio: relax the vIOMMU check
2025-11-20 9:29 ` [PATCH v5 0/9] vfio: relax the vIOMMU check Duan, Zhenzhong
@ 2025-11-25 7:40 ` Cédric Le Goater
0 siblings, 0 replies; 18+ messages in thread
From: Cédric Le Goater @ 2025-11-25 7:40 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex@shazbot.org, mst@redhat.com, jasowang@redhat.com, Liu, Yi L,
clement.mathieu--drif@eviden.com, eric.auger@redhat.com,
joao.m.martins@oracle.com, avihaih@nvidia.com, Hao, Xudong,
Cabiddu, Giovanni, Rohith S R, Gross, Mark, Van De Ven, Arjan
On 11/20/25 10:29, Duan, Zhenzhong wrote:
> Hi All,
>
> Kindly ping😊, any more comments?
Yes ! These are for QEMU 11.0 and it would be nice to have
some Acks on the remaining patches.
Thanks,
C.
* Re: [PATCH v5 8/9] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure
2025-11-06 4:20 ` [PATCH v5 8/9] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure Zhenzhong Duan
@ 2025-11-25 9:20 ` Yi Liu
0 siblings, 0 replies; 18+ messages in thread
From: Yi Liu @ 2025-11-25 9:20 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex, clg, mst, jasowang, clement.mathieu--drif, eric.auger,
joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu, rohith.s.r,
mark.gross, arjan.van.de.ven
On 2025/11/6 12:20, Zhenzhong Duan wrote:
> With the default config, the kernel VFIO IOMMU type1 driver limits the
> dirty bitmap to 256MB for the unmap_bitmap ioctl, so the maximum guest
> memory region is no more than 8TB for the ioctl to succeed.
>
> Be conservative here and limit total guest memory to the max value
> supported by the unmap_bitmap ioctl, or else add a migration blocker. The
> IOMMUFD backend doesn't have such a limit; one can use it if there is a
> need to migrate such a large VM.
>
> Suggested-by: Yi Liu <yi.l.liu@intel.com>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
> [snip]
* Re: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space
2025-11-06 4:20 ` [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space Zhenzhong Duan
@ 2025-11-25 10:04 ` Yi Liu
2025-11-26 5:45 ` Duan, Zhenzhong
0 siblings, 1 reply; 18+ messages in thread
From: Yi Liu @ 2025-11-25 10:04 UTC (permalink / raw)
To: Zhenzhong Duan, qemu-devel
Cc: alex, clg, mst, jasowang, clement.mathieu--drif, eric.auger,
joao.m.martins, avihaih, xudong.hao, giovanni.cabiddu, rohith.s.r,
mark.gross, arjan.van.de.ven
On 2025/11/6 12:20, Zhenzhong Duan wrote:
> If a VFIO device in the guest switches from a passthrough (PT) domain to
> a blocking domain, the whole memory address space is unmapped, but we
> passed a NULL iotlb entry to unmap_bitmap, so the bitmap query didn't
> happen and we lost dirty pages.
This is a good catch. :) Have you observed the problem in testing, or
just identified it during patch iteration?
> By constructing an iotlb entry with iova = gpa for unmap_bitmap, it can
> set the dirty bits correctly.
>
> For an IOMMU address space, we still send a NULL iotlb because VFIO
> doesn't know the actual mappings in the guest. It's the vIOMMU's
> responsibility to send the actual unmapping notifications, e.g., via
> vtd_address_space_unmap_in_migration().
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
> ---
> hw/vfio/listener.c | 15 ++++++++++++++-
> 1 file changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
> index 2109101158..3b48f6796c 100644
> --- a/hw/vfio/listener.c
> +++ b/hw/vfio/listener.c
> @@ -713,14 +713,27 @@ static void vfio_listener_region_del(MemoryListener *listener,
>
> if (try_unmap) {
> bool unmap_all = false;
> + IOMMUTLBEntry entry = {}, *iotlb = NULL;
>
> if (int128_eq(llsize, int128_2_64())) {
> assert(!iova);
> unmap_all = true;
> llsize = int128_zero();
> }
> +
> + /*
> + * Fake an IOTLB entry for identity mapping which is needed by dirty
> + * tracking. In fact, in unmap_bitmap, only translated_addr field is
> + * used to set dirty bitmap.
Just say a dirty sync is needed per unmap. You may also add an
in_migration check; if not in migration, there is no need to do it.
> + */
> + if (!memory_region_is_iommu(section->mr)) {
> + entry.iova = iova;
> + entry.translated_addr = iova;
> + iotlb = &entry;
> + }
> +
Meanwhile, I'm still wondering how to deal with the iommu MR case. Let's
see a scenario first. When switching from a DMA domain to PT, QEMU will
switch to PT. This will trigger vfio_listener_region_del() and unregister
the iommu notifier. This means the vIOMMU side needs to do the unmap prior
to switching the AS. If not, the iommu notifier is gone by the time the
vIOMMU wants to unmap with an IOTLBEvent. For virtual intel_iommu, that
means calling vtd_address_space_unmap_in_migration() prior to calling
vtd_switch_address_space(). So I think you need to tweak intel_iommu a bit
to suit the order requirement. :)
BTW, should the iommu MRs even go through this try_unmap branch? I think
for such MRs, we rely on the vIOMMU to unmap explicitly (hence triggering
vfio_iommu_map_notify()).
> ret = vfio_container_dma_unmap(bcontainer, iova, int128_get64(llsize),
> - NULL, unmap_all);
> + iotlb, unmap_all);
> if (ret) {
> error_report("vfio_container_dma_unmap(%p, 0x%"HWADDR_PRIx", "
> "0x%"HWADDR_PRIx") = %d (%s)",
Regards,
Yi Liu
* RE: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space
2025-11-25 10:04 ` Yi Liu
@ 2025-11-26 5:45 ` Duan, Zhenzhong
2025-11-27 13:23 ` Yi Liu
0 siblings, 1 reply; 18+ messages in thread
From: Duan, Zhenzhong @ 2025-11-26 5:45 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
jasowang@redhat.com, clement.mathieu--drif@eviden.com,
eric.auger@redhat.com, joao.m.martins@oracle.com,
avihaih@nvidia.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
Gross, Mark, Van De Ven, Arjan
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Subject: Re: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap
>memory address space
>
>On 2025/11/6 12:20, Zhenzhong Duan wrote:
>> If a VFIO device in guest switches from passthrough(PT) domain to block
>> domain, the whole memory address space is unmapped, but we passed a
>NULL
>> iotlb entry to unmap_bitmap, then bitmap query didn't happen and we lost
>> dirty pages.
>
>this is a good catch. :) Have you observed problem in testing or just
>identified it with patch iteration?
Patch iteration.
>
>> By constructing an iotlb entry with iova = gpa for unmap_bitmap, it can
>> set dirty bits correctly.
>>
>> For IOMMU address space, we still send NULL iotlb because VFIO don't
>> know the actual mappings in guest. It's vIOMMU's responsibility to send
>> actual unmapping notifications, e.g.,
>vtd_address_space_unmap_in_migration()
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>> Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
>> ---
>> hw/vfio/listener.c | 15 ++++++++++++++-
>> 1 file changed, 14 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
>> index 2109101158..3b48f6796c 100644
>> --- a/hw/vfio/listener.c
>> +++ b/hw/vfio/listener.c
>> @@ -713,14 +713,27 @@ static void
>vfio_listener_region_del(MemoryListener *listener,
>>
>> if (try_unmap) {
>> bool unmap_all = false;
>> + IOMMUTLBEntry entry = {}, *iotlb = NULL;
>>
>> if (int128_eq(llsize, int128_2_64())) {
>> assert(!iova);
>> unmap_all = true;
>> llsize = int128_zero();
>> }
>> +
>> + /*
>> + * Fake an IOTLB entry for identity mapping which is needed by dirty
>> + * tracking. In fact, in unmap_bitmap, only translated_addr field is
>> + * used to set dirty bitmap.
>
>Just say sync dirty is needed per unmap. So you may add a check
>in_migration as well. If not in migration, it is no needed to do it.
Dirty tracking is not only for migration, but also for dirty rate/limit. So a
non-null iotlb is always passed for a ram MR. That iotlb pointer is used only
when vfio_container_dirty_tracking_is_started() returns true.
I can add a check on global_dirty_tracking if you prefer, as sketched below.
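For reference, the hunk with that gating folded in might look like this (a
sketch only; it assumes the global_dirty_tracking bitmask used by migration
and the dirty rate/limit code is visible from hw/vfio/listener.c):

    IOMMUTLBEntry entry = {}, *iotlb = NULL;

    /*
     * Fake an identity-mapping IOTLB entry so unmap_bitmap can set dirty
     * bits; only needed while some dirty tracking user (migration, dirty
     * rate, dirty limit) is active.
     */
    if (global_dirty_tracking && !memory_region_is_iommu(section->mr)) {
        entry.iova = iova;
        entry.translated_addr = iova;
        iotlb = &entry;
    }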
>
>> + */
>> + if (!memory_region_is_iommu(section->mr)) {
>> + entry.iova = iova;
>> + entry.translated_addr = iova;
>> + iotlb = &entry;
>> + }
>> +
>
>While, I'm still wondering how to deal with iommu MR case. Let's see a
>scenario first. When switching from DMA domain to PT, QEMU will switch
>to PT. This shall trigger the vfio_listener_region_del() and unregister
>the iommu notifier. This means vIOMMU side needs to do unmap prior to
>switching AS. If not, the iommu notifier is gone when vIOMMU wants to
>unmap with an IOTLBEvent. For virtual intel_iommu, it is calling
>vtd_address_space_unmap_in_migration() prior to calling
>vtd_switch_address_space(). So I think you need to tweak the intel_iommu
>a bit to suit the order requirement. :)
VTD doesn't support switching from a DMA domain to PT atomically, so it
switches to the block domain in between, see intel_iommu_attach_device() in
the kernel.
So with the sequence DMA->block->PT domain, haven't we already guaranteed the
order you described? See vtd_pasid_cache_sync_locked().
>
>BTW. should the iommu MRs even go to this try_unmap branch? I think for
>such MRs, it relies on the vIOMMU to unmap explicitly (hence trigger the
>vfio_iommu_map_notify()).
Yes, it's unnecessary, but it's hard for VFIO to distinguish whether try_unmap
is due to a domain switch or a real unmap. I think it's harmless because the
second try_unmap unmaps nothing.
Thanks
Zhenzhong
>
>> ret = vfio_container_dma_unmap(bcontainer, iova, int128_get64(llsize),
>> - NULL, unmap_all);
>> + iotlb, unmap_all);
>> if (ret) {
>> error_report("vfio_container_dma_unmap(%p, 0x%"HWADDR_PRIx", "
>> "0x%"HWADDR_PRIx") = %d (%s)",
>
>Regards,
>Yi Liu
* Re: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space
2025-11-26 5:45 ` Duan, Zhenzhong
@ 2025-11-27 13:23 ` Yi Liu
2025-11-28 2:58 ` Duan, Zhenzhong
0 siblings, 1 reply; 18+ messages in thread
From: Yi Liu @ 2025-11-27 13:23 UTC (permalink / raw)
To: Duan, Zhenzhong, qemu-devel@nongnu.org
Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
jasowang@redhat.com, clement.mathieu--drif@eviden.com,
eric.auger@redhat.com, joao.m.martins@oracle.com,
avihaih@nvidia.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
Gross, Mark, Van De Ven, Arjan
On 2025/11/26 13:45, Duan, Zhenzhong wrote:
>
>
>> -----Original Message-----
>> From: Liu, Yi L <yi.l.liu@intel.com>
>> Subject: Re: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap
>> memory address space
>>
>> On 2025/11/6 12:20, Zhenzhong Duan wrote:
>>> If a VFIO device in guest switches from passthrough(PT) domain to block
>>> domain, the whole memory address space is unmapped, but we passed a NULL
>>> iotlb entry to unmap_bitmap, then bitmap query didn't happen and we lost
>>> dirty pages.
>>
>> this is a good catch. :) Have you observed problem in testing or just
>> identified it with patch iteration?
>
> Patch iteration.
>
>>
>>> By constructing an iotlb entry with iova = gpa for unmap_bitmap, it can
>>> set dirty bits correctly.
>>>
>>> For IOMMU address space, we still send NULL iotlb because VFIO don't
>>> know the actual mappings in guest. It's vIOMMU's responsibility to send
>>> actual unmapping notifications, e.g., vtd_address_space_unmap_in_migration().
>>>
>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>> Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
>>> ---
>>> hw/vfio/listener.c | 15 ++++++++++++++-
>>> 1 file changed, 14 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
>>> index 2109101158..3b48f6796c 100644
>>> --- a/hw/vfio/listener.c
>>> +++ b/hw/vfio/listener.c
>>> @@ -713,14 +713,27 @@ static void vfio_listener_region_del(MemoryListener *listener,
>>>
>>> if (try_unmap) {
>>> bool unmap_all = false;
>>> + IOMMUTLBEntry entry = {}, *iotlb = NULL;
>>>
>>> if (int128_eq(llsize, int128_2_64())) {
>>> assert(!iova);
>>> unmap_all = true;
>>> llsize = int128_zero();
>>> }
>>> +
>>> + /*
>>> + * Fake an IOTLB entry for identity mapping which is needed by dirty
>>> + * tracking. In fact, in unmap_bitmap, only translated_addr field is
>>> + * used to set dirty bitmap.
>>
>> Just say sync dirty is needed per unmap. So you may add a check
>> in_migration as well. If not in migration, it is no needed to do it.
>
> Dirty tracking is not only for migration, but also dirty rate/limit. So a non-null iotlb
> is always passed for ram MR. That iotlb pointer is used only when
> vfio_container_dirty_tracking_is_started() return true.
>
> I can add a check on global_dirty_tracking if you prefer to add a check.
yeah, this would be helpful.
>
>>
>>> + */
>>> + if (!memory_region_is_iommu(section->mr)) {
>>> + entry.iova = iova;
>>> + entry.translated_addr = iova;
>>> + iotlb = &entry;
>>> + }
>>> +
>>
>> While, I'm still wondering how to deal with iommu MR case. Let's see a
>> scenario first. When switching from DMA domain to PT, QEMU will switch
>> to PT. This shall trigger the vfio_listener_region_del() and unregister
>> the iommu notifier. This means vIOMMU side needs to do unmap prior to
>> switching AS. If not, the iommu notifier is gone when vIOMMU wants to
>> unmap with an IOTLBEvent. For virtual intel_iommu, it is calling
>> vtd_address_space_unmap_in_migration() prior to calling
>> vtd_switch_address_space(). So I think you need to tweak the intel_iommu
>> a bit to suit the order requirement. :)
>
> VTD doesn't support switching from DMA domain to PT atomically, so switches
> to block domain in between, see intel_iommu_attach_device() in kernel.
>
> So with this sequence is DMA->block->PT domain, we have guaranteed the order
> you shared? See vtd_pasid_cache_sync_locked().
I see. So the guest helps with it. This might be a bit fragile since we rely on
guest behavior. I think you may add a TODO or a comment to note it.
BTW, I think the subject can be refined, since the real purpose is to
make dirty-page tracking for the unmap happen in region_del():
vfio/listener: Add missing dirty tracking in region_del
With this and the prior check, this patch looks good to me.
Reviewed-by: Yi Liu <yi.l.liu@intel.com>
>>
>> BTW. should the iommu MRs even go to this try_unmap branch? I think for
>> such MRs, it relies on the vIOMMU to unmap explicitly (hence trigger the
>> vfio_iommu_map_notify()).
>
> Yes, it's unnecessary, but it's hard for VFIO to distinguish if try_unmap is due to
> domain switch or a real unmap. I think it's harmless because the second try_unmap
> unmaps nothing.
Hmm, can an unmap path go through region_del()? I don't quite get the second
try_unmap; do you mean the vIOMMU unmaps everything via
vfio_iommu_map_notify() and then switches the AS, which triggers region_del()
and this try_unmap branch?
Regards,
Yi Liu
* RE: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space
2025-11-27 13:23 ` Yi Liu
@ 2025-11-28 2:58 ` Duan, Zhenzhong
0 siblings, 0 replies; 18+ messages in thread
From: Duan, Zhenzhong @ 2025-11-28 2:58 UTC (permalink / raw)
To: Liu, Yi L, qemu-devel@nongnu.org
Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
jasowang@redhat.com, clement.mathieu--drif@eviden.com,
eric.auger@redhat.com, joao.m.martins@oracle.com,
avihaih@nvidia.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
Gross, Mark, Van De Ven, Arjan
>-----Original Message-----
>From: Liu, Yi L <yi.l.liu@intel.com>
>Subject: Re: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap
>memory address space
>
>On 2025/11/26 13:45, Duan, Zhenzhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: Liu, Yi L <yi.l.liu@intel.com>
>>> Subject: Re: [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap
>>> memory address space
>>>
>>> On 2025/11/6 12:20, Zhenzhong Duan wrote:
>>>> If a VFIO device in guest switches from passthrough(PT) domain to block
>>>> domain, the whole memory address space is unmapped, but we passed a NULL
>>>> iotlb entry to unmap_bitmap, then bitmap query didn't happen and we lost
>>>> dirty pages.
>>>
>>> this is a good catch. :) Have you observed problem in testing or just
>>> identified it with patch iteration?
>>
>> Patch iteration.
>>
>>>
>>>> By constructing an iotlb entry with iova = gpa for unmap_bitmap, it can
>>>> set dirty bits correctly.
>>>>
>>>> For IOMMU address space, we still send NULL iotlb because VFIO don't
>>>> know the actual mappings in guest. It's vIOMMU's responsibility to send
>>>> actual unmapping notifications, e.g., vtd_address_space_unmap_in_migration().
>>>>
>>>> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>>>> Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
>>>> ---
>>>> hw/vfio/listener.c | 15 ++++++++++++++-
>>>> 1 file changed, 14 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
>>>> index 2109101158..3b48f6796c 100644
>>>> --- a/hw/vfio/listener.c
>>>> +++ b/hw/vfio/listener.c
>>>> @@ -713,14 +713,27 @@ static void vfio_listener_region_del(MemoryListener *listener,
>>>>
>>>> if (try_unmap) {
>>>> bool unmap_all = false;
>>>> + IOMMUTLBEntry entry = {}, *iotlb = NULL;
>>>>
>>>> if (int128_eq(llsize, int128_2_64())) {
>>>> assert(!iova);
>>>> unmap_all = true;
>>>> llsize = int128_zero();
>>>> }
>>>> +
>>>> + /*
>>>> + * Fake an IOTLB entry for identity mapping which is needed by dirty
>>>> + * tracking. In fact, in unmap_bitmap, only translated_addr field is
>>>> + * used to set dirty bitmap.
>>>
>>> Just say sync dirty is needed per unmap. So you may add a check
>>> in_migration as well. If not in migration, it is no needed to do it.
>>
>> Dirty tracking is not only for migration, but also dirty rate/limit. So a non-null iotlb
>> is always passed for ram MR. That iotlb pointer is used only when
>> vfio_container_dirty_tracking_is_started() return true.
>>
>> I can add a check on global_dirty_tracking if you prefer to add a check.
>
>yeah, this would be helpful.
>
>>
>>>
>>>> + */
>>>> + if (!memory_region_is_iommu(section->mr)) {
>>>> + entry.iova = iova;
>>>> + entry.translated_addr = iova;
>>>> + iotlb = &entry;
>>>> + }
>>>> +
>>>
>>> While, I'm still wondering how to deal with iommu MR case. Let's see a
>>> scenario first. When switching from DMA domain to PT, QEMU will switch
>>> to PT. This shall trigger the vfio_listener_region_del() and unregister
>>> the iommu notifier. This means vIOMMU side needs to do unmap prior to
>>> switching AS. If not, the iommu notifier is gone when vIOMMU wants to
>>> unmap with an IOTLBEvent. For virtual intel_iommu, it is calling
>>> vtd_address_space_unmap_in_migration() prior to calling
>>> vtd_switch_address_space(). So I think you need to tweak the intel_iommu
>>> a bit to suit the order requirement. :)
>>
>> VTD doesn't support switching from DMA domain to PT atomically, so switches
>> to block domain in between, see intel_iommu_attach_device() in kernel.
>>
>> So with this sequence is DMA->block->PT domain, we have guaranteed the order
>> you shared? See vtd_pasid_cache_sync_locked().
>
>I see. So guest helps it. This might be a bit weak since we rely on
>guest behavior. I think you may add a TODO or add comment to note it.
Makes sense, will add. The sketch below shows what the note might say.
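A sketch of the comment only; its exact wording and placement in
intel_iommu.c are still to be decided:

    /*
     * TODO: dirty tracking across a domain switch relies on guest
     * behavior: Linux intel-iommu moves DMA -> blocking -> identity (see
     * intel_iommu_attach_device() in the kernel), so the UNMAP
     * notifications arrive before vtd_switch_address_space() removes the
     * iommu notifier. A guest that switched domains differently could
     * lose dirty pages here.
     */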
>
>BTW. I think the subject can be refined since the real purpose is to
>make tracking dirty pages in the unmap happen in region_del.
>
>vfio/listener: Add missing dirty tracking in region_del
Will do.
>
>with this and the prior check, this patch looks good to me.
>
>Reviewed-by: Yi Liu <yi.l.liu@intel.com>
>
>>>
>>> BTW. should the iommu MRs even go to this try_unmap branch? I think for
>>> such MRs, it relies on the vIOMMU to unmap explicitly (hence trigger the
>>> vfio_iommu_map_notify()).
>>
>> Yes, it's unnecessary, but it's hard for VFIO to distinguish if try_unmap is due to
>> domain switch or a real unmap. I think it's harmless because the second try_unmap
>> unmaps nothing.
>
>hmmm. can a unmap path go to region_del()? Not quite get the second
>try_unmap, do you mean when vIOMMU unmaps everything via
>vfio_iommu_map_notify() and then switch AS which triggers the region_del
>and this try_unmap branch?
Yes, vfio_iommu_map_notify() is the first try_unmap; the unmap in region_del()
is the second try_unmap, as spelled out below.
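A sketch of the two call flows (not literal code):

    /*
     * 1st try_unmap: guest unmap
     *   -> vtd UNMAP IOTLBEvent -> vfio_iommu_map_notify()
     *   -> vfio_container_dma_unmap() on the affected ranges
     *
     * 2nd try_unmap: AS switch
     *   -> vfio_listener_region_del() -> try_unmap branch
     *   -> vfio_container_dma_unmap() over the same region, which is now
     *      empty, so it unmaps nothing
     */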
Thanks
Zhenzhong
* RE: [PATCH v5 6/9] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend
2025-11-06 4:20 ` [PATCH v5 6/9] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend Zhenzhong Duan
@ 2025-11-28 3:51 ` Duan, Zhenzhong
0 siblings, 0 replies; 18+ messages in thread
From: Duan, Zhenzhong @ 2025-11-28 3:51 UTC (permalink / raw)
To: qemu-devel@nongnu.org
Cc: alex@shazbot.org, clg@redhat.com, mst@redhat.com,
jasowang@redhat.com, Liu, Yi L, clement.mathieu--drif@eviden.com,
eric.auger@redhat.com, joao.m.martins@oracle.com,
avihaih@nvidia.com, Hao, Xudong, Cabiddu, Giovanni, Rohith S R,
Gross, Mark, Van De Ven, Arjan
>-----Original Message-----
>From: Duan, Zhenzhong <zhenzhong.duan@intel.com>
>Subject: [PATCH v5 6/9] intel_iommu: Fix unmap_bitmap failure with legacy
>VFIO backend
>
>If a VFIO device in guest switches from an IOMMU domain to the block domain,
>vtd_address_space_unmap() is called to unmap the whole address space.
>
>If that happens during migration, migration fails with the legacy VFIO
>backend as below:
>
>Status: failed (vfio_container_dma_unmap(0x561bbbd92d90, 0x100000000000, 0x100000000000) = -7 (Argument list too long))
>
>Because legacy VFIO limits the maximum bitmap size to 256MB, which maps to
>8TB on a 4K page system, the unmap_bitmap ioctl fails when a 16TB-sized
>UNMAP notification is sent. Normally such large UNMAP notifications come
>from an IOVA range rather than system memory.
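For reference, the 8TB figure follows directly from the bitmap granularity; a
quick check of the arithmetic (the 256MB cap corresponds to the kernel's
DIRTY_BITMAP_SIZE_MAX in vfio_iommu_type1):

    #include <stdio.h>

    int main(void)
    {
        unsigned long long bitmap_bytes = 256ULL << 20; /* 256MB bitmap cap */
        unsigned long long pages = bitmap_bytes * 8;    /* one bit per 4K page */
        unsigned long long span = pages << 12;          /* bytes covered */
        printf("max trackable span: %llu TB\n", span >> 40); /* prints 8 */
        return 0;
    }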
>
>Apart from that, vtd_address_space_unmap() sends an UNMAP notification with
>translated_addr = 0, because there is no valid translated_addr for unmapping
>a whole iommu memory region. This breaks dirty tracking no matter which VFIO
>backend is used.
>
>Fix both by iterating over the DMAMap list and unmapping each range with an
>active mapping when migration is active. If migration is not active, unmapping
>the whole address space in one go is optimal.
>
>Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
>Reviewed-by: Yi Liu <yi.l.liu@intel.com>
>Tested-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
>Tested-by: Rohith S R <rohith.s.r@intel.com>
>---
> hw/i386/intel_iommu.c | 42 ++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 42 insertions(+)
>
>diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>index c402643b56..8e98b0b71d 100644
>--- a/hw/i386/intel_iommu.c
>+++ b/hw/i386/intel_iommu.c
>@@ -37,6 +37,7 @@
> #include "system/system.h"
> #include "hw/i386/apic_internal.h"
> #include "kvm/kvm_i386.h"
>+#include "migration/misc.h"
> #include "migration/vmstate.h"
> #include "trace.h"
>
>@@ -4695,6 +4696,42 @@ static void vtd_dev_unset_iommu_device(PCIBus *bus, void *opaque, int devfn)
> vtd_iommu_unlock(s);
> }
>
>+/*
>+ * Unmapping a large range in one go is not optimal during migration because
>+ * a large dirty bitmap needs to be allocated while there may be only small
>+ * mappings; iterate over the DMAMap list to unmap each range with an active
>+ * mapping.
>+ */
>+static void vtd_address_space_unmap_in_migration(VTDAddressSpace *as,
>+                                                 IOMMUNotifier *n)
>+{
>+ const DMAMap *map;
>+ const DMAMap target = {
>+ .iova = n->start,
>+ .size = n->end,
>+ };
>+ IOVATree *tree = as->iova_tree;
>+
>+ /*
>+ * DMAMap is created during IOMMU page table sync, it's either 4KB
>or huge
>+ * page size and always a power of 2 in size. So the range of DMAMap
>could
>+ * be used for UNMAP notification directly.
>+ */
>+ while ((map = iova_tree_find(tree, &target))) {
>+ IOMMUTLBEvent event;
>+
>+ event.type = IOMMU_NOTIFIER_UNMAP;
>+ event.entry.iova = map->iova;
>+ event.entry.addr_mask = map->size;
>+ event.entry.target_as = &address_space_memory;
>+ event.entry.perm = IOMMU_NONE;
>+ /* This field is needed to set the dirty bitmap */
>+ event.entry.translated_addr = map->translated_addr;
>+ memory_region_notify_iommu_one(n, &event);
>+
>+ iova_tree_remove(tree, *map);
>+ }
>+}
>+
> /* Unmap the whole range in the notifier's scope. */
> static void vtd_address_space_unmap(VTDAddressSpace *as,
>IOMMUNotifier *n)
> {
>@@ -4704,6 +4741,11 @@ static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
> IntelIOMMUState *s = as->iommu_state;
> DMAMap map;
>
>+ if (migration_is_running()) {
Hmm, I just realized it may be better to check global_dirty_tracking instead,
because the dirty rate/limit QMP commands also need it; see the sketch after
the quoted patch below.
Zhenzhong
>+ vtd_address_space_unmap_in_migration(as, n);
>+ return;
>+ }
>+
> /*
> * Note: all the codes in this function has a assumption that IOVA
> * bits are no more than VTD_MGAW bits (which is restricted by
>--
>2.47.1
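With that change, the gating could become something like this (a sketch only,
assuming the global_dirty_tracking bitmask is usable here):

    /* Unmap the whole range in the notifier's scope. */
    static void vtd_address_space_unmap(VTDAddressSpace *as, IOMMUNotifier *n)
    {
        /*
         * Per-range unmap with dirty sync is needed whenever any dirty
         * tracking user (migration, dirty rate, dirty limit) is active,
         * not only during migration.
         */
        if (global_dirty_tracking) {
            vtd_address_space_unmap_in_migration(as, n);
            return;
        }

        /* ... rest of the function unchanged ... */
    }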
end of thread, other threads:[~2025-11-28 3:51 UTC | newest]
Thread overview: 18+ messages
2025-11-06 4:20 [PATCH v5 0/9] vfio: relax the vIOMMU check Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 1/9] vfio/iommufd: Add framework code to support getting dirty bitmap before unmap Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 2/9] vfio/iommufd: Query dirty bitmap before DMA unmap Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 3/9] vfio/container-legacy: rename vfio_dma_unmap_bitmap() to vfio_legacy_dma_unmap_get_dirty_bitmap() Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 4/9] vfio: Add a backend_flag parameter to vfio_contianer_query_dirty_bitmap() Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 5/9] vfio/iommufd: Add IOMMU_HWPT_GET_DIRTY_BITMAP_NO_CLEAR flag support Zhenzhong Duan
2025-11-06 4:20 ` [PATCH v5 6/9] intel_iommu: Fix unmap_bitmap failure with legacy VFIO backend Zhenzhong Duan
2025-11-28 3:51 ` Duan, Zhenzhong
2025-11-06 4:20 ` [PATCH v5 7/9] vfio/listener: Construct iotlb entry when unmap memory address space Zhenzhong Duan
2025-11-25 10:04 ` Yi Liu
2025-11-26 5:45 ` Duan, Zhenzhong
2025-11-27 13:23 ` Yi Liu
2025-11-28 2:58 ` Duan, Zhenzhong
2025-11-06 4:20 ` [PATCH v5 8/9] vfio/migration: Add migration blocker if VM memory is too large to cause unmap_bitmap failure Zhenzhong Duan
2025-11-25 9:20 ` Yi Liu
2025-11-06 4:20 ` [PATCH v5 9/9] vfio/migration: Allow live migration with vIOMMU without VFs using device dirty tracking Zhenzhong Duan
2025-11-20 9:29 ` [PATCH v5 0/9] vfio: relax the vIOMMU check Duan, Zhenzhong
2025-11-25 7:40 ` Cédric Le Goater