* [PULL 0/5] vfio queue
@ 2025-07-29 6:26 Cédric Le Goater
2025-07-29 16:59 ` Stefan Hajnoczi
0 siblings, 1 reply; 9+ messages in thread
From: Cédric Le Goater @ 2025-07-29 6:26 UTC (permalink / raw)
To: qemu-devel; +Cc: Alex Williamson, Cédric Le Goater
The following changes since commit 92c05be4dfb59a71033d4c57dac944b29f7dabf0:
Merge tag 'pull-qga-2025-07-28' of https://repo.or.cz/qemu/armbru into staging (2025-07-28 09:31:19 -0400)
are available in the Git repository at:
https://github.com/legoater/qemu/ tags/pull-vfio-20250729
for you to fetch changes up to 0db7e4cb62026196f06755c77f943294d9879e5a:
vfio/igd: Fix VGA regions are not exposed in legacy mode (2025-07-28 17:52:34 +0200)
----------------------------------------------------------------
vfio queue:
* Fixed regression introduced by the `use-legacy-x86-rom` property
* Fixed regressions on IGD passthrough in legacy mode
* Fixed region mappings of sub-page BARs after CPR
* Removed build of SEV on 32-bit hosts
----------------------------------------------------------------
Cédric Le Goater (2):
hw/i386: Fix 'use-legacy-x86-rom' property compatibility
i386: Build SEV only for 64-bit target
Steve Sistare (1):
vfio: fix sub-page bar after cpr
Tomita Moeko (2):
vfio/igd: Require host VGA decode for legacy mode
vfio/igd: Fix VGA regions are not exposed in legacy mode
docs/igd-assign.txt | 1 +
hw/vfio/pci.h | 2 ++
hw/vfio/types.h | 2 ++
hw/core/machine.c | 2 +-
hw/i386/microvm.c | 2 +-
hw/i386/pc_piix.c | 2 +-
hw/i386/pc_q35.c | 2 +-
hw/vfio/cpr.c | 2 ++
hw/vfio/igd.c | 19 ++++++++++++-------
hw/vfio/pci.c | 29 ++++++++++++++++++++++++-----
hw/i386/Kconfig | 2 +-
11 files changed, 48 insertions(+), 17 deletions(-)
^ permalink raw reply [flat|nested] 9+ messages in thread
* Re: [PULL 0/5] vfio queue
2025-07-29 6:26 Cédric Le Goater
@ 2025-07-29 16:59 ` Stefan Hajnoczi
0 siblings, 0 replies; 9+ messages in thread
From: Stefan Hajnoczi @ 2025-07-29 16:59 UTC (permalink / raw)
To: Cédric Le Goater; +Cc: qemu-devel, Alex Williamson, Cédric Le Goater
Applied, thanks.
Please update the changelog at https://wiki.qemu.org/ChangeLog/10.1 for any user-visible changes.
* [PULL 0/5] vfio queue
@ 2025-10-03 10:33 Cédric Le Goater
2025-10-03 10:33 ` [PULL 1/5] vfio: Remove workaround for kernel DMA unmap overflow bug Cédric Le Goater
` (5 more replies)
0 siblings, 6 replies; 9+ messages in thread
From: Cédric Le Goater @ 2025-10-03 10:33 UTC (permalink / raw)
To: qemu-devel; +Cc: Alex Williamson, Cédric Le Goater
The following changes since commit 29b77c1a2db2d796bc3847852a5c8dc2a1e6e83b:
Merge tag 'rust-ci-pull-request' of https://gitlab.com/marcandre.lureau/qemu into staging (2025-09-30 09:29:38 -0700)
are available in the Git repository at:
https://github.com/legoater/qemu/ tags/pull-vfio-20251003
for you to fetch changes up to f0b52aa08ab0868c18d881381a8fda4b59b37517:
hw/vfio: Use uint64_t for IOVA mapping size in vfio_container_dma_*map (2025-10-02 10:41:23 +0200)
----------------------------------------------------------------
vfio queue:
* Remove workaround for kernel DMA unmap overflow
* Remove invalid uses of ram_addr_t type
----------------------------------------------------------------
Cédric Le Goater (1):
vfio: Remove workaround for kernel DMA unmap overflow bug
Philippe Mathieu-Daudé (4):
system/iommufd: Use uint64_t type for IOVA mapping size
hw/vfio: Reorder vfio_container_query_dirty_bitmap() trace format
hw/vfio: Avoid ram_addr_t in vfio_container_query_dirty_bitmap()
hw/vfio: Use uint64_t for IOVA mapping size in vfio_container_dma_*map
include/hw/vfio/vfio-container.h | 13 +++++++------
include/hw/vfio/vfio-cpr.h | 2 +-
include/system/iommufd.h | 6 +++---
backends/iommufd.c | 6 +++---
hw/vfio-user/container.c | 4 ++--
hw/vfio/container-legacy.c | 28 +++++-----------------------
hw/vfio/container.c | 15 ++++++++-------
hw/vfio/cpr-legacy.c | 2 +-
hw/vfio/iommufd.c | 6 +++---
hw/vfio/listener.c | 18 +++++++++---------
hw/vfio/trace-events | 7 +++----
11 files changed, 45 insertions(+), 62 deletions(-)
* [PULL 1/5] vfio: Remove workaround for kernel DMA unmap overflow bug
2025-10-03 10:33 [PULL 0/5] vfio queue Cédric Le Goater
@ 2025-10-03 10:33 ` Cédric Le Goater
2025-10-03 10:33 ` [PULL 2/5] system/iommufd: Use uint64_t type for IOVA mapping size Cédric Le Goater
` (4 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Cédric Le Goater @ 2025-10-03 10:33 UTC (permalink / raw)
To: qemu-devel; +Cc: Alex Williamson, Cédric Le Goater, Zhenzhong Duan
A kernel bug was introduced in Linux v4.15 via commit 71a7d3d78e3c
("vfio/type1: Check for address space wrap-around on unmap"), which
added a test for address space wrap-around in the vfio DMA unmap path.
Unfortunately, due to an integer overflow, the kernel would
incorrectly detect an unmap of the last page in the 64-bit address
space as a wrap-around, causing the unmap to fail with -EINVAL.
A QEMU workaround was introduced in commit 567d7d3e6be5 ("vfio/common:
Work around kernel overflow bug in DMA unmap") to retry the unmap,
excluding the final page of the range.
The kernel bug was then fixed in Linux v5.0 via commit 58fec830fc19
("vfio/type1: Fix dma_unmap wrap-around check"). Since the oldest
supported LTS kernel is now v5.4, kernels affected by this bug are
considered deprecated, and the workaround is no longer necessary.
This change reverts 567d7d3e6be5, removing the workaround.
Link: https://bugzilla.redhat.com/show_bug.cgi?id=1662291
Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Link: https://lore.kernel.org/qemu-devel/20250926085423.375547-1-clg@redhat.com
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
hw/vfio/container-legacy.c | 20 +-------------------
hw/vfio/trace-events | 1 -
2 files changed, 1 insertion(+), 20 deletions(-)
diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
index c0f87f774a00805cab4a8f3b3386ddd99c3d9111..25a15ea8674c159b7e624425c52953240b8c1179 100644
--- a/hw/vfio/container-legacy.c
+++ b/hw/vfio/container-legacy.c
@@ -147,25 +147,7 @@ static int vfio_legacy_dma_unmap_one(const VFIOContainer *bcontainer,
need_dirty_sync = true;
}
- while (ioctl(container->fd, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
- /*
- * The type1 backend has an off-by-one bug in the kernel (71a7d3d78e3c
- * v4.15) where an overflow in its wrap-around check prevents us from
- * unmapping the last page of the address space. Test for the error
- * condition and re-try the unmap excluding the last page. The
- * expectation is that we've never mapped the last page anyway and this
- * unmap request comes via vIOMMU support which also makes it unlikely
- * that this page is used. This bug was introduced well after type1 v2
- * support was introduced, so we shouldn't need to test for v1. A fix
- * is queued for kernel v5.0 so this workaround can be removed once
- * affected kernels are sufficiently deprecated.
- */
- if (errno == EINVAL && unmap.size && !(unmap.iova + unmap.size) &&
- container->iommu_type == VFIO_TYPE1v2_IOMMU) {
- trace_vfio_legacy_dma_unmap_overflow_workaround();
- unmap.size -= 1ULL << ctz64(bcontainer->pgsizes);
- continue;
- }
+ if (ioctl(container->fd, VFIO_IOMMU_UNMAP_DMA, &unmap)) {
return -errno;
}
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index e3d571f8c845dad85de5738f8ca768bdfc336252..7496e1b64b5de0168974a251eab698399a6a1d54 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -112,7 +112,6 @@ vfio_container_disconnect(int fd) "close container->fd=%d"
vfio_group_put(int fd) "close group->fd=%d"
vfio_device_get(const char * name, unsigned int flags, unsigned int num_regions, unsigned int num_irqs) "Device %s flags: %u, regions: %u, irqs: %u"
vfio_device_put(int fd) "close vdev->fd=%d"
-vfio_legacy_dma_unmap_overflow_workaround(void) ""
# region.c
vfio_region_write(const char *name, int index, uint64_t addr, uint64_t data, unsigned size) " (%s:region%d+0x%"PRIx64", 0x%"PRIx64 ", %d)"
--
2.51.0
* [PULL 2/5] system/iommufd: Use uint64_t type for IOVA mapping size
2025-10-03 10:33 [PULL 0/5] vfio queue Cédric Le Goater
2025-10-03 10:33 ` [PULL 1/5] vfio: Remove workaround for kernel DMA unmap overflow bug Cédric Le Goater
@ 2025-10-03 10:33 ` Cédric Le Goater
2025-10-03 10:33 ` [PULL 3/5] hw/vfio: Reorder vfio_container_query_dirty_bitmap() trace format Cédric Le Goater
` (3 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Cédric Le Goater @ 2025-10-03 10:33 UTC (permalink / raw)
To: qemu-devel
Cc: Alex Williamson, Philippe Mathieu-Daudé,
Cédric Le Goater
From: Philippe Mathieu-Daudé <philmd@linaro.org>
The 'ram_addr_t' type is described as:
a QEMU internal address space that maps guest RAM physical
addresses into an intermediate address space that can map
to host virtual address spaces.
This does not describe an IOVA mapping size well. Simply use
the uint64_t type.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/qemu-devel/20250930123528.42878-2-philmd@linaro.org
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
include/system/iommufd.h | 6 +++---
backends/iommufd.c | 6 +++---
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/system/iommufd.h b/include/system/iommufd.h
index c9c72ffc4509d7b5d09e8129c5065478aa23aec0..a659f36a20fdcf2f4855ebca43fc801a1b0e6634 100644
--- a/include/system/iommufd.h
+++ b/include/system/iommufd.h
@@ -45,12 +45,12 @@ bool iommufd_backend_alloc_ioas(IOMMUFDBackend *be, uint32_t *ioas_id,
Error **errp);
void iommufd_backend_free_id(IOMMUFDBackend *be, uint32_t id);
int iommufd_backend_map_file_dma(IOMMUFDBackend *be, uint32_t ioas_id,
- hwaddr iova, ram_addr_t size, int fd,
+ hwaddr iova, uint64_t size, int fd,
unsigned long start, bool readonly);
int iommufd_backend_map_dma(IOMMUFDBackend *be, uint32_t ioas_id, hwaddr iova,
- ram_addr_t size, void *vaddr, bool readonly);
+ uint64_t size, void *vaddr, bool readonly);
int iommufd_backend_unmap_dma(IOMMUFDBackend *be, uint32_t ioas_id,
- hwaddr iova, ram_addr_t size);
+ hwaddr iova, uint64_t size);
bool iommufd_backend_get_device_info(IOMMUFDBackend *be, uint32_t devid,
uint32_t *type, void *data, uint32_t len,
uint64_t *caps, Error **errp);
diff --git a/backends/iommufd.c b/backends/iommufd.c
index 2a33c7ab0bcdc9aabda55258741022debab0bdad..fdfb7c9d67197da11d35290ba2f44e0b005c2690 100644
--- a/backends/iommufd.c
+++ b/backends/iommufd.c
@@ -197,7 +197,7 @@ void iommufd_backend_free_id(IOMMUFDBackend *be, uint32_t id)
}
int iommufd_backend_map_dma(IOMMUFDBackend *be, uint32_t ioas_id, hwaddr iova,
- ram_addr_t size, void *vaddr, bool readonly)
+ uint64_t size, void *vaddr, bool readonly)
{
int ret, fd = be->fd;
struct iommu_ioas_map map = {
@@ -230,7 +230,7 @@ int iommufd_backend_map_dma(IOMMUFDBackend *be, uint32_t ioas_id, hwaddr iova,
}
int iommufd_backend_map_file_dma(IOMMUFDBackend *be, uint32_t ioas_id,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
int mfd, unsigned long start, bool readonly)
{
int ret, fd = be->fd;
@@ -268,7 +268,7 @@ int iommufd_backend_map_file_dma(IOMMUFDBackend *be, uint32_t ioas_id,
}
int iommufd_backend_unmap_dma(IOMMUFDBackend *be, uint32_t ioas_id,
- hwaddr iova, ram_addr_t size)
+ hwaddr iova, uint64_t size)
{
int ret, fd = be->fd;
struct iommu_ioas_unmap unmap = {
--
2.51.0
* [PULL 3/5] hw/vfio: Reorder vfio_container_query_dirty_bitmap() trace format
2025-10-03 10:33 [PULL 0/5] vfio queue Cédric Le Goater
2025-10-03 10:33 ` [PULL 1/5] vfio: Remove workaround for kernel DMA unmap overflow bug Cédric Le Goater
2025-10-03 10:33 ` [PULL 2/5] system/iommufd: Use uint64_t type for IOVA mapping size Cédric Le Goater
@ 2025-10-03 10:33 ` Cédric Le Goater
2025-10-03 10:33 ` [PULL 4/5] hw/vfio: Avoid ram_addr_t in vfio_container_query_dirty_bitmap() Cédric Le Goater
` (2 subsequent siblings)
5 siblings, 0 replies; 9+ messages in thread
From: Cédric Le Goater @ 2025-10-03 10:33 UTC (permalink / raw)
To: qemu-devel
Cc: Alex Williamson, Philippe Mathieu-Daudé,
Cédric Le Goater
From: Philippe Mathieu-Daudé <philmd@linaro.org>
Update the trace-events comments after the changes from
commit dcce51b1938 ("hw/vfio/container-base.c: rename file
to container.c") and commit a3bcae62b6a ("hw/vfio/container.c:
rename file to container-legacy.c").
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/qemu-devel/20250930123528.42878-3-philmd@linaro.org
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
hw/vfio/trace-events | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index 7496e1b64b5de0168974a251eab698399a6a1d54..b1b470cc295ecd07db60910843d9fe869ab7b924 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -104,10 +104,10 @@ vfio_device_dirty_tracking_update(uint64_t start, uint64_t end, uint64_t min, ui
vfio_device_dirty_tracking_start(int nr_ranges, uint64_t min32, uint64_t max32, uint64_t min64, uint64_t max64, uint64_t minpci, uint64_t maxpci) "nr_ranges %d 32:[0x%"PRIx64" - 0x%"PRIx64"], 64:[0x%"PRIx64" - 0x%"PRIx64"], pci64:[0x%"PRIx64" - 0x%"PRIx64"]"
vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
-# container-base.c
+# container.c
vfio_container_query_dirty_bitmap(uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start, uint64_t dirty_pages) "iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64" dirty_pages=%"PRIu64
-# container.c
+# container-legacy.c
vfio_container_disconnect(int fd) "close container->fd=%d"
vfio_group_put(int fd) "close group->fd=%d"
vfio_device_get(const char * name, unsigned int flags, unsigned int num_regions, unsigned int num_irqs) "Device %s flags: %u, regions: %u, irqs: %u"
--
2.51.0
* [PULL 4/5] hw/vfio: Avoid ram_addr_t in vfio_container_query_dirty_bitmap()
2025-10-03 10:33 [PULL 0/5] vfio queue Cédric Le Goater
` (2 preceding siblings ...)
2025-10-03 10:33 ` [PULL 3/5] hw/vfio: Reorder vfio_container_query_dirty_bitmap() trace format Cédric Le Goater
@ 2025-10-03 10:33 ` Cédric Le Goater
2025-10-03 10:33 ` [PULL 5/5] hw/vfio: Use uint64_t for IOVA mapping size in vfio_container_dma_*map Cédric Le Goater
2025-10-03 17:33 ` [PULL 0/5] vfio queue Richard Henderson
5 siblings, 0 replies; 9+ messages in thread
From: Cédric Le Goater @ 2025-10-03 10:33 UTC (permalink / raw)
To: qemu-devel
Cc: Alex Williamson, Philippe Mathieu-Daudé,
Cédric Le Goater
From: Philippe Mathieu-Daudé <philmd@linaro.org>
The 'ram_addr_t' type is described as:
a QEMU internal address space that maps guest RAM physical
addresses into an intermediate address space that can map
to host virtual address spaces.
vfio_container_query_dirty_bitmap() doesn't expect such a QEMU
intermediate address, but a guest physical address. Use the
appropriate 'hwaddr' type and rename the parameter to
@translated_addr for clarity.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/qemu-devel/20250930123528.42878-4-philmd@linaro.org
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
include/hw/vfio/vfio-container.h | 3 ++-
hw/vfio/container.c | 11 ++++++-----
hw/vfio/listener.c | 18 +++++++++---------
hw/vfio/trace-events | 2 +-
4 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/include/hw/vfio/vfio-container.h b/include/hw/vfio/vfio-container.h
index b8fb2b8b5d72b1d2a4c00dc89b214b48a371f555..093c360f0eef5547d493525df64d486475d6680b 100644
--- a/include/hw/vfio/vfio-container.h
+++ b/include/hw/vfio/vfio-container.h
@@ -98,7 +98,8 @@ bool vfio_container_dirty_tracking_is_started(
bool vfio_container_devices_dirty_tracking_is_supported(
const VFIOContainer *bcontainer);
int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
- uint64_t iova, uint64_t size, ram_addr_t ram_addr, Error **errp);
+ uint64_t iova, uint64_t size,
+ hwaddr translated_addr, Error **errp);
GList *vfio_container_get_iova_ranges(const VFIOContainer *bcontainer);
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index 250b20f424522f4b6a4e864906eed2d8d13efbcd..9d69439371402940fcbc926737215eb9308b237a 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -246,7 +246,7 @@ static int vfio_container_devices_query_dirty_bitmap(
int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
uint64_t iova, uint64_t size,
- ram_addr_t ram_addr, Error **errp)
+ hwaddr translated_addr, Error **errp)
{
bool all_device_dirty_tracking =
vfio_container_devices_dirty_tracking_is_supported(bcontainer);
@@ -255,7 +255,7 @@ int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
int ret;
if (!bcontainer->dirty_pages_supported && !all_device_dirty_tracking) {
- cpu_physical_memory_set_dirty_range(ram_addr, size,
+ cpu_physical_memory_set_dirty_range(translated_addr, size,
tcg_enabled() ? DIRTY_CLIENTS_ALL :
DIRTY_CLIENTS_NOCODE);
return 0;
@@ -280,11 +280,12 @@ int vfio_container_query_dirty_bitmap(const VFIOContainer *bcontainer,
goto out;
}
- dirty_pages = cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap, ram_addr,
+ dirty_pages = cpu_physical_memory_set_dirty_lebitmap(vbmap.bitmap,
+ translated_addr,
vbmap.pages);
- trace_vfio_container_query_dirty_bitmap(iova, size, vbmap.size, ram_addr,
- dirty_pages);
+ trace_vfio_container_query_dirty_bitmap(iova, size, vbmap.size,
+ translated_addr, dirty_pages);
out:
g_free(vbmap.bitmap);
diff --git a/hw/vfio/listener.c b/hw/vfio/listener.c
index 3b6f17f0c3aa7ef08091f8cc1c3230eff97b5cd7..a2c19a3cec1a923631fe5c31d084cd7615f888c0 100644
--- a/hw/vfio/listener.c
+++ b/hw/vfio/listener.c
@@ -1059,7 +1059,7 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
VFIOGuestIOMMU *giommu = gdn->giommu;
VFIOContainer *bcontainer = giommu->bcontainer;
hwaddr iova = iotlb->iova + giommu->iommu_offset;
- ram_addr_t translated_addr;
+ hwaddr translated_addr;
Error *local_err = NULL;
int ret = -EINVAL;
MemoryRegion *mr;
@@ -1108,8 +1108,8 @@ static int vfio_ram_discard_query_dirty_bitmap(MemoryRegionSection *section,
{
const hwaddr size = int128_get64(section->size);
const hwaddr iova = section->offset_within_address_space;
- const ram_addr_t ram_addr = memory_region_get_ram_addr(section->mr) +
- section->offset_within_region;
+ const hwaddr translated_addr = memory_region_get_ram_addr(section->mr) +
+ section->offset_within_region;
VFIORamDiscardListener *vrdl = opaque;
Error *local_err = NULL;
int ret;
@@ -1118,8 +1118,8 @@ static int vfio_ram_discard_query_dirty_bitmap(MemoryRegionSection *section,
* Sync the whole mapped region (spanning multiple individual mappings)
* in one go.
*/
- ret = vfio_container_query_dirty_bitmap(vrdl->bcontainer, iova, size, ram_addr,
- &local_err);
+ ret = vfio_container_query_dirty_bitmap(vrdl->bcontainer, iova, size,
+ translated_addr, &local_err);
if (ret) {
error_report_err(local_err);
}
@@ -1183,7 +1183,7 @@ static int vfio_sync_iommu_dirty_bitmap(VFIOContainer *bcontainer,
static int vfio_sync_dirty_bitmap(VFIOContainer *bcontainer,
MemoryRegionSection *section, Error **errp)
{
- ram_addr_t ram_addr;
+ hwaddr translated_addr;
if (memory_region_is_iommu(section->mr)) {
return vfio_sync_iommu_dirty_bitmap(bcontainer, section);
@@ -1198,12 +1198,12 @@ static int vfio_sync_dirty_bitmap(VFIOContainer *bcontainer,
return ret;
}
- ram_addr = memory_region_get_ram_addr(section->mr) +
- section->offset_within_region;
+ translated_addr = memory_region_get_ram_addr(section->mr) +
+ section->offset_within_region;
return vfio_container_query_dirty_bitmap(bcontainer,
REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
- int128_get64(section->size), ram_addr, errp);
+ int128_get64(section->size), translated_addr, errp);
}
static void vfio_listener_log_sync(MemoryListener *listener,
diff --git a/hw/vfio/trace-events b/hw/vfio/trace-events
index b1b470cc295ecd07db60910843d9fe869ab7b924..1e895448cd9b4f6b0dd5bb445de4acef6090ec62 100644
--- a/hw/vfio/trace-events
+++ b/hw/vfio/trace-events
@@ -105,7 +105,7 @@ vfio_device_dirty_tracking_start(int nr_ranges, uint64_t min32, uint64_t max32,
vfio_iommu_map_dirty_notify(uint64_t iova_start, uint64_t iova_end) "iommu dirty @ 0x%"PRIx64" - 0x%"PRIx64
# container.c
-vfio_container_query_dirty_bitmap(uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t start, uint64_t dirty_pages) "iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" start=0x%"PRIx64" dirty_pages=%"PRIu64
+vfio_container_query_dirty_bitmap(uint64_t iova, uint64_t size, uint64_t bitmap_size, uint64_t translated_addr, uint64_t dirty_pages) "iova=0x%"PRIx64" size= 0x%"PRIx64" bitmap_size=0x%"PRIx64" gpa=0x%"PRIx64" dirty_pages=%"PRIu64
# container-legacy.c
vfio_container_disconnect(int fd) "close container->fd=%d"
--
2.51.0
* [PULL 5/5] hw/vfio: Use uint64_t for IOVA mapping size in vfio_container_dma_*map
2025-10-03 10:33 [PULL 0/5] vfio queue Cédric Le Goater
` (3 preceding siblings ...)
2025-10-03 10:33 ` [PULL 4/5] hw/vfio: Avoid ram_addr_t in vfio_container_query_dirty_bitmap() Cédric Le Goater
@ 2025-10-03 10:33 ` Cédric Le Goater
2025-10-03 17:33 ` [PULL 0/5] vfio queue Richard Henderson
5 siblings, 0 replies; 9+ messages in thread
From: Cédric Le Goater @ 2025-10-03 10:33 UTC (permalink / raw)
To: qemu-devel
Cc: Alex Williamson, Philippe Mathieu-Daudé,
Cédric Le Goater
From: Philippe Mathieu-Daudé <philmd@linaro.org>
The 'ram_addr_t' type is described as:
a QEMU internal address space that maps guest RAM physical
addresses into an intermediate address space that can map
to host virtual address spaces.
This does not describe an IOVA mapping size well. Simply use
the uint64_t type.
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Link: https://lore.kernel.org/qemu-devel/20250930123528.42878-5-philmd@linaro.org
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
include/hw/vfio/vfio-container.h | 10 +++++-----
include/hw/vfio/vfio-cpr.h | 2 +-
hw/vfio-user/container.c | 4 ++--
hw/vfio/container-legacy.c | 8 ++++----
hw/vfio/container.c | 4 ++--
hw/vfio/cpr-legacy.c | 2 +-
hw/vfio/iommufd.c | 6 +++---
7 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/include/hw/vfio/vfio-container.h b/include/hw/vfio/vfio-container.h
index 093c360f0eef5547d493525df64d486475d6680b..c4b58d664b7e705d29be0e7116d609c6df0d42a9 100644
--- a/include/hw/vfio/vfio-container.h
+++ b/include/hw/vfio/vfio-container.h
@@ -81,10 +81,10 @@ void vfio_address_space_insert(VFIOAddressSpace *space,
VFIOContainer *bcontainer);
int vfio_container_dma_map(VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
void *vaddr, bool readonly, MemoryRegion *mr);
int vfio_container_dma_unmap(VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb, bool unmap_all);
bool vfio_container_add_section_window(VFIOContainer *bcontainer,
MemoryRegionSection *section,
@@ -167,7 +167,7 @@ struct VFIOIOMMUClass {
* Returns 0 to indicate success and -errno otherwise.
*/
int (*dma_map)(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
void *vaddr, bool readonly, MemoryRegion *mr);
/**
* @dma_map_file
@@ -182,7 +182,7 @@ struct VFIOIOMMUClass {
* @readonly: map read only if true
*/
int (*dma_map_file)(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
int fd, unsigned long start, bool readonly);
/**
* @dma_unmap
@@ -198,7 +198,7 @@ struct VFIOIOMMUClass {
* Returns 0 to indicate success and -errno otherwise.
*/
int (*dma_unmap)(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb, bool unmap_all);
diff --git a/include/hw/vfio/vfio-cpr.h b/include/hw/vfio/vfio-cpr.h
index 26ee0c4fe15ac74b5123f57c20c94486171d4779..81f4e24e229ef35f5b14582ce6e58415e0ebf3df 100644
--- a/include/hw/vfio/vfio-cpr.h
+++ b/include/hw/vfio/vfio-cpr.h
@@ -21,7 +21,7 @@ struct VFIOIOMMUFDContainer;
struct IOMMUFDBackend;
typedef int (*dma_map_fn)(const struct VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size, void *vaddr,
+ hwaddr iova, uint64_t size, void *vaddr,
bool readonly, MemoryRegion *mr);
typedef struct VFIOContainerCPR {
diff --git a/hw/vfio-user/container.c b/hw/vfio-user/container.c
index 411eb7b28b72a25cd68d494ffc4a8f9b55b4862d..e45192fef6531872e484372d45dca82fac6cb88f 100644
--- a/hw/vfio-user/container.c
+++ b/hw/vfio-user/container.c
@@ -39,7 +39,7 @@ static void vfio_user_listener_commit(VFIOContainer *bcontainer)
}
static int vfio_user_dma_unmap(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb, bool unmap_all)
{
VFIOUserContainer *container = VFIO_IOMMU_USER(bcontainer);
@@ -81,7 +81,7 @@ static int vfio_user_dma_unmap(const VFIOContainer *bcontainer,
}
static int vfio_user_dma_map(const VFIOContainer *bcontainer, hwaddr iova,
- ram_addr_t size, void *vaddr, bool readonly,
+ uint64_t size, void *vaddr, bool readonly,
MemoryRegion *mrp)
{
VFIOUserContainer *container = VFIO_IOMMU_USER(bcontainer);
diff --git a/hw/vfio/container-legacy.c b/hw/vfio/container-legacy.c
index 25a15ea8674c159b7e624425c52953240b8c1179..34352dd31fc9b1963c8597ac9e7f8a76fe653ad9 100644
--- a/hw/vfio/container-legacy.c
+++ b/hw/vfio/container-legacy.c
@@ -69,7 +69,7 @@ static int vfio_ram_block_discard_disable(VFIOLegacyContainer *container,
}
static int vfio_dma_unmap_bitmap(const VFIOLegacyContainer *container,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb)
{
const VFIOContainer *bcontainer = VFIO_IOMMU(container);
@@ -122,7 +122,7 @@ unmap_exit:
}
static int vfio_legacy_dma_unmap_one(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb)
{
const VFIOLegacyContainer *container = VFIO_IOMMU_LEGACY(bcontainer);
@@ -167,7 +167,7 @@ static int vfio_legacy_dma_unmap_one(const VFIOContainer *bcontainer,
* DMA - Mapping and unmapping for the "type1" IOMMU interface used on x86
*/
static int vfio_legacy_dma_unmap(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb, bool unmap_all)
{
int ret;
@@ -192,7 +192,7 @@ static int vfio_legacy_dma_unmap(const VFIOContainer *bcontainer,
}
static int vfio_legacy_dma_map(const VFIOContainer *bcontainer, hwaddr iova,
- ram_addr_t size, void *vaddr, bool readonly,
+ uint64_t size, void *vaddr, bool readonly,
MemoryRegion *mr)
{
const VFIOLegacyContainer *container = VFIO_IOMMU_LEGACY(bcontainer);
diff --git a/hw/vfio/container.c b/hw/vfio/container.c
index 9d69439371402940fcbc926737215eb9308b237a..41de343924614ddab08b5a02af11a5415272c29a 100644
--- a/hw/vfio/container.c
+++ b/hw/vfio/container.c
@@ -74,7 +74,7 @@ void vfio_address_space_insert(VFIOAddressSpace *space,
}
int vfio_container_dma_map(VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
void *vaddr, bool readonly, MemoryRegion *mr)
{
VFIOIOMMUClass *vioc = VFIO_IOMMU_GET_CLASS(bcontainer);
@@ -93,7 +93,7 @@ int vfio_container_dma_map(VFIOContainer *bcontainer,
}
int vfio_container_dma_unmap(VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb, bool unmap_all)
{
VFIOIOMMUClass *vioc = VFIO_IOMMU_GET_CLASS(bcontainer);
diff --git a/hw/vfio/cpr-legacy.c b/hw/vfio/cpr-legacy.c
index bbf7a0d35f0ba2b78fd40a60b6e47337665dcbb9..3a1d126556e11c60502084d43138a49c29327ba9 100644
--- a/hw/vfio/cpr-legacy.c
+++ b/hw/vfio/cpr-legacy.c
@@ -39,7 +39,7 @@ static bool vfio_dma_unmap_vaddr_all(VFIOLegacyContainer *container,
* The incoming state is cleared thereafter.
*/
static int vfio_legacy_cpr_dma_map(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size, void *vaddr,
+ hwaddr iova, uint64_t size, void *vaddr,
bool readonly, MemoryRegion *mr)
{
const VFIOLegacyContainer *container = VFIO_IOMMU_LEGACY(bcontainer);
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index f0ffe2359196505468dd5ed159440f4655847d42..68470d552eccc67afbf757de192ba53431e4840b 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -35,7 +35,7 @@
TYPE_HOST_IOMMU_DEVICE_IOMMUFD "-vfio"
static int iommufd_cdev_map(const VFIOContainer *bcontainer, hwaddr iova,
- ram_addr_t size, void *vaddr, bool readonly,
+ uint64_t size, void *vaddr, bool readonly,
MemoryRegion *mr)
{
const VFIOIOMMUFDContainer *container = VFIO_IOMMU_IOMMUFD(bcontainer);
@@ -46,7 +46,7 @@ static int iommufd_cdev_map(const VFIOContainer *bcontainer, hwaddr iova,
}
static int iommufd_cdev_map_file(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
int fd, unsigned long start, bool readonly)
{
const VFIOIOMMUFDContainer *container = VFIO_IOMMU_IOMMUFD(bcontainer);
@@ -57,7 +57,7 @@ static int iommufd_cdev_map_file(const VFIOContainer *bcontainer,
}
static int iommufd_cdev_unmap(const VFIOContainer *bcontainer,
- hwaddr iova, ram_addr_t size,
+ hwaddr iova, uint64_t size,
IOMMUTLBEntry *iotlb, bool unmap_all)
{
const VFIOIOMMUFDContainer *container = VFIO_IOMMU_IOMMUFD(bcontainer);
--
2.51.0
* Re: [PULL 0/5] vfio queue
2025-10-03 10:33 [PULL 0/5] vfio queue Cédric Le Goater
` (4 preceding siblings ...)
2025-10-03 10:33 ` [PULL 5/5] hw/vfio: Use uint64_t for IOVA mapping size in vfio_container_dma_*map Cédric Le Goater
@ 2025-10-03 17:33 ` Richard Henderson
5 siblings, 0 replies; 9+ messages in thread
From: Richard Henderson @ 2025-10-03 17:33 UTC (permalink / raw)
To: qemu-devel
On 10/3/25 03:33, Cédric Le Goater wrote:
> The following changes since commit 29b77c1a2db2d796bc3847852a5c8dc2a1e6e83b:
>
> Merge tag 'rust-ci-pull-request' of https://gitlab.com/marcandre.lureau/qemu into staging (2025-09-30 09:29:38 -0700)
>
> are available in the Git repository at:
>
> https://github.com/legoater/qemu/ tags/pull-vfio-20251003
>
> for you to fetch changes up to f0b52aa08ab0868c18d881381a8fda4b59b37517:
>
> hw/vfio: Use uint64_t for IOVA mapping size in vfio_container_dma_*map (2025-10-02 10:41:23 +0200)
>
> ----------------------------------------------------------------
> vfio queue:
>
> * Remove workaround for kernel DMA unmap overflow
> * Remove invalid uses of ram_addr_t type
Applied, thanks. Please update https://wiki.qemu.org/ChangeLog/10.2 as appropriate.
r~