From: "Cédric Le Goater" <clg@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Peter Xu" <peterx@redhat.com>, "Fabiano Rosas" <farosas@suse.de>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Avihai Horon" <avihaih@nvidia.com>,
"Eric Auger" <eric.auger@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Markus Armbruster" <armbru@redhat.com>
Subject: Re: [PATCH v7 8/9] vfio: Add Error** argument to .get_dirty_bitmap() handler
Date: Thu, 16 May 2024 16:56:37 +0200 [thread overview]
Message-ID: <c23e4964-e59c-41c4-99f0-24d6f85b5350@redhat.com> (raw)
In-Reply-To: <20240516124658.850504-9-clg@redhat.com>
On 5/16/24 14:46, Cédric Le Goater wrote:
> Let the callers do the error reporting. Add documentation while at it.
>
> Reviewed-by: Eric Auger <eric.auger@redhat.com>
> Reviewed-by: Avihai Horon <avihaih@nvidia.com>
> Signed-off-by: Cédric Le Goater <clg@redhat.com>
> ---
>
> Changes in v7:
>
> - Fixed even more line wrapping of *dirty_bitmap() routines (Avihai)
> - vfio_sync_dirty_bitmap()
> Fixed return when vfio_sync_ram_discard_listener_dirty_bitmap() is called (Avihai)
>
> include/hw/vfio/vfio-common.h | 5 +--
> include/hw/vfio/vfio-container-base.h | 19 +++++++--
> hw/vfio/common.c | 60 +++++++++++++++++----------
> hw/vfio/container-base.c | 6 +--
> hw/vfio/container.c | 14 ++++---
> 5 files changed, 67 insertions(+), 37 deletions(-)
>
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index 3ff633ad3b395e953a55683f5f0308bca50af3dd..b6ac24953667bc5f72f28480a6bf0f4722069cb9 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -273,10 +273,9 @@ vfio_devices_all_running_and_mig_active(const VFIOContainerBase *bcontainer);
> bool
> vfio_devices_all_device_dirty_tracking(const VFIOContainerBase *bcontainer);
> int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> - VFIOBitmap *vbmap, hwaddr iova,
> - hwaddr size);
> + VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp);
> int vfio_get_dirty_bitmap(const VFIOContainerBase *bcontainer, uint64_t iova,
> - uint64_t size, ram_addr_t ram_addr);
> + uint64_t size, ram_addr_t ram_addr, Error **errp);
>
> /* Returns 0 on success, or a negative errno. */
> int vfio_device_get_name(VFIODevice *vbasedev, Error **errp);
> diff --git a/include/hw/vfio/vfio-container-base.h b/include/hw/vfio/vfio-container-base.h
> index 326ceea52a2030eec9dad289a9845866c4a8c090..b04057ad1aff73d974ecec718d0fe45f7a930b59 100644
> --- a/include/hw/vfio/vfio-container-base.h
> +++ b/include/hw/vfio/vfio-container-base.h
> @@ -84,8 +84,7 @@ void vfio_container_del_section_window(VFIOContainerBase *bcontainer,
> int vfio_container_set_dirty_page_tracking(VFIOContainerBase *bcontainer,
> bool start, Error **errp);
> int vfio_container_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> - VFIOBitmap *vbmap,
> - hwaddr iova, hwaddr size);
> + VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp);
>
> void vfio_container_init(VFIOContainerBase *bcontainer,
> VFIOAddressSpace *space,
> @@ -138,9 +137,21 @@ struct VFIOIOMMUClass {
> */
> int (*set_dirty_page_tracking)(const VFIOContainerBase *bcontainer,
> bool start, Error **errp);
> + /**
> + * @query_dirty_bitmap
> + *
> + * Get bitmap of dirty pages from container
> + *
> + * @bcontainer: #VFIOContainerBase from which to get dirty pages
> + * @vbmap: #VFIOBitmap internal bitmap structure
> + * @iova: iova base address
> + * @size: size of iova range
> + * @errp: pointer to Error*, to store an error if it happens.
> + *
> + * Returns zero to indicate success and negative for error
> + */
> int (*query_dirty_bitmap)(const VFIOContainerBase *bcontainer,
> - VFIOBitmap *vbmap,
> - hwaddr iova, hwaddr size);
> + VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp);
> /* PCI specific */
> int (*pci_hot_reset)(VFIODevice *vbasedev, bool single);
>
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 7313043f1d161ed0326b5ba3fa1085608eaf6740..21910802c0c58a0efdb07d31c5a709660e89e328 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -1140,8 +1140,7 @@ static int vfio_device_dma_logging_report(VFIODevice *vbasedev, hwaddr iova,
> }
>
> int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> - VFIOBitmap *vbmap, hwaddr iova,
> - hwaddr size)
> + VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp)
> {
> VFIODevice *vbasedev;
> int ret;
> @@ -1150,10 +1149,10 @@ int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> ret = vfio_device_dma_logging_report(vbasedev, iova, size,
> vbmap->bitmap);
> if (ret) {
> - error_report("%s: Failed to get DMA logging report, iova: "
> - "0x%" HWADDR_PRIx ", size: 0x%" HWADDR_PRIx
> - ", err: %d (%s)",
> - vbasedev->name, iova, size, ret, strerror(-ret));
> + error_setg_errno(errp, -ret,
> + "%s: Failed to get DMA logging report, iova: "
> + "0x%" HWADDR_PRIx ", size: 0x%" HWADDR_PRIx,
> + vbasedev->name, iova, size);
>
> return ret;
> }
> @@ -1163,7 +1162,7 @@ int vfio_devices_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> }
>
> int vfio_get_dirty_bitmap(const VFIOContainerBase *bcontainer, uint64_t iova,
> - uint64_t size, ram_addr_t ram_addr)
> + uint64_t size, ram_addr_t ram_addr, Error **errp)
> {
> bool all_device_dirty_tracking =
> vfio_devices_all_device_dirty_tracking(bcontainer);
> @@ -1180,13 +1179,17 @@ int vfio_get_dirty_bitmap(const VFIOContainerBase *bcontainer, uint64_t iova,
>
> ret = vfio_bitmap_alloc(&vbmap, size);
> if (ret) {
> + error_setg_errno(errp, -ret,
> + "Failed to allocate dirty tracking bitmap");
> return ret;
> }
>
> if (all_device_dirty_tracking) {
> - ret = vfio_devices_query_dirty_bitmap(bcontainer, &vbmap, iova, size);
> + ret = vfio_devices_query_dirty_bitmap(bcontainer, &vbmap, iova, size,
> + errp);
> } else {
> - ret = vfio_container_query_dirty_bitmap(bcontainer, &vbmap, iova, size);
> + ret = vfio_container_query_dirty_bitmap(bcontainer, &vbmap, iova, size,
> + errp);
> }
>
> if (ret) {
> @@ -1234,12 +1237,13 @@ static void vfio_iommu_map_dirty_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
> }
>
> ret = vfio_get_dirty_bitmap(bcontainer, iova, iotlb->addr_mask + 1,
> - translated_addr);
> + translated_addr, &local_err);
> if (ret) {
> - error_report("vfio_iommu_map_dirty_notify(%p, 0x%"HWADDR_PRIx", "
> - "0x%"HWADDR_PRIx") = %d (%s)",
> - bcontainer, iova, iotlb->addr_mask + 1, ret,
> - strerror(-ret));
> + error_prepend(&local_err,
> + "vfio_iommu_map_dirty_notify(%p, 0x%"HWADDR_PRIx", "
> + "0x%"HWADDR_PRIx") failed - ", bcontainer, iova,
> + iotlb->addr_mask + 1);
> + error_report_err(local_err);
> }
>
> out_unlock:
> @@ -1259,12 +1263,19 @@ static int vfio_ram_discard_get_dirty_bitmap(MemoryRegionSection *section,
> const ram_addr_t ram_addr = memory_region_get_ram_addr(section->mr) +
> section->offset_within_region;
> VFIORamDiscardListener *vrdl = opaque;
> + Error *local_err = NULL;
> + int ret;
>
> /*
> * Sync the whole mapped region (spanning multiple individual mappings)
> * in one go.
> */
> - return vfio_get_dirty_bitmap(vrdl->bcontainer, iova, size, ram_addr);
> + ret = vfio_get_dirty_bitmap(vrdl->bcontainer, iova, size, ram_addr,
> + &local_err);
> + if (ret) {
> + error_report_err(local_err);
> + }
> + return ret;
> }
>
> static int
> @@ -1296,7 +1307,7 @@ vfio_sync_ram_discard_listener_dirty_bitmap(VFIOContainerBase *bcontainer,
> }
>
> static int vfio_sync_dirty_bitmap(VFIOContainerBase *bcontainer,
> - MemoryRegionSection *section)
> + MemoryRegionSection *section, Error **errp)
> {
> ram_addr_t ram_addr;
>
> @@ -1327,7 +1338,14 @@ static int vfio_sync_dirty_bitmap(VFIOContainerBase *bcontainer,
> }
> return 0;
> } else if (memory_region_has_ram_discard_manager(section->mr)) {
> - return vfio_sync_ram_discard_listener_dirty_bitmap(bcontainer, section);
> + int ret;
> +
> + ret = vfio_sync_ram_discard_listener_dirty_bitmap(bcontainer, section);
> + if (ret) {
> + error_setg(errp,
> + "Failed to sync dirty bitmap with RAM discard listener");
> + return ret;
Sigh. I missed that change, even though I said it was done in the commit log :/
Fortunately I caught it here and will fix it inline. Sorry about that.
Thanks,
C.
> + }
> }
>
> ram_addr = memory_region_get_ram_addr(section->mr) +
> @@ -1335,7 +1353,7 @@ static int vfio_sync_dirty_bitmap(VFIOContainerBase *bcontainer,
>
> return vfio_get_dirty_bitmap(bcontainer,
> REAL_HOST_PAGE_ALIGN(section->offset_within_address_space),
> - int128_get64(section->size), ram_addr);
> + int128_get64(section->size), ram_addr, errp);
> }
>
> static void vfio_listener_log_sync(MemoryListener *listener,
> @@ -1344,16 +1362,16 @@ static void vfio_listener_log_sync(MemoryListener *listener,
> VFIOContainerBase *bcontainer = container_of(listener, VFIOContainerBase,
> listener);
> int ret;
> + Error *local_err = NULL;
>
> if (vfio_listener_skipped_section(section)) {
> return;
> }
>
> if (vfio_devices_all_dirty_tracking(bcontainer)) {
> - ret = vfio_sync_dirty_bitmap(bcontainer, section);
> + ret = vfio_sync_dirty_bitmap(bcontainer, section, &local_err);
> if (ret) {
> - error_report("vfio: Failed to sync dirty bitmap, err: %d (%s)", ret,
> - strerror(-ret));
> + error_report_err(local_err);
> vfio_set_migration_error(ret);
> }
> }
> diff --git a/hw/vfio/container-base.c b/hw/vfio/container-base.c
> index 7c0764121d24b02b6c4e66e368d7dff78a6d65aa..26f4bb464a720c9895b35c7c9e01c84d6322c3c9 100644
> --- a/hw/vfio/container-base.c
> +++ b/hw/vfio/container-base.c
> @@ -64,11 +64,11 @@ int vfio_container_set_dirty_page_tracking(VFIOContainerBase *bcontainer,
> }
>
> int vfio_container_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> - VFIOBitmap *vbmap,
> - hwaddr iova, hwaddr size)
> + VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp)
> {
> g_assert(bcontainer->ops->query_dirty_bitmap);
> - return bcontainer->ops->query_dirty_bitmap(bcontainer, vbmap, iova, size);
> + return bcontainer->ops->query_dirty_bitmap(bcontainer, vbmap, iova, size,
> + errp);
> }
>
> void vfio_container_init(VFIOContainerBase *bcontainer, VFIOAddressSpace *space,
> diff --git a/hw/vfio/container.c b/hw/vfio/container.c
> index c35221fbe7dc5453050f97cd186fc958e24f28f7..9534120d4ac835bb58e37667dad8d39205404c08 100644
> --- a/hw/vfio/container.c
> +++ b/hw/vfio/container.c
> @@ -130,6 +130,7 @@ static int vfio_legacy_dma_unmap(const VFIOContainerBase *bcontainer,
> };
> bool need_dirty_sync = false;
> int ret;
> + Error *local_err = NULL;
>
> if (iotlb && vfio_devices_all_running_and_mig_active(bcontainer)) {
> if (!vfio_devices_all_device_dirty_tracking(bcontainer) &&
> @@ -165,8 +166,9 @@ static int vfio_legacy_dma_unmap(const VFIOContainerBase *bcontainer,
>
> if (need_dirty_sync) {
> ret = vfio_get_dirty_bitmap(bcontainer, iova, size,
> - iotlb->translated_addr);
> + iotlb->translated_addr, &local_err);
> if (ret) {
> + error_report_err(local_err);
> return ret;
> }
> }
> @@ -235,8 +237,7 @@ vfio_legacy_set_dirty_page_tracking(const VFIOContainerBase *bcontainer,
> }
>
> static int vfio_legacy_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> - VFIOBitmap *vbmap,
> - hwaddr iova, hwaddr size)
> + VFIOBitmap *vbmap, hwaddr iova, hwaddr size, Error **errp)
> {
> const VFIOContainer *container = container_of(bcontainer, VFIOContainer,
> bcontainer);
> @@ -264,9 +265,10 @@ static int vfio_legacy_query_dirty_bitmap(const VFIOContainerBase *bcontainer,
> ret = ioctl(container->fd, VFIO_IOMMU_DIRTY_PAGES, dbitmap);
> if (ret) {
> ret = -errno;
> - error_report("Failed to get dirty bitmap for iova: 0x%"PRIx64
> - " size: 0x%"PRIx64" err: %d", (uint64_t)range->iova,
> - (uint64_t)range->size, errno);
> + error_setg_errno(errp, errno,
> + "Failed to get dirty bitmap for iova: 0x%"PRIx64
> + " size: 0x%"PRIx64, (uint64_t)range->iova,
> + (uint64_t)range->size);
> }
>
> g_free(dbitmap);
Thread overview: 14+ messages
2024-05-16 12:46 [PATCH v7 0/9] vfio: Improve error reporting (part 2) Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 1/9] vfio: Add Error** argument to .set_dirty_page_tracking() handler Cédric Le Goater
2024-05-29 6:26 ` Markus Armbruster
2024-05-29 9:45 ` Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 2/9] vfio: Add Error** argument to vfio_devices_dma_logging_start() Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 3/9] migration: Extend migration_file_set_error() with Error* argument Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 4/9] vfio/migration: Add an Error** argument to vfio_migration_set_state() Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 5/9] vfio/migration: Add Error** argument to .vfio_save_config() handler Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 6/9] vfio: Reverse test on vfio_get_xlat_addr() Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 7/9] memory: Add Error** argument to memory_get_xlat_addr() Cédric Le Goater
2024-05-16 12:46 ` [PATCH v7 8/9] vfio: Add Error** argument to .get_dirty_bitmap() handler Cédric Le Goater
2024-05-16 14:56 ` Cédric Le Goater [this message]
2024-05-16 12:46 ` [PATCH v7 9/9] vfio: Also trace event failures in vfio_save_complete_precopy() Cédric Le Goater
2024-05-16 16:22 ` [PATCH v7 0/9] vfio: Improve error reporting (part 2) Cédric Le Goater