From: Alex Williamson <alex.williamson@redhat.com>
To: Avihai Horon <avihaih@nvidia.com>
Cc: qemu-devel@nongnu.org, "Cédric Le Goater" <clg@redhat.com>,
"Juan Quintela" <quintela@redhat.com>,
"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Peter Xu" <peterx@redhat.com>,
"Jason Wang" <jasowang@redhat.com>,
"Marcel Apfelbaum" <marcel.apfelbaum@gmail.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Eduardo Habkost" <eduardo@habkost.net>,
"David Hildenbrand" <david@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Yishai Hadas" <yishaih@nvidia.com>,
"Jason Gunthorpe" <jgg@nvidia.com>,
"Maor Gottlieb" <maorg@nvidia.com>,
"Kirti Wankhede" <kwankhede@nvidia.com>,
"Tarun Gupta" <targupta@nvidia.com>,
"Joao Martins" <joao.m.martins@oracle.com>
Subject: Re: [PATCH v2 11/20] vfio/common: Add device dirty page tracking start/stop
Date: Wed, 22 Feb 2023 15:40:43 -0700
Message-ID: <20230222154043.35644d31.alex.williamson@redhat.com>
In-Reply-To: <20230222174915.5647-12-avihaih@nvidia.com>

On Wed, 22 Feb 2023 19:49:06 +0200
Avihai Horon <avihaih@nvidia.com> wrote:
> From: Joao Martins <joao.m.martins@oracle.com>
>
> Add device dirty page tracking start/stop functionality. This uses the
> device DMA logging uAPI to start and stop dirty page tracking by device.
>
> Device dirty page tracking is used only if all devices within a
> container support device dirty page tracking.
>
> Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
> Signed-off-by: Avihai Horon <avihaih@nvidia.com>
> ---
>  include/hw/vfio/vfio-common.h |   2 +
>  hw/vfio/common.c              | 211 +++++++++++++++++++++++++++++++++-
>  2 files changed, 211 insertions(+), 2 deletions(-)
>
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index 6f36876ce0..1f21e1fa43 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -149,6 +149,8 @@ typedef struct VFIODevice {
>      VFIOMigration *migration;
>      Error *migration_blocker;
>      OnOffAuto pre_copy_dirty_page_tracking;
> +    bool dirty_pages_supported;
> +    bool dirty_tracking;
>  } VFIODevice;
>
>  struct VFIODeviceOps {
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 6041da6c7e..740153e7d7 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -473,6 +473,22 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
>      return true;
>  }
>
> +static bool vfio_devices_all_device_dirty_tracking(VFIOContainer *container)
> +{
> +    VFIOGroup *group;
> +    VFIODevice *vbasedev;
> +
> +    QLIST_FOREACH(group, &container->group_list, container_next) {
> +        QLIST_FOREACH(vbasedev, &group->device_list, next) {
> +            if (!vbasedev->dirty_pages_supported) {
> +                return false;
> +            }
> +        }
> +    }
> +
> +    return true;
> +}
> +
>  /*
>   * Check if all VFIO devices are running and migration is active, which is
>   * essentially equivalent to the migration being in pre-copy phase.
> @@ -1404,13 +1420,192 @@ static int vfio_set_dirty_page_tracking(VFIOContainer *container, bool start)
>      return ret;
>  }
>
> +static int vfio_devices_dma_logging_set(VFIOContainer *container,
> +                                        struct vfio_device_feature *feature)
> +{
> +    bool status = (feature->flags & VFIO_DEVICE_FEATURE_MASK) ==
> +                  VFIO_DEVICE_FEATURE_DMA_LOGGING_START;
> +    VFIODevice *vbasedev;
> +    VFIOGroup *group;
> +    int ret = 0;
> +
> +    QLIST_FOREACH(group, &container->group_list, container_next) {
> +        QLIST_FOREACH(vbasedev, &group->device_list, next) {
> +            if (vbasedev->dirty_tracking == status) {
> +                continue;
> +            }
> +
> +            ret = ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature);
> +            if (ret) {
> +                ret = -errno;
> +                error_report("%s: Failed to set DMA logging %s, err %d (%s)",
> +                             vbasedev->name, status ? "start" : "stop", ret,
> +                             strerror(errno));
> +                goto out;
> +            }
> +            vbasedev->dirty_tracking = status;
> +        }
> +    }
> +
> +out:
> +    return ret;
> +}
> +
> +static int vfio_devices_dma_logging_stop(VFIOContainer *container)
> +{
> +    uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature),
> +                              sizeof(uint64_t))] = {};
> +    struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
> +
> +    feature->argsz = sizeof(buf);
> +    feature->flags = VFIO_DEVICE_FEATURE_SET;
> +    feature->flags |= VFIO_DEVICE_FEATURE_DMA_LOGGING_STOP;
> +
> +    return vfio_devices_dma_logging_set(container, feature);
> +}
> +
> +static gboolean vfio_device_dma_logging_range_add(DMAMap *map, gpointer data)
> +{
> +    struct vfio_device_feature_dma_logging_range **out = data;
> +    struct vfio_device_feature_dma_logging_range *range = *out;
> +
> +    range->iova = map->iova;
> +    /* IOVATree is inclusive, DMA logging uAPI isn't, so add 1 to length */
> +    range->length = map->size + 1;
> +
> +    *out = ++range;
> +
> +    return false;
> +}
> +
> +static gboolean vfio_iova_tree_get_first(DMAMap *map, gpointer data)
> +{
> +    DMAMap *first = data;
> +
> +    first->iova = map->iova;
> +    first->size = map->size;
> +
> +    return true;
> +}
> +
> +static gboolean vfio_iova_tree_get_last(DMAMap *map, gpointer data)
> +{
> +    DMAMap *last = data;
> +
> +    last->iova = map->iova;
> +    last->size = map->size;
> +
> +    return false;
> +}
> +
> +static struct vfio_device_feature *
> +vfio_device_feature_dma_logging_start_create(VFIOContainer *container)
> +{
> +    struct vfio_device_feature *feature;
> +    size_t feature_size;
> +    struct vfio_device_feature_dma_logging_control *control;
> +    struct vfio_device_feature_dma_logging_range *ranges;
> +    unsigned int max_ranges;
> +    unsigned int cur_ranges;
> +
> +    feature_size = sizeof(struct vfio_device_feature) +
> +                   sizeof(struct vfio_device_feature_dma_logging_control);
> +    feature = g_malloc0(feature_size);
> +    feature->argsz = feature_size;
> +    feature->flags = VFIO_DEVICE_FEATURE_SET;
> +    feature->flags |= VFIO_DEVICE_FEATURE_DMA_LOGGING_START;
> +
> +    control = (struct vfio_device_feature_dma_logging_control *)feature->data;
> +    control->page_size = qemu_real_host_page_size();
> +
> +    QEMU_LOCK_GUARD(&container->mappings_mutex);
> +
> +    /*
> +     * DMA logging uAPI guarantees to support at least num_ranges that fits into
> +     * a single host kernel page. To be on the safe side, use this as a limit
> +     * from which to merge to a single range.
> +     */
> +    max_ranges = qemu_real_host_page_size() / sizeof(*ranges);
> +    cur_ranges = iova_tree_nnodes(container->mappings);
> +    control->num_ranges = (cur_ranges <= max_ranges) ? cur_ranges : 1;

This makes me suspicious that we're implementing to the characteristics
of a specific device rather than strictly to the vfio migration API.
Are we just trying to avoid the error handling needed to support a
try-and-fall-back-to-a-single-range behavior?  If we want to make a
simplification, then document it as such.  The "[t]o be on the safe
side" phrasing above could later be interpreted as avoiding an issue
and might discourage a more complete implementation.  Thanks,

Alex
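
P.S. For a sense of scale: assuming the DMA logging uAPI range entry is
two __u64 fields (16 bytes) and the host page size is 4KiB, the "fits
into a single host kernel page" guarantee already works out to a few
hundred ranges.  A trivial standalone check of that arithmetic (the
struct below is only a stand-in for the real uAPI header):

#include <stdio.h>
#include <stdint.h>

/* Stand-in for struct vfio_device_feature_dma_logging_range (iova + length) */
struct dma_logging_range {
    uint64_t iova;
    uint64_t length;
};

int main(void)
{
    /* What qemu_real_host_page_size() returns on most x86-64 hosts */
    size_t page_size = 4096;
    size_t max_ranges = page_size / sizeof(struct dma_logging_range);

    printf("ranges guaranteed to fit in one page: %zu\n", max_ranges); /* 256 */
    return 0;
}

Under those assumptions the single-range collapse only kicks in once a
container has more than a couple hundred discrete mappings.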
> +    ranges = g_try_new0(struct vfio_device_feature_dma_logging_range,
> +                        control->num_ranges);
> +    if (!ranges) {
> +        g_free(feature);
> +        errno = ENOMEM;
> +
> +        return NULL;
> +    }
> +
> +    control->ranges = (uint64_t)ranges;
> +    if (cur_ranges <= max_ranges) {
> +        iova_tree_foreach(container->mappings,
> +                          vfio_device_dma_logging_range_add, &ranges);
> +    } else {
> +        DMAMap first, last;
> +
> +        iova_tree_foreach(container->mappings, vfio_iova_tree_get_first,
> +                          &first);
> +        iova_tree_foreach(container->mappings, vfio_iova_tree_get_last, &last);
> +        ranges->iova = first.iova;
> +        /* IOVATree is inclusive, DMA logging uAPI isn't, so add 1 to length */
> +        ranges->length = (last.iova - first.iova) + last.size + 1;
> +    }
> +
> +    return feature;
> +}
> +
> +static void vfio_device_feature_dma_logging_start_destroy(
> +    struct vfio_device_feature *feature)
> +{
> +    struct vfio_device_feature_dma_logging_control *control =
> +        (struct vfio_device_feature_dma_logging_control *)feature->data;
> +    struct vfio_device_feature_dma_logging_range *ranges =
> +        (struct vfio_device_feature_dma_logging_range *)control->ranges;
> +
> +    g_free(ranges);
> +    g_free(feature);
> +}
> +
> +static int vfio_devices_dma_logging_start(VFIOContainer *container)
> +{
> +    struct vfio_device_feature *feature;
> +    int ret;
> +
> +    feature = vfio_device_feature_dma_logging_start_create(container);
> +    if (!feature) {
> +        return -errno;
> +    }
> +
> +    ret = vfio_devices_dma_logging_set(container, feature);
> +    if (ret) {
> +        vfio_devices_dma_logging_stop(container);
> +    }
> +
> +    vfio_device_feature_dma_logging_start_destroy(feature);
> +
> +    return ret;
> +}
> +
>  static void vfio_listener_log_global_start(MemoryListener *listener)
>  {
>      VFIOContainer *container = container_of(listener, VFIOContainer, listener);
>      int ret;
>
> -    ret = vfio_set_dirty_page_tracking(container, true);
> +    if (vfio_devices_all_device_dirty_tracking(container)) {
> +        if (vfio_have_giommu(container)) {
> +            /* Device dirty page tracking currently doesn't support vIOMMU */
> +            return;
> +        }
> +
> +        ret = vfio_devices_dma_logging_start(container);
> +    } else {
> +        ret = vfio_set_dirty_page_tracking(container, true);
> +    }
> +
>      if (ret) {
> +        error_report("vfio: Could not start dirty page tracking, err: %d (%s)",
> +                     ret, strerror(-ret));
>          vfio_set_migration_error(ret);
>      }
>  }
> @@ -1420,8 +1615,20 @@ static void vfio_listener_log_global_stop(MemoryListener *listener)
>      VFIOContainer *container = container_of(listener, VFIOContainer, listener);
>      int ret;
>
> -    ret = vfio_set_dirty_page_tracking(container, false);
> +    if (vfio_devices_all_device_dirty_tracking(container)) {
> +        if (vfio_have_giommu(container)) {
> +            /* Device dirty page tracking currently doesn't support vIOMMU */
> +            return;
> +        }
> +
> +        ret = vfio_devices_dma_logging_stop(container);
> +    } else {
> +        ret = vfio_set_dirty_page_tracking(container, false);
> +    }
> +
>      if (ret) {
> +        error_report("vfio: Could not stop dirty page tracking, err: %d (%s)",
> +                     ret, strerror(-ret));
>          vfio_set_migration_error(ret);
>      }
>  }
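
As an aside, the "try the full range list, fall back to a single merged
range" approach mentioned above could look roughly like the sketch
below.  This is a userspace-level illustration against <linux/vfio.h>
only (assuming kernel headers that provide the DMA logging uAPI); the
helper name, the buffer sizing and the blanket retry-on-any-error
policy are my assumptions for illustration, not what this series
implements:

#include <errno.h>
#include <stdlib.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/*
 * Start device DMA logging with the full range list; if the kernel
 * rejects it (e.g. too many ranges), retry with one range covering
 * everything.  Assumes 'ranges' is sorted by iova and nr >= 1.
 */
static int dma_logging_start_with_fallback(int device_fd, uint64_t page_size,
        struct vfio_device_feature_dma_logging_range *ranges, uint32_t nr)
{
    size_t sz = sizeof(struct vfio_device_feature) +
                sizeof(struct vfio_device_feature_dma_logging_control);
    struct vfio_device_feature *feature = calloc(1, sz);
    struct vfio_device_feature_dma_logging_control *control;
    struct vfio_device_feature_dma_logging_range merged;
    int ret;

    if (!feature) {
        return -ENOMEM;
    }

    feature->argsz = sz;
    feature->flags = VFIO_DEVICE_FEATURE_SET |
                     VFIO_DEVICE_FEATURE_DMA_LOGGING_START;
    control = (void *)feature->data;
    control->page_size = page_size;
    control->num_ranges = nr;
    control->ranges = (uintptr_t)ranges;

    ret = ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
    if (ret) {
        /* Fall back to one range covering [first iova, end of last range) */
        merged.iova = ranges[0].iova;
        merged.length = ranges[nr - 1].iova + ranges[nr - 1].length -
                        ranges[0].iova;
        control->num_ranges = 1;
        control->ranges = (uintptr_t)&merged;
        ret = ioctl(device_fd, VFIO_DEVICE_FEATURE, feature);
    }

    ret = ret ? -errno : 0;
    free(feature);
    return ret;
}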