From: Eric Auger <eric.auger@redhat.com>
To: Zhenzhong Duan <zhenzhong.duan@intel.com>, qemu-devel@nongnu.org
Cc: alex.williamson@redhat.com, clg@redhat.com, jgg@nvidia.com,
nicolinc@nvidia.com, joao.m.martins@oracle.com,
peterx@redhat.com, jasowang@redhat.com, kevin.tian@intel.com,
yi.l.liu@intel.com, yi.y.sun@intel.com, chao.p.peng@intel.com
Subject: Re: [PATCH v1 06/22] vfio/common: Add a vfio device iterator
Date: Wed, 20 Sep 2023 14:25:47 +0200
Message-ID: <7a8611de-eef9-9e11-766e-77c20d6973b7@redhat.com>
In-Reply-To: <20230830103754.36461-7-zhenzhong.duan@intel.com>
Hi Zhenzhong,
On 8/30/23 12:37, Zhenzhong Duan wrote:
> With a vfio device iterator added, we can make some migration and reset
> related functions group agnostic.
> E.g.:
> vfio_mig_active
> vfio_migratable_device_num
> vfio_devices_all_dirty_tracking
> vfio_devices_all_device_dirty_tracking
> vfio_devices_all_running_and_mig_active
> vfio_devices_dma_logging_stop
> vfio_devices_dma_logging_start
> vfio_devices_query_dirty_bitmap
> vfio_reset_handler
>
> Or else we need to add container specific callback variants for above
> functions just because they iterate devices based on group.
>
> Move the reset handler registration/unregistration to a place that is not
> group specific, saying first vfio address space created instead of the
> first group.
I would move the reset handler registration/unregistration changes to a
separate patch.
Besides, I don't understand what you mean by
"saying first vfio address space created instead of the first group."
>
> Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
> ---
> hw/vfio/common.c | 224 ++++++++++++++++++++++++++---------------------
> 1 file changed, 122 insertions(+), 102 deletions(-)
>
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 949ad6714a..51c6e7598e 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -84,6 +84,26 @@ static int vfio_ram_block_discard_disable(VFIOContainer *container, bool state)
> }
> }
>
I would add a comment stating that we iterate over all devices from all
groups attached to a container.
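Something like the following, just a suggestion for the wording based on
what the helper actually does:

/*
 * Iterate over all VFIODevices from all groups attached to @container,
 * returning the device after @curr, or the first device when @curr is NULL.
 */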
> +static VFIODevice *vfio_container_dev_iter_next(VFIOContainer *container,
> + VFIODevice *curr)
> +{
> + VFIOGroup *group;
> +
> + if (!curr) {
> + group = QLIST_FIRST(&container->group_list);
> + } else {
> + if (curr->next.le_next) {
> + return curr->next.le_next;
> + }
> + group = curr->group->container_next.le_next;
> + }
> +
> + if (!group) {
> + return NULL;
> + }
> + return QLIST_FIRST(&group->device_list);
> +}
> +
> /*
> * Device state interfaces
> */
> @@ -112,17 +132,22 @@ static int vfio_get_dirty_bitmap(VFIOContainer *container, uint64_t iova,
>
> bool vfio_mig_active(void)
> {
> - VFIOGroup *group;
> + VFIOAddressSpace *space;
> + VFIOContainer *container;
> VFIODevice *vbasedev;
>
> - if (QLIST_EMPTY(&vfio_group_list)) {
> + if (QLIST_EMPTY(&vfio_address_spaces)) {
> return false;
> }
>
> - QLIST_FOREACH(group, &vfio_group_list, next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - if (vbasedev->migration_blocker) {
> - return false;
> + QLIST_FOREACH(space, &vfio_address_spaces, list) {
> + QLIST_FOREACH(container, &space->containers, next) {
> + vbasedev = NULL;
> + while ((vbasedev = vfio_container_dev_iter_next(container,
> + vbasedev))) {
Couldn't you use an extra define such as:
#define CONTAINER_FOREACH_DEV(container, vbasedev) \
    vbasedev = NULL; \
    while ((vbasedev = vfio_container_dev_iter_next(container, vbasedev)))
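Just as an illustration (untested, only a sketch), a for-based variant would
keep it a single statement, which is safer to use after a bare if:

#define CONTAINER_FOREACH_DEV(container, vbasedev) \
    for ((vbasedev) = vfio_container_dev_iter_next((container), NULL); \
         (vbasedev); \
         (vbasedev) = vfio_container_dev_iter_next((container), (vbasedev)))

With that, a call site such as vfio_mig_active() above would read roughly:

    QLIST_FOREACH(space, &vfio_address_spaces, list) {
        QLIST_FOREACH(container, &space->containers, next) {
            CONTAINER_FOREACH_DEV(container, vbasedev) {
                if (vbasedev->migration_blocker) {
                    return false;
                }
            }
        }
    }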
> + if (vbasedev->migration_blocker) {
> + return false;
> + }
> }
> }
> }
> @@ -133,14 +158,19 @@ static Error *multiple_devices_migration_blocker;
>
> static unsigned int vfio_migratable_device_num(void)
> {
> - VFIOGroup *group;
> + VFIOAddressSpace *space;
> + VFIOContainer *container;
> VFIODevice *vbasedev;
> unsigned int device_num = 0;
>
> - QLIST_FOREACH(group, &vfio_group_list, next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - if (vbasedev->migration) {
> - device_num++;
> + QLIST_FOREACH(space, &vfio_address_spaces, list) {
> + QLIST_FOREACH(container, &space->containers, next) {
> + vbasedev = NULL;
> + while ((vbasedev = vfio_container_dev_iter_next(container,
> + vbasedev))) {
> + if (vbasedev->migration) {
> + device_num++;
> + }
> }
> }
> }
> @@ -207,8 +237,7 @@ static void vfio_set_migration_error(int err)
>
> static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
> {
> - VFIOGroup *group;
> - VFIODevice *vbasedev;
> + VFIODevice *vbasedev = NULL;
> MigrationState *ms = migrate_get_current();
>
> if (ms->state != MIGRATION_STATUS_ACTIVE &&
> @@ -216,19 +245,17 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
> return false;
> }
>
> - QLIST_FOREACH(group, &container->group_list, container_next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - VFIOMigration *migration = vbasedev->migration;
> + while ((vbasedev = vfio_container_dev_iter_next(container, vbasedev))) {
> + VFIOMigration *migration = vbasedev->migration;
>
> - if (!migration) {
> - return false;
> - }
> + if (!migration) {
> + return false;
> + }
>
> - if (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF &&
> - (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
> - migration->device_state == VFIO_DEVICE_STATE_PRE_COPY)) {
> - return false;
> - }
> + if (vbasedev->pre_copy_dirty_page_tracking == ON_OFF_AUTO_OFF &&
> + (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
> + migration->device_state == VFIO_DEVICE_STATE_PRE_COPY)) {
> + return false;
> }
> }
> return true;
> @@ -236,14 +263,11 @@ static bool vfio_devices_all_dirty_tracking(VFIOContainer *container)
>
> static bool vfio_devices_all_device_dirty_tracking(VFIOContainer *container)
> {
> - VFIOGroup *group;
> - VFIODevice *vbasedev;
> + VFIODevice *vbasedev = NULL;
>
> - QLIST_FOREACH(group, &container->group_list, container_next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - if (!vbasedev->dirty_pages_supported) {
> - return false;
> - }
> + while ((vbasedev = vfio_container_dev_iter_next(container, vbasedev))) {
> + if (!vbasedev->dirty_pages_supported) {
> + return false;
> }
> }
>
> @@ -256,27 +280,24 @@ static bool vfio_devices_all_device_dirty_tracking(VFIOContainer *container)
> */
> static bool vfio_devices_all_running_and_mig_active(VFIOContainer *container)
> {
> - VFIOGroup *group;
> - VFIODevice *vbasedev;
> + VFIODevice *vbasedev = NULL;
>
> if (!migration_is_active(migrate_get_current())) {
> return false;
> }
>
> - QLIST_FOREACH(group, &container->group_list, container_next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - VFIOMigration *migration = vbasedev->migration;
> + while ((vbasedev = vfio_container_dev_iter_next(container, vbasedev))) {
> + VFIOMigration *migration = vbasedev->migration;
>
> - if (!migration) {
> - return false;
> - }
> + if (!migration) {
> + return false;
> + }
>
> - if (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
> - migration->device_state == VFIO_DEVICE_STATE_PRE_COPY) {
> - continue;
> - } else {
> - return false;
> - }
> + if (migration->device_state == VFIO_DEVICE_STATE_RUNNING ||
> + migration->device_state == VFIO_DEVICE_STATE_PRE_COPY) {
> + continue;
> + } else {
> + return false;
> }
> }
> return true;
> @@ -1243,25 +1264,22 @@ static void vfio_devices_dma_logging_stop(VFIOContainer *container)
> uint64_t buf[DIV_ROUND_UP(sizeof(struct vfio_device_feature),
> sizeof(uint64_t))] = {};
> struct vfio_device_feature *feature = (struct vfio_device_feature *)buf;
> - VFIODevice *vbasedev;
> - VFIOGroup *group;
> + VFIODevice *vbasedev = NULL;
>
> feature->argsz = sizeof(buf);
> feature->flags = VFIO_DEVICE_FEATURE_SET |
> VFIO_DEVICE_FEATURE_DMA_LOGGING_STOP;
>
> - QLIST_FOREACH(group, &container->group_list, container_next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - if (!vbasedev->dirty_tracking) {
> - continue;
> - }
> + while ((vbasedev = vfio_container_dev_iter_next(container, vbasedev))) {
> + if (!vbasedev->dirty_tracking) {
> + continue;
> + }
>
> - if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
> - warn_report("%s: Failed to stop DMA logging, err %d (%s)",
> - vbasedev->name, -errno, strerror(errno));
> - }
> - vbasedev->dirty_tracking = false;
> + if (ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature)) {
> + warn_report("%s: Failed to stop DMA logging, err %d (%s)",
> + vbasedev->name, -errno, strerror(errno));
> }
> + vbasedev->dirty_tracking = false;
> }
> }
>
> @@ -1336,8 +1354,7 @@ static int vfio_devices_dma_logging_start(VFIOContainer *container)
> {
> struct vfio_device_feature *feature;
> VFIODirtyRanges ranges;
> - VFIODevice *vbasedev;
> - VFIOGroup *group;
> + VFIODevice *vbasedev = NULL;
> int ret = 0;
>
> vfio_dirty_tracking_init(container, &ranges);
> @@ -1347,21 +1364,19 @@ static int vfio_devices_dma_logging_start(VFIOContainer *container)
> return -errno;
> }
>
> - QLIST_FOREACH(group, &container->group_list, container_next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - if (vbasedev->dirty_tracking) {
> - continue;
> - }
> + while ((vbasedev = vfio_container_dev_iter_next(container, vbasedev))) {
> + if (vbasedev->dirty_tracking) {
> + continue;
> + }
>
> - ret = ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature);
> - if (ret) {
> - ret = -errno;
> - error_report("%s: Failed to start DMA logging, err %d (%s)",
> - vbasedev->name, ret, strerror(errno));
> - goto out;
> - }
> - vbasedev->dirty_tracking = true;
> + ret = ioctl(vbasedev->fd, VFIO_DEVICE_FEATURE, feature);
> + if (ret) {
> + ret = -errno;
> + error_report("%s: Failed to start DMA logging, err %d (%s)",
> + vbasedev->name, ret, strerror(errno));
> + goto out;
> }
> + vbasedev->dirty_tracking = true;
> }
>
> out:
> @@ -1440,22 +1455,19 @@ static int vfio_devices_query_dirty_bitmap(VFIOContainer *container,
> VFIOBitmap *vbmap, hwaddr iova,
> hwaddr size)
> {
> - VFIODevice *vbasedev;
> - VFIOGroup *group;
> + VFIODevice *vbasedev = NULL;
> int ret;
>
> - QLIST_FOREACH(group, &container->group_list, container_next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - ret = vfio_device_dma_logging_report(vbasedev, iova, size,
> - vbmap->bitmap);
> - if (ret) {
> - error_report("%s: Failed to get DMA logging report, iova: "
> - "0x%" HWADDR_PRIx ", size: 0x%" HWADDR_PRIx
> - ", err: %d (%s)",
> - vbasedev->name, iova, size, ret, strerror(-ret));
> + while ((vbasedev = vfio_container_dev_iter_next(container, vbasedev))) {
> + ret = vfio_device_dma_logging_report(vbasedev, iova, size,
> + vbmap->bitmap);
> + if (ret) {
> + error_report("%s: Failed to get DMA logging report, iova: "
> + "0x%" HWADDR_PRIx ", size: 0x%" HWADDR_PRIx
> + ", err: %d (%s)",
> + vbasedev->name, iova, size, ret, strerror(-ret));
>
> - return ret;
> - }
> + return ret;
> }
> }
>
> @@ -1739,21 +1751,30 @@ bool vfio_get_info_dma_avail(struct vfio_iommu_type1_info *info,
>
> void vfio_reset_handler(void *opaque)
> {
> - VFIOGroup *group;
> + VFIOAddressSpace *space;
> + VFIOContainer *container;
> VFIODevice *vbasedev;
>
> - QLIST_FOREACH(group, &vfio_group_list, next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - if (vbasedev->dev->realized) {
> - vbasedev->ops->vfio_compute_needs_reset(vbasedev);
> + QLIST_FOREACH(space, &vfio_address_spaces, list) {
> + QLIST_FOREACH(container, &space->containers, next) {
> + vbasedev = NULL;
> + while ((vbasedev = vfio_container_dev_iter_next(container,
> + vbasedev))) {
> + if (vbasedev->dev->realized) {
> + vbasedev->ops->vfio_compute_needs_reset(vbasedev);
> + }
> }
> }
> }
>
> - QLIST_FOREACH(group, &vfio_group_list, next) {
> - QLIST_FOREACH(vbasedev, &group->device_list, next) {
> - if (vbasedev->dev->realized && vbasedev->needs_reset) {
> - vbasedev->ops->vfio_hot_reset_multi(vbasedev);
> + QLIST_FOREACH(space, &vfio_address_spaces, list) {
> + QLIST_FOREACH(container, &space->containers, next) {
> + vbasedev = NULL;
> + while ((vbasedev = vfio_container_dev_iter_next(container,
> + vbasedev))) {
> + if (vbasedev->dev->realized && vbasedev->needs_reset) {
> + vbasedev->ops->vfio_hot_reset_multi(vbasedev);
> + }
> }
> }
> }
> @@ -1841,6 +1862,10 @@ static VFIOAddressSpace *vfio_get_address_space(AddressSpace *as)
> space->as = as;
> QLIST_INIT(&space->containers);
>
> + if (QLIST_EMPTY(&vfio_address_spaces)) {
> + qemu_register_reset(vfio_reset_handler, NULL);
> + }
> +
> QLIST_INSERT_HEAD(&vfio_address_spaces, space, list);
>
> return space;
> @@ -1852,6 +1877,9 @@ static void vfio_put_address_space(VFIOAddressSpace *space)
> QLIST_REMOVE(space, list);
> g_free(space);
> }
> + if (QLIST_EMPTY(&vfio_address_spaces)) {
> + qemu_unregister_reset(vfio_reset_handler, NULL);
> + }
> }
>
> /*
> @@ -2317,10 +2345,6 @@ VFIOGroup *vfio_get_group(int groupid, AddressSpace *as, Error **errp)
> goto close_fd_exit;
> }
>
> - if (QLIST_EMPTY(&vfio_group_list)) {
> - qemu_register_reset(vfio_reset_handler, NULL);
> - }
> -
> QLIST_INSERT_HEAD(&vfio_group_list, group, next);
>
> return group;
> @@ -2349,10 +2373,6 @@ void vfio_put_group(VFIOGroup *group)
> trace_vfio_put_group(group->fd);
> close(group->fd);
> g_free(group);
> -
> - if (QLIST_EMPTY(&vfio_group_list)) {
> - qemu_unregister_reset(vfio_reset_handler, NULL);
> - }
> }
>
> struct vfio_device_info *vfio_get_device_info(int fd)
Thanks
Eric