qemu-devel.nongnu.org archive mirror
From: Alex Williamson <alex.williamson@redhat.com>
To: Eric Auger <eric.auger@redhat.com>
Cc: eric.auger.pro@gmail.com, qemu-devel@nongnu.org,
	qemu-arm@nongnu.org, clg@redhat.com, jean-philippe@linaro.org,
	mst@redhat.com, pbonzini@redhat.com, peter.maydell@linaro.org,
	peterx@redhat.com, david@redhat.com, philmd@linaro.org,
	zhenzhong.duan@intel.com, yi.l.liu@intel.com
Subject: Re: [PATCH v3 03/13] vfio: Collect container iova range info
Date: Wed, 18 Oct 2023 13:07:37 -0600
Message-ID: <20231018130737.3815d3c4.alex.williamson@redhat.com>
In-Reply-To: <20231011175516.541374-4-eric.auger@redhat.com>

On Wed, 11 Oct 2023 19:52:19 +0200
Eric Auger <eric.auger@redhat.com> wrote:

> Collect iova range information if the
> VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE capability is supported.
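
For reference, the capability payload this refers to is described in
the kernel uapi (linux/vfio.h) roughly as below -- paraphrased here,
not part of this patch:

    struct vfio_iova_range {
        __u64 start;
        __u64 end;      /* inclusive upper bound */
    };

    struct vfio_iommu_type1_info_cap_iova_range {
        struct vfio_info_cap_header header;
        __u32 nr_iovas;
        __u32 reserved;
        struct vfio_iova_range iova_ranges[];
    };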
> 
> This allows the information to be propagated through the IOMMU MR
> set_iova_ranges() callback so that virtual IOMMUs become aware of
> those aperture constraints. This is only done if the info is
> available and the number of iova ranges is greater than 0.
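
The wrapper used for that propagation is introduced in patch 02
("memory: Introduce memory_region_iommu_set_iova_ranges"); judging
from the call site below, its shape is roughly the following
(paraphrased, see that patch for the exact prototype):

    int memory_region_iommu_set_iova_ranges(IOMMUMemoryRegion *iommu_mr,
                                            GList *iova_ranges,
                                            Error **errp);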
> 
> A new vfio_get_info_iova_range helper is introduced, matching
> the coding style of the existing vfio_get_info_dma_avail. Its
> boolean return value isn't used, though. The code of both
> helpers is aligned.
> 
> Signed-off-by: Eric Auger <eric.auger@redhat.com>
> 
> ---
> 
> v2 -> v3:
> - Turn nr_iovas into an int initialized to -1
> - memory_region_iommu_set_iova_ranges is only called if nr_iovas > 0
> - vfio_get_info_iova_range returns a bool to match
>   vfio_get_info_dma_avail. Align the code of both helpers by using
>   !hdr in the check
> - rebase on top of vfio-next
> ---
>  include/hw/vfio/vfio-common.h |  2 ++
>  hw/vfio/common.c              |  9 +++++++
>  hw/vfio/container.c           | 44 ++++++++++++++++++++++++++++++++---
>  3 files changed, 52 insertions(+), 3 deletions(-)
> 
> diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
> index 7780b9073a..848ff47960 100644
> --- a/include/hw/vfio/vfio-common.h
> +++ b/include/hw/vfio/vfio-common.h
> @@ -99,6 +99,8 @@ typedef struct VFIOContainer {
>      QLIST_HEAD(, VFIORamDiscardListener) vrdl_list;
>      QLIST_ENTRY(VFIOContainer) next;
>      QLIST_HEAD(, VFIODevice) device_list;
> +    int nr_iovas;
> +    GList *iova_ranges;

Nit, nr_iovas seems like it has a pretty weak use case here.  We can
just test iova_ranges != NULL for calling set_iova_ranges.  In patch 13
we can again test against NULL, which I think also negates the need to
assert nr_iovas since the NULL test automatically catches the zero
case.  Otherwise

Reviewed-by: Alex Williamson <alex.williamson@redhat.com>
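
For illustration, the simplification I have in mind is roughly the
following (untested sketch of the hunk further below, with nr_iovas
dropped):

    /* hw/vfio/common.c, vfio_listener_region_add() */
    if (container->iova_ranges) {
        ret = memory_region_iommu_set_iova_ranges(giommu->iommu_mr,
                                                  container->iova_ranges,
                                                  &err);
        if (ret) {
            g_free(giommu);
            goto fail;
        }
    }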

>  } VFIOContainer;
>  
>  typedef struct VFIOGuestIOMMU {
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 5ff5acf1d8..9d804152ba 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -699,6 +699,15 @@ static void vfio_listener_region_add(MemoryListener *listener,
>              goto fail;
>          }
>  
> +        if (container->nr_iovas > 0) {
> +            ret = memory_region_iommu_set_iova_ranges(giommu->iommu_mr,
> +                    container->iova_ranges, &err);
> +            if (ret) {
> +                g_free(giommu);
> +                goto fail;
> +            }
> +        }
> +
>          ret = memory_region_register_iommu_notifier(section->mr, &giommu->n,
>                                                      &err);
>          if (ret) {
> diff --git a/hw/vfio/container.c b/hw/vfio/container.c
> index adc467210f..5122ff6d92 100644
> --- a/hw/vfio/container.c
> +++ b/hw/vfio/container.c
> @@ -382,7 +382,7 @@ bool vfio_get_info_dma_avail(struct vfio_iommu_type1_info *info,
>      /* If the capability cannot be found, assume no DMA limiting */
>      hdr = vfio_get_iommu_type1_info_cap(info,
>                                          VFIO_IOMMU_TYPE1_INFO_DMA_AVAIL);
> -    if (hdr == NULL) {
> +    if (!hdr) {
>          return false;
>      }
>  
> @@ -394,6 +394,33 @@ bool vfio_get_info_dma_avail(struct vfio_iommu_type1_info *info,
>      return true;
>  }
>  
> +static bool vfio_get_info_iova_range(struct vfio_iommu_type1_info *info,
> +                                     VFIOContainer *container)
> +{
> +    struct vfio_info_cap_header *hdr;
> +    struct vfio_iommu_type1_info_cap_iova_range *cap;
> +
> +    hdr = vfio_get_iommu_type1_info_cap(info,
> +                                        VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE);
> +    if (!hdr) {
> +        return false;
> +    }
> +
> +    cap = (void *)hdr;
> +
> +    container->nr_iovas = cap->nr_iovas;
> +    for (int i = 0; i < cap->nr_iovas; i++) {
> +        Range *range = g_new(Range, 1);
> +
> +        range_set_bounds(range, cap->iova_ranges[i].start,
> +                         cap->iova_ranges[i].end);
> +        container->iova_ranges =
> +            range_list_insert(container->iova_ranges, range);
> +    }
> +
> +    return true;
> +}
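
Side note, not part of the patch: each element stored above is a
Range with inclusive bounds, so a consumer can later walk the list
along these lines (illustrative snippet only):

    GList *l;

    for (l = container->iova_ranges; l; l = l->next) {
        Range *r = l->data;
        uint64_t lob = range_lob(r);   /* inclusive lower bound */
        uint64_t upb = range_upb(r);   /* inclusive upper bound */

        /* ... consume [lob, upb] here ... */
    }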
> +
>  static void vfio_kvm_device_add_group(VFIOGroup *group)
>  {
>      Error *err = NULL;
> @@ -535,6 +562,12 @@ static void vfio_get_iommu_info_migration(VFIOContainer *container,
>      }
>  }
>  
> +static void vfio_free_container(VFIOContainer *container)
> +{
> +    g_list_free_full(container->iova_ranges, g_free);
> +    g_free(container);
> +}
> +
>  static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>                                    Error **errp)
>  {
> @@ -616,6 +649,8 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>      container->error = NULL;
>      container->dirty_pages_supported = false;
>      container->dma_max_mappings = 0;
> +    container->nr_iovas = -1;
> +    container->iova_ranges = NULL;
>      QLIST_INIT(&container->giommu_list);
>      QLIST_INIT(&container->hostwin_list);
>      QLIST_INIT(&container->vrdl_list);
> @@ -652,6 +687,9 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
>          if (!vfio_get_info_dma_avail(info, &container->dma_max_mappings)) {
>              container->dma_max_mappings = 65535;
>          }
> +
> +        vfio_get_info_iova_range(info, container);
> +
>          vfio_get_iommu_info_migration(container, info);
>          g_free(info);
>  
> @@ -765,7 +803,7 @@ enable_discards_exit:
>      vfio_ram_block_discard_disable(container, false);
>  
>  free_container_exit:
> -    g_free(container);
> +    vfio_free_container(container);
>  
>  close_fd_exit:
>      close(fd);
> @@ -819,7 +857,7 @@ static void vfio_disconnect_container(VFIOGroup *group)
>  
>          trace_vfio_disconnect_container(container->fd);
>          close(container->fd);
> -        g_free(container);
> +        vfio_free_container(container);
>  
>          vfio_put_address_space(space);
>      }



Thread overview: 27+ messages
2023-10-11 17:52 [PATCH v3 00/13] VIRTIO-IOMMU/VFIO: Don't assume 64b IOVA space Eric Auger
2023-10-11 17:52 ` [PATCH v3 01/13] memory: Let ReservedRegion use Range Eric Auger
2023-10-11 17:52 ` [PATCH v3 02/13] memory: Introduce memory_region_iommu_set_iova_ranges Eric Auger
2023-10-18 22:07   ` Peter Xu
2023-10-11 17:52 ` [PATCH v3 03/13] vfio: Collect container iova range info Eric Auger
2023-10-18 19:07   ` Alex Williamson [this message]
2023-10-19  6:39     ` Eric Auger
2023-10-11 17:52 ` [PATCH v3 04/13] virtio-iommu: Rename reserved_regions into prop_resv_regions Eric Auger
2023-10-11 17:52 ` [PATCH v3 05/13] range: Make range_compare() public Eric Auger
2023-10-11 17:52 ` [PATCH v3 06/13] util/reserved-region: Add new ReservedRegion helpers Eric Auger
2023-10-11 17:52 ` [PATCH v3 07/13] virtio-iommu: Introduce per IOMMUDevice reserved regions Eric Auger
2023-10-11 17:52 ` [PATCH v3 08/13] range: Introduce range_inverse_array() Eric Auger
2023-10-11 17:52 ` [PATCH v3 09/13] virtio-iommu: Record whether a probe request has been issued Eric Auger
2023-10-11 17:52 ` [PATCH v3 10/13] virtio-iommu: Implement set_iova_ranges() callback Eric Auger
2023-10-11 17:52 ` [PATCH v3 11/13] virtio-iommu: Consolidate host reserved regions and property set ones Eric Auger
2023-10-11 17:52 ` [PATCH v3 12/13] test: Add some tests for range and resv-mem helpers Eric Auger
2023-10-30  7:48   ` Cédric Le Goater
2023-10-11 17:52 ` [PATCH v3 13/13] vfio: Remove 64-bit IOVA address space assumption Eric Auger
2023-10-18 21:42   ` Alex Williamson
2023-10-19  6:37     ` Eric Auger
2023-10-18 13:37 ` [PATCH v3 00/13] VIRTIO-IOMMU/VFIO: Don't assume 64b IOVA space Michael S. Tsirkin
2023-10-19  9:07   ` YangHang Liu
2023-10-19  9:08     ` Eric Auger
2023-10-19 11:07   ` Cédric Le Goater
2023-10-19 11:20     ` Michael S. Tsirkin
2023-10-19 13:51     ` Eric Auger
2023-10-19 17:40       ` Cédric Le Goater
