From: "Cédric Le Goater" <clg@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Eric Auger" <eric.auger@redhat.com>,
"Zhenzhong Duan" <zhenzhong.duan@intel.com>,
"Peter Maydell" <peter.maydell@linaro.org>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Nicholas Piggin" <npiggin@gmail.com>,
"Harsh Prateek Bora" <harshpb@linux.ibm.com>,
"Thomas Huth" <thuth@redhat.com>,
"Eric Farman" <farman@linux.ibm.com>,
"Alex Williamson" <alex.williamson@redhat.com>,
"Matthew Rosato" <mjrosato@linux.ibm.com>,
"Cédric Le Goater" <clg@redhat.com>,
"Nicolin Chen" <nicolinc@nvidia.com>
Subject: [PULL 25/47] vfio/iommufd: Add support for iova_ranges and pgsizes
Date: Tue, 19 Dec 2023 19:56:21 +0100 [thread overview]
Message-ID: <20231219185643.725448-26-clg@redhat.com> (raw)
In-Reply-To: <20231219185643.725448-1-clg@redhat.com>
From: Zhenzhong Duan <zhenzhong.duan@intel.com>

Some vIOMMUs, such as virtio-iommu, use host-side IOVA ranges to set up
reserved ranges for passthrough devices, so that the guest will not use
an IOVA range beyond what the host supports.

Use the IOMMUFD uAPI to get the host-side IOVA ranges and pass them to
the vIOMMU, just like the legacy backend does. If this fails, fall back
to the full 64-bit IOVA range.

Also use the out_iova_alignment returned by the uAPI as pgsizes;
qemu_real_host_page_size() is now only used as a fallback.

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>
Reviewed-by: Cédric Le Goater <clg@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Nicolin Chen <nicolinc@nvidia.com>
Signed-off-by: Cédric Le Goater <clg@redhat.com>
---
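Note, as an illustration and not part of the patch: the new function in
the hunk below relies on the usual two-call pattern of the
IOMMU_IOAS_IOVA_RANGES uAPI. The first call passes num_iovas == 0 so the
kernel only reports how many ranges exist (failing with EMSGSIZE and
updating num_iovas), and the second call supplies an array large enough
to receive them. A minimal standalone sketch of that pattern against
<linux/iommufd.h> follows; the helper name query_iova_ranges() and the
printf reporting are made up for the example, and error handling is
trimmed:

/*
 * Sketch only, not part of the patch: query the usable IOVA ranges of
 * an IOAS through the IOMMUFD uAPI.  Assumes an already opened
 * /dev/iommu fd and an ioas_id obtained from IOMMU_IOAS_ALLOC.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int query_iova_ranges(int iommufd, uint32_t ioas_id)
{
    struct iommu_ioas_iova_ranges query = {
        .size = sizeof(query),
        .ioas_id = ioas_id,
        .num_iovas = 0,            /* first call: only ask for the count */
    };
    struct iommu_iova_range *ranges = NULL;

    if (ioctl(iommufd, IOMMU_IOAS_IOVA_RANGES, &query) &&
        errno != EMSGSIZE) {
        return -errno;             /* real failure, not "array too small" */
    }

    if (query.num_iovas) {
        ranges = calloc(query.num_iovas, sizeof(*ranges));
        if (!ranges) {
            return -ENOMEM;
        }
        query.allowed_iovas = (uintptr_t)ranges;
    }

    /* second call: the kernel fills the array this time */
    if (ioctl(iommufd, IOMMU_IOAS_IOVA_RANGES, &query)) {
        free(ranges);
        return -errno;
    }

    for (uint32_t i = 0; i < query.num_iovas; i++) {
        printf("usable IOVA range %u: [0x%llx, 0x%llx]\n", i,
               (unsigned long long)ranges[i].start,
               (unsigned long long)ranges[i].last);
    }
    printf("minimum IOVA alignment: 0x%llx\n",
           (unsigned long long)query.out_iova_alignment);

    free(ranges);
    return 0;
}

The hunk below does the same thing, except that it keeps the struct and
the array in one g_realloc()ed buffer and stores the result in the base
container instead of printing it.
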
hw/vfio/iommufd.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 55 insertions(+), 1 deletion(-)
diff --git a/hw/vfio/iommufd.c b/hw/vfio/iommufd.c
index 6d31aeac7bd8781a103328f8a438c011cdc2db1e..01b448e840581e0dd6d3df1897169665f79dcbe3 100644
--- a/hw/vfio/iommufd.c
+++ b/hw/vfio/iommufd.c
@@ -261,6 +261,53 @@ static int iommufd_cdev_ram_block_discard_disable(bool state)
     return ram_block_uncoordinated_discard_disable(state);
 }
 
+static int iommufd_cdev_get_info_iova_range(VFIOIOMMUFDContainer *container,
+                                            uint32_t ioas_id, Error **errp)
+{
+    VFIOContainerBase *bcontainer = &container->bcontainer;
+    struct iommu_ioas_iova_ranges *info;
+    struct iommu_iova_range *iova_ranges;
+    int ret, sz, fd = container->be->fd;
+
+    info = g_malloc0(sizeof(*info));
+    info->size = sizeof(*info);
+    info->ioas_id = ioas_id;
+
+    ret = ioctl(fd, IOMMU_IOAS_IOVA_RANGES, info);
+    if (ret && errno != EMSGSIZE) {
+        goto error;
+    }
+
+    sz = info->num_iovas * sizeof(struct iommu_iova_range);
+    info = g_realloc(info, sizeof(*info) + sz);
+    info->allowed_iovas = (uintptr_t)(info + 1);
+
+    ret = ioctl(fd, IOMMU_IOAS_IOVA_RANGES, info);
+    if (ret) {
+        goto error;
+    }
+
+    iova_ranges = (struct iommu_iova_range *)(uintptr_t)info->allowed_iovas;
+
+    for (int i = 0; i < info->num_iovas; i++) {
+        Range *range = g_new(Range, 1);
+
+        range_set_bounds(range, iova_ranges[i].start, iova_ranges[i].last);
+        bcontainer->iova_ranges =
+            range_list_insert(bcontainer->iova_ranges, range);
+    }
+    bcontainer->pgsizes = info->out_iova_alignment;
+
+    g_free(info);
+    return 0;
+
+error:
+    ret = -errno;
+    g_free(info);
+    error_setg_errno(errp, errno, "Cannot get IOVA ranges");
+    return ret;
+}
+
 static int iommufd_cdev_attach(const char *name, VFIODevice *vbasedev,
                                AddressSpace *as, Error **errp)
 {
@@ -335,7 +382,14 @@ static int iommufd_cdev_attach(const char *name, VFIODevice *vbasedev,
         goto err_discard_disable;
     }
 
-    bcontainer->pgsizes = qemu_real_host_page_size();
+    ret = iommufd_cdev_get_info_iova_range(container, ioas_id, &err);
+    if (ret) {
+        error_append_hint(&err,
+                   "Fallback to default 64bit IOVA range and 4K page size\n");
+        warn_report_err(err);
+        err = NULL;
+        bcontainer->pgsizes = qemu_real_host_page_size();
+    }
 
     bcontainer->listener = vfio_memory_listener;
     memory_listener_register(&bcontainer->listener, bcontainer->space->as);
--
2.43.0
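
A side note on the first paragraph of the commit message above: once the
host's usable IOVA ranges are known, the regions a vIOMMU such as
virtio-iommu would expose to the guest as reserved are essentially the
complement of that list over the 64-bit IOVA space. The sketch below
illustrates only that gap computation, in plain C with made-up struct and
function names and example values (a hole around the x86 MSI window); it
is not QEMU or virtio-iommu code:

/* Illustration only: compute the reserved gaps between sorted,
 * non-overlapping usable IOVA ranges over the 64-bit IOVA space.
 */
#include <stdint.h>
#include <stdio.h>

struct iova_range {
    uint64_t start;
    uint64_t last;    /* inclusive */
};

static void print_reserved_gaps(const struct iova_range *usable, int n)
{
    uint64_t next = 0;    /* first IOVA not yet covered by a usable range */

    for (int i = 0; i < n; i++) {
        if (usable[i].start > next) {
            printf("reserved: [0x%llx, 0x%llx]\n",
                   (unsigned long long)next,
                   (unsigned long long)(usable[i].start - 1));
        }
        if (usable[i].last == UINT64_MAX) {
            return;       /* usable range reaches the top of the space */
        }
        next = usable[i].last + 1;
    }
    printf("reserved: [0x%llx, 0x%llx]\n",
           (unsigned long long)next, (unsigned long long)UINT64_MAX);
}

int main(void)
{
    /* Example input: a hole around the x86 interrupt/MSI window. */
    const struct iova_range usable[] = {
        { 0x0,        0xfedfffff },
        { 0xfef00000, UINT64_MAX },
    };

    print_reserved_gaps(usable, 2);
    return 0;
}

With this example input, the only reserved region printed is
[0xfee00000, 0xfeefffff], which is exactly the kind of range the guest
must be kept from using as DMA IOVA for a passthrough device.
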
Thread overview: 55+ messages
2023-12-19 18:55 [PULL 00/47] vfio queue Cédric Le Goater
2023-12-19 18:55 ` [PULL 01/47] vfio: Introduce base object for VFIOContainer and targeted interface Cédric Le Goater
2023-12-19 18:55 ` [PULL 02/47] vfio/container: Introduce a empty VFIOIOMMUOps Cédric Le Goater
2023-12-19 18:55 ` [PULL 03/47] vfio/container: Switch to dma_map|unmap API Cédric Le Goater
2023-12-19 18:56 ` [PULL 04/47] vfio/common: Introduce vfio_container_init/destroy helper Cédric Le Goater
2023-12-19 18:56 ` [PULL 05/47] vfio/common: Move giommu_list in base container Cédric Le Goater
2023-12-19 18:56 ` [PULL 06/47] vfio/container: Move space field to " Cédric Le Goater
2023-12-19 18:56 ` [PULL 07/47] vfio/container: Switch to IOMMU BE set_dirty_page_tracking/query_dirty_bitmap API Cédric Le Goater
2023-12-19 18:56 ` [PULL 08/47] vfio/container: Move per container device list in base container Cédric Le Goater
2023-12-19 18:56 ` [PULL 09/47] vfio/container: Convert functions to " Cédric Le Goater
2023-12-19 18:56 ` [PULL 10/47] vfio/container: Move pgsizes and dma_max_mappings " Cédric Le Goater
2023-12-19 18:56 ` [PULL 11/47] vfio/container: Move vrdl_list " Cédric Le Goater
2023-12-19 18:56 ` [PULL 12/47] vfio/container: Move listener " Cédric Le Goater
2023-12-19 18:56 ` [PULL 13/47] vfio/container: Move dirty_pgsizes and max_dirty_bitmap_size " Cédric Le Goater
2023-12-19 18:56 ` [PULL 14/47] vfio/container: Move iova_ranges " Cédric Le Goater
2023-12-19 18:56 ` [PULL 15/47] vfio/container: Implement attach/detach_device Cédric Le Goater
2023-12-19 18:56 ` [PULL 16/47] vfio/spapr: Introduce spapr backend and target interface Cédric Le Goater
2023-12-19 18:56 ` [PULL 17/47] vfio/spapr: switch to spapr IOMMU BE add/del_section_window Cédric Le Goater
2023-12-19 18:56 ` [PULL 18/47] vfio/spapr: Move prereg_listener into spapr container Cédric Le Goater
2023-12-19 18:56 ` [PULL 19/47] vfio/spapr: Move hostwin_list " Cédric Le Goater
2023-12-19 18:56 ` [PULL 20/47] backends/iommufd: Introduce the iommufd object Cédric Le Goater
2023-12-21 16:00 ` Cédric Le Goater
2023-12-21 17:14 ` Eric Auger
2023-12-21 21:23 ` Cédric Le Goater
2023-12-22 10:09 ` Eric Auger
2023-12-22 10:34 ` Cédric Le Goater
2023-12-22 2:41 ` Duan, Zhenzhong
2023-12-19 18:56 ` [PULL 21/47] util/char_dev: Add open_cdev() Cédric Le Goater
2023-12-19 18:56 ` [PULL 22/47] vfio/common: return early if space isn't empty Cédric Le Goater
2023-12-19 18:56 ` [PULL 23/47] vfio/iommufd: Implement the iommufd backend Cédric Le Goater
2023-12-19 18:56 ` [PULL 24/47] vfio/iommufd: Relax assert check for " Cédric Le Goater
2023-12-19 18:56 ` Cédric Le Goater [this message]
2023-12-19 18:56 ` [PULL 26/47] vfio/pci: Extract out a helper vfio_pci_get_pci_hot_reset_info Cédric Le Goater
2023-12-19 18:56 ` [PULL 27/47] vfio/pci: Introduce a vfio pci hot reset interface Cédric Le Goater
2023-12-19 18:56 ` [PULL 28/47] vfio/iommufd: Enable pci hot reset through iommufd cdev interface Cédric Le Goater
2023-12-19 18:56 ` [PULL 29/47] vfio/pci: Allow the selection of a given iommu backend Cédric Le Goater
2023-12-19 18:56 ` [PULL 30/47] vfio/pci: Make vfio cdev pre-openable by passing a file handle Cédric Le Goater
2023-12-19 18:56 ` [PULL 31/47] vfio/platform: Allow the selection of a given iommu backend Cédric Le Goater
2023-12-19 18:56 ` [PULL 32/47] vfio/platform: Make vfio cdev pre-openable by passing a file handle Cédric Le Goater
2023-12-19 18:56 ` [PULL 33/47] vfio/ap: Allow the selection of a given iommu backend Cédric Le Goater
2023-12-19 18:56 ` [PULL 34/47] vfio/ap: Make vfio cdev pre-openable by passing a file handle Cédric Le Goater
2023-12-19 18:56 ` [PULL 35/47] vfio/ccw: Allow the selection of a given iommu backend Cédric Le Goater
2023-12-19 18:56 ` [PULL 36/47] vfio/ccw: Make vfio cdev pre-openable by passing a file handle Cédric Le Goater
2023-12-19 18:56 ` [PULL 37/47] vfio: Make VFIOContainerBase poiner parameter const in VFIOIOMMUOps callbacks Cédric Le Goater
2023-12-19 18:56 ` [PULL 38/47] hw/arm: Activate IOMMUFD for virt machines Cédric Le Goater
2023-12-19 18:56 ` [PULL 39/47] kconfig: Activate IOMMUFD for s390x machines Cédric Le Goater
2023-12-19 18:56 ` [PULL 40/47] hw/i386: Activate IOMMUFD for q35 machines Cédric Le Goater
2023-12-19 18:56 ` [PULL 41/47] vfio/pci: Move VFIODevice initializations in vfio_instance_init Cédric Le Goater
2023-12-19 18:56 ` [PULL 42/47] vfio/platform: Move VFIODevice initializations in vfio_platform_instance_init Cédric Le Goater
2023-12-19 18:56 ` [PULL 43/47] vfio/ap: Move VFIODevice initializations in vfio_ap_instance_init Cédric Le Goater
2023-12-19 18:56 ` [PULL 44/47] vfio/ccw: Move VFIODevice initializations in vfio_ccw_instance_init Cédric Le Goater
2023-12-19 18:56 ` [PULL 45/47] vfio: Introduce a helper function to initialize VFIODevice Cédric Le Goater
2023-12-19 18:56 ` [PULL 46/47] docs/devel: Add VFIO iommufd backend documentation Cédric Le Goater
2023-12-19 18:56 ` [PULL 47/47] hw/ppc/Kconfig: Imply VFIO_PCI Cédric Le Goater
2023-12-20 16:03 ` [PULL 00/47] vfio queue Stefan Hajnoczi