From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: qemu-devel@nongnu.org
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>,
Alex Williamson <alex.williamson@redhat.com>,
qemu-ppc@nongnu.org, David Gibson <david@gibson.dropbear.id.au>
Subject: [Qemu-devel] [PATCH qemu 5/5] vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
Date: Fri, 10 Jul 2015 20:43:48 +1000 [thread overview]
Message-ID: <1436525028-23963-6-git-send-email-aik@ozlabs.ru> (raw)
In-Reply-To: <1436525028-23963-1-git-send-email-aik@ozlabs.ru>
This makes use of the new "memory registering" feature. The idea is
to give userspace the ability to notify the host kernel about pages
which are going to be used for DMA. With this information, the host
kernel can pin them all once per user process, do the locked-pages
accounting once, and avoid spending time on that at DMA mapping time,
where failures cannot always be handled nicely.
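For illustration only (not part of this patch): a minimal userspace sketch
of the preregistration step, assuming the SPAPR v2 UAPI exported by
<linux/vfio.h> (struct vfio_iommu_spapr_register_memory and
VFIO_IOMMU_SPAPR_REGISTER_MEMORY); the container fd and buffer are
placeholders:

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    /* Preregister a buffer with a SPAPR v2 container: the kernel pins the
     * pages and charges the locked-memory limit once, so later DMA maps
     * do not have to. */
    static int preregister_ram(int container_fd, void *buf, uint64_t size)
    {
        struct vfio_iommu_spapr_register_memory reg = {
            .argsz = sizeof(reg),
            .flags = 0,
            .vaddr = (uint64_t)(uintptr_t)buf,
            .size = size,
        };

        return ioctl(container_fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
    }

The matching VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY call undoes the
registration and the accounting.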
This adds a guest RAM memory listener which notifies the VFIO container
about memory that needs to be pinned or unpinned. VFIO MMIO regions
(i.e. "skip dump" regions) are skipped.
The feature is only enabled for SPAPR IOMMU v2 and requires the
corresponding host kernel changes. Since v2 does not need/support
VFIO_IOMMU_ENABLE, it is not called when v2 is detected and enabled.
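A simplified sketch (not the exact QEMU code, see the diff below) of the
resulting container setup, using the same headers as the sketch above plus
<errno.h>: v2 is probed via VFIO_CHECK_EXTENSION, selected with
VFIO_SET_IOMMU, and VFIO_IOMMU_ENABLE is only issued for v1:

    /* Sketch only: choose the SPAPR IOMMU flavour for a container fd.
     * VFIO_IOMMU_ENABLE is needed for v1 but not supported by v2. */
    static int setup_spapr_container(int fd)
    {
        int type = ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU) ?
                   VFIO_SPAPR_TCE_v2_IOMMU : VFIO_SPAPR_TCE_IOMMU;

        if (ioctl(fd, VFIO_SET_IOMMU, type)) {
            return -errno;
        }
        if (type == VFIO_SPAPR_TCE_IOMMU && ioctl(fd, VFIO_IOMMU_ENABLE)) {
            return -errno;          /* v1 only: enable the container */
        }
        return type;
    }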
This does not change the guest visible interface.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
Changes:
v11:
* merged register_listener into vfio_memory_listener
v9:
* since there is no more SPAPR-specific data in container::iommu_data,
the memory preregistration fields are common and potentially can be used
by other architectures
v7:
* in vfio_spapr_ram_listener_region_del(), do unref() after ioctl()
* s/ramlistener/register_listener/
v6:
* fixed commit log (s/guest/userspace/), added note about no guest visible
change
* fixed error checking if ram registration failed
* added alignment check for section->offset_within_region
v5:
* simplified the patch
* added trace points
* added round_up() for the size
* SPAPR IOMMU v2 used
---
hw/vfio/common.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++---------
trace-events | 2 ++
2 files changed, 51 insertions(+), 9 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 77b5ab0..8f90c23 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -410,6 +410,19 @@ static void vfio_listener_region_add(MemoryListener *listener,
goto error_exit;
}
break;
+
+    case VFIO_SPAPR_TCE_v2_IOMMU: {
+        struct vfio_iommu_spapr_register_memory reg = {
+            .argsz = sizeof(reg),
+            .flags = 0,
+            .vaddr = (uint64_t) vaddr,
+            .size = end - iova
+        };
+
+        ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
+        trace_vfio_ram_register(reg.vaddr, reg.size, ret ? -errno : 0);
+        break;
+    }
}
return;
@@ -497,6 +510,22 @@ static void vfio_listener_region_del(MemoryListener *listener,
container, iova, end - iova, ret);
}
break;
+
+    case VFIO_SPAPR_TCE_v2_IOMMU: {
+        void *vaddr = memory_region_get_ram_ptr(section->mr) +
+            section->offset_within_region +
+            (iova - section->offset_within_address_space);
+        struct vfio_iommu_spapr_register_memory reg = {
+            .argsz = sizeof(reg),
+            .flags = 0,
+            .vaddr = (uint64_t) vaddr,
+            .size = end - iova
+        };
+
+        ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY, &reg);
+        trace_vfio_ram_unregister(reg.vaddr, reg.size, ret ? -errno : 0);
+        break;
+    }
}
}
@@ -720,14 +749,18 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
container->iommu_data.type1.initialized = true;
- } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
+ } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU) ||
+ ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU)) {
+ bool v2 = !!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU);
+
ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
if (ret) {
error_report("vfio: failed to set group container: %m");
ret = -errno;
goto free_container_exit;
}
- container->iommu_data.type = VFIO_SPAPR_TCE_IOMMU;
+ container->iommu_data.type =
+ v2 ? VFIO_SPAPR_TCE_v2_IOMMU : VFIO_SPAPR_TCE_IOMMU;
ret = ioctl(fd, VFIO_SET_IOMMU, container->iommu_data.type);
if (ret) {
error_report("vfio: failed to set iommu for container: %m");
@@ -740,18 +773,25 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
* when container fd is closed so we do not call it explicitly
* in this file.
*/
- ret = ioctl(fd, VFIO_IOMMU_ENABLE);
- if (ret) {
- error_report("vfio: failed to enable container: %m");
- ret = -errno;
- goto free_container_exit;
+ if (!v2) {
+ ret = ioctl(fd, VFIO_IOMMU_ENABLE);
+ if (ret) {
+ error_report("vfio: failed to enable container: %m");
+ ret = -errno;
+ goto free_container_exit;
+ }
}
container->iommu_data.type1.listener = vfio_memory_listener;
+ memory_listener_register(&container->iommu_data.type1.listener, NULL);
+
container->iommu_data.release = vfio_listener_release;
+ if (container->iommu_data.type1.error) {
+ error_report("vfio: RAM memory listener initialization failed for container");
+ goto listener_release_exit;
+ }
- memory_listener_register(&container->iommu_data.type1.listener,
- container->space->as);
+ container->iommu_data.type1.initialized = true;
} else {
error_report("vfio: No available IOMMU models");
diff --git a/trace-events b/trace-events
index d24d80a..f859ad0 100644
--- a/trace-events
+++ b/trace-events
@@ -1582,6 +1582,8 @@ vfio_disconnect_container(int fd) "close container->fd=%d"
vfio_put_group(int fd) "close group->fd=%d"
vfio_get_device(const char * name, unsigned int flags, unsigned int num_regions, unsigned int num_irqs) "Device %s flags: %u, regions: %u, irqs: %u"
vfio_put_base_device(int fd) "close vdev->fd=%d"
+vfio_ram_register(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
+vfio_ram_unregister(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
# hw/vfio/platform.c
vfio_platform_populate_regions(int region_index, unsigned long flag, unsigned long size, int fd, unsigned long offset) "- region %d flags = 0x%lx, size = 0x%lx, fd= %d, offset = 0x%lx"
--
2.4.0.rc3.8.gfb3e7d5
Thread overview: 12+ messages
2015-07-10 10:43 [Qemu-devel] [PATCH qemu 0/5] vfio: SPAPR IOMMU v2 (memory preregistration support) Alexey Kardashevskiy
2015-07-10 10:43 ` [Qemu-devel] [PATCH qemu 1/5] vfio: Switch from TARGET_PAGE_MASK to qemu_real_host_page_mask Alexey Kardashevskiy
2015-07-13 6:15 ` David Gibson
2015-07-13 7:24 ` Alexey Kardashevskiy
2015-07-14 3:46 ` David Gibson
2015-07-13 20:32 ` Peter Crosthwaite
2015-07-14 7:08 ` Alexey Kardashevskiy
2015-07-10 10:43 ` [Qemu-devel] [PATCH qemu 2/5] vfio: Skip PCI BARs in memory listener Alexey Kardashevskiy
2015-07-13 17:08 ` Alex Williamson
2015-07-10 10:43 ` [Qemu-devel] [PATCH qemu 3/5] vfio: Store IOMMU type in container Alexey Kardashevskiy
2015-07-10 10:43 ` [Qemu-devel] [PATCH qemu 4/5] vfio: Refactor memory listener to accommodate more IOMMU types Alexey Kardashevskiy
2015-07-10 10:43 ` Alexey Kardashevskiy [this message]