* [Qemu-devel] [RFC PATCH qemu 1/4] memory: Add reporting of supported page sizes
2015-07-20 7:46 [Qemu-devel] [RFC PATCH qemu 0/4] vfio: SPAPR IOMMU v2 (memory preregistration support) Alexey Kardashevskiy
@ 2015-07-20 7:46 ` Alexey Kardashevskiy
2015-07-20 7:46 ` [Qemu-devel] [RFC PATCH qemu 2/4] vfio: Generalize IOMMU memory listener Alexey Kardashevskiy
` (3 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: Alexey Kardashevskiy @ 2015-07-20 7:46 UTC
To: qemu-devel
Cc: Peter Crosthwaite, Alexey Kardashevskiy, Michael Roth,
Alex Williamson, qemu-ppc, David Gibson
Every IOMMU has some granularity which MemoryRegionIOMMUOps::translate
uses when translating; however, this information is not available outside
the translate context for various checks.
This adds a get_page_sizes callback to MemoryRegionIOMMUOps and
a wrapper for it so IOMMU users (such as VFIO) can know the actual
page size(s) used by an IOMMU.
qemu_real_host_page_size is used as a fallback.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* s/1<<TARGET_PAGE_BITS/qemu_real_host_page_size/
---
hw/ppc/spapr_iommu.c | 8 ++++++++
include/exec/memory.h | 11 +++++++++++
memory.c | 9 +++++++++
3 files changed, 28 insertions(+)
diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index f61504e..a2572c4 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -104,6 +104,13 @@ static IOMMUTLBEntry spapr_tce_translate_iommu(MemoryRegion *iommu, hwaddr addr,
return ret;
}
+static uint64_t spapr_tce_get_page_sizes(MemoryRegion *iommu)
+{
+ sPAPRTCETable *tcet = container_of(iommu, sPAPRTCETable, iommu);
+
+ return 1ULL << tcet->page_shift;
+}
+
static int spapr_tce_table_post_load(void *opaque, int version_id)
{
sPAPRTCETable *tcet = SPAPR_TCE_TABLE(opaque);
@@ -135,6 +142,7 @@ static const VMStateDescription vmstate_spapr_tce_table = {
static MemoryRegionIOMMUOps spapr_iommu_ops = {
.translate = spapr_tce_translate_iommu,
+ .get_page_sizes = spapr_tce_get_page_sizes,
};
static int spapr_tce_table_realize(DeviceState *dev)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 1394715..dc90403 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -150,6 +150,8 @@ struct MemoryRegionOps {
typedef struct MemoryRegionIOMMUOps MemoryRegionIOMMUOps;
struct MemoryRegionIOMMUOps {
+ /* Returns supported page sizes */
+ uint64_t (*get_page_sizes)(MemoryRegion *iommu);
/* Return a TLB entry that contains a given address. */
IOMMUTLBEntry (*translate)(MemoryRegion *iommu, hwaddr addr, bool is_write);
};
@@ -552,6 +554,15 @@ static inline bool memory_region_is_romd(MemoryRegion *mr)
bool memory_region_is_iommu(MemoryRegion *mr);
/**
+ * memory_region_iommu_get_page_sizes: get supported page sizes in an iommu
+ *
+ * Returns %bitmap of supported page sizes for an iommu.
+ *
+ * @mr: the memory region being queried
+ */
+uint64_t memory_region_iommu_get_page_sizes(MemoryRegion *mr);
+
+/**
* memory_region_notify_iommu: notify a change in an IOMMU translation entry.
*
* @mr: the memory region that was changed
diff --git a/memory.c b/memory.c
index 5a0cc66..eec3746 100644
--- a/memory.c
+++ b/memory.c
@@ -1413,6 +1413,15 @@ bool memory_region_is_iommu(MemoryRegion *mr)
return mr->iommu_ops;
}
+uint64_t memory_region_iommu_get_page_sizes(MemoryRegion *mr)
+{
+ assert(memory_region_is_iommu(mr));
+ if (mr->iommu_ops && mr->iommu_ops->get_page_sizes) {
+ return mr->iommu_ops->get_page_sizes(mr);
+ }
+ return qemu_real_host_page_size;
+}
+
void memory_region_register_iommu_notifier(MemoryRegion *mr, Notifier *n)
{
notifier_list_add(&mr->iommu_notify, n);
--
2.4.0.rc3.8.gfb3e7d5
* [Qemu-devel] [RFC PATCH qemu 2/4] vfio: Generalize IOMMU memory listener
From: Alexey Kardashevskiy @ 2015-07-20 7:46 UTC
To: qemu-devel
Cc: Peter Crosthwaite, Alexey Kardashevskiy, Michael Roth,
Alex Williamson, qemu-ppc, David Gibson
At the moment VFIOContainer has a union for per-IOMMU-type data, which
currently holds an IOMMU memory listener and setup flags. The listener
listens on the PCI address space for both Type1 and sPAPR IOMMUs. The
setup flags (@initialized and @error) are only used by Type1 now but the
next patch will use them for sPAPR too.
This introduces VFIOMemoryListener, which is a wrapper for MemoryListener
and stores a pointer to the container. This allows having multiple
memory listeners for the same container. The Type1 listener is replaced
with @iommu_listener.
This moves @initialized and @error out of @iommu_data as these will be
used soon for memory pre-registration.
As there is only release() left in @iommu_data, this moves it to
VFIOContainer and removes @iommu_data and VFIOType1.
This stores @iommu_type in VFIOContainer. The prereg patch will use it
to know whether or not to do proper cleanup.
This should cause no change in behavior.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* used to be "vfio: Store IOMMU type in container"
* moved VFIOType1 content to container as it is not IOMMU type specific
---
hw/vfio/common.c | 74 +++++++++++++++++++++++++++----------------
include/hw/vfio/vfio-common.h | 25 +++++++--------
2 files changed, 59 insertions(+), 40 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 85ee9b0..6eb85c7 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -312,11 +312,10 @@ out:
rcu_read_unlock();
}
-static void vfio_listener_region_add(MemoryListener *listener,
+static void vfio_listener_region_add(VFIOMemoryListener *vlistener,
MemoryRegionSection *section)
{
- VFIOContainer *container = container_of(listener, VFIOContainer,
- iommu_data.type1.listener);
+ VFIOContainer *container = vlistener->container;
hwaddr iova, end;
Int128 llend;
void *vaddr;
@@ -406,9 +405,9 @@ static void vfio_listener_region_add(MemoryListener *listener,
* can gracefully fail. Runtime, there's not much we can do other
* than throw a hardware error.
*/
- if (!container->iommu_data.type1.initialized) {
- if (!container->iommu_data.type1.error) {
- container->iommu_data.type1.error = ret;
+ if (!container->initialized) {
+ if (!container->error) {
+ container->error = ret;
}
} else {
hw_error("vfio: DMA mapping failed, unable to continue");
@@ -416,11 +415,10 @@ static void vfio_listener_region_add(MemoryListener *listener,
}
}
-static void vfio_listener_region_del(MemoryListener *listener,
+static void vfio_listener_region_del(VFIOMemoryListener *vlistener,
MemoryRegionSection *section)
{
- VFIOContainer *container = container_of(listener, VFIOContainer,
- iommu_data.type1.listener);
+ VFIOContainer *container = vlistener->container;
hwaddr iova, end;
int ret;
@@ -478,14 +476,33 @@ static void vfio_listener_region_del(MemoryListener *listener,
}
}
-static const MemoryListener vfio_memory_listener = {
- .region_add = vfio_listener_region_add,
- .region_del = vfio_listener_region_del,
+static void vfio_iommu_listener_region_add(MemoryListener *listener,
+ MemoryRegionSection *section)
+{
+ VFIOMemoryListener *vlistener = container_of(listener, VFIOMemoryListener,
+ listener);
+
+ vfio_listener_region_add(vlistener, section);
+}
+
+
+static void vfio_iommu_listener_region_del(MemoryListener *listener,
+ MemoryRegionSection *section)
+{
+ VFIOMemoryListener *vlistener = container_of(listener, VFIOMemoryListener,
+ listener);
+
+ vfio_listener_region_del(vlistener, section);
+}
+
+static const MemoryListener vfio_iommu_listener = {
+ .region_add = vfio_iommu_listener_region_add,
+ .region_del = vfio_iommu_listener_region_del,
};
static void vfio_listener_release(VFIOContainer *container)
{
- memory_listener_unregister(&container->iommu_data.type1.listener);
+ memory_listener_unregister(&container->iommu_listener.listener);
}
int vfio_mmap_region(Object *obj, VFIORegion *region,
@@ -676,27 +693,28 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
goto free_container_exit;
}
- ret = ioctl(fd, VFIO_SET_IOMMU,
- v2 ? VFIO_TYPE1v2_IOMMU : VFIO_TYPE1_IOMMU);
+ container->iommu_type = v2 ? VFIO_TYPE1v2_IOMMU : VFIO_TYPE1_IOMMU;
+ ret = ioctl(fd, VFIO_SET_IOMMU, container->iommu_type);
if (ret) {
error_report("vfio: failed to set iommu for container: %m");
ret = -errno;
goto free_container_exit;
}
- container->iommu_data.type1.listener = vfio_memory_listener;
- container->iommu_data.release = vfio_listener_release;
+ container->iommu_listener.container = container;
+ container->iommu_listener.listener = vfio_iommu_listener;
+ container->release = vfio_listener_release;
- memory_listener_register(&container->iommu_data.type1.listener,
+ memory_listener_register(&container->iommu_listener.listener,
container->space->as);
- if (container->iommu_data.type1.error) {
- ret = container->iommu_data.type1.error;
+ if (container->error) {
+ ret = container->error;
error_report("vfio: memory listener initialization failed for container");
goto listener_release_exit;
}
- container->iommu_data.type1.initialized = true;
+ container->initialized = true;
} else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
@@ -705,7 +723,8 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
ret = -errno;
goto free_container_exit;
}
- ret = ioctl(fd, VFIO_SET_IOMMU, VFIO_SPAPR_TCE_IOMMU);
+ container->iommu_type = VFIO_SPAPR_TCE_IOMMU;
+ ret = ioctl(fd, VFIO_SET_IOMMU, container->iommu_type);
if (ret) {
error_report("vfio: failed to set iommu for container: %m");
ret = -errno;
@@ -724,10 +743,11 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
goto free_container_exit;
}
- container->iommu_data.type1.listener = vfio_memory_listener;
- container->iommu_data.release = vfio_listener_release;
+ container->iommu_listener.container = container;
+ container->iommu_listener.listener = vfio_iommu_listener;
+ container->release = vfio_listener_release;
- memory_listener_register(&container->iommu_data.type1.listener,
+ memory_listener_register(&container->iommu_listener.listener,
container->space->as);
} else {
@@ -774,8 +794,8 @@ static void vfio_disconnect_container(VFIOGroup *group)
VFIOAddressSpace *space = container->space;
VFIOGuestIOMMU *giommu, *tmp;
- if (container->iommu_data.release) {
- container->iommu_data.release(container);
+ if (container->release) {
+ container->release(container);
}
QLIST_REMOVE(container, next);
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index 59a321d..a0f9d36 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -62,24 +62,23 @@ typedef struct VFIOAddressSpace {
QLIST_ENTRY(VFIOAddressSpace) list;
} VFIOAddressSpace;
+typedef struct VFIOContainer VFIOContainer;
+
+typedef struct VFIOMemoryListener {
+ struct MemoryListener listener;
+ VFIOContainer *container;
+} VFIOMemoryListener;
+
struct VFIOGroup;
-typedef struct VFIOType1 {
- MemoryListener listener;
- int error;
- bool initialized;
-} VFIOType1;
-
typedef struct VFIOContainer {
VFIOAddressSpace *space;
int fd; /* /dev/vfio/vfio, empowered by the attached groups */
- struct {
- /* enable abstraction to support various iommu backends */
- union {
- VFIOType1 type1;
- };
- void (*release)(struct VFIOContainer *);
- } iommu_data;
+ unsigned iommu_type;
+ int error;
+ bool initialized;
+ VFIOMemoryListener iommu_listener;
+ void (*release)(struct VFIOContainer *);
QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
QLIST_HEAD(, VFIOGroup) group_list;
QLIST_ENTRY(VFIOContainer) next;
--
2.4.0.rc3.8.gfb3e7d5
* [Qemu-devel] [RFC PATCH qemu 3/4] vfio: Use different page size for different IOMMU types
From: Alexey Kardashevskiy @ 2015-07-20 7:46 UTC
To: qemu-devel
Cc: Peter Crosthwaite, Alexey Kardashevskiy, Michael Roth,
Alex Williamson, qemu-ppc, David Gibson
The existing memory listener is called on RAM or PCI address space
sections, which implies potentially different page sizes.
This uses the new memory_region_iommu_get_page_sizes() for IOMMU regions
and falls back to qemu_real_host_page_size for RAM.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
* uses the smallest page size for the mask as an IOMMU MR can support
multiple page sizes
---
hw/vfio/common.c | 28 ++++++++++++++++++++--------
1 file changed, 20 insertions(+), 8 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 6eb85c7..171c6ad 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -312,6 +312,16 @@ out:
rcu_read_unlock();
}
+static hwaddr vfio_iommu_page_mask(MemoryRegion *mr)
+{
+ if (memory_region_is_iommu(mr)) {
+ int smallest = ffs(memory_region_iommu_get_page_sizes(mr)) - 1;
+
+ return ~((1ULL << smallest) - 1);
+ }
+ return qemu_real_host_page_mask;
+}
+
static void vfio_listener_region_add(VFIOMemoryListener *vlistener,
MemoryRegionSection *section)
{
@@ -320,6 +330,7 @@ static void vfio_listener_region_add(VFIOMemoryListener *vlistener,
Int128 llend;
void *vaddr;
int ret;
+ hwaddr page_mask = vfio_iommu_page_mask(section->mr);
if (vfio_listener_skipped_section(section)) {
trace_vfio_listener_region_add_skip(
@@ -329,16 +340,16 @@ static void vfio_listener_region_add(VFIOMemoryListener *vlistener,
return;
}
- if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
- (section->offset_within_region & ~TARGET_PAGE_MASK))) {
+ if (unlikely((section->offset_within_address_space & ~page_mask) !=
+ (section->offset_within_region & ~page_mask))) {
error_report("%s received unaligned region", __func__);
return;
}
- iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
+ iova = ROUND_UP(section->offset_within_address_space, ~page_mask + 1);
llend = int128_make64(section->offset_within_address_space);
llend = int128_add(llend, section->size);
- llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
+ llend = int128_and(llend, int128_exts64(page_mask));
if (int128_ge(int128_make64(iova), llend)) {
return;
@@ -421,6 +432,7 @@ static void vfio_listener_region_del(VFIOMemoryListener *vlistener,
VFIOContainer *container = vlistener->container;
hwaddr iova, end;
int ret;
+ hwaddr page_mask = vfio_iommu_page_mask(section->mr);
if (vfio_listener_skipped_section(section)) {
trace_vfio_listener_region_del_skip(
@@ -430,8 +442,8 @@ static void vfio_listener_region_del(VFIOMemoryListener *vlistener,
return;
}
- if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
- (section->offset_within_region & ~TARGET_PAGE_MASK))) {
+ if (unlikely((section->offset_within_address_space & ~page_mask) !=
+ (section->offset_within_region & ~page_mask))) {
error_report("%s received unaligned region", __func__);
return;
}
@@ -457,9 +469,9 @@ static void vfio_listener_region_del(VFIOMemoryListener *vlistener,
*/
}
- iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
+ iova = ROUND_UP(section->offset_within_address_space, ~page_mask + 1);
end = (section->offset_within_address_space + int128_get64(section->size)) &
- TARGET_PAGE_MASK;
+ page_mask;
if (iova >= end) {
return;
--
2.4.0.rc3.8.gfb3e7d5
* [Qemu-devel] [RFC PATCH qemu 4/4] vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
From: Alexey Kardashevskiy @ 2015-07-20 7:46 UTC
To: qemu-devel
Cc: Peter Crosthwaite, Alexey Kardashevskiy, Michael Roth,
Alex Williamson, qemu-ppc, David Gibson
This makes use of the new "memory registering" feature. The idea is
to give userspace the ability to notify the host kernel about pages
which are going to be used for DMA. With this information, the host
kernel can pin them all once per user process, do locked-pages
accounting (once) and not spend time doing that at run time, with
possible failures which cannot be handled nicely in some cases.
This adds a prereg memory listener which listens on address_space_memory
and notifies a VFIO container about memory which needs to be
pinned/unpinned. VFIO MMIO regions (i.e. "skip dump" regions) are skipped.
The feature is only enabled for SPAPR IOMMU v2; host kernel changes
are required. Since v2 does not need/support VFIO_IOMMU_ENABLE, this does
not call it when v2 is detected and enabled.
This does not change the guest visible interface.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
---
Changes:
v4:
* s/ram_listener/prereg_listener/ - listener names suggest what they do,
not what they listen on
* put prereg_listener registration first
v3:
* new RAM listener skips BARs (i.e. "skip dump" regions)
v2:
* added another listener for RAM
---
hw/vfio/common.c | 117 +++++++++++++++++++++++++++++++++++++-----
include/hw/vfio/vfio-common.h | 1 +
trace-events | 2 +
3 files changed, 108 insertions(+), 12 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 171c6ad..6d2ee2d 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -326,13 +326,15 @@ static void vfio_listener_region_add(VFIOMemoryListener *vlistener,
MemoryRegionSection *section)
{
VFIOContainer *container = vlistener->container;
+ bool is_prereg = (vlistener == &container->prereg_listener);
hwaddr iova, end;
Int128 llend;
void *vaddr;
int ret;
hwaddr page_mask = vfio_iommu_page_mask(section->mr);
- if (vfio_listener_skipped_section(section)) {
+ if (vfio_listener_skipped_section(section) ||
+ (is_prereg && memory_region_is_skip_dump(section->mr))) {
trace_vfio_listener_region_add_skip(
section->offset_within_address_space,
section->offset_within_address_space +
@@ -357,7 +359,7 @@ static void vfio_listener_region_add(VFIOMemoryListener *vlistener,
memory_region_ref(section->mr);
- if (memory_region_is_iommu(section->mr)) {
+ if (!is_prereg && memory_region_is_iommu(section->mr)) {
VFIOGuestIOMMU *giommu;
trace_vfio_listener_region_add_iommu(iova,
@@ -405,6 +407,33 @@ static void vfio_listener_region_add(VFIOMemoryListener *vlistener,
trace_vfio_listener_region_add_ram(iova, end - 1, vaddr);
+ if (is_prereg) {
+ struct vfio_iommu_spapr_register_memory reg = {
+ .argsz = sizeof(reg),
+ .flags = 0,
+ .vaddr = (uint64_t) vaddr,
+ .size = end - iova
+ };
+
+ ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_REGISTER_MEMORY, &reg);
+ trace_vfio_ram_register(reg.vaddr, reg.size, ret ? -errno : 0);
+ if (ret) {
+ /*
+ * On the initfn path, store the first error in the container so we
+ * can gracefully fail. Runtime, there's not much we can do other
+ * than throw a hardware error.
+ */
+ if (!container->initialized) {
+ if (!container->error) {
+ container->error = ret;
+ }
+ } else {
+ hw_error("vfio: DMA mapping failed, unable to continue");
+ }
+ }
+ return;
+ }
+
ret = vfio_dma_map(container, iova, end - iova, vaddr, section->readonly);
if (ret) {
error_report("vfio_dma_map(%p, 0x%"HWADDR_PRIx", "
@@ -430,11 +459,13 @@ static void vfio_listener_region_del(VFIOMemoryListener *vlistener,
MemoryRegionSection *section)
{
VFIOContainer *container = vlistener->container;
+ bool is_prereg = (vlistener == &container->prereg_listener);
hwaddr iova, end;
int ret;
hwaddr page_mask = vfio_iommu_page_mask(section->mr);
- if (vfio_listener_skipped_section(section)) {
+ if (vfio_listener_skipped_section(section) ||
+ (is_prereg && memory_region_is_skip_dump(section->mr))) {
trace_vfio_listener_region_del_skip(
section->offset_within_address_space,
section->offset_within_address_space +
@@ -448,7 +479,7 @@ static void vfio_listener_region_del(VFIOMemoryListener *vlistener,
return;
}
- if (memory_region_is_iommu(section->mr)) {
+ if (!is_prereg && memory_region_is_iommu(section->mr)) {
VFIOGuestIOMMU *giommu;
QLIST_FOREACH(giommu, &container->giommu_list, giommu_next) {
@@ -477,8 +508,24 @@ static void vfio_listener_region_del(VFIOMemoryListener *vlistener,
return;
}
+ if (is_prereg) {
+ void *vaddr = memory_region_get_ram_ptr(section->mr) +
+ section->offset_within_region +
+ (iova - section->offset_within_address_space);
+ struct vfio_iommu_spapr_register_memory reg = {
+ .argsz = sizeof(reg),
+ .flags = 0,
+ .vaddr = (uint64_t) vaddr,
+ .size = end - iova
+ };
+
+ ret = ioctl(container->fd, VFIO_IOMMU_SPAPR_UNREGISTER_MEMORY,
+ &reg);
+ trace_vfio_ram_unregister(reg.vaddr, reg.size, ret ? -errno : 0);
+ return;
+ }
+
trace_vfio_listener_region_del(iova, end - 1);
-
ret = vfio_dma_unmap(container, iova, end - iova);
memory_region_unref(section->mr);
if (ret) {
@@ -512,9 +559,35 @@ static const MemoryListener vfio_iommu_listener = {
.region_del = vfio_iommu_listener_region_del,
};
+static void vfio_spapr_prereg_listener_region_add(MemoryListener *listener,
+ MemoryRegionSection *section)
+{
+ VFIOMemoryListener *vlistener = container_of(listener, VFIOMemoryListener,
+ listener);
+
+ vfio_listener_region_add(vlistener, section);
+}
+
+static void vfio_spapr_prereg_listener_region_del(MemoryListener *listener,
+ MemoryRegionSection *section)
+{
+ VFIOMemoryListener *vlistener = container_of(listener, VFIOMemoryListener,
+ listener);
+
+ vfio_listener_region_del(vlistener, section);
+}
+
+static const MemoryListener vfio_spapr_prereg_listener = {
+ .region_add = vfio_spapr_prereg_listener_region_add,
+ .region_del = vfio_spapr_prereg_listener_region_del,
+};
+
static void vfio_listener_release(VFIOContainer *container)
{
memory_listener_unregister(&container->iommu_listener.listener);
+ if (container->iommu_type == VFIO_SPAPR_TCE_v2_IOMMU) {
+ memory_listener_unregister(&container->prereg_listener.listener);
+ }
}
int vfio_mmap_region(Object *obj, VFIORegion *region,
@@ -728,14 +801,18 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
container->initialized = true;
- } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU)) {
+ } else if (ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU) ||
+ ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU)) {
+ bool v2 = !!ioctl(fd, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_v2_IOMMU);
+
ret = ioctl(group->fd, VFIO_GROUP_SET_CONTAINER, &fd);
if (ret) {
error_report("vfio: failed to set group container: %m");
ret = -errno;
goto free_container_exit;
}
- container->iommu_type = VFIO_SPAPR_TCE_IOMMU;
+ container->iommu_type =
+ v2 ? VFIO_SPAPR_TCE_v2_IOMMU : VFIO_SPAPR_TCE_IOMMU;
ret = ioctl(fd, VFIO_SET_IOMMU, container->iommu_type);
if (ret) {
error_report("vfio: failed to set iommu for container: %m");
@@ -748,11 +825,27 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as)
* when container fd is closed so we do not call it explicitly
* in this file.
*/
- ret = ioctl(fd, VFIO_IOMMU_ENABLE);
- if (ret) {
- error_report("vfio: failed to enable container: %m");
- ret = -errno;
- goto free_container_exit;
+ if (!v2) {
+ ret = ioctl(fd, VFIO_IOMMU_ENABLE);
+ if (ret) {
+ error_report("vfio: failed to enable container: %m");
+ ret = -errno;
+ goto free_container_exit;
+ }
+ } else {
+ container->prereg_listener.container = container;
+ container->prereg_listener.listener = vfio_spapr_prereg_listener;
+
+ memory_listener_register(&container->prereg_listener.listener,
+ &address_space_memory);
+ if (container->error) {
+ error_report("vfio: RAM memory listener initialization failed for container");
+ memory_listener_unregister(
+ &container->prereg_listener.listener);
+ goto free_container_exit;
+ }
+
+ container->initialized = true;
}
container->iommu_listener.container = container;
diff --git a/include/hw/vfio/vfio-common.h b/include/hw/vfio/vfio-common.h
index a0f9d36..6ba8e67 100644
--- a/include/hw/vfio/vfio-common.h
+++ b/include/hw/vfio/vfio-common.h
@@ -78,6 +78,7 @@ typedef struct VFIOContainer {
int error;
bool initialized;
VFIOMemoryListener iommu_listener;
+ VFIOMemoryListener prereg_listener;
void (*release)(struct VFIOContainer *);
QLIST_HEAD(, VFIOGuestIOMMU) giommu_list;
QLIST_HEAD(, VFIOGroup) group_list;
diff --git a/trace-events b/trace-events
index d24d80a..f859ad0 100644
--- a/trace-events
+++ b/trace-events
@@ -1582,6 +1582,8 @@ vfio_disconnect_container(int fd) "close container->fd=%d"
vfio_put_group(int fd) "close group->fd=%d"
vfio_get_device(const char * name, unsigned int flags, unsigned int num_regions, unsigned int num_irqs) "Device %s flags: %u, regions: %u, irqs: %u"
vfio_put_base_device(int fd) "close vdev->fd=%d"
+vfio_ram_register(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
+vfio_ram_unregister(uint64_t va, uint64_t size, int ret) "va=%"PRIx64" size=%"PRIx64" ret=%d"
# hw/vfio/platform.c
vfio_platform_populate_regions(int region_index, unsigned long flag, unsigned long size, int fd, unsigned long offset) "- region %d flags = 0x%lx, size = 0x%lx, fd= %d, offset = 0x%lx"
--
2.4.0.rc3.8.gfb3e7d5
* Re: [Qemu-devel] [RFC PATCH qemu 0/4] vfio: SPAPR IOMMU v2 (memory preregistration support)
From: Alexey Kardashevskiy @ 2015-07-29 0:27 UTC
To: qemu-devel
Cc: Peter Crosthwaite, Alex Williamson, qemu-ppc, Michael Roth,
David Gibson
Oh, just noticed, this is missing "v4" in the subject line.
On 07/20/2015 05:46 PM, Alexey Kardashevskiy wrote:
> Yet another try, reworked the whole patchset.
>
> Here are few patches to prepare an existing listener for handling memory
> preregistration for SPAPR guests running on POWER8.
>
> This used to be a part of DDW patchset but now is separated as requested.
>
>
> Please comment. Thanks!
>
>
> Changes:
> v4:
> * have 2 listeners now - "iommu" and "prereg"
> * removed iommu_data
> * many smaller changes
>
> v3:
> * removed incorrect "vfio: Skip PCI BARs in memory listener"
> * removed page size changes from quirks as they did not completely fix
> the crashes happening on POWER8 (only total removal helps there)
> * added "memory: Add reporting of supported page sizes"
>
>
> Alexey Kardashevskiy (4):
> memory: Add reporting of supported page sizes
> vfio: Generalize IOMMU memory listener
> vfio: Use different page size for different IOMMU types
> vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
>
> hw/ppc/spapr_iommu.c | 8 ++
> hw/vfio/common.c | 217 +++++++++++++++++++++++++++++++++---------
> include/exec/memory.h | 11 +++
> include/hw/vfio/vfio-common.h | 26 ++---
> memory.c | 9 ++
> trace-events | 2 +
> 6 files changed, 214 insertions(+), 59 deletions(-)
>
--
Alexey
* Re: [Qemu-devel] [RFC PATCH qemu 0/4] vfio: SPAPR IOMMU v2 (memory preregistration support)
From: Alexey Kardashevskiy @ 2015-08-06 3:16 UTC
To: qemu-devel
Cc: Peter Crosthwaite, Alex Williamson, qemu-ppc, Michael Roth,
David Gibson
On 07/29/2015 10:27 AM, Alexey Kardashevskiy wrote:
> Oh, just noticed, this is missing "v4" in the subject line.
Anyone, ping? Thanks
>
>
> On 07/20/2015 05:46 PM, Alexey Kardashevskiy wrote:
>> Yet another try, reworked the whole patchset.
>>
>> Here are few patches to prepare an existing listener for handling memory
>> preregistration for SPAPR guests running on POWER8.
>>
>> This used to be a part of DDW patchset but now is separated as requested.
>>
>>
>> Please comment. Thanks!
>>
>>
>> Changes:
>> v4:
>> * have 2 listeners now - "iommu" and "prereg"
>> * removed iommu_data
>> * many smaller changes
>>
>> v3:
>> * removed incorrect "vfio: Skip PCI BARs in memory listener"
>> * removed page size changes from quirks as they did not completely fix
>> the crashes happening on POWER8 (only total removal helps there)
>> * added "memory: Add reporting of supported page sizes"
>>
>>
>> Alexey Kardashevskiy (4):
>> memory: Add reporting of supported page sizes
>> vfio: Generalize IOMMU memory listener
>> vfio: Use different page size for different IOMMU types
>> vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
>>
>> hw/ppc/spapr_iommu.c | 8 ++
>> hw/vfio/common.c | 217
>> +++++++++++++++++++++++++++++++++---------
>> include/exec/memory.h | 11 +++
>> include/hw/vfio/vfio-common.h | 26 ++---
>> memory.c | 9 ++
>> trace-events | 2 +
>> 6 files changed, 214 insertions(+), 59 deletions(-)
>>
>
>
--
Alexey
* Re: [Qemu-devel] [RFC PATCH qemu 0/4] vfio: SPAPR IOMMU v2 (memory preregistration support)
From: Alex Williamson @ 2015-08-07 20:20 UTC
To: Alexey Kardashevskiy
Cc: Michael Roth, Peter Crosthwaite, qemu-ppc, qemu-devel,
David Gibson
On Thu, 2015-08-06 at 13:16 +1000, Alexey Kardashevskiy wrote:
> On 07/29/2015 10:27 AM, Alexey Kardashevskiy wrote:
> > Oh, just noticed, this is missing "v4" in the subject line.
>
> Anyone, ping? Thanks
I think David had some ideas on re-working this, but I'm not sure if
he's had any time to implement them. Thanks,
Alex
> > On 07/20/2015 05:46 PM, Alexey Kardashevskiy wrote:
> >> Yet another try, reworked the whole patchset.
> >>
> >> Here are few patches to prepare an existing listener for handling memory
> >> preregistration for SPAPR guests running on POWER8.
> >>
> >> This used to be a part of DDW patchset but now is separated as requested.
> >>
> >>
> >> Please comment. Thanks!
> >>
> >>
> >> Changes:
> >> v4:
> >> * have 2 listeners now - "iommu" and "prereg"
> >> * removed iommu_data
> >> * many smaller changes
> >>
> >> v3:
> >> * removed incorrect "vfio: Skip PCI BARs in memory listener"
> >> * removed page size changes from quirks as they did not completely fix
> >> the crashes happening on POWER8 (only total removal helps there)
> >> * added "memory: Add reporting of supported page sizes"
> >>
> >>
> >> Alexey Kardashevskiy (4):
> >> memory: Add reporting of supported page sizes
> >> vfio: Generalize IOMMU memory listener
> >> vfio: Use different page size for different IOMMU types
> >> vfio: spapr: Add SPAPR IOMMU v2 support (DMA memory preregistering)
> >>
> >> hw/ppc/spapr_iommu.c | 8 ++
> >> hw/vfio/common.c | 217
> >> +++++++++++++++++++++++++++++++++---------
> >> include/exec/memory.h | 11 +++
> >> include/hw/vfio/vfio-common.h | 26 ++---
> >> memory.c | 9 ++
> >> trace-events | 2 +
> >> 6 files changed, 214 insertions(+), 59 deletions(-)
> >>
> >
> >
>
>