* [PATCH V1 0/3] Passthru device support under emulated amd-iommu
@ 2020-09-28 20:05 Wei Huang
2020-09-28 20:05 ` [PATCH V1 1/3] amd-iommu: Add address space notifier and replay support Wei Huang
From: Wei Huang @ 2020-09-28 20:05 UTC (permalink / raw)
To: qemu-devel
Cc: ehabkost, mst, wei.huang2, peterx, alex.williamson, pbonzini,
Suravee.Suthikulpanit, rth
This patchset adds support for passthru devices running inside VMs under
the management of an emulated amd-iommu device (vIOMMU). This feature
provides guest VMs with a variety of benefits, including enhanced I/O
security and user-mode driver support.
This patchset has been tested with both 1G and 10G NICs on AMD boxes.
Thanks,
-Wei
Wei Huang (3):
amd-iommu: Add address space notifier and replay support
amd-iommu: Sync IOVA-to-GPA translation during page invalidation
amd-iommu: Fix up amdvi_mmio_trace() to differentiate MMIO R/W
hw/i386/amd_iommu.c | 243 ++++++++++++++++++++++++++++++++++++++++++--
hw/i386/amd_iommu.h | 13 +++
hw/vfio/common.c | 3 +-
3 files changed, 247 insertions(+), 12 deletions(-)
--
2.25.2
* [PATCH V1 1/3] amd-iommu: Add address space notifier and replay support
2020-09-28 20:05 [PATCH V1 0/3] Passthru device support under emulated amd-iommu Wei Huang
@ 2020-09-28 20:05 ` Wei Huang
2020-09-28 20:05 ` [PATCH V1 2/3] amd-iommu: Sync IOVA-to-GPA translation during page invalidation Wei Huang
From: Wei Huang @ 2020-09-28 20:05 UTC (permalink / raw)
To: qemu-devel
Cc: ehabkost, mst, wei.huang2, peterx, alex.williamson, pbonzini,
Suravee.Suthikulpanit, rth
Currently the emulated amd-iommu device does not support memory address
space notifiers or replay. Both are required to support I/O devices
assigned to guest VMs as passthru devices. This patch adds the basic
as_notifier infrastructure and a replay function to amd_iommu.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
hw/i386/amd_iommu.c | 45 +++++++++++++++++++++++++++++++++++++++------
hw/i386/amd_iommu.h | 3 +++
2 files changed, 42 insertions(+), 6 deletions(-)
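Note: the sketch below (not part of this patch) shows how a consumer such
as hw/vfio attaches a MAP/UNMAP notifier to the IOMMU memory region. That
registration is what reaches amdvi_iommu_notify_flag_changed(), and
memory_region_iommu_replay() is what ends up in the new replay hook. The
demo_* names and the [0, HWADDR_MAX] range are illustrative assumptions;
iommu_notifier_init(), memory_region_register_iommu_notifier() and
memory_region_iommu_replay() are the standard QEMU helpers.

    /* Sketch only: registering a notifier against an IOMMU region. */
    #include "qemu/osdep.h"
    #include "exec/memory.h"

    /* hypothetical callback; a real consumer maps/unmaps host IOMMU
     * entries (e.g. via VFIO ioctls) based on entry->perm */
    static void demo_iommu_notify(IOMMUNotifier *n, IOMMUTLBEntry *entry)
    {
        /* entry->perm == IOMMU_NONE denotes an unmap of the range
         * [entry->iova, entry->iova + entry->addr_mask] */
    }

    static void demo_attach_notifier(IOMMUMemoryRegion *iommu_mr, Error **errp)
    {
        static IOMMUNotifier notifier;

        /* listen for both MAP and UNMAP over the whole address space */
        iommu_notifier_init(&notifier, demo_iommu_notify, IOMMU_NOTIFIER_ALL,
                            0, HWADDR_MAX, 0 /* iommu_idx */);
        if (memory_region_register_iommu_notifier(MEMORY_REGION(iommu_mr),
                                                  &notifier, errp)) {
            return;
        }
        /* feed existing translations to the notifier via the replay hook */
        memory_region_iommu_replay(iommu_mr, &notifier);
    }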
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
index 74a93a5d93f4..c7d24a05484d 100644
--- a/hw/i386/amd_iommu.c
+++ b/hw/i386/amd_iommu.c
@@ -63,6 +63,8 @@ struct AMDVIAddressSpace {
IOMMUMemoryRegion iommu; /* Device's address translation region */
MemoryRegion iommu_ir; /* Device's interrupt remapping region */
AddressSpace as; /* device's corresponding address space */
+ IOMMUNotifierFlag notifier_flags; /* notifier flags of address space */
+ QLIST_ENTRY(AMDVIAddressSpace) next; /* notifier linked list */
};
/* AMDVI cache entry */
@@ -425,6 +427,22 @@ static void amdvi_inval_all(AMDVIState *s, uint64_t *cmd)
trace_amdvi_all_inval();
}
+static void amdvi_address_space_unmap(AMDVIAddressSpace *as, IOMMUNotifier *n)
+{
+ IOMMUTLBEntry entry;
+ hwaddr start = n->start;
+ hwaddr end = n->end;
+ hwaddr size = end - start + 1;
+
+ entry.target_as = &address_space_memory;
+ entry.iova = start;
+ entry.translated_addr = 0;
+ entry.perm = IOMMU_NONE;
+ entry.addr_mask = size - 1;
+
+ memory_region_notify_one(n, &entry);
+}
+
static gboolean amdvi_iotlb_remove_by_domid(gpointer key, gpointer value,
gpointer user_data)
{
@@ -1473,14 +1491,17 @@ static int amdvi_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu,
Error **errp)
{
AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
+ AMDVIState *s = as->iommu_state;
- if (new & IOMMU_NOTIFIER_MAP) {
- error_setg(errp,
- "device %02x.%02x.%x requires iommu notifier which is not "
- "currently supported", as->bus_num, PCI_SLOT(as->devfn),
- PCI_FUNC(as->devfn));
- return -EINVAL;
+ /* Update address space notifier flags */
+ as->notifier_flags = new;
+
+ if (old == IOMMU_NOTIFIER_NONE) {
+ QLIST_INSERT_HEAD(&s->amdvi_as_with_notifiers, as, next);
+ } else if (new == IOMMU_NOTIFIER_NONE) {
+ QLIST_REMOVE(as, next);
}
+
return 0;
}
@@ -1573,6 +1594,8 @@ static void amdvi_realize(DeviceState *dev, Error **errp)
/* Pseudo address space under root PCI bus. */
x86ms->ioapic_as = amdvi_host_dma_iommu(bus, s, AMDVI_IOAPIC_SB_DEVID);
+ QLIST_INIT(&s->amdvi_as_with_notifiers);
+
/* set up MMIO */
memory_region_init_io(&s->mmio, OBJECT(s), &mmio_mem_ops, s, "amdvi-mmio",
AMDVI_MMIO_SIZE);
@@ -1631,12 +1654,22 @@ static const TypeInfo amdviPCI = {
},
};
+static void amdvi_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
+{
+ AMDVIAddressSpace *as = container_of(iommu_mr, AMDVIAddressSpace, iommu);
+
+ amdvi_address_space_unmap(as, n);
+
+ return;
+}
+
static void amdvi_iommu_memory_region_class_init(ObjectClass *klass, void *data)
{
IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
imrc->translate = amdvi_translate;
imrc->notify_flag_changed = amdvi_iommu_notify_flag_changed;
+ imrc->replay = amdvi_iommu_replay;
}
static const TypeInfo amdvi_iommu_memory_region_info = {
diff --git a/hw/i386/amd_iommu.h b/hw/i386/amd_iommu.h
index fa5feb183c03..aeed9fd1cbb0 100644
--- a/hw/i386/amd_iommu.h
+++ b/hw/i386/amd_iommu.h
@@ -364,6 +364,9 @@ struct AMDVIState {
/* for each served device */
AMDVIAddressSpace **address_spaces[PCI_BUS_MAX];
+ /* list of registered notifiers */
+ QLIST_HEAD(, AMDVIAddressSpace) amdvi_as_with_notifiers;
+
/* IOTLB */
GHashTable *iotlb;
--
2.25.2
* [PATCH V1 2/3] amd-iommu: Sync IOVA-to-GPA translation during page invalidation
2020-09-28 20:05 [PATCH V1 0/3] Passthru device support under emulated amd-iommu Wei Huang
2020-09-28 20:05 ` [PATCH V1 1/3] amd-iommu: Add address space notifier and replay support Wei Huang
@ 2020-09-28 20:05 ` Wei Huang
2020-09-29 19:34 ` Alex Williamson
2020-09-28 20:05 ` [PATCH V1 3/3] amd-iommu: Fix amdvi_mmio_trace() to differentiate MMIO R/W Wei Huang
2020-09-29 2:08 ` [PATCH V1 0/3] Passthru device support under emulated amd-iommu no-reply
From: Wei Huang @ 2020-09-28 20:05 UTC (permalink / raw)
To: qemu-devel
Cc: ehabkost, mst, wei.huang2, peterx, alex.williamson, pbonzini,
Suravee.Suthikulpanit, rth
Add support to sync the IOVA-to-GPA translation at the time of IOMMU
page invalidation. The sync is performed when either of two IOMMU
commands, AMDVI_CMD_INVAL_AMDVI_PAGES or AMDVI_CMD_INVAL_AMDVI_ALL, is
intercepted, and the registered address space notifiers are invoked
accordingly.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
hw/i386/amd_iommu.c | 177 ++++++++++++++++++++++++++++++++++++++++++++
hw/i386/amd_iommu.h | 10 +++
hw/vfio/common.c | 3 +-
3 files changed, 189 insertions(+), 1 deletion(-)
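Note: the range decoding in amdvi_sync_domain() can be non-obvious, so a
self-contained sketch of the same arithmetic follows (not part of this
patch). With S=1 the size is encoded by the lowest clear bit of the
address field (bits 0-11 are treated as ones); with S=0 a single 4KB page
is invalidated. count_trailing_ones() stands in for QEMU's cto64(), and
S_BIT mirrors AMDVI_CMD_INVAL_IOMMU_PAGES_S_BIT; everything else here is
an illustrative assumption.

    /* Sketch only: INVALIDATE_IOMMU_PAGES range decoding, standalone C. */
    #include <inttypes.h>
    #include <stdio.h>

    #define S_BIT (1ULL << 0)  /* mirrors AMDVI_CMD_INVAL_IOMMU_PAGES_S_BIT */

    static int count_trailing_ones(uint64_t v)  /* stand-in for QEMU cto64() */
    {
        return ~v ? __builtin_ctzll(~v) : 64;
    }

    static void decode_inval_range(uint64_t addr, uint16_t flags,
                                   uint64_t *base, uint64_t *size)
    {
        *base = addr;
        *size = 0x1000;                      /* S=0: a single 4KB page */
        if (flags & S_BIT) {
            /* S=1: lowest clear bit of the address encodes the size */
            *size = 1ULL << (count_trailing_ones(addr | 0xFFF) + 1);
            *base = addr & ~(*size - 1);     /* align base to the size */
        }
    }

    int main(void)
    {
        uint64_t base, size;

        /* bit 12 clear -> 8KB range: base=0x102000, size=0x2000 */
        decode_inval_range(0x102000, S_BIT, &base, &size);
        printf("base=0x%" PRIx64 " size=0x%" PRIx64 "\n", base, size);

        /* bit 12 set, bit 13 clear -> 16KB: base=0x200000, size=0x4000 */
        decode_inval_range(0x201000, S_BIT, &base, &size);
        printf("base=0x%" PRIx64 " size=0x%" PRIx64 "\n", base, size);
        return 0;
    }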
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
index c7d24a05484d..7604e2080595 100644
--- a/hw/i386/amd_iommu.c
+++ b/hw/i386/amd_iommu.c
@@ -76,6 +76,12 @@ typedef struct AMDVIIOTLBEntry {
uint64_t page_mask; /* physical page size */
} AMDVIIOTLBEntry;
+static bool amdvi_get_dte(AMDVIState *s, int devid, uint64_t *entry);
+static void amdvi_sync_domain(AMDVIState *s, uint32_t domid,
+ uint64_t addr, uint16_t flags);
+static void amdvi_walk_level(AMDVIAddressSpace *as, uint64_t pte,
+ uint64_t iova, uint64_t partial);
+
/* configure MMIO registers at startup/reset */
static void amdvi_set_quad(AMDVIState *s, hwaddr addr, uint64_t val,
uint64_t romask, uint64_t w1cmask)
@@ -443,6 +449,78 @@ static void amdvi_address_space_unmap(AMDVIAddressSpace *as, IOMMUNotifier *n)
memory_region_notify_one(n, &entry);
}
+/*
+ * Sync the IOVA-to-GPA translation at the time of IOMMU page invalidation.
+ * This function is called when IOMMU commands, AMDVI_CMD_INVAL_AMDVI_PAGES
+ * and AMDVI_CMD_INVAL_AMDVI_ALL, are triggered.
+ *
+ * The invalidation range is determined by addr and flags, using the
+ * following rules:
+ * - All pages
+ * In this case, we unmap the whole address space and then re-walk the
+ * I/O page table to sync the mapping relationship.
+ * - Single page:
+ * Re-walk the page based on the specified iova, and only sync the
+ * newly mapped page.
+ */
+static void amdvi_sync_domain(AMDVIState *s, uint32_t domid,
+ uint64_t addr, uint16_t flags)
+{
+ AMDVIAddressSpace *as;
+ bool sync_all_domains = false;
+ uint64_t mask, size = 0x1000;
+
+ if (domid == AMDVI_DOMAIN_ALL) {
+ sync_all_domains = true;
+ }
+
+ /* S=1 means the invalidation size is from addr field; otherwise 4KB */
+ if (flags & AMDVI_CMD_INVAL_IOMMU_PAGES_S_BIT) {
+ uint32_t zbit = cto64(addr | 0xFFF) + 1;
+
+ size = 1ULL << zbit;
+
+ if (size < 0x1000) {
+ addr = 0;
+ size = AMDVI_PGSZ_ENTIRE;
+ } else {
+ mask = ~(size - 1);
+ addr &= mask;
+ }
+ }
+
+ QLIST_FOREACH(as, &s->amdvi_as_with_notifiers, next) {
+ uint64_t dte[4];
+ IOMMUNotifier *n;
+
+ if (!amdvi_get_dte(s, as->devfn, dte)) {
+ continue;
+ }
+
+ if (!sync_all_domains && (domid != (dte[1] & 0xFFFFULL))) {
+ continue;
+ }
+
+ /*
+ * In case of syncing more than a page, we invalidate the entire
+ * address range and re-walk the whole page table.
+ */
+ if (size == AMDVI_PGSZ_ENTIRE) {
+ IOMMU_NOTIFIER_FOREACH(n, &as->iommu) {
+ amdvi_address_space_unmap(as, n);
+ }
+ } else if (size > 0x1000) {
+ IOMMU_NOTIFIER_FOREACH(n, &as->iommu) {
+ if (n->start <= addr && addr + size < n->end) {
+ amdvi_address_space_unmap(as, n);
+ }
+ }
+ }
+
+ amdvi_walk_level(as, dte[0], addr, 0);
+ }
+}
+
static gboolean amdvi_iotlb_remove_by_domid(gpointer key, gpointer value,
gpointer user_data)
{
@@ -455,6 +533,8 @@ static gboolean amdvi_iotlb_remove_by_domid(gpointer key, gpointer value,
static void amdvi_inval_pages(AMDVIState *s, uint64_t *cmd)
{
uint16_t domid = cpu_to_le16((uint16_t)extract64(cmd[0], 32, 16));
+ uint64_t addr = cpu_to_le64(extract64(cmd[1], 12, 52)) << 12;
+ uint16_t flags = cpu_to_le16((uint16_t)extract64(cmd[1], 0, 12));
if (extract64(cmd[0], 20, 12) || extract64(cmd[0], 48, 12) ||
extract64(cmd[1], 3, 9)) {
@@ -465,6 +545,8 @@ static void amdvi_inval_pages(AMDVIState *s, uint64_t *cmd)
g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_domid,
&domid);
trace_amdvi_pages_inval(domid);
+
+ amdvi_sync_domain(s, domid, addr, flags);
}
static void amdvi_prefetch_pages(AMDVIState *s, uint64_t *cmd)
@@ -910,6 +992,101 @@ static inline uint64_t amdvi_get_pte_entry(AMDVIState *s, uint64_t pte_addr,
return pte;
}
+static inline uint64_t pte_get_page_size(uint64_t level)
+{
+ return 1UL << ((level * 9) + 3);
+}
+
+static void amdvi_sync_iova(AMDVIAddressSpace *as, uint64_t pte, uint64_t iova)
+{
+ IOMMUTLBEntry entry;
+ uint64_t addr = pte & AMDVI_DEV_PT_ROOT_MASK;
+ uint32_t level = get_pte_translation_mode(pte);
+ uint64_t size = pte_get_page_size(level + 1);
+ uint64_t perm = amdvi_get_perms(pte);
+
+ assert(level == 0 || level == 7);
+
+ entry.target_as = &address_space_memory;
+ entry.iova = iova;
+ entry.perm = perm;
+ if (level == 0) {
+ entry.addr_mask = size - 1;
+ entry.translated_addr = addr;
+ } else if (level == 7) {
+ entry.addr_mask = (1 << (cto64(addr | 0xFFF) + 1)) - 1;
+ entry.translated_addr = addr & ~entry.addr_mask;
+ }
+
+ memory_region_notify_iommu(&as->iommu, 0, entry);
+}
+
+/*
+ * Walk the I/O page table and notify mapping change. Note that iova
+ * determines if this function's behavior:
+ * - iova == 0: re-walk the whole page table
+ * - iova != 0: re-walk the address defined in iova
+ */
+static void amdvi_walk_level(AMDVIAddressSpace *as, uint64_t pte,
+ uint64_t iova, uint64_t partial)
+{
+ uint64_t index = 0;
+ uint8_t level = get_pte_translation_mode(pte);
+ uint64_t cur_addr = pte & AMDVI_DEV_PT_ROOT_MASK;
+ uint64_t end_addr = cur_addr + 4096;
+ uint64_t new_partial = 0;
+
+ if (!(pte & AMDVI_PTE_PRESENT)) {
+ return;
+ }
+
+ if (level == 7) {
+ amdvi_sync_iova(as, pte, iova);
+ return;
+ }
+
+ /* narrow the scope of table walk if iova != 0 */
+ if (iova) {
+ cur_addr += ((iova >> (3 + 9 * level)) & 0x1FF) << 3;
+ end_addr = cur_addr + 8;
+ }
+
+ while (cur_addr < end_addr) {
+ int cur_addr_inc = 8;
+ int index_inc = 1;
+
+ pte = amdvi_get_pte_entry(as->iommu_state, cur_addr, as->devfn);
+ /* validate the entry */
+ if (!(pte & AMDVI_PTE_PRESENT)) {
+ goto next;
+ }
+
+ if (level > 1) {
+ new_partial = (partial << 9) | index;
+ amdvi_walk_level(as, pte, iova, new_partial);
+ } else {
+ /* found a page, sync the mapping first */
+ if (iova) {
+ amdvi_sync_iova(as, pte, iova);
+ } else {
+ amdvi_sync_iova(as, pte, ((partial << 9) | index) << 12);
+ }
+
+ /* skip following entries when a large page is found */
+ if (get_pte_translation_mode(pte) == 7) {
+ int skipped = 1 << (cto64(pte >> 12) + 1);
+
+ cur_addr_inc = 8 * skipped;
+ index_inc = skipped;
+ }
+ }
+
+next:
+ cur_addr += cur_addr_inc;
+ index += index_inc;
+ }
+}
+
static void amdvi_page_walk(AMDVIAddressSpace *as, uint64_t *dte,
IOMMUTLBEntry *ret, unsigned perms,
hwaddr addr)
diff --git a/hw/i386/amd_iommu.h b/hw/i386/amd_iommu.h
index aeed9fd1cbb0..22f846837a95 100644
--- a/hw/i386/amd_iommu.h
+++ b/hw/i386/amd_iommu.h
@@ -123,6 +123,8 @@
#define AMDVI_CMD_COMPLETE_PPR_REQUEST 0x07
#define AMDVI_CMD_INVAL_AMDVI_ALL 0x08
+#define AMDVI_CMD_INVAL_IOMMU_PAGES_S_BIT (1ULL << 0)
+
#define AMDVI_DEVTAB_ENTRY_SIZE 32
/* Device table entry bits 0:63 */
@@ -148,6 +150,9 @@
#define AMDVI_EVENT_ILLEGAL_COMMAND_ERROR (0x5U << 12)
#define AMDVI_EVENT_COMMAND_HW_ERROR (0x6U << 12)
+/* PTE bits */
+#define AMDVI_PTE_PRESENT (1ULL << 0)
+
#define AMDVI_EVENT_LEN 16
#define AMDVI_PERM_READ (1 << 0)
#define AMDVI_PERM_WRITE (1 << 1)
@@ -198,6 +203,11 @@
#define AMDVI_MAX_PH_ADDR (40UL << 8)
#define AMDVI_MAX_GVA_ADDR (48UL << 15)
+#define AMDVI_PGSZ_ENTIRE (0x0007FFFFFFFFF000ULL)
+
+/* The domain id is 16-bit, so use 32-bit all 1's to represent all domains */
+#define AMDVI_DOMAIN_ALL (UINT32_MAX)
+
/* Completion Wait data size */
#define AMDVI_COMPLETION_DATA_SIZE 8
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 13471ae29436..243216499ce0 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -346,7 +346,8 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
* the VGA ROM space.
*/
if (ioctl(container->fd, VFIO_IOMMU_MAP_DMA, &map) == 0 ||
- (errno == EBUSY && vfio_dma_unmap(container, iova, size) == 0 &&
+ ((errno == EEXIST || errno == EBUSY) &&
+ vfio_dma_unmap(container, iova, size) == 0 &&
ioctl(container->fd, VFIO_IOMMU_MAP_DMA, &map) == 0)) {
return 0;
}
--
2.25.2
* [PATCH V1 3/3] amd-iommu: Fix amdvi_mmio_trace() to differentiate MMIO R/W
2020-09-28 20:05 [PATCH V1 0/3] Passthru device support under emulated amd-iommu Wei Huang
2020-09-28 20:05 ` [PATCH V1 1/3] amd-iommu: Add address space notifier and replay support Wei Huang
2020-09-28 20:05 ` [PATCH V1 2/3] amd-iommu: Sync IOVA-to-GPA translation during page invalidation Wei Huang
@ 2020-09-28 20:05 ` Wei Huang
2020-09-29 2:08 ` [PATCH V1 0/3] Passthru device support under emulated amd-iommu no-reply
From: Wei Huang @ 2020-09-28 20:05 UTC (permalink / raw)
To: qemu-devel
Cc: ehabkost, mst, wei.huang2, peterx, alex.williamson, pbonzini,
Suravee.Suthikulpanit, rth
The amd-iommu MMIO trace function does not differentiate MMIO writes from
reads. Extend it to trace both access types.
Co-developed-by: Wei Huang <wei.huang2@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
---
hw/i386/amd_iommu.c | 21 ++++++++++++++++-----
1 file changed, 16 insertions(+), 5 deletions(-)
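Note: a standalone sketch (not part of this patch) of the register-index
math shared by the read and write trace paths. Offsets below 0x2000 index
the low register-name table and offsets at 0x2000 and above index the high
table, with an 8-byte stride and clamping at the end of each table. The
DEMO_REGS_* values are placeholders for AMDVI_MMIO_REGS_LOW/HIGH from
amd_iommu.h.

    /* Sketch only: index computation used by amdvi_mmio_trace(). */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define DEMO_REGS_LOW   30  /* placeholder for AMDVI_MMIO_REGS_LOW  */
    #define DEMO_REGS_HIGH   8  /* placeholder for AMDVI_MMIO_REGS_HIGH */

    static void demo_trace_index(uint64_t addr)
    {
        bool high = addr & 0x2000;            /* 0x2000 selects the high table */
        uint8_t index = (addr & ~0x2000) / 8; /* 8-byte register stride */
        uint8_t limit = high ? DEMO_REGS_HIGH : DEMO_REGS_LOW;

        if (index >= limit) {
            index = limit;                    /* clamp, as the patch does */
        }
        printf("addr=0x%04" PRIx64 " -> %s[%u], reg offset 0x%04" PRIx64 "\n",
               addr, high ? "high" : "low", (unsigned)index, addr & ~0x07ULL);
    }

    int main(void)
    {
        demo_trace_index(0x0018);  /* low table, index 3  */
        demo_trace_index(0x2008);  /* high table, index 1 */
        return 0;
    }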
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
index 7604e2080595..827818b9f781 100644
--- a/hw/i386/amd_iommu.c
+++ b/hw/i386/amd_iommu.c
@@ -662,17 +662,28 @@ static void amdvi_cmdbuf_run(AMDVIState *s)
}
}
-static void amdvi_mmio_trace(hwaddr addr, unsigned size)
+static void amdvi_mmio_trace(hwaddr addr, unsigned size, bool iswrite,
+ uint64_t val)
{
uint8_t index = (addr & ~0x2000) / 8;
if ((addr & 0x2000)) {
/* high table */
index = index >= AMDVI_MMIO_REGS_HIGH ? AMDVI_MMIO_REGS_HIGH : index;
- trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size, addr & ~0x07);
+ if (!iswrite)
+ trace_amdvi_mmio_read(amdvi_mmio_high[index], addr, size,
+ addr & ~0x07);
+ else
+ trace_amdvi_mmio_write(amdvi_mmio_high[index], addr, size, val,
+ addr & ~0x07);
} else {
index = index >= AMDVI_MMIO_REGS_LOW ? AMDVI_MMIO_REGS_LOW : index;
- trace_amdvi_mmio_read(amdvi_mmio_low[index], addr, size, addr & ~0x07);
+ if (!iswrite)
+ trace_amdvi_mmio_read(amdvi_mmio_low[index], addr, size,
+ addr & ~0x07);
+ else
+ trace_amdvi_mmio_write(amdvi_mmio_low[index], addr, size, val,
+ addr & ~0x07);
}
}
@@ -693,7 +704,7 @@ static uint64_t amdvi_mmio_read(void *opaque, hwaddr addr, unsigned size)
} else if (size == 8) {
val = amdvi_readq(s, addr);
}
- amdvi_mmio_trace(addr, size);
+ amdvi_mmio_trace(addr, size, 0, val);
return val;
}
@@ -840,7 +851,7 @@ static void amdvi_mmio_write(void *opaque, hwaddr addr, uint64_t val,
return;
}
- amdvi_mmio_trace(addr, size);
+ amdvi_mmio_trace(addr, size, 1, val);
switch (addr & ~0x07) {
case AMDVI_MMIO_CONTROL:
amdvi_mmio_reg_write(s, size, val, addr);
--
2.25.2
* Re: [PATCH V1 0/3] Passthru device support under emulated amd-iommu
2020-09-28 20:05 [PATCH V1 0/3] Passthru device support under emulated amd-iommu Wei Huang
2020-09-28 20:05 ` [PATCH V1 3/3] amd-iommu: Fix amdvi_mmio_trace() to differentiate MMIO R/W Wei Huang
@ 2020-09-29 2:08 ` no-reply
From: no-reply @ 2020-09-29 2:08 UTC (permalink / raw)
To: wei.huang2
Cc: ehabkost, mst, wei.huang2, qemu-devel, peterx, alex.williamson,
Suravee.Suthikulpanit, pbonzini, rth
Patchew URL: https://patchew.org/QEMU/20200928200506.75441-1-wei.huang2@amd.com/
Hi,
This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.
=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===
C linker for the host machine: cc ld.bfd 2.27-43
Host machine cpu family: x86_64
Host machine cpu: x86_64
../src/meson.build:10: WARNING: Module unstable-keyval has no backwards or forwards compatibility and might not exist in future releases.
Program sh found: YES
Program python3 found: YES (/usr/bin/python3)
Configuring ninjatool using configuration
---
TEST iotest-qcow2: 018
socket_accept failed: Resource temporarily unavailable
**
ERROR:../src/tests/qtest/libqtest.c:301:qtest_init_without_qmp_handshake: assertion failed: (s->fd >= 0 && s->qmp_fd >= 0)
../src/tests/qtest/libqtest.c:166: kill_qemu() tried to terminate QEMU process but encountered exit status 1 (expected 0)
ERROR qtest-x86_64: bios-tables-test - Bail out! ERROR:../src/tests/qtest/libqtest.c:301:qtest_init_without_qmp_handshake: assertion failed: (s->fd >= 0 && s->qmp_fd >= 0)
make: *** [run-test-138] Error 1
make: *** Waiting for unfinished jobs....
TEST iotest-qcow2: 019
TEST iotest-qcow2: 020
---
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--rm', '--label', 'com.qemu.instance.uuid=b033790bfc424d119959312ad4731adb', '-u', '1003', '--security-opt', 'seccomp=unconfined', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew2/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-2xrejy4s/src/docker-src.2020-09-28-21.51.04.32600:/var/tmp/qemu:z,ro', 'qemu/centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=b033790bfc424d119959312ad4731adb
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-2xrejy4s/src'
make: *** [docker-run-test-quick@centos7] Error 2
real 17m41.977s
user 0m21.950s
The full log is available at
http://patchew.org/logs/20200928200506.75441-1-wei.huang2@amd.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
* Re: [PATCH V1 2/3] amd-iommu: Sync IOVA-to-GPA translation during page invalidation
2020-09-28 20:05 ` [PATCH V1 2/3] amd-iommu: Sync IOVA-to-GPA translation during page invalidation Wei Huang
@ 2020-09-29 19:34 ` Alex Williamson
2020-09-30 20:43 ` Wei Huang
From: Alex Williamson @ 2020-09-29 19:34 UTC (permalink / raw)
To: Wei Huang
Cc: ehabkost, mst, qemu-devel, peterx, pbonzini,
Suravee.Suthikulpanit, rth
On Mon, 28 Sep 2020 15:05:05 -0500
Wei Huang <wei.huang2@amd.com> wrote:
> Add support to sync the IOVA-to-GPA translation at the time of IOMMU
> page invalidation. The sync is performed when either of two IOMMU
> commands, AMDVI_CMD_INVAL_AMDVI_PAGES or AMDVI_CMD_INVAL_AMDVI_ALL, is
> intercepted, and the registered address space notifiers are invoked
> accordingly.
>
> Co-developed-by: Wei Huang <wei.huang2@amd.com>
> Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
> ---
> hw/i386/amd_iommu.c | 177 ++++++++++++++++++++++++++++++++++++++++++++
> hw/i386/amd_iommu.h | 10 +++
> hw/vfio/common.c | 3 +-
> 3 files changed, 189 insertions(+), 1 deletion(-)
...
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 13471ae29436..243216499ce0 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
> @@ -346,7 +346,8 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
> * the VGA ROM space.
> */
> if (ioctl(container->fd, VFIO_IOMMU_MAP_DMA, &map) == 0 ||
> - (errno == EBUSY && vfio_dma_unmap(container, iova, size) == 0 &&
> + ((errno == EEXIST || errno == EBUSY) &&
> + vfio_dma_unmap(container, iova, size) == 0 &&
> ioctl(container->fd, VFIO_IOMMU_MAP_DMA, &map) == 0)) {
> return 0;
> }
This seems like it should be a separate patch. AFAICT the commit log
doesn't even hint at why this change is necessary. I think the -EBUSY
error pre-dates vIOMMU as well. Responding the same for an -EEXIST
almost suggests a coherency issue between QEMU and the kernel, or a
direct mapping replacement without an invalidation, which doesn't seem
to be what this patch is implementing. Thanks,
Alex
* Re: [PATCH V1 2/3] amd-iommu: Sync IOVA-to-GPA translation during page invalidation
2020-09-29 19:34 ` Alex Williamson
@ 2020-09-30 20:43 ` Wei Huang
From: Wei Huang @ 2020-09-30 20:43 UTC (permalink / raw)
To: Alex Williamson
Cc: ehabkost, mst, qemu-devel, peterx, pbonzini,
Suravee.Suthikulpanit, rth
On 09/29 01:34, Alex Williamson wrote:
> On Mon, 28 Sep 2020 15:05:05 -0500
> Wei Huang <wei.huang2@amd.com> wrote:
>
> > Add support to sync the IOVA-to-GPA translation at the time of IOMMU
> > page invalidation. The sync is performed when either of two IOMMU
> > commands, AMDVI_CMD_INVAL_AMDVI_PAGES or AMDVI_CMD_INVAL_AMDVI_ALL, is
> > intercepted, and the registered address space notifiers are invoked
> > accordingly.
> >
> > Co-developed-by: Wei Huang <wei.huang2@amd.com>
> > Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
> > ---
> > hw/i386/amd_iommu.c | 177 ++++++++++++++++++++++++++++++++++++++++++++
> > hw/i386/amd_iommu.h | 10 +++
> > hw/vfio/common.c | 3 +-
> > 3 files changed, 189 insertions(+), 1 deletion(-)
> ...
> > diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> > index 13471ae29436..243216499ce0 100644
> > --- a/hw/vfio/common.c
> > +++ b/hw/vfio/common.c
> > @@ -346,7 +346,8 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
> > * the VGA ROM space.
> > */
> > if (ioctl(container->fd, VFIO_IOMMU_MAP_DMA, &map) == 0 ||
> > - (errno == EBUSY && vfio_dma_unmap(container, iova, size) == 0 &&
> > + ((errno == EEXIST || errno == EBUSY) &&
> > + vfio_dma_unmap(container, iova, size) == 0 &&
> > ioctl(container->fd, VFIO_IOMMU_MAP_DMA, &map) == 0)) {
> > return 0;
> > }
>
>
> This seems like it should be a separate patch. AFAICT the commit log
> doesn't even hint at why this change is necessary. I think the -EBUSY
> error pre-dates vIOMMU as well. Responding the same for an -EEXIST
> almost suggests a coherency issue between QEMU and the kernel, or a
> direct mapping replacement without an invalidation, which doesn't seem
> to be what this patch is implementing. Thanks,
I went back and checked. Removing this check (i.e. restoring the original
code) didn't trigger any issues with an Intel 10G passthru NIC. I think this
was residual debugging code from when we started the implementation. Sorry
for the confusion. I will remove this code in V2 with more tests.
-Wei
>
> Alex
>