* [PATCH v14 0/4] vhost-vdpa: add support for vIOMMU
@ 2023-03-20 16:19 Cindy Lu
  2023-03-20 16:19 ` [PATCH v14 1/4] vhost: expose function vhost_dev_has_iommu() Cindy Lu
  ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Cindy Lu @ 2023-03-20 16:19 UTC (permalink / raw)
  To: lulu, jasowang, mst; +Cc: qemu-devel

These patches add support for vIOMMU in vdpa devices.

Changes in v3:
1. Move the function vfio_get_xlat_addr to memory.c.
2. Use the existing memory listener; when the MR is an IOMMU MR, call
   iommu_region_add()/iommu_region_del().

Changes in v4:
1. Make the comments in vfio_get_xlat_addr more general.

Changes in v5:
1. Address the comments on the last version.
2. Add a new argument to vfio_get_xlat_addr() indicating whether the
   memory is backed by a discard manager, so the device can emit its own
   warning.

Changes in v6:
Move the error_report for unpopulated discard back to
memory_get_xlat_addr().

Changes in v7:
Reword the error message to avoid duplicated information.

Changes in v8:
Reorganize the code following the comments on the last version.

Changes in v9:
Reorganize the code following the comments.

Changes in v10:
Address the comments.

Changes in v11:
Address the comments; fix a crash found in testing.

Changes in v12:
Address the comments; squash patch 1 into the next patch; fix code style
issues.

Changes in v13:
Fail to start if the vIOMMU and SVQ are enabled at the same time; fix
code style issues.

Changes in v14:
Address the comments.

Cindy Lu (4):
  vhost: expose function vhost_dev_has_iommu()
  vhost_vdpa: fix the input in trace_vhost_vdpa_listener_region_del()
  vhost-vdpa: Add check for full 64-bit in region delete
  vhost-vdpa: Add support for vIOMMU.

 hw/virtio/vhost-vdpa.c         | 172 +++++++++++++++++++++++++++++++--
 hw/virtio/vhost.c              |   2 +-
 include/hw/virtio/vhost-vdpa.h |  11 +++
 include/hw/virtio/vhost.h      |   1 +
 4 files changed, 175 insertions(+), 11 deletions(-)

-- 
2.34.3

^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v14 1/4] vhost: expose function vhost_dev_has_iommu()
  2023-03-20 16:19 [PATCH v14 0/4] vhost-vdpa: add support for vIOMMU Cindy Lu
@ 2023-03-20 16:19 ` Cindy Lu
  2023-03-21  2:30   ` Jason Wang
  2023-03-20 16:19 ` [PATCH v14 2/4] vhost_vdpa: fix the input in trace_vhost_vdpa_listener_region_del() Cindy Lu
  ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Cindy Lu @ 2023-03-20 16:19 UTC (permalink / raw)
  To: lulu, jasowang, mst; +Cc: qemu-devel

To support vIOMMU in vdpa, we need to expose the function
vhost_dev_has_iommu(); vdpa will use it to check whether the vIOMMU is
enabled.

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/vhost.c         | 2 +-
 include/hw/virtio/vhost.h | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index a266396576..fd746b085b 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -107,7 +107,7 @@ static void vhost_dev_sync_region(struct vhost_dev *dev,
     }
 }
 
-static bool vhost_dev_has_iommu(struct vhost_dev *dev)
+bool vhost_dev_has_iommu(struct vhost_dev *dev)
 {
     VirtIODevice *vdev = dev->vdev;
 
diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h
index a52f273347..f7f10c8fb7 100644
--- a/include/hw/virtio/vhost.h
+++ b/include/hw/virtio/vhost.h
@@ -336,4 +336,5 @@ int vhost_dev_set_inflight(struct vhost_dev *dev,
                            struct vhost_inflight *inflight);
 int vhost_dev_get_inflight(struct vhost_dev *dev, uint16_t queue_size,
                            struct vhost_inflight *inflight);
+bool vhost_dev_has_iommu(struct vhost_dev *dev);
 #endif
-- 
2.34.3

^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v14 1/4] vhost: expose function vhost_dev_has_iommu() 2023-03-20 16:19 ` [PATCH v14 1/4] vhost: expose function vhost_dev_has_iommu() Cindy Lu @ 2023-03-21 2:30 ` Jason Wang 0 siblings, 0 replies; 10+ messages in thread From: Jason Wang @ 2023-03-21 2:30 UTC (permalink / raw) To: Cindy Lu; +Cc: mst, qemu-devel On Tue, Mar 21, 2023 at 12:20 AM Cindy Lu <lulu@redhat.com> wrote: > > To support vIOMMU in vdpa, need to exposed the function > vhost_dev_has_iommu, vdpa will use this function to check > if vIOMMU enable. > > Signed-off-by: Cindy Lu <lulu@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Thanks > --- > hw/virtio/vhost.c | 2 +- > include/hw/virtio/vhost.h | 1 + > 2 files changed, 2 insertions(+), 1 deletion(-) > > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c > index a266396576..fd746b085b 100644 > --- a/hw/virtio/vhost.c > +++ b/hw/virtio/vhost.c > @@ -107,7 +107,7 @@ static void vhost_dev_sync_region(struct vhost_dev *dev, > } > } > > -static bool vhost_dev_has_iommu(struct vhost_dev *dev) > +bool vhost_dev_has_iommu(struct vhost_dev *dev) > { > VirtIODevice *vdev = dev->vdev; > > diff --git a/include/hw/virtio/vhost.h b/include/hw/virtio/vhost.h > index a52f273347..f7f10c8fb7 100644 > --- a/include/hw/virtio/vhost.h > +++ b/include/hw/virtio/vhost.h > @@ -336,4 +336,5 @@ int vhost_dev_set_inflight(struct vhost_dev *dev, > struct vhost_inflight *inflight); > int vhost_dev_get_inflight(struct vhost_dev *dev, uint16_t queue_size, > struct vhost_inflight *inflight); > +bool vhost_dev_has_iommu(struct vhost_dev *dev); > #endif > -- > 2.34.3 > ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v14 2/4] vhost_vdpa: fix the input in trace_vhost_vdpa_listener_region_del()
  2023-03-20 16:19 [PATCH v14 0/4] vhost-vdpa: add support for vIOMMU Cindy Lu
  2023-03-20 16:19 ` [PATCH v14 1/4] vhost: expose function vhost_dev_has_iommu() Cindy Lu
@ 2023-03-20 16:19 ` Cindy Lu
  2023-03-21  2:32   ` Jason Wang
  2023-03-20 16:19 ` [PATCH v14 3/4] vhost-vdpa: Add check for full 64-bit in region delete Cindy Lu
  2023-03-20 16:19 ` [PATCH v14 4/4] vhost-vdpa: Add support for vIOMMU Cindy Lu
  3 siblings, 1 reply; 10+ messages in thread
From: Cindy Lu @ 2023-03-20 16:19 UTC (permalink / raw)
  To: lulu, jasowang, mst; +Cc: qemu-devel

In trace_vhost_vdpa_listener_region_del(), the value logged for llend
should be changed to int128_get64(int128_sub(llend, int128_one())),
i.e. the inclusive end of the range.

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index bc6bad23d5..92c2413c76 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -288,7 +288,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
     iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
     llend = vhost_vdpa_section_end(section);
 
-    trace_vhost_vdpa_listener_region_del(v, iova, int128_get64(llend));
+    trace_vhost_vdpa_listener_region_del(v, iova,
+        int128_get64(int128_sub(llend, int128_one())));
 
     if (int128_ge(int128_make64(iova), llend)) {
         return;
-- 
2.34.3

^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v14 2/4] vhost_vdpa: fix the input in trace_vhost_vdpa_listener_region_del() 2023-03-20 16:19 ` [PATCH v14 2/4] vhost_vdpa: fix the input in trace_vhost_vdpa_listener_region_del() Cindy Lu @ 2023-03-21 2:32 ` Jason Wang 0 siblings, 0 replies; 10+ messages in thread From: Jason Wang @ 2023-03-21 2:32 UTC (permalink / raw) To: Cindy Lu; +Cc: mst, qemu-devel On Tue, Mar 21, 2023 at 12:20 AM Cindy Lu <lulu@redhat.com> wrote: > > In trace_vhost_vdpa_listener_region_del, the value for llend > should change to int128_get64(int128_sub(llend, int128_one())) > > Signed-off-by: Cindy Lu <lulu@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Thanks > --- > hw/virtio/vhost-vdpa.c | 3 ++- > 1 file changed, 2 insertions(+), 1 deletion(-) > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c > index bc6bad23d5..92c2413c76 100644 > --- a/hw/virtio/vhost-vdpa.c > +++ b/hw/virtio/vhost-vdpa.c > @@ -288,7 +288,8 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener, > iova = TARGET_PAGE_ALIGN(section->offset_within_address_space); > llend = vhost_vdpa_section_end(section); > > - trace_vhost_vdpa_listener_region_del(v, iova, int128_get64(llend)); > + trace_vhost_vdpa_listener_region_del(v, iova, > + int128_get64(int128_sub(llend, int128_one()))); > > if (int128_ge(int128_make64(iova), llend)) { > return; > -- > 2.34.3 > ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v14 3/4] vhost-vdpa: Add check for full 64-bit in region delete
  2023-03-20 16:19 [PATCH v14 0/4] vhost-vdpa: add support for vIOMMU Cindy Lu
  2023-03-20 16:19 ` [PATCH v14 1/4] vhost: expose function vhost_dev_has_iommu() Cindy Lu
  2023-03-20 16:19 ` [PATCH v14 2/4] vhost_vdpa: fix the input in trace_vhost_vdpa_listener_region_del() Cindy Lu
@ 2023-03-20 16:19 ` Cindy Lu
  2023-03-21  3:14   ` Jason Wang
  2023-03-20 16:19 ` [PATCH v14 4/4] vhost-vdpa: Add support for vIOMMU Cindy Lu
  3 siblings, 1 reply; 10+ messages in thread
From: Cindy Lu @ 2023-03-20 16:19 UTC (permalink / raw)
  To: lulu, jasowang, mst; +Cc: qemu-devel

The unmap ioctl doesn't accept a full 64-bit span, so we need to add a
check for the section's size in vhost_vdpa_listener_region_del().

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/vhost-vdpa.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 92c2413c76..0c8c37e786 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -316,10 +316,28 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
         vhost_iova_tree_remove(v->iova_tree, *result);
     }
     vhost_vdpa_iotlb_batch_begin_once(v);
+    /*
+     * The unmap ioctl doesn't accept a full 64-bit span; need to check it
+     */
+    if (int128_eq(llsize, int128_2_64())) {
+        llsize = int128_rshift(llsize, 1);
+        ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+                                   int128_get64(llsize));
+
+        if (ret) {
+            error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
+                         "0x%" HWADDR_PRIx ") = %d (%m)",
+                         v, iova, int128_get64(llsize), ret);
+        }
+        iova += int128_get64(llsize);
+    }
     ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
                                int128_get64(llsize));
+
     if (ret) {
-        error_report("vhost_vdpa dma unmap error!");
+        error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
+                     "0x%" HWADDR_PRIx ") = %d (%m)",
+                     v, iova, int128_get64(llsize), ret);
     }
 
     memory_region_unref(section->mr);
-- 
2.34.3

^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v14 3/4] vhost-vdpa: Add check for full 64-bit in region delete 2023-03-20 16:19 ` [PATCH v14 3/4] vhost-vdpa: Add check for full 64-bit in region delete Cindy Lu @ 2023-03-21 3:14 ` Jason Wang 0 siblings, 0 replies; 10+ messages in thread From: Jason Wang @ 2023-03-21 3:14 UTC (permalink / raw) To: Cindy Lu; +Cc: mst, qemu-devel On Tue, Mar 21, 2023 at 12:20 AM Cindy Lu <lulu@redhat.com> wrote: > > The unmap ioctl doesn't accept a full 64-bit span. So need to > add check for the section's size in vhost_vdpa_listener_region_del(). > > Signed-off-by: Cindy Lu <lulu@redhat.com> Acked-by: Jason Wang <jasowang@redhat.com> Thanks > --- > hw/virtio/vhost-vdpa.c | 20 +++++++++++++++++++- > 1 file changed, 19 insertions(+), 1 deletion(-) > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c > index 92c2413c76..0c8c37e786 100644 > --- a/hw/virtio/vhost-vdpa.c > +++ b/hw/virtio/vhost-vdpa.c > @@ -316,10 +316,28 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener, > vhost_iova_tree_remove(v->iova_tree, *result); > } > vhost_vdpa_iotlb_batch_begin_once(v); > + /* > + * The unmap ioctl doesn't accept a full 64-bit. need to check it > + */ > + if (int128_eq(llsize, int128_2_64())) { > + llsize = int128_rshift(llsize, 1); > + ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova, > + int128_get64(llsize)); > + > + if (ret) { > + error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", " > + "0x%" HWADDR_PRIx ") = %d (%m)", > + v, iova, int128_get64(llsize), ret); > + } > + iova += int128_get64(llsize); > + } > ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova, > int128_get64(llsize)); > + > if (ret) { > - error_report("vhost_vdpa dma unmap error!"); > + error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", " > + "0x%" HWADDR_PRIx ") = %d (%m)", > + v, iova, int128_get64(llsize), ret); > } > > memory_region_unref(section->mr); > -- > 2.34.3 > ^ permalink raw reply [flat|nested] 10+ messages in thread
* [PATCH v14 4/4] vhost-vdpa: Add support for vIOMMU.
  2023-03-20 16:19 [PATCH v14 0/4] vhost-vdpa: add support for vIOMMU Cindy Lu
  ` (2 preceding siblings ...)
  2023-03-20 16:19 ` [PATCH v14 3/4] vhost-vdpa: Add check for full 64-bit in region delete Cindy Lu
@ 2023-03-20 16:19 ` Cindy Lu
  2023-03-21  3:21   ` Jason Wang
  3 siblings, 1 reply; 10+ messages in thread
From: Cindy Lu @ 2023-03-20 16:19 UTC (permalink / raw)
  To: lulu, jasowang, mst; +Cc: qemu-devel

1. vIOMMU support allows vDPA to work in IOMMU mode, which fixes
security issues present in the no-IOMMU mode. To support this feature
we need to add new functions for IOMMU MR adds and deletes.

Also, since SVQ does not support vIOMMU yet, add a check for the IOMMU
in vhost_vdpa_dev_start(); if SVQ and the IOMMU are enabled at the same
time, the function will fail.

2. Skip the iova_max check in vhost_vdpa_listener_skipped_section().
When the MR is an IOMMU MR, move this check to
vhost_vdpa_iommu_map_notify().

Verified with the vp_vdpa and vdpa_sim_net drivers.

Signed-off-by: Cindy Lu <lulu@redhat.com>
---
 hw/virtio/vhost-vdpa.c         | 149 +++++++++++++++++++++++++++++++--
 include/hw/virtio/vhost-vdpa.h |  11 +++
 2 files changed, 152 insertions(+), 8 deletions(-)

diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
index 0c8c37e786..b36922b365 100644
--- a/hw/virtio/vhost-vdpa.c
+++ b/hw/virtio/vhost-vdpa.c
@@ -26,6 +26,7 @@
 #include "cpu.h"
 #include "trace.h"
 #include "qapi/error.h"
+#include "hw/virtio/virtio-access.h"
 
 /*
  * Return one past the end of the end of section. Be careful with uint64_t
@@ -60,15 +61,22 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section,
                      iova_min, section->offset_within_address_space);
         return true;
     }
+    /*
+     * While using vIOMMU, sometimes the section will be larger than iova_max,
+     * but the memory that actually maps is smaller, so move the check to
+     * function vhost_vdpa_iommu_map_notify(). That function will use the
+     * actual size that maps to the kernel.
+     */
 
-    llend = vhost_vdpa_section_end(section);
-    if (int128_gt(llend, int128_make64(iova_max))) {
-        error_report("RAM section out of device range (max=0x%" PRIx64
-                     ", end addr=0x%" PRIx64 ")",
-                     iova_max, int128_get64(llend));
-        return true;
+    if (!memory_region_is_iommu(section->mr)) {
+        llend = vhost_vdpa_section_end(section);
+        if (int128_gt(llend, int128_make64(iova_max))) {
+            error_report("RAM section out of device range (max=0x%" PRIx64
+                         ", end addr=0x%" PRIx64 ")",
+                         iova_max, int128_get64(llend));
+            return true;
+        }
     }
-
     return false;
 }
 
@@ -185,6 +193,118 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener)
     v->iotlb_batch_begin_sent = false;
 }
 
+static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb)
+{
+    struct vdpa_iommu *iommu = container_of(n, struct vdpa_iommu, n);
+
+    hwaddr iova = iotlb->iova + iommu->iommu_offset;
+    struct vhost_vdpa *v = iommu->dev;
+    void *vaddr;
+    int ret;
+    Int128 llend;
+
+    if (iotlb->target_as != &address_space_memory) {
+        error_report("Wrong target AS \"%s\", only system memory is allowed",
+                     iotlb->target_as->name ? iotlb->target_as->name : "none");
+        return;
+    }
+    RCU_READ_LOCK_GUARD();
+    /* check if RAM section out of device range */
+    llend = int128_add(int128_makes64(iotlb->addr_mask), int128_makes64(iova));
+    if (int128_gt(llend, int128_make64(v->iova_range.last))) {
+        error_report("RAM section out of device range (max=0x%" PRIx64
+                     ", end addr=0x%" PRIx64 ")",
+                     v->iova_range.last, int128_get64(llend));
+        return;
+    }
+
+    vhost_vdpa_iotlb_batch_begin_once(v);
+
+    if ((iotlb->perm & IOMMU_RW) != IOMMU_NONE) {
+        bool read_only;
+
+        if (!memory_get_xlat_addr(iotlb, &vaddr, NULL, &read_only, NULL)) {
+            return;
+        }
+
+        ret = vhost_vdpa_dma_map(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+                                 iotlb->addr_mask + 1, vaddr, read_only);
+        if (ret) {
+            error_report("vhost_vdpa_dma_map(%p, 0x%" HWADDR_PRIx ", "
+                         "0x%" HWADDR_PRIx ", %p) = %d (%m)",
+                         v, iova, iotlb->addr_mask + 1, vaddr, ret);
+        }
+    } else {
+        ret = vhost_vdpa_dma_unmap(v, VHOST_VDPA_GUEST_PA_ASID, iova,
+                                   iotlb->addr_mask + 1);
+        if (ret) {
+            error_report("vhost_vdpa_dma_unmap(%p, 0x%" HWADDR_PRIx ", "
+                         "0x%" HWADDR_PRIx ") = %d (%m)",
+                         v, iova, iotlb->addr_mask + 1, ret);
+        }
+    }
+}
+
+static void vhost_vdpa_iommu_region_add(MemoryListener *listener,
+                                        MemoryRegionSection *section)
+{
+    struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
+
+    struct vdpa_iommu *iommu;
+    Int128 end;
+    int iommu_idx;
+    IOMMUMemoryRegion *iommu_mr;
+    int ret;
+
+    iommu_mr = IOMMU_MEMORY_REGION(section->mr);
+
+    iommu = g_malloc0(sizeof(*iommu));
+    end = int128_add(int128_make64(section->offset_within_region),
+                     section->size);
+    end = int128_sub(end, int128_one());
+    iommu_idx = memory_region_iommu_attrs_to_index(iommu_mr,
+                                                   MEMTXATTRS_UNSPECIFIED);
+    iommu->iommu_mr = iommu_mr;
+    iommu_notifier_init(&iommu->n, vhost_vdpa_iommu_map_notify,
+                        IOMMU_NOTIFIER_IOTLB_EVENTS,
+                        section->offset_within_region,
+                        int128_get64(end),
+                        iommu_idx);
+    iommu->iommu_offset = section->offset_within_address_space -
+                          section->offset_within_region;
+    iommu->dev = v;
+
+    ret = memory_region_register_iommu_notifier(section->mr, &iommu->n, NULL);
+    if (ret) {
+        g_free(iommu);
+        return;
+    }
+
+    QLIST_INSERT_HEAD(&v->iommu_list, iommu, iommu_next);
+    memory_region_iommu_replay(iommu->iommu_mr, &iommu->n);
+
+    return;
+}
+
+static void vhost_vdpa_iommu_region_del(MemoryListener *listener,
+                                        MemoryRegionSection *section)
+{
+    struct vhost_vdpa *v = container_of(listener, struct vhost_vdpa, listener);
+
+    struct vdpa_iommu *iommu;
+
+    QLIST_FOREACH(iommu, &v->iommu_list, iommu_next)
+    {
+        if (MEMORY_REGION(iommu->iommu_mr) == section->mr &&
+            iommu->n.start == section->offset_within_region) {
+            memory_region_unregister_iommu_notifier(section->mr, &iommu->n);
+            QLIST_REMOVE(iommu, iommu_next);
+            g_free(iommu);
+            break;
+        }
+    }
+}
+
 static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                            MemoryRegionSection *section)
 {
@@ -199,6 +319,10 @@ static void vhost_vdpa_listener_region_add(MemoryListener *listener,
                                             v->iova_range.last)) {
         return;
     }
+    if (memory_region_is_iommu(section->mr)) {
+        vhost_vdpa_iommu_region_add(listener, section);
+        return;
+    }
 
     if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
                  (section->offset_within_region & ~TARGET_PAGE_MASK))) {
@@ -278,6 +402,9 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
                                             v->iova_range.last)) {
         return;
     }
+    if (memory_region_is_iommu(section->mr)) {
+        vhost_vdpa_iommu_region_del(listener, section);
+    }
 
     if (unlikely((section->offset_within_address_space & ~TARGET_PAGE_MASK) !=
                  (section->offset_within_region & ~TARGET_PAGE_MASK))) {
@@ -1182,7 +1309,13 @@ static int vhost_vdpa_dev_start(struct vhost_dev *dev, bool started)
     }
 
     if (started) {
-        memory_listener_register(&v->listener, &address_space_memory);
+        if (vhost_dev_has_iommu(dev) && (v->shadow_vqs_enabled)) {
+            error_report("SVQ cannot work when the IOMMU is enabled; please "
+                         "disable the IOMMU and try again");
+            return -1;
+        }
+        memory_listener_register(&v->listener, dev->vdev->dma_as);
+
         return vhost_vdpa_add_status(dev, VIRTIO_CONFIG_S_DRIVER_OK);
     }
 
diff --git a/include/hw/virtio/vhost-vdpa.h b/include/hw/virtio/vhost-vdpa.h
index c278a2a8de..e64bfc7f98 100644
--- a/include/hw/virtio/vhost-vdpa.h
+++ b/include/hw/virtio/vhost-vdpa.h
@@ -52,6 +52,8 @@ typedef struct vhost_vdpa {
     struct vhost_dev *dev;
     Error *migration_blocker;
     VhostVDPAHostNotifier notifier[VIRTIO_QUEUE_MAX];
+    QLIST_HEAD(, vdpa_iommu) iommu_list;
+    IOMMUNotifier n;
 } VhostVDPA;
 
 int vhost_vdpa_get_iova_range(int fd, struct vhost_vdpa_iova_range *iova_range);
@@ -61,4 +63,13 @@ int vhost_vdpa_dma_map(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
 int vhost_vdpa_dma_unmap(struct vhost_vdpa *v, uint32_t asid, hwaddr iova,
                          hwaddr size);
 
+typedef struct vdpa_iommu {
+    struct vhost_vdpa *dev;
+    IOMMUMemoryRegion *iommu_mr;
+    hwaddr iommu_offset;
+    IOMMUNotifier n;
+    QLIST_ENTRY(vdpa_iommu) iommu_next;
+} VDPAIOMMUState;
+
+
 #endif
-- 
2.34.3

^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH v14 4/4] vhost-vdpa: Add support for vIOMMU. 2023-03-20 16:19 ` [PATCH v14 4/4] vhost-vdpa: Add support for vIOMMU Cindy Lu @ 2023-03-21 3:21 ` Jason Wang 2023-03-21 8:20 ` Cindy Lu 0 siblings, 1 reply; 10+ messages in thread From: Jason Wang @ 2023-03-21 3:21 UTC (permalink / raw) To: Cindy Lu; +Cc: mst, qemu-devel On Tue, Mar 21, 2023 at 12:20 AM Cindy Lu <lulu@redhat.com> wrote: > > 1. The vIOMMU support will make vDPA can work in IOMMU mode. This > will fix security issues while using the no-IOMMU mode. > To support this feature we need to add new functions for IOMMU MR adds and > deletes. > > Also since the SVQ does not support vIOMMU yet, add the check for IOMMU > in vhost_vdpa_dev_start, if the SVQ and IOMMU enable at the same time > the function will return fail. > > 2. Skip the iova_max check vhost_vdpa_listener_skipped_section(). While > MR is IOMMU, move this check to vhost_vdpa_iommu_map_notify() > > Verified in vp_vdpa and vdpa_sim_net driver > > Signed-off-by: Cindy Lu <lulu@redhat.com> > --- > hw/virtio/vhost-vdpa.c | 149 +++++++++++++++++++++++++++++++-- > include/hw/virtio/vhost-vdpa.h | 11 +++ > 2 files changed, 152 insertions(+), 8 deletions(-) > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c > index 0c8c37e786..b36922b365 100644 > --- a/hw/virtio/vhost-vdpa.c > +++ b/hw/virtio/vhost-vdpa.c > @@ -26,6 +26,7 @@ > #include "cpu.h" > #include "trace.h" > #include "qapi/error.h" > +#include "hw/virtio/virtio-access.h" > > /* > * Return one past the end of the end of section. Be careful with uint64_t > @@ -60,15 +61,22 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section, > iova_min, section->offset_within_address_space); > return true; > } > + /* > + * While using vIOMMU, sometimes the section will be larger than iova_max, > + * but the memory that actually maps is smaller, so move the check to > + * function vhost_vdpa_iommu_map_notify(). 
That function will use the actual > + * size that maps to the kernel > + */ > > - llend = vhost_vdpa_section_end(section); > - if (int128_gt(llend, int128_make64(iova_max))) { > - error_report("RAM section out of device range (max=0x%" PRIx64 > - ", end addr=0x%" PRIx64 ")", > - iova_max, int128_get64(llend)); > - return true; > + if (!memory_region_is_iommu(section->mr)) { > + llend = vhost_vdpa_section_end(section); > + if (int128_gt(llend, int128_make64(iova_max))) { > + error_report("RAM section out of device range (max=0x%" PRIx64 > + ", end addr=0x%" PRIx64 ")", > + iova_max, int128_get64(llend)); > + return true; > + } > } > - Unnecessary changes. > return false; > } > > @@ -185,6 +193,118 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener) > v->iotlb_batch_begin_sent = false; > } > > +static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb) > +{ > + struct vdpa_iommu *iommu = container_of(n, struct vdpa_iommu, n); > + > + hwaddr iova = iotlb->iova + iommu->iommu_offset; > + struct vhost_vdpa *v = iommu->dev; > + void *vaddr; > + int ret; > + Int128 llend; > + > + if (iotlb->target_as != &address_space_memory) { > + error_report("Wrong target AS \"%s\", only system memory is allowed", > + iotlb->target_as->name ? 
iotlb->target_as->name : "none"); > + return; > + } > + RCU_READ_LOCK_GUARD(); > + /* check if RAM section out of device range */ > + llend = int128_add(int128_makes64(iotlb->addr_mask), int128_makes64(iova)); > + if (int128_gt(llend, int128_make64(v->iova_range.last))) { > + error_report("RAM section out of device range (max=0x%" PRIx64 > + ", end addr=0x%" PRIx64 ")", > + v->iova_range.last, int128_get64(llend)); > + return; > + } > + > + vhost_vdpa_iotlb_batch_begin_once(v); Quoted from you answer in V1: " the VHOST_IOTLB_BATCH_END message was send by vhost_vdpa_listener_commit, because we only use one vhost_vdpa_memory_listener and no-iommu mode will also need to use this listener, So we still need to add the batch begin here, based on my testing after the notify function was called, the listener_commit function was also called .so it works well in this situation " This assumes the map_notify to be called within the memory transactions which is not necessarily the case. I think it could be triggered when guest tries to establish a new mapping in the vIOMMU. In this case there's no memory transactions at all? Thanks ^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [PATCH v14 4/4] vhost-vdpa: Add support for vIOMMU. 2023-03-21 3:21 ` Jason Wang @ 2023-03-21 8:20 ` Cindy Lu 0 siblings, 0 replies; 10+ messages in thread From: Cindy Lu @ 2023-03-21 8:20 UTC (permalink / raw) To: Jason Wang; +Cc: mst, qemu-devel On Tue, Mar 21, 2023 at 11:21 AM Jason Wang <jasowang@redhat.com> wrote: > > On Tue, Mar 21, 2023 at 12:20 AM Cindy Lu <lulu@redhat.com> wrote: > > > > 1. The vIOMMU support will make vDPA can work in IOMMU mode. This > > will fix security issues while using the no-IOMMU mode. > > To support this feature we need to add new functions for IOMMU MR adds and > > deletes. > > > > Also since the SVQ does not support vIOMMU yet, add the check for IOMMU > > in vhost_vdpa_dev_start, if the SVQ and IOMMU enable at the same time > > the function will return fail. > > > > 2. Skip the iova_max check vhost_vdpa_listener_skipped_section(). While > > MR is IOMMU, move this check to vhost_vdpa_iommu_map_notify() > > > > Verified in vp_vdpa and vdpa_sim_net driver > > > > Signed-off-by: Cindy Lu <lulu@redhat.com> > > --- > > hw/virtio/vhost-vdpa.c | 149 +++++++++++++++++++++++++++++++-- > > include/hw/virtio/vhost-vdpa.h | 11 +++ > > 2 files changed, 152 insertions(+), 8 deletions(-) > > > > diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c > > index 0c8c37e786..b36922b365 100644 > > --- a/hw/virtio/vhost-vdpa.c > > +++ b/hw/virtio/vhost-vdpa.c > > @@ -26,6 +26,7 @@ > > #include "cpu.h" > > #include "trace.h" > > #include "qapi/error.h" > > +#include "hw/virtio/virtio-access.h" > > > > /* > > * Return one past the end of the end of section. 
Be careful with uint64_t > > @@ -60,15 +61,22 @@ static bool vhost_vdpa_listener_skipped_section(MemoryRegionSection *section, > > iova_min, section->offset_within_address_space); > > return true; > > } > > + /* > > + * While using vIOMMU, sometimes the section will be larger than iova_max, > > + * but the memory that actually maps is smaller, so move the check to > > + * function vhost_vdpa_iommu_map_notify(). That function will use the actual > > + * size that maps to the kernel > > + */ > > > > - llend = vhost_vdpa_section_end(section); > > - if (int128_gt(llend, int128_make64(iova_max))) { > > - error_report("RAM section out of device range (max=0x%" PRIx64 > > - ", end addr=0x%" PRIx64 ")", > > - iova_max, int128_get64(llend)); > > - return true; > > + if (!memory_region_is_iommu(section->mr)) { > > + llend = vhost_vdpa_section_end(section); > > + if (int128_gt(llend, int128_make64(iova_max))) { > > + error_report("RAM section out of device range (max=0x%" PRIx64 > > + ", end addr=0x%" PRIx64 ")", > > + iova_max, int128_get64(llend)); > > + return true; > > + } > > } > > - > > Unnecessary changes. > will fix this > > return false; > > } > > > > @@ -185,6 +193,118 @@ static void vhost_vdpa_listener_commit(MemoryListener *listener) > > v->iotlb_batch_begin_sent = false; > > } > > > > +static void vhost_vdpa_iommu_map_notify(IOMMUNotifier *n, IOMMUTLBEntry *iotlb) > > +{ > > + struct vdpa_iommu *iommu = container_of(n, struct vdpa_iommu, n); > > + > > + hwaddr iova = iotlb->iova + iommu->iommu_offset; > > + struct vhost_vdpa *v = iommu->dev; > > + void *vaddr; > > + int ret; > > + Int128 llend; > > + > > + if (iotlb->target_as != &address_space_memory) { > > + error_report("Wrong target AS \"%s\", only system memory is allowed", > > + iotlb->target_as->name ? 
iotlb->target_as->name : "none"); > > + return; > > + } > > + RCU_READ_LOCK_GUARD(); > > + /* check if RAM section out of device range */ > > + llend = int128_add(int128_makes64(iotlb->addr_mask), int128_makes64(iova)); > > + if (int128_gt(llend, int128_make64(v->iova_range.last))) { > > + error_report("RAM section out of device range (max=0x%" PRIx64 > > + ", end addr=0x%" PRIx64 ")", > > + v->iova_range.last, int128_get64(llend)); > > + return; > > + } > > + > > + vhost_vdpa_iotlb_batch_begin_once(v); > > Quoted from you answer in V1: > > " > the VHOST_IOTLB_BATCH_END message was send by > vhost_vdpa_listener_commit, because we only use > one vhost_vdpa_memory_listener and no-iommu mode will also need to use > this listener, So we still need to add the batch begin here, based on > my testing after the notify function was called, the listener_commit > function was also called .so it works well in this situation > " > > This assumes the map_notify to be called within the memory > transactions which is not necessarily the case. > > I think it could be triggered when guest tries to establish a new > mapping in the vIOMMU. In this case there's no memory transactions at > all? > sure, thanks will fix this > Thanks > ^ permalink raw reply [flat|nested] 10+ messages in thread