* [PATCH v8 1/8] hw/vfio/common: Remove error print on mmio region translation by viommu
2020-03-18 10:11 [PATCH v8 0/8] virtio-iommu: VFIO integration Bharat Bhushan
@ 2020-03-18 10:11 ` Bharat Bhushan
2020-03-18 10:11 ` [PATCH v8 2/8] memory: Add interface to set iommu page size mask Bharat Bhushan
` (7 subsequent siblings)
8 siblings, 0 replies; 13+ messages in thread
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Eric Auger, Bharat Bhushan
On ARM, the MSI doorbell is translated by the virtual IOMMU.
As such, address_space_translate() returns the MSI controller
MMIO region and we get an "iommu map to non memory area"
message. Let's remove the latter.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
---
hw/vfio/common.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index 5ca11488d6..c586edf47a 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -426,8 +426,6 @@ static bool vfio_get_vaddr(IOMMUTLBEntry *iotlb, void **vaddr,
&xlat, &len, writable,
MEMTXATTRS_UNSPECIFIED);
if (!memory_region_is_ram(mr)) {
- error_report("iommu map to non memory area %"HWADDR_PRIx"",
- xlat);
return false;
}
--
2.17.1
* [PATCH v8 2/8] memory: Add interface to set iommu page size mask
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Bharat Bhushan
Allow the page size mask supported by the iommu to be set.
This is required so that virtio-iommu can expose a page size
mask compatible with the host.
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
---
v7->v8:
- new patch
include/exec/memory.h | 20 ++++++++++++++++++++
memory.c | 10 ++++++++++
2 files changed, 30 insertions(+)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index e85b7de99a..063c424854 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -355,6 +355,16 @@ typedef struct IOMMUMemoryRegionClass {
* @iommu: the IOMMUMemoryRegion
*/
int (*num_indexes)(IOMMUMemoryRegion *iommu);
+
+ /*
+ * Set the supported IOMMU page sizes
+ *
+ * Optional method: if implemented, set the page size mask that the
+ * IOMMU can support. This is called to restrict the supported page
+ * sizes to those supported by the host.
+ */
+ void (*iommu_set_page_size_mask)(IOMMUMemoryRegion *iommu,
+ uint64_t page_size_mask);
} IOMMUMemoryRegionClass;
typedef struct CoalescedMemoryRange CoalescedMemoryRange;
@@ -1363,6 +1373,16 @@ int memory_region_iommu_attrs_to_index(IOMMUMemoryRegion *iommu_mr,
*/
int memory_region_iommu_num_indexes(IOMMUMemoryRegion *iommu_mr);
+/**
+ * memory_region_iommu_set_page_size_mask: set the page sizes
+ * supported by the iommu.
+ *
+ * @iommu_mr: the memory region
+ * @page_size_mask: supported page size mask
+ */
+void memory_region_iommu_set_page_size_mask(IOMMUMemoryRegion *iommu_mr,
+ uint64_t page_size_mask);
+
/**
* memory_region_name: get a memory region's name
*
diff --git a/memory.c b/memory.c
index aeaa8dcc9e..14c8783084 100644
--- a/memory.c
+++ b/memory.c
@@ -1833,6 +1833,16 @@ static int memory_region_update_iommu_notify_flags(IOMMUMemoryRegion *iommu_mr,
return ret;
}
+void memory_region_iommu_set_page_size_mask(IOMMUMemoryRegion *iommu_mr,
+ uint64_t page_size_mask)
+{
+ IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_GET_CLASS(iommu_mr);
+
+ if (imrc->iommu_set_page_size_mask) {
+ imrc->iommu_set_page_size_mask(iommu_mr, page_size_mask);
+ }
+}
+
int memory_region_register_iommu_notifier(MemoryRegion *mr,
IOMMUNotifier *n, Error **errp)
{
--
2.17.1
* [PATCH v8 3/8] vfio: set iommu page size as per host supported page size
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Bharat Bhushan
Set the iommu supported page size mask to be the same as the
host Linux supported page size mask.
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
---
v7->v8:
- new patch
hw/vfio/common.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index c586edf47a..6ea50d696f 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -635,6 +635,9 @@ static void vfio_listener_region_add(MemoryListener *listener,
int128_get64(llend),
iommu_idx);
+ memory_region_iommu_set_page_size_mask(giommu->iommu,
+ container->pgsizes);
+
ret = memory_region_register_iommu_notifier(section->mr, &giommu->n,
&err);
if (ret) {
--
2.17.1
* [PATCH v8 4/8] virtio-iommu: set supported page size mask
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Bharat Bhushan
Add an optional interface to set the page size mask.
Currently this sets the global configuration and is
not per endpoint.
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
---
v7->v8:
- new patch
hw/virtio/virtio-iommu.c | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 4cee8083bc..c00a55348d 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -650,6 +650,15 @@ static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
return (ua > ub) - (ua < ub);
}
+static void virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
+ uint64_t page_size_mask)
+{
+ IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
+ VirtIOIOMMU *s = sdev->viommu;
+
+ s->config.page_size_mask = page_size_mask;
+}
+
static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -865,6 +874,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
imrc->translate = virtio_iommu_translate;
+ imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
}
static const TypeInfo virtio_iommu_info = {
--
2.17.1
* Re: [PATCH v8 4/8] virtio-iommu: set supported page size mask
From: Auger Eric @ 2020-03-18 11:28 UTC (permalink / raw)
To: Bharat Bhushan, peter.maydell, peterx, eric.auger.pro,
alex.williamson, kevin.tian, mst, tnowicki, drjones, linuc.decode,
qemu-devel, qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Hi Bharat,
On 3/18/20 11:11 AM, Bharat Bhushan wrote:
> Add optional interface to set page size mask.
> Currently this is set global configuration and not
> per endpoint.
>
> Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> ---
> v7->v8:
> - new patch
>
> hw/virtio/virtio-iommu.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> index 4cee8083bc..c00a55348d 100644
> --- a/hw/virtio/virtio-iommu.c
> +++ b/hw/virtio/virtio-iommu.c
> @@ -650,6 +650,15 @@ static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
> return (ua > ub) - (ua < ub);
> }
>
> +static void virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
> + uint64_t page_size_mask)
> +{
> + IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
> + VirtIOIOMMU *s = sdev->viommu;
> +
> + s->config.page_size_mask = page_size_mask;
The problem is page_size_mask is global to the VIRTIO-IOMMU.
- Can't different VFIO containers impose different/inconsistent settings?
- VFIO devices can be hotplugged. So we may start with some default
page_size_mask which is latter overriden by a host imposed one. Assume
you first launch the VM with a virtio NIC. This uses 64K. Then you
hotplug a VFIO device behind a physical IOMMU which only supports 4K
pages. Isn't it a valid scenario?
Thanks
Eric
> +}
> +
> static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
> {
> VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> @@ -865,6 +874,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
> IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
>
> imrc->translate = virtio_iommu_translate;
> + imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
> }
>
> static const TypeInfo virtio_iommu_info = {
>
* Re: [PATCH v8 4/8] virtio-iommu: set supported page size mask
From: Bharat Bhushan @ 2020-03-18 14:35 UTC (permalink / raw)
To: Auger Eric
Cc: Zhong, Yang, Peter Maydell, kevin.tian, tnowicki, mst, drjones,
peterx, qemu-devel, alex.williamson, qemu-arm,
Jean-Philippe Brucker, Bharat Bhushan, linuc.decode,
eric.auger.pro
Hi Eric,
On Wed, Mar 18, 2020 at 4:58 PM Auger Eric <eric.auger@redhat.com> wrote:
>
> Hi Bharat,
>
> On 3/18/20 11:11 AM, Bharat Bhushan wrote:
> > Add optional interface to set page size mask.
> > Currently this is set global configuration and not
> > per endpoint.
> >
> > Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> > ---
> > v7->v8:
> > - new patch
> >
> > hw/virtio/virtio-iommu.c | 10 ++++++++++
> > 1 file changed, 10 insertions(+)
> >
> > diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> > index 4cee8083bc..c00a55348d 100644
> > --- a/hw/virtio/virtio-iommu.c
> > +++ b/hw/virtio/virtio-iommu.c
> > @@ -650,6 +650,15 @@ static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
> > return (ua > ub) - (ua < ub);
> > }
> >
> > +static void virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
> > + uint64_t page_size_mask)
> > +{
> > + IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
> > + VirtIOIOMMU *s = sdev->viommu;
> > +
> > + s->config.page_size_mask = page_size_mask;
> The problem is page_size_mask is global to the VIRTIO-IOMMU.
>
> - Can't different VFIO containers impose different/inconsistent settings?
> - VFIO devices can be hotplugged.
This is possible if we have different IOMMUs, which we support, correct?
> So we may start with some default
> page_size_mask which is latter overriden by a host imposed one. Assume
> you first launch the VM with a virtio NIC. This uses 64K. Then you
> hotplug a VFIO device behind a physical IOMMU which only supports 4K
> pages. Isn't it a valid scenario?
So we need to expose page_size_mask per endpoint?
Thanks
-Bharat
>
> Thanks
>
> Eric
>
> > +}
> > +
> > static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
> > {
> > VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > @@ -865,6 +874,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
> > IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
> >
> > imrc->translate = virtio_iommu_translate;
> > + imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
> > }
> >
> > static const TypeInfo virtio_iommu_info = {
> >
>
* Re: [PATCH v8 4/8] virtio-iommu: set supported page size mask
From: Bharat Bhushan @ 2020-03-23 8:43 UTC (permalink / raw)
To: Auger Eric
Cc: Zhong, Yang, Peter Maydell, kevin.tian, tnowicki, mst, drjones,
peterx, qemu-devel, alex.williamson, qemu-arm,
Jean-Philippe Brucker, Bharat Bhushan, linuc.decode,
eric.auger.pro
Hi Eric/Jean,
On Wed, Mar 18, 2020 at 8:05 PM Bharat Bhushan <bharatb.linux@gmail.com> wrote:
>
> Hi Eric,
>
> On Wed, Mar 18, 2020 at 4:58 PM Auger Eric <eric.auger@redhat.com> wrote:
> >
> > Hi Bharat,
> >
> > On 3/18/20 11:11 AM, Bharat Bhushan wrote:
> > > Add optional interface to set page size mask.
> > > Currently this is set global configuration and not
> > > per endpoint.
> > >
> > > Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
> > > ---
> > > v7->v8:
> > > - new patch
> > >
> > > hw/virtio/virtio-iommu.c | 10 ++++++++++
> > > 1 file changed, 10 insertions(+)
> > >
> > > diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
> > > index 4cee8083bc..c00a55348d 100644
> > > --- a/hw/virtio/virtio-iommu.c
> > > +++ b/hw/virtio/virtio-iommu.c
> > > @@ -650,6 +650,15 @@ static gint int_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
> > > return (ua > ub) - (ua < ub);
> > > }
> > >
> > > +static void virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
> > > + uint64_t page_size_mask)
> > > +{
> > > + IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
> > > + VirtIOIOMMU *s = sdev->viommu;
> > > +
> > > + s->config.page_size_mask = page_size_mask;
> > The problem is page_size_mask is global to the VIRTIO-IOMMU.
> >
> > - Can't different VFIO containers impose different/inconsistent settings?
> > - VFIO devices can be hotplugged.
>
> This is possible if we different iommu's, which we support. correct?
>
> > So we may start with some default
> > page_size_mask which is latter overriden by a host imposed one. Assume
> > you first launch the VM with a virtio NIC. This uses 64K. Then you
> > hotplug a VFIO device behind a physical IOMMU which only supports 4K
> > pages. Isn't it a valid scenario?
>
> So we need to expose page_size_mask per endpoint?
Just sent a Linux RFC patch to use the page-size-mask per endpoint.
The QEMU changes are also ready; will share soon.
Thanks
-Bharat
>
> Thanks
> -Bharat
>
> >
> > Thanks
> >
> > Eric
> >
> > > +}
> > > +
> > > static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
> > > {
> > > VirtIODevice *vdev = VIRTIO_DEVICE(dev);
> > > @@ -865,6 +874,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
> > > IOMMUMemoryRegionClass *imrc = IOMMU_MEMORY_REGION_CLASS(klass);
> > >
> > > imrc->translate = virtio_iommu_translate;
> > > + imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
> > > }
> > >
> > > static const TypeInfo virtio_iommu_info = {
> > >
> >
* [PATCH v8 5/8] virtio-iommu: Add iommu notifier for map/unmap
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Eric Auger, Bharat Bhushan, Bharat Bhushan
This patch extends the VIRTIO_IOMMU_T_MAP/UNMAP requests to
notify the registered iommu-notifiers, which in turn call the
vfio notifier to map/unmap the region in the iommu.
Signed-off-by: Bharat Bhushan <Bharat.Bhushan@nxp.com>
Signed-off-by: Eric Auger <eric.auger@redhat.com>
---
include/hw/virtio/virtio-iommu.h | 2 +
hw/virtio/virtio-iommu.c | 67 +++++++++++++++++++++++++++++++-
hw/virtio/trace-events | 2 +
3 files changed, 70 insertions(+), 1 deletion(-)
diff --git a/include/hw/virtio/virtio-iommu.h b/include/hw/virtio/virtio-iommu.h
index 6f67f1020a..65ad3bf4ee 100644
--- a/include/hw/virtio/virtio-iommu.h
+++ b/include/hw/virtio/virtio-iommu.h
@@ -37,6 +37,7 @@ typedef struct IOMMUDevice {
int devfn;
IOMMUMemoryRegion iommu_mr;
AddressSpace as;
+ QLIST_ENTRY(IOMMUDevice) next;
} IOMMUDevice;
typedef struct IOMMUPciBus {
@@ -56,6 +57,7 @@ typedef struct VirtIOIOMMU {
GTree *domains;
QemuMutex mutex;
GTree *endpoints;
+ QLIST_HEAD(, IOMMUDevice) notifiers_list;
} VirtIOIOMMU;
#endif
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index c00a55348d..623b477b9c 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -123,6 +123,38 @@ static gint interval_cmp(gconstpointer a, gconstpointer b, gpointer user_data)
}
}
+static void virtio_iommu_notify_map(IOMMUMemoryRegion *mr, hwaddr iova,
+ hwaddr paddr, hwaddr size)
+{
+ IOMMUTLBEntry entry;
+
+ entry.target_as = &address_space_memory;
+ entry.addr_mask = size - 1;
+
+ entry.iova = iova;
+ trace_virtio_iommu_notify_map(mr->parent_obj.name, iova, paddr, size);
+ entry.perm = IOMMU_RW;
+ entry.translated_addr = paddr;
+
+ memory_region_notify_iommu(mr, 0, entry);
+}
+
+static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
+ hwaddr size)
+{
+ IOMMUTLBEntry entry;
+
+ entry.target_as = &address_space_memory;
+ entry.addr_mask = size - 1;
+
+ entry.iova = iova;
+ trace_virtio_iommu_notify_unmap(mr->parent_obj.name, iova, size);
+ entry.perm = IOMMU_NONE;
+ entry.translated_addr = 0;
+
+ memory_region_notify_iommu(mr, 0, entry);
+}
+
static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
{
if (!ep->domain) {
@@ -307,9 +339,12 @@ static int virtio_iommu_map(VirtIOIOMMU *s,
uint64_t virt_start = le64_to_cpu(req->virt_start);
uint64_t virt_end = le64_to_cpu(req->virt_end);
uint32_t flags = le32_to_cpu(req->flags);
+ hwaddr size = virt_end - virt_start + 1;
VirtIOIOMMUDomain *domain;
VirtIOIOMMUInterval *interval;
VirtIOIOMMUMapping *mapping;
+ VirtIOIOMMUEndpoint *ep;
+ IOMMUDevice *sdev;
if (flags & ~VIRTIO_IOMMU_MAP_F_MASK) {
return VIRTIO_IOMMU_S_INVAL;
@@ -339,9 +374,38 @@ static int virtio_iommu_map(VirtIOIOMMU *s,
g_tree_insert(domain->mappings, interval, mapping);
+ /* All devices in an address space share the mapping */
+ QLIST_FOREACH(sdev, &s->notifiers_list, next) {
+ QLIST_FOREACH(ep, &domain->endpoint_list, next) {
+ if (ep->id == sdev->devfn) {
+ virtio_iommu_notify_map(&sdev->iommu_mr,
+ virt_start, phys_start, size);
+ }
+ }
+ }
+
return VIRTIO_IOMMU_S_OK;
}
+static void virtio_iommu_remove_mapping(VirtIOIOMMU *s,
+ VirtIOIOMMUDomain *domain,
+ VirtIOIOMMUInterval *interval)
+{
+ VirtIOIOMMUEndpoint *ep;
+ IOMMUDevice *sdev;
+
+ QLIST_FOREACH(sdev, &s->notifiers_list, next) {
+ QLIST_FOREACH(ep, &domain->endpoint_list, next) {
+ if (ep->id == sdev->devfn) {
+ virtio_iommu_notify_unmap(&sdev->iommu_mr,
+ interval->low,
+ interval->high - interval->low + 1);
+ }
+ }
+ }
+ g_tree_remove(domain->mappings, (gpointer)(interval));
+}
+
static int virtio_iommu_unmap(VirtIOIOMMU *s,
struct virtio_iommu_req_unmap *req)
{
@@ -368,7 +432,7 @@ static int virtio_iommu_unmap(VirtIOIOMMU *s,
uint64_t current_high = iter_key->high;
if (interval.low <= current_low && interval.high >= current_high) {
- g_tree_remove(domain->mappings, iter_key);
+ virtio_iommu_remove_mapping(s, domain, iter_key);
trace_virtio_iommu_unmap_done(domain_id, current_low, current_high);
} else {
ret = VIRTIO_IOMMU_S_RANGE;
@@ -664,6 +728,7 @@ static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VirtIOIOMMU *s = VIRTIO_IOMMU(dev);
+ QLIST_INIT(&s->notifiers_list);
virtio_init(vdev, "virtio-iommu", VIRTIO_ID_IOMMU,
sizeof(struct virtio_iommu_config));
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index e83500bee9..d94a1cd8a3 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -73,3 +73,5 @@ virtio_iommu_get_domain(uint32_t domain_id) "Alloc domain=%d"
virtio_iommu_put_domain(uint32_t domain_id) "Free domain=%d"
virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid) "0x%"PRIx64" -> 0x%"PRIx64 " for sid=%d"
virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uint64_t addr) "FAULT reason=%d flags=%d endpoint=%d address =0x%"PRIx64
+virtio_iommu_notify_map(const char *name, uint64_t iova, uint64_t paddr, uint64_t map_size) "mr=%s iova=0x%"PRIx64" pa=0x%" PRIx64" size=0x%"PRIx64
+virtio_iommu_notify_unmap(const char *name, uint64_t iova, uint64_t map_size) "mr=%s iova=0x%"PRIx64" size=0x%"PRIx64
--
2.17.1
* [PATCH v8 6/8] virtio-iommu: Call iommu notifier for attach/detach
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Bharat Bhushan
iommu-notifiers are called when a device is attached to or
detached from an address space.
This is needed for VFIO.
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
---
hw/virtio/virtio-iommu.c | 49 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 623b477b9c..4d522a636a 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -49,6 +49,7 @@ typedef struct VirtIOIOMMUEndpoint {
uint32_t id;
VirtIOIOMMUDomain *domain;
QLIST_ENTRY(VirtIOIOMMUEndpoint) next;
+ VirtIOIOMMU *viommu;
} VirtIOIOMMUEndpoint;
typedef struct VirtIOIOMMUInterval {
@@ -155,11 +156,48 @@ static void virtio_iommu_notify_unmap(IOMMUMemoryRegion *mr, hwaddr iova,
memory_region_notify_iommu(mr, 0, entry);
}
+static gboolean virtio_iommu_mapping_unmap(gpointer key, gpointer value,
+ gpointer data)
+{
+ VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
+ IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
+
+ virtio_iommu_notify_unmap(mr, interval->low,
+ interval->high - interval->low + 1);
+
+ return false;
+}
+
+static gboolean virtio_iommu_mapping_map(gpointer key, gpointer value,
+ gpointer data)
+{
+ VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
+ VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
+ IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
+
+ virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
+ interval->high - interval->low + 1);
+
+ return false;
+}
+
static void virtio_iommu_detach_endpoint_from_domain(VirtIOIOMMUEndpoint *ep)
{
+ VirtIOIOMMU *s = ep->viommu;
+ VirtIOIOMMUDomain *domain = ep->domain;
+ IOMMUDevice *sdev;
+
if (!ep->domain) {
return;
}
+
+ QLIST_FOREACH(sdev, &s->notifiers_list, next) {
+ if (ep->id == sdev->devfn) {
+ g_tree_foreach(domain->mappings, virtio_iommu_mapping_unmap,
+ &sdev->iommu_mr);
+ }
+ }
+
QLIST_REMOVE(ep, next);
ep->domain = NULL;
}
@@ -178,6 +216,7 @@ static VirtIOIOMMUEndpoint *virtio_iommu_get_endpoint(VirtIOIOMMU *s,
}
ep = g_malloc0(sizeof(*ep));
ep->id = ep_id;
+ ep->viommu = s;
trace_virtio_iommu_get_endpoint(ep_id);
g_tree_insert(s->endpoints, GUINT_TO_POINTER(ep_id), ep);
return ep;
@@ -274,6 +313,7 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
uint32_t ep_id = le32_to_cpu(req->endpoint);
VirtIOIOMMUDomain *domain;
VirtIOIOMMUEndpoint *ep;
+ IOMMUDevice *sdev;
trace_virtio_iommu_attach(domain_id, ep_id);
@@ -299,6 +339,14 @@ static int virtio_iommu_attach(VirtIOIOMMU *s,
ep->domain = domain;
+ /* Replay domain mappings on the associated memory region */
+ QLIST_FOREACH(sdev, &s->notifiers_list, next) {
+ if (ep_id == sdev->devfn) {
+ g_tree_foreach(domain->mappings, virtio_iommu_mapping_map,
+ &sdev->iommu_mr);
+ }
+ }
+
return VIRTIO_IOMMU_S_OK;
}
@@ -873,6 +921,7 @@ static gboolean reconstruct_endpoints(gpointer key, gpointer value,
QLIST_FOREACH(iter, &d->endpoint_list, next) {
iter->domain = d;
+ iter->viommu = s;
g_tree_insert(s->endpoints, GUINT_TO_POINTER(iter->id), iter);
}
return false; /* continue the domain traversal */
--
2.17.1
* [PATCH v8 7/8] virtio-iommu: add iommu replay
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Bharat Bhushan
The default replay does not work with virtio-iommu,
so this patch provides a virtio-iommu specific replay implementation.
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
---
hw/virtio/virtio-iommu.c | 44 ++++++++++++++++++++++++++++++++++++++++
hw/virtio/trace-events | 1 +
2 files changed, 45 insertions(+)
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 4d522a636a..b68644f7c3 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -771,6 +771,49 @@ static void virtio_iommu_set_page_size_mask(IOMMUMemoryRegion *mr,
s->config.page_size_mask = page_size_mask;
}
+static gboolean virtio_iommu_remap(gpointer key, gpointer value, gpointer data)
+{
+ VirtIOIOMMUMapping *mapping = (VirtIOIOMMUMapping *) value;
+ VirtIOIOMMUInterval *interval = (VirtIOIOMMUInterval *) key;
+ IOMMUMemoryRegion *mr = (IOMMUMemoryRegion *) data;
+
+ trace_virtio_iommu_remap(interval->low, mapping->phys_addr,
+ interval->high - interval->low + 1);
+ /* unmap previous entry and map again */
+ virtio_iommu_notify_unmap(mr, interval->low,
+ interval->high - interval->low + 1);
+
+ virtio_iommu_notify_map(mr, interval->low, mapping->phys_addr,
+ interval->high - interval->low + 1);
+ return false;
+}
+
+static void virtio_iommu_replay(IOMMUMemoryRegion *mr, IOMMUNotifier *n)
+{
+ IOMMUDevice *sdev = container_of(mr, IOMMUDevice, iommu_mr);
+ VirtIOIOMMU *s = sdev->viommu;
+ uint32_t sid;
+ VirtIOIOMMUEndpoint *ep;
+
+ sid = virtio_iommu_get_bdf(sdev);
+
+ qemu_mutex_lock(&s->mutex);
+
+ if (!s->endpoints) {
+ goto unlock;
+ }
+
+ ep = g_tree_lookup(s->endpoints, GUINT_TO_POINTER(sid));
+ if (!ep || !ep->domain) {
+ goto unlock;
+ }
+
+ g_tree_foreach(ep->domain->mappings, virtio_iommu_remap, mr);
+
+unlock:
+ qemu_mutex_unlock(&s->mutex);
+}
+
static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -989,6 +1032,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
imrc->translate = virtio_iommu_translate;
imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
+ imrc->replay = virtio_iommu_replay;
}
static const TypeInfo virtio_iommu_info = {
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index d94a1cd8a3..8bae651191 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -75,3 +75,4 @@ virtio_iommu_translate_out(uint64_t virt_addr, uint64_t phys_addr, uint32_t sid)
virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uint64_t addr) "FAULT reason=%d flags=%d endpoint=%d address =0x%"PRIx64
virtio_iommu_notify_map(const char *name, uint64_t iova, uint64_t paddr, uint64_t map_size) "mr=%s iova=0x%"PRIx64" pa=0x%" PRIx64" size=0x%"PRIx64
virtio_iommu_notify_unmap(const char *name, uint64_t iova, uint64_t map_size) "mr=%s iova=0x%"PRIx64" size=0x%"PRIx64
+virtio_iommu_remap(uint64_t iova, uint64_t pa, uint64_t size) "iova=0x%"PRIx64" pa=0x%" PRIx64" size=0x%"PRIx64""
--
2.17.1
* [PATCH v8 8/8] virtio-iommu: add iommu notifier memory-region
From: Bharat Bhushan @ 2020-03-18 10:11 UTC (permalink / raw)
To: peter.maydell, peterx, eric.auger.pro, alex.williamson,
kevin.tian, mst, tnowicki, drjones, linuc.decode, qemu-devel,
qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Cc: Bharat Bhushan
Finally, add a notify_flag_changed() handler so that a device is
added to or removed from the notifier list when the iommu notifier
flags of its memory region change.
Signed-off-by: Bharat Bhushan <bbhushan2@marvell.com>
---
hw/virtio/virtio-iommu.c | 22 ++++++++++++++++++++++
hw/virtio/trace-events | 2 ++
2 files changed, 24 insertions(+)
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index b68644f7c3..515c965e3c 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -814,6 +814,27 @@ unlock:
qemu_mutex_unlock(&s->mutex);
}
+static int virtio_iommu_notify_flag_changed(IOMMUMemoryRegion *iommu_mr,
+ IOMMUNotifierFlag old,
+ IOMMUNotifierFlag new,
+ Error **errp)
+{
+ IOMMUDevice *sdev = container_of(iommu_mr, IOMMUDevice, iommu_mr);
+ VirtIOIOMMU *s = sdev->viommu;
+
+ if (old == IOMMU_NOTIFIER_NONE) {
+ trace_virtio_iommu_notify_flag_add(iommu_mr->parent_obj.name);
+ QLIST_INSERT_HEAD(&s->notifiers_list, sdev, next);
+ return 0;
+ }
+
+ if (new == IOMMU_NOTIFIER_NONE) {
+ trace_virtio_iommu_notify_flag_del(iommu_mr->parent_obj.name);
+ QLIST_REMOVE(sdev, next);
+ }
+ return 0;
+}
+
static void virtio_iommu_device_realize(DeviceState *dev, Error **errp)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
@@ -1033,6 +1054,7 @@ static void virtio_iommu_memory_region_class_init(ObjectClass *klass,
imrc->translate = virtio_iommu_translate;
imrc->iommu_set_page_size_mask = virtio_iommu_set_page_size_mask;
imrc->replay = virtio_iommu_replay;
+ imrc->notify_flag_changed = virtio_iommu_notify_flag_changed;
}
static const TypeInfo virtio_iommu_info = {
diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 8bae651191..a486adcf6d 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -76,3 +76,5 @@ virtio_iommu_report_fault(uint8_t reason, uint32_t flags, uint32_t endpoint, uin
virtio_iommu_notify_map(const char *name, uint64_t iova, uint64_t paddr, uint64_t map_size) "mr=%s iova=0x%"PRIx64" pa=0x%" PRIx64" size=0x%"PRIx64
virtio_iommu_notify_unmap(const char *name, uint64_t iova, uint64_t map_size) "mr=%s iova=0x%"PRIx64" size=0x%"PRIx64
virtio_iommu_remap(uint64_t iova, uint64_t pa, uint64_t size) "iova=0x%"PRIx64" pa=0x%" PRIx64" size=0x%"PRIx64""
+virtio_iommu_notify_flag_add(const char *iommu) "Add virtio-iommu notifier node for memory region %s"
+virtio_iommu_notify_flag_del(const char *iommu) "Del virtio-iommu notifier node for memory region %s"
--
2.17.1
* Re: [PATCH v8 0/8] virtio-iommu: VFIO integration
From: Auger Eric @ 2020-03-18 10:53 UTC (permalink / raw)
To: Bharat Bhushan, peter.maydell, peterx, eric.auger.pro,
alex.williamson, kevin.tian, mst, tnowicki, drjones, linuc.decode,
qemu-devel, qemu-arm, bharatb.linux, jean-philippe, yang.zhong
Hi Bharat,
On 3/18/20 11:11 AM, Bharat Bhushan wrote:
> This patch series integrates VFIO with virtio-iommu.
> This is only applicable for PCI pass-through with virtio-iommu.
>
> This series is available at:
> https://github.com/bharat-bhushan-devel/qemu.git virtio-iommu-vfio-integration-v8
>
> This is tested with assigning more than one pci devices to Virtual Machine.
>
> This series is based on:
> - virtio-iommu device emulation by Eric Augur.
Auger ;-)
> [v16,00/10] VIRTIO-IOMMU device
> https://github.com/eauger/qemu/tree/v4.2-virtio-iommu-v16
This is now upstream, so there is no need to keep that reference anymore.
Thanks
Eric
>
> - Linux 5.6.0-rc4
>
> v7->v8:
> - Set page size mask as per host
> This fixes issue with 64K host/guest
> - Device list from IOMMUDevice directly removed VirtioIOMMUNotifierNode
> - Add missing iep->viommu init on post-load
>
> v6->v7:
> - corrected email-address
>
> v5->v6:
> - Rebase to v16 version from Eric
> - Tested with upstream Linux
> - Added a patch from Eric/Myself on removing mmio-region error print in vfio
>
> v4->v5:
> - Rebase to v9 version from Eric
> - PCIe device hotplug fix
> - Added Patch 1/5 from Eric previous series (Eric somehow dropped in
> last version.
> - Patch "Translate the MSI doorbell in kvm_arch_fixup_msi_route"
> already integrated with vsmmu3
>
> v3->v4:
> - Rebase to v4 version from Eric
> - Fixes from Eric with DPDK in VM
> - Logical division in multiple patches
>
> v2->v3:
> - This series is based on "[RFC v3 0/8] VIRTIO-IOMMU device"
> Which is based on top of v2.10-rc0 that
> - Fixed issue with two PCI devices
> - Addressed review comments
>
> v1->v2:
> - Added trace events
> - removed vSMMU3 link in patch description
>
> Bharat Bhushan (8):
> hw/vfio/common: Remove error print on mmio region translation by
> viommu
> memory: Add interface to set iommu page size mask
> vfio: set iommu page size as per host supported page size
> virtio-iommu: set supported page size mask
> virtio-iommu: Add iommu notifier for map/unmap
> virtio-iommu: Call iommu notifier for attach/detach
> virtio-iommu: add iommu replay
> virtio-iommu: add iommu notifier memory-region
>
> include/exec/memory.h | 20 ++++
> include/hw/virtio/virtio-iommu.h | 2 +
> hw/vfio/common.c | 5 +-
> hw/virtio/virtio-iommu.c | 192 ++++++++++++++++++++++++++++++-
> memory.c | 10 ++
> hw/virtio/trace-events | 5 +
> 6 files changed, 231 insertions(+), 3 deletions(-)
>