* [PATCH v2 0/3] iommu/virtio: Misc fixes
@ 2020-03-26 9:35 Jean-Philippe Brucker
2020-03-26 9:35 ` [PATCH v2 1/3] iommu/virtio: Fix sparse warning Jean-Philippe Brucker
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-26 9:35 UTC (permalink / raw)
To: iommu
Cc: Jean-Philippe Brucker, mst, joro, virtualization, eric.auger,
bbhushan2
A collection of fixes for the virtio-iommu driver. It might be too late
for v5.6 since they need more review. Patch 2 is new, the others were
posted separately.
Jean-Philippe Brucker (3):
iommu/virtio: Fix sparse warning
iommu/virtio: Fix freeing of incomplete domains
iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE
include/uapi/linux/virtio_iommu.h | 12 ++++++------
drivers/iommu/virtio-iommu.c | 30 +++++++++++++++++++++---------
2 files changed, 27 insertions(+), 15 deletions(-)
--
2.25.1
^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v2 1/3] iommu/virtio: Fix sparse warning
  2020-03-26 9:35 [PATCH v2 0/3] iommu/virtio: Misc fixes Jean-Philippe Brucker
@ 2020-03-26 9:35 ` Jean-Philippe Brucker
  [not found] ` <20200326093558.2641019-1-jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
  2020-03-26 9:35 ` [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE Jean-Philippe Brucker
  2 siblings, 0 replies; 9+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-26 9:35 UTC (permalink / raw)
To: iommu
Cc: Jean-Philippe Brucker, kbuild test robot, mst, joro, virtualization,
	eric.auger, bbhushan2

We copied the virtio_iommu_config from the virtio-iommu specification,
which declares the fields using little-endian annotations (for example
le32). Unfortunately this causes sparse to warn about comparisons between
little- and cpu-endian values, because of the typecheck() in
virtio_cread():

drivers/iommu/virtio-iommu.c:1024:9: sparse: sparse: incompatible types in comparison expression (different base types):
drivers/iommu/virtio-iommu.c:1024:9: sparse:    restricted __le64 *
drivers/iommu/virtio-iommu.c:1024:9: sparse:    unsigned long long *
drivers/iommu/virtio-iommu.c:1036:9: sparse: sparse: incompatible types in comparison expression (different base types):
drivers/iommu/virtio-iommu.c:1036:9: sparse:    restricted __le64 *
drivers/iommu/virtio-iommu.c:1036:9: sparse:    unsigned long long *
drivers/iommu/virtio-iommu.c:1040:9: sparse: sparse: incompatible types in comparison expression (different base types):
drivers/iommu/virtio-iommu.c:1040:9: sparse:    restricted __le64 *
drivers/iommu/virtio-iommu.c:1040:9: sparse:    unsigned long long *
drivers/iommu/virtio-iommu.c:1044:9: sparse: sparse: incompatible types in comparison expression (different base types):
drivers/iommu/virtio-iommu.c:1044:9: sparse:    restricted __le32 *
drivers/iommu/virtio-iommu.c:1044:9: sparse:    unsigned int *
drivers/iommu/virtio-iommu.c:1048:9: sparse: sparse: incompatible types in comparison expression (different base types):
drivers/iommu/virtio-iommu.c:1048:9: sparse:    restricted __le32 *
drivers/iommu/virtio-iommu.c:1048:9: sparse:    unsigned int *
drivers/iommu/virtio-iommu.c:1052:9: sparse: sparse: incompatible types in comparison expression (different base types):
drivers/iommu/virtio-iommu.c:1052:9: sparse:    restricted __le32 *
drivers/iommu/virtio-iommu.c:1052:9: sparse:    unsigned int *

Although virtio_cread() does convert virtio-endian (in our case
little-endian) to cpu-endian, the typecheck() needs the two arguments to
have the same endianness. Do as UAPI headers of other virtio devices do,
and remove the endian annotation from the device config. Even though we
change the UAPI, this shouldn't cause any regression, since QEMU, the
existing implementation of virtio-iommu that uses this header, already
removes the annotations when importing headers.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
 include/uapi/linux/virtio_iommu.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/uapi/linux/virtio_iommu.h b/include/uapi/linux/virtio_iommu.h
index 237e36a280cb..48e3c29223b5 100644
--- a/include/uapi/linux/virtio_iommu.h
+++ b/include/uapi/linux/virtio_iommu.h
@@ -18,24 +18,24 @@
 #define VIRTIO_IOMMU_F_MMIO		5
 
 struct virtio_iommu_range_64 {
-	__le64		start;
-	__le64		end;
+	__u64		start;
+	__u64		end;
 };
 
 struct virtio_iommu_range_32 {
-	__le32		start;
-	__le32		end;
+	__u32		start;
+	__u32		end;
 };
 
 struct virtio_iommu_config {
 	/* Supported page sizes */
-	__le64		page_size_mask;
+	__u64		page_size_mask;
 	/* Supported IOVA range */
 	struct virtio_iommu_range_64	input_range;
 	/* Max domain ID size */
 	struct virtio_iommu_range_32	domain_range;
 	/* Probe buffer size */
-	__le32		probe_size;
+	__u32		probe_size;
 };
 
 /* Request types */
-- 
2.25.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread
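
For context, the pattern sparse complains about can be reproduced outside the
kernel with a short standalone sketch. This is an approximation, not the real
virtio_cread()/typecheck() macros: the le64 stand-in, the config struct and
its field are all invented for illustration. Running sparse over it should
produce the same "incompatible types in comparison expression" warning, while
a regular gcc build accepts it either way.

/*
 * Minimal sketch of the sparse warning. The __bitwise stand-in mimics how
 * the kernel marks endian-restricted types; everything below is invented
 * for illustration and is not the kernel's implementation.
 */
#ifdef __CHECKER__
#define __bitwise __attribute__((bitwise))
#else
#define __bitwise
#endif

typedef unsigned long long u64;
typedef u64 __bitwise le64;          /* stand-in for the UAPI __le64 */

/* typecheck()-like macro: comparing pointers to two dummies makes the
 * compiler (and sparse) verify that the two types are compatible.
 */
#define typecheck(type, x) \
({      type __dummy; \
        typeof(x) __dummy2; \
        (void)(&__dummy == &__dummy2); \
        1; \
})

struct config {
        le64 page_size_mask;         /* annotated as in the pre-patch header */
};

int main(void)
{
        struct config cfg = { 0 };
        u64 out;

        /*
         * sparse: incompatible types in comparison expression
         *         (restricted le64 * vs unsigned long long *)
         */
        typecheck(typeof(cfg.page_size_mask), out);

        (void)cfg;
        (void)out;
        return 0;
}

Dropping the annotation from the structure, as the patch does for
virtio_iommu_config, makes both sides plain integers and silences the warning
without changing the generated code.
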
[parent not found: <20200326093558.2641019-1-jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>]
* [PATCH v2 2/3] iommu/virtio: Fix freeing of incomplete domains
  [not found] ` <20200326093558.2641019-1-jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
@ 2020-03-26 9:35 ` Jean-Philippe Brucker
  2020-03-26 12:09 ` Robin Murphy
  2020-03-27 10:11 ` [PATCH v2 0/3] iommu/virtio: Misc fixes Joerg Roedel
  1 sibling, 1 reply; 9+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-26 9:35 UTC (permalink / raw)
To: iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
Cc: Jean-Philippe Brucker, mst-H+wXaHxf7aLQT0dZR+AlfA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	bbhushan2-eYqpPyKDWXRBDgjK7y7TUQ, jasowang-H+wXaHxf7aLQT0dZR+AlfA

Calling viommu_domain_free() on a domain that hasn't been finalised (not
attached to any device, for example) can currently cause an Oops,
because we attempt to call ida_free() on ID 0, which may either be
unallocated or used by another domain.

Only initialise the vdomain->viommu pointer, which denotes a finalised
domain, at the end of a successful viommu_domain_finalise().

Fixes: edcd69ab9a32 ("iommu: Add virtio-iommu driver")
Reported-by: Eric Auger <eric.auger-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
Signed-off-by: Jean-Philippe Brucker <jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
---
 drivers/iommu/virtio-iommu.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index cce329d71fba..5eed75cd121f 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -613,18 +613,20 @@ static int viommu_domain_finalise(struct viommu_dev *viommu,
 	int ret;
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
-	vdomain->viommu = viommu;
-	vdomain->map_flags = viommu->map_flags;
+	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
+			      viommu->last_domain, GFP_KERNEL);
+	if (ret < 0)
+		return ret;
+
+	vdomain->id = (unsigned int)ret;
 
 	domain->pgsize_bitmap = viommu->pgsize_bitmap;
 	domain->geometry = viommu->geometry;
 
-	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
-			      viommu->last_domain, GFP_KERNEL);
-	if (ret >= 0)
-		vdomain->id = (unsigned int)ret;
+	vdomain->map_flags = viommu->map_flags;
+	vdomain->viommu = viommu;
 
-	return ret > 0 ? 0 : ret;
+	return 0;
 }
 
 static void viommu_domain_free(struct iommu_domain *domain)
-- 
2.25.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread
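
The ordering matters because the free path uses vdomain->viommu as its "was
this domain ever finalised" marker before releasing the ID. Below is a
simplified sketch of that path, abridged and approximate rather than the
verbatim driver code; unrelated teardown (DMA cookie, removal of remaining
mappings) is omitted and field names follow the patch above.

/* Abridged sketch of the free path, not the verbatim driver code. */
static void viommu_domain_free(struct iommu_domain *domain)
{
	struct viommu_domain *vdomain = to_viommu_domain(domain);

	/*
	 * vdomain->viommu doubles as the "finalised" flag. Before this patch
	 * it was assigned at the top of viommu_domain_finalise(), before
	 * ida_alloc_range() had a chance to fail, so a domain whose finalise
	 * failed reached ida_free() here with vdomain->id still 0 -- an ID
	 * that may be unallocated or owned by another domain.
	 */
	if (vdomain->viommu)
		ida_free(&vdomain->viommu->domain_ids, vdomain->id);

	kfree(vdomain);
}

With the patch, the pointer is only set once ida_alloc_range() has succeeded
and vdomain->id holds a valid ID, so the guard above does the right thing for
domains that were never successfully finalised.
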
* Re: [PATCH v2 2/3] iommu/virtio: Fix freeing of incomplete domains
  2020-03-26 9:35 ` [PATCH v2 2/3] iommu/virtio: Fix freeing of incomplete domains Jean-Philippe Brucker
@ 2020-03-26 12:09 ` Robin Murphy
  0 siblings, 0 replies; 9+ messages in thread
From: Robin Murphy @ 2020-03-26 12:09 UTC (permalink / raw)
To: Jean-Philippe Brucker, iommu; +Cc: bbhushan2, virtualization, mst

On 2020-03-26 9:35 am, Jean-Philippe Brucker wrote:
> Calling viommu_domain_free() on a domain that hasn't been finalised (not
> attached to any device, for example) can currently cause an Oops,
> because we attempt to call ida_free() on ID 0, which may either be
> unallocated or used by another domain.
> 
> Only initialise the vdomain->viommu pointer, which denotes a finalised
> domain, at the end of a successful viommu_domain_finalise().

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Fixes: edcd69ab9a32 ("iommu: Add virtio-iommu driver")
> Reported-by: Eric Auger <eric.auger@redhat.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
>  drivers/iommu/virtio-iommu.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index cce329d71fba..5eed75cd121f 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -613,18 +613,20 @@ static int viommu_domain_finalise(struct viommu_dev *viommu,
>  	int ret;
>  	struct viommu_domain *vdomain = to_viommu_domain(domain);
>  
> -	vdomain->viommu = viommu;
> -	vdomain->map_flags = viommu->map_flags;
> +	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
> +			      viommu->last_domain, GFP_KERNEL);
> +	if (ret < 0)
> +		return ret;
> +
> +	vdomain->id = (unsigned int)ret;
>  
>  	domain->pgsize_bitmap = viommu->pgsize_bitmap;
>  	domain->geometry = viommu->geometry;
>  
> -	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
> -			      viommu->last_domain, GFP_KERNEL);
> -	if (ret >= 0)
> -		vdomain->id = (unsigned int)ret;
> +	vdomain->map_flags = viommu->map_flags;
> +	vdomain->viommu = viommu;
>  
> -	return ret > 0 ? 0 : ret;
> +	return 0;
>  }
>  
>  static void viommu_domain_free(struct iommu_domain *domain)
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH v2 0/3] iommu/virtio: Misc fixes
  [not found] ` <20200326093558.2641019-1-jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
  2020-03-26 9:35 ` [PATCH v2 2/3] iommu/virtio: Fix freeing of incomplete domains Jean-Philippe Brucker
@ 2020-03-27 10:11 ` Joerg Roedel
  1 sibling, 0 replies; 9+ messages in thread
From: Joerg Roedel @ 2020-03-27 10:11 UTC (permalink / raw)
To: Jean-Philippe Brucker
Cc: mst-H+wXaHxf7aLQT0dZR+AlfA, jasowang-H+wXaHxf7aLQT0dZR+AlfA,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA,
	bbhushan2-eYqpPyKDWXRBDgjK7y7TUQ

On Thu, Mar 26, 2020 at 10:35:55AM +0100, Jean-Philippe Brucker wrote:
> A collection of fixes for the virtio-iommu driver. It might be too late
> for v5.6 since they need more review. Patch 2 is new, the others were
> posted separately.
> 
> Jean-Philippe Brucker (3):
>   iommu/virtio: Fix sparse warning
>   iommu/virtio: Fix freeing of incomplete domains
>   iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE

Applied, thanks.

^ permalink raw reply	[flat|nested] 9+ messages in thread
* [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE
  2020-03-26 9:35 [PATCH v2 0/3] iommu/virtio: Misc fixes Jean-Philippe Brucker
  2020-03-26 9:35 ` [PATCH v2 1/3] iommu/virtio: Fix sparse warning Jean-Philippe Brucker
  [not found] ` <20200326093558.2641019-1-jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
@ 2020-03-26 9:35 ` Jean-Philippe Brucker
  2020-03-26 12:13 ` Robin Murphy
  2020-03-26 17:41 ` Auger Eric
  2 siblings, 2 replies; 9+ messages in thread
From: Jean-Philippe Brucker @ 2020-03-26 9:35 UTC (permalink / raw)
To: iommu
Cc: Jean-Philippe Brucker, mst, joro, virtualization, eric.auger,
	bbhushan2

We don't currently support IOMMUs with a page granule larger than the
system page size. The IOVA allocator has a BUG_ON() in this case, and
VFIO has a WARN_ON().

Removing these obstacles doesn't seem possible without major changes to
the DMA API and VFIO. Some callers of iommu_map(), for example, want to
map multiple page-aligned regions adjacent to each other for
scatter-gather purposes. Even in simple DMA API uses, a call to
dma_map_page() would let the endpoint access neighbouring memory. And
VFIO users cannot ensure that their virtual address buffer is physically
contiguous at the IOMMU granule.

Rather than triggering the IOVA BUG_ON() on mismatched page sizes, abort
the vdomain finalise() with an error message. We could simply abort the
viommu probe(), but an upcoming extension to virtio-iommu will allow
setting different page masks for each endpoint.

Reported-by: Bharat Bhushan <bbhushan2@marvell.com>
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
---
v1->v2: Move to vdomain_finalise(), improve commit message
---
 drivers/iommu/virtio-iommu.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 5eed75cd121f..750f69c49b95 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -607,12 +607,22 @@ static struct iommu_domain *viommu_domain_alloc(unsigned type)
 	return &vdomain->domain;
 }
 
-static int viommu_domain_finalise(struct viommu_dev *viommu,
+static int viommu_domain_finalise(struct viommu_endpoint *vdev,
 				  struct iommu_domain *domain)
 {
 	int ret;
+	unsigned long viommu_page_size;
+	struct viommu_dev *viommu = vdev->viommu;
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
+	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
+	if (viommu_page_size > PAGE_SIZE) {
+		dev_err(vdev->dev,
+			"granule 0x%lx larger than system page size 0x%lx\n",
+			viommu_page_size, PAGE_SIZE);
+		return -EINVAL;
+	}
+
 	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
 			      viommu->last_domain, GFP_KERNEL);
 	if (ret < 0)
@@ -659,7 +669,7 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		 * Properly initialize the domain now that we know which viommu
 		 * owns it.
 		 */
-		ret = viommu_domain_finalise(vdev->viommu, domain);
+		ret = viommu_domain_finalise(vdev, domain);
 	} else if (vdomain->viommu != vdev->viommu) {
 		dev_err(dev, "cannot attach to foreign vIOMMU\n");
 		ret = -EXDEV;
-- 
2.25.1

^ permalink raw reply related	[flat|nested] 9+ messages in thread
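
To make the check concrete: the smallest granule a device offers is the
lowest set bit of its page-size bitmap, and the patch rejects the domain when
that granule exceeds the host PAGE_SIZE. Below is a standalone user-space
approximation; the bitmap value is invented for the example and
__builtin_ctzl() stands in for the kernel's __ffs().

#include <stdio.h>

/* User-space stand-in for the kernel's __ffs(): index of the lowest set bit.
 * All values below are illustrative, not read from a real device.
 */
static unsigned long lowest_set_bit(unsigned long x)
{
	return (unsigned long)__builtin_ctzl(x);
}

int main(void)
{
	unsigned long page_size = 4096;        /* host PAGE_SIZE: 4KiB */
	unsigned long pgsize_bitmap = 0x10000; /* device offers only a 64KiB granule */
	unsigned long granule = 1UL << lowest_set_bit(pgsize_bitmap);

	if (granule > page_size)
		printf("reject: granule 0x%lx larger than system page size 0x%lx\n",
		       granule, page_size);
	else
		printf("ok: granule 0x%lx\n", granule);
	return 0;
}

With pgsize_bitmap = 0x10000 the device can only map in 64KiB units, so a
4KiB host would hit the new -EINVAL path instead of the IOVA allocator's
BUG_ON().
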
* Re: [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE
  2020-03-26 9:35 ` [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE Jean-Philippe Brucker
@ 2020-03-26 12:13 ` Robin Murphy
  2020-03-26 17:41 ` Auger Eric
  1 sibling, 0 replies; 9+ messages in thread
From: Robin Murphy @ 2020-03-26 12:13 UTC (permalink / raw)
To: Jean-Philippe Brucker, iommu; +Cc: bbhushan2, virtualization, mst

On 2020-03-26 9:35 am, Jean-Philippe Brucker wrote:
> We don't currently support IOMMUs with a page granule larger than the
> system page size. The IOVA allocator has a BUG_ON() in this case, and
> VFIO has a WARN_ON().
> 
> Removing these obstacles doesn't seem possible without major changes
> to the DMA API and VFIO. Some callers of iommu_map(), for example,
> want to map multiple page-aligned regions adjacent to each other for
> scatter-gather purposes. Even in simple DMA API uses, a call to
> dma_map_page() would let the endpoint access neighbouring memory. And
> VFIO users cannot ensure that their virtual address buffer is physically
> contiguous at the IOMMU granule.
> 
> Rather than triggering the IOVA BUG_ON() on mismatched page sizes, abort
> the vdomain finalise() with an error message. We could simply abort the
> viommu probe(), but an upcoming extension to virtio-iommu will allow
> setting different page masks for each endpoint.

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Reported-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> ---
> v1->v2: Move to vdomain_finalise(), improve commit message
> ---
>  drivers/iommu/virtio-iommu.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 5eed75cd121f..750f69c49b95 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -607,12 +607,22 @@ static struct iommu_domain *viommu_domain_alloc(unsigned type)
>  	return &vdomain->domain;
>  }
>  
> -static int viommu_domain_finalise(struct viommu_dev *viommu,
> +static int viommu_domain_finalise(struct viommu_endpoint *vdev,
>  				  struct iommu_domain *domain)
>  {
>  	int ret;
> +	unsigned long viommu_page_size;
> +	struct viommu_dev *viommu = vdev->viommu;
>  	struct viommu_domain *vdomain = to_viommu_domain(domain);
>  
> +	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
> +	if (viommu_page_size > PAGE_SIZE) {
> +		dev_err(vdev->dev,
> +			"granule 0x%lx larger than system page size 0x%lx\n",
> +			viommu_page_size, PAGE_SIZE);
> +		return -EINVAL;
> +	}
> +
>  	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
>  			      viommu->last_domain, GFP_KERNEL);
>  	if (ret < 0)
> @@ -659,7 +669,7 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	 * Properly initialize the domain now that we know which viommu
>  	 * owns it.
>  	 */
> -	ret = viommu_domain_finalise(vdev->viommu, domain);
> +	ret = viommu_domain_finalise(vdev, domain);
>  	} else if (vdomain->viommu != vdev->viommu) {
>  		dev_err(dev, "cannot attach to foreign vIOMMU\n");
>  		ret = -EXDEV;
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread
* Re: [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE
  2020-03-26 9:35 ` [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE Jean-Philippe Brucker
  2020-03-26 12:13 ` Robin Murphy
@ 2020-03-26 17:41 ` Auger Eric
  [not found] ` <04ed8a3e-5602-1a5f-1fa3-ac517fede0b1-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
  1 sibling, 1 reply; 9+ messages in thread
From: Auger Eric @ 2020-03-26 17:41 UTC (permalink / raw)
To: Jean-Philippe Brucker, iommu; +Cc: joro, mst, bbhushan2, virtualization

Hi Jean,

On 3/26/20 10:35 AM, Jean-Philippe Brucker wrote:
> We don't currently support IOMMUs with a page granule larger than the
> system page size. The IOVA allocator has a BUG_ON() in this case, and
> VFIO has a WARN_ON().
> 
> Removing these obstacles doesn't seem possible without major changes
> to the DMA API and VFIO. Some callers of iommu_map(), for example,
> want to map multiple page-aligned regions adjacent to each other for
> scatter-gather purposes. Even in simple DMA API uses, a call to
> dma_map_page() would let the endpoint access neighbouring memory. And
> VFIO users cannot ensure that their virtual address buffer is physically
> contiguous at the IOMMU granule.
> 
> Rather than triggering the IOVA BUG_ON() on mismatched page sizes, abort
> the vdomain finalise() with an error message. We could simply abort the
> viommu probe(), but an upcoming extension to virtio-iommu will allow
> setting different page masks for each endpoint.
> 
> Reported-by: Bharat Bhushan <bbhushan2@marvell.com>
> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>

Thanks

Eric

> ---
> v1->v2: Move to vdomain_finalise(), improve commit message
> ---
>  drivers/iommu/virtio-iommu.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 5eed75cd121f..750f69c49b95 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -607,12 +607,22 @@ static struct iommu_domain *viommu_domain_alloc(unsigned type)
>  	return &vdomain->domain;
>  }
>  
> -static int viommu_domain_finalise(struct viommu_dev *viommu,
> +static int viommu_domain_finalise(struct viommu_endpoint *vdev,
>  				  struct iommu_domain *domain)
>  {
>  	int ret;
> +	unsigned long viommu_page_size;
> +	struct viommu_dev *viommu = vdev->viommu;
>  	struct viommu_domain *vdomain = to_viommu_domain(domain);
>  
> +	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
> +	if (viommu_page_size > PAGE_SIZE) {
> +		dev_err(vdev->dev,
> +			"granule 0x%lx larger than system page size 0x%lx\n",
> +			viommu_page_size, PAGE_SIZE);
> +		return -EINVAL;
> +	}
> +
>  	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
>  			      viommu->last_domain, GFP_KERNEL);
>  	if (ret < 0)
> @@ -659,7 +669,7 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	 * Properly initialize the domain now that we know which viommu
>  	 * owns it.
>  	 */
> -	ret = viommu_domain_finalise(vdev->viommu, domain);
> +	ret = viommu_domain_finalise(vdev, domain);
>  	} else if (vdomain->viommu != vdev->viommu) {
>  		dev_err(dev, "cannot attach to foreign vIOMMU\n");
>  		ret = -EXDEV;
> 

^ permalink raw reply	[flat|nested] 9+ messages in thread
[parent not found: <04ed8a3e-5602-1a5f-1fa3-ac517fede0b1-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>]
* RE: [EXT] Re: [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE
  [not found] ` <04ed8a3e-5602-1a5f-1fa3-ac517fede0b1-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2020-03-27 5:48 ` Bharat Bhushan
  0 siblings, 0 replies; 9+ messages in thread
From: Bharat Bhushan @ 2020-03-27 5:48 UTC (permalink / raw)
To: Auger Eric, Jean-Philippe Brucker,
	iommu-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org
Cc: mst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	jasowang-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org,
	virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org

Hi Jean,

> -----Original Message-----
> From: Auger Eric <eric.auger-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Sent: Thursday, March 26, 2020 11:11 PM
> To: Jean-Philippe Brucker <jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>; iommu-cunTk1MwBs/ROKNJybVBZg@public.gmane.org
> foundation.org
> Cc: virtualization-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA@public.gmane.org; joro-zLv9SwRftAIdnm+yROfE0A@public.gmane.org; mst-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org;
> jasowang-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org; Bharat Bhushan <bbhushan2-eYqpPyKDWXRBDgjK7y7TUQ@public.gmane.org>
> Subject: [EXT] Re: [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger
> than PAGE_SIZE
> 
> External Email
> 
> ----------------------------------------------------------------------
> Hi Jean,
> 
> On 3/26/20 10:35 AM, Jean-Philippe Brucker wrote:
> > We don't currently support IOMMUs with a page granule larger than the
> > system page size. The IOVA allocator has a BUG_ON() in this case, and
> > VFIO has a WARN_ON().
> >
> > Removing these obstacles doesn't seem possible without major changes
> > to the DMA API and VFIO. Some callers of iommu_map(), for example,
> > want to map multiple page-aligned regions adjacent to each other for
> > scatter-gather purposes. Even in simple DMA API uses, a call to
> > dma_map_page() would let the endpoint access neighbouring memory. And
> > VFIO users cannot ensure that their virtual address buffer is
> > physically contiguous at the IOMMU granule.
> >
> > Rather than triggering the IOVA BUG_ON() on mismatched page sizes,
> > abort the vdomain finalise() with an error message. We could simply
> > abort the viommu probe(), but an upcoming extension to virtio-iommu
> > will allow setting different page masks for each endpoint.
> >
> > Reported-by: Bharat Bhushan <bbhushan2-eYqpPyKDWXRBDgjK7y7TUQ@public.gmane.org>
> > Signed-off-by: Jean-Philippe Brucker <jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
> Reviewed-by: Eric Auger <eric.auger-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>

Reviewed-by: Bharat Bhushan <bbhushan2-eYqpPyKDWXRBDgjK7y7TUQ@public.gmane.org>

Thanks
-Bharat

> 
> Thanks
> 
> Eric
> > ---
> > v1->v2: Move to vdomain_finalise(), improve commit message
> > ---
> >  drivers/iommu/virtio-iommu.c | 14 ++++++++++++--
> >  1 file changed, 12 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/iommu/virtio-iommu.c
> > b/drivers/iommu/virtio-iommu.c index 5eed75cd121f..750f69c49b95 100644
> > --- a/drivers/iommu/virtio-iommu.c
> > +++ b/drivers/iommu/virtio-iommu.c
> > @@ -607,12 +607,22 @@ static struct iommu_domain
> *viommu_domain_alloc(unsigned type)
> >  	return &vdomain->domain;
> >  }
> >
> > -static int viommu_domain_finalise(struct viommu_dev *viommu,
> > +static int viommu_domain_finalise(struct viommu_endpoint *vdev,
> >  				  struct iommu_domain *domain)
> >  {
> >  	int ret;
> > +	unsigned long viommu_page_size;
> > +	struct viommu_dev *viommu = vdev->viommu;
> >  	struct viommu_domain *vdomain = to_viommu_domain(domain);
> >
> > +	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
> > +	if (viommu_page_size > PAGE_SIZE) {
> > +		dev_err(vdev->dev,
> > +			"granule 0x%lx larger than system page size 0x%lx\n",
> > +			viommu_page_size, PAGE_SIZE);
> > +		return -EINVAL;
> > +	}
> > +
> >  	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
> >  			      viommu->last_domain, GFP_KERNEL);
> >  	if (ret < 0)
> > @@ -659,7 +669,7 @@ static int viommu_attach_dev(struct iommu_domain
> *domain, struct device *dev)
> >  	 * Properly initialize the domain now that we know which viommu
> >  	 * owns it.
> >  	 */
> > -	ret = viommu_domain_finalise(vdev->viommu, domain);
> > +	ret = viommu_domain_finalise(vdev, domain);
> >  	} else if (vdomain->viommu != vdev->viommu) {
> >  		dev_err(dev, "cannot attach to foreign vIOMMU\n");
> >  		ret = -EXDEV;
> >

^ permalink raw reply	[flat|nested] 9+ messages in thread
end of thread, other threads:[~2020-03-27 10:11 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-26 9:35 [PATCH v2 0/3] iommu/virtio: Misc fixes Jean-Philippe Brucker
2020-03-26 9:35 ` [PATCH v2 1/3] iommu/virtio: Fix sparse warning Jean-Philippe Brucker
[not found] ` <20200326093558.2641019-1-jean-philippe-QSEj5FYQhm4dnm+yROfE0A@public.gmane.org>
2020-03-26 9:35 ` [PATCH v2 2/3] iommu/virtio: Fix freeing of incomplete domains Jean-Philippe Brucker
2020-03-26 12:09 ` Robin Murphy
2020-03-27 10:11 ` [PATCH v2 0/3] iommu/virtio: Misc fixes Joerg Roedel
2020-03-26 9:35 ` [PATCH v2 3/3] iommu/virtio: Reject IOMMU page granule larger than PAGE_SIZE Jean-Philippe Brucker
2020-03-26 12:13 ` Robin Murphy
2020-03-26 17:41 ` Auger Eric
[not found] ` <04ed8a3e-5602-1a5f-1fa3-ac517fede0b1-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2020-03-27 5:48 ` [EXT] " Bharat Bhushan