* [PATCH v2 0/5] Convert virtio-iommu to domain_alloc_paging()
@ 2025-02-28  0:19 Jason Gunthorpe
  2025-02-28  0:19 ` [PATCH v2 1/5] iommu/virtio: Break out bypass identity support into a global static Jason Gunthorpe
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Jason Gunthorpe @ 2025-02-28  0:19 UTC (permalink / raw)
  To: iommu, Jean-Philippe Brucker, Joerg Roedel, Robin Murphy,
	virtualization, Will Deacon
  Cc: Eric Auger, patches

virtio-iommu and fsl_pamu are the only two drivers left that still
rely on ops.domain_alloc().

Update virtio-iommu to have a global static identity domain, implement
domain_alloc_paging(), and finalize its domain during allocation instead
of on first attach.

As virtio-iommu was the last real iommu driver using domain_alloc(),
update the core code to isolate the remaining support to fsl_pamu.

I tested this using qemu 7.0.0 and 9.2.0 on x86 with an mlx5 VFIO PCI
device. The kernel behaves the same after this series.
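
The exact qemu command line is not reproduced here; a rough sketch of this
kind of setup (machine type, memory size, VFIO host address and disk image
are all placeholders) would be:

 qemu-system-x86_64 -M q35 -enable-kvm -m 4G \
     -device virtio-iommu-pci \
     -device vfio-pci,host=0000:01:00.0 \
     -drive file=guest.img,format=qcow2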

However, there seem to be some unrelated qemu bugs. 7.0.0 does not have
working VIRTIO_IOMMU_F_BYPASS_CONFIG, and neither qemu version works with
an IDENTITY domain when !VIRTIO_IOMMU_F_BYPASS_CONFIG is used. It prints:

 qemu-system-x86_64: iommu has granularity incompatible with target AS
 qemu-system-x86_64: iommu map to non memory area 80000000
 qemu-system-x86_64: iommu map to non memory area c0000000
 qemu-system-x86_64: iommu map to non memory area e0000000
 qemu-system-x86_64: iommu map to non memory area f0000000
 qemu-system-x86_64: iommu map to non memory area f8000000
 qemu-system-x86_64: iommu map to non memory area fc000000
 qemu-system-x86_64: iommu map to non memory area fe000000
 qemu-system-x86_64: iommu map to non memory area fe800000
 qemu-system-x86_64: iommu map to non memory area 0
 qemu-system-x86_64: iommu map to non memory area fef00000
 qemu-system-x86_64: iommu map to non memory area ff000000
 qemu-system-x86_64: iommu has granularity incompatible with target AS
 qemu-system-x86_64: iommu map to non memory area 200000000
 qemu-system-x86_64: iommu map to non memory area 400000000
 qemu-system-x86_64: iommu map to non memory area 800000000
 qemu-system-x86_64: iommu map to non memory area 1000000000
 qemu-system-x86_64: iommu map to non memory area 2000000000
 qemu-system-x86_64: iommu map to non memory area 4000000000

v2:
 - Remove the viommu_ops.identity_domain = NULL in the last patch and
   re-constify the ops
 - Fix incorrect tracking in the identity domain attach
 - Christmas tree variable order and some other formatting
v1: https://patch.msgid.link/r/0-v1-91eed9c8014a+53a37-iommu_virtio_domains_jgg@nvidia.com

Cc: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Jason Gunthorpe (5):
  iommu/virtio: Break out bypass identity support into a global static
  iommu: Add domain_alloc_identity()
  iommu/virtio: Move to domain_alloc_paging()
  iommu: Do not call domain_alloc() in iommu_sva_domain_alloc()
  iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU

 drivers/iommu/iommu-sva.c    |  15 ++-
 drivers/iommu/iommu.c        |  16 +--
 drivers/iommu/virtio-iommu.c | 197 ++++++++++++++++++++---------------
 include/linux/iommu.h        |  10 +-
 4 files changed, 136 insertions(+), 102 deletions(-)


base-commit: 2014c95afecee3e76ca4a56956a936e23283f05b
-- 
2.43.0



* [PATCH v2 1/5] iommu/virtio: Break out bypass identity support into a global static
  2025-02-28  0:19 [PATCH v2 0/5] Convert virtio-iommu to domain_alloc_paging() Jason Gunthorpe
@ 2025-02-28  0:19 ` Jason Gunthorpe
  2025-03-03 15:22   ` Jean-Philippe Brucker
  2025-02-28  0:20 ` [PATCH v2 2/5] iommu: Add domain_alloc_identity() Jason Gunthorpe
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2025-02-28  0:19 UTC (permalink / raw)
  To: iommu, Jean-Philippe Brucker, Joerg Roedel, Robin Murphy,
	virtualization, Will Deacon
  Cc: Eric Auger, patches

To make way for a domain_alloc_paging conversion, add the typical global
static IDENTITY domain. This supports VMMs that implement
VIRTIO_IOMMU_F_BYPASS_CONFIG.

If the VMM does not support it, the domain_alloc path is still used,
which creates an IDENTITY domain out of a paging domain.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/virtio-iommu.c | 86 ++++++++++++++++++++++++++++--------
 1 file changed, 67 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index b85ce6310ddbda..55a2188197c621 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -48,6 +48,7 @@ struct viommu_dev {
 	u64				pgsize_bitmap;
 	u32				first_domain;
 	u32				last_domain;
+	u32				identity_domain_id;
 	/* Supported MAP flags */
 	u32				map_flags;
 	u32				probe_size;
@@ -70,7 +71,6 @@ struct viommu_domain {
 	struct rb_root_cached		mappings;
 
 	unsigned long			nr_endpoints;
-	bool				bypass;
 };
 
 struct viommu_endpoint {
@@ -305,6 +305,22 @@ static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
 	return ret;
 }
 
+static int viommu_send_attach_req(struct viommu_dev *viommu, struct device *dev,
+				  struct virtio_iommu_req_attach *req)
+{
+	int ret;
+	unsigned int i;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+		ret = viommu_send_req_sync(viommu, req, sizeof(*req));
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
 /*
  * viommu_add_mapping - add a mapping to the internal tree
  *
@@ -687,12 +703,6 @@ static int viommu_domain_finalise(struct viommu_endpoint *vdev,
 	vdomain->viommu		= viommu;
 
 	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
-		if (virtio_has_feature(viommu->vdev,
-				       VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
-			vdomain->bypass = true;
-			return 0;
-		}
-
 		ret = viommu_domain_map_identity(vdev, vdomain);
 		if (ret) {
 			ida_free(&viommu->domain_ids, vdomain->id);
@@ -719,10 +729,8 @@ static void viommu_domain_free(struct iommu_domain *domain)
 
 static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
-	int i;
 	int ret = 0;
 	struct virtio_iommu_req_attach req;
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
@@ -761,16 +769,9 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		.domain		= cpu_to_le32(vdomain->id),
 	};
 
-	if (vdomain->bypass)
-		req.flags |= cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS);
-
-	for (i = 0; i < fwspec->num_ids; i++) {
-		req.endpoint = cpu_to_le32(fwspec->ids[i]);
-
-		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
-		if (ret)
-			return ret;
-	}
+	ret = viommu_send_attach_req(vdomain->viommu, dev, &req);
+	if (ret)
+		return ret;
 
 	if (!vdomain->nr_endpoints) {
 		/*
@@ -788,6 +789,40 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	return 0;
 }
 
+static int viommu_attach_identity_domain(struct iommu_domain *domain,
+					 struct device *dev)
+{
+	int ret = 0;
+	struct virtio_iommu_req_attach req;
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdev->viommu->identity_domain_id),
+		.flags          = cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS),
+	};
+
+	ret = viommu_send_attach_req(vdev->viommu, dev, &req);
+	if (ret)
+		return ret;
+
+	if (vdev->vdomain)
+		vdev->vdomain->nr_endpoints--;
+	vdomain->nr_endpoints++;
+	vdev->vdomain = vdomain;
+	return 0;
+}
+
+static struct viommu_domain viommu_identity_domain = {
+	.domain = {
+		.type = IOMMU_DOMAIN_IDENTITY,
+		.ops = &(const struct iommu_domain_ops) {
+			.attach_dev = viommu_attach_identity_domain,
+		},
+	},
+};
+
 static void viommu_detach_dev(struct viommu_endpoint *vdev)
 {
 	int i;
@@ -1061,6 +1096,7 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
 }
 
 static struct iommu_ops viommu_ops = {
+	.identity_domain	= &viommu_identity_domain.domain,
 	.capable		= viommu_capable,
 	.domain_alloc		= viommu_domain_alloc,
 	.probe_device		= viommu_probe_device,
@@ -1184,6 +1220,18 @@ static int viommu_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_IOMMU_F_MMIO))
 		viommu->map_flags |= VIRTIO_IOMMU_MAP_F_MMIO;
 
+	/* Reserve an ID to use as the bypass domain */
+	if (virtio_has_feature(viommu->vdev, VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
+		viommu->identity_domain_id = viommu->first_domain;
+		viommu->first_domain++;
+	} else {
+		/*
+		 * Assume the VMM is sensible and it either supports bypass on
+		 * all instances or no instances.
+		 */
+		viommu_ops.identity_domain = NULL;
+	}
+
 	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
 
 	virtio_device_ready(vdev);
-- 
2.43.0



* [PATCH v2 2/5] iommu: Add domain_alloc_identity()
  2025-02-28  0:19 [PATCH v2 0/5] Convert virtio-iommu to domain_alloc_paging() Jason Gunthorpe
  2025-02-28  0:19 ` [PATCH v2 1/5] iommu/virtio: Break out bypass identity support into a global static Jason Gunthorpe
@ 2025-02-28  0:20 ` Jason Gunthorpe
  2025-02-28  1:59   ` Baolu Lu
  2025-02-28  0:20 ` [PATCH v2 3/5] iommu/virtio: Move to domain_alloc_paging() Jason Gunthorpe
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2025-02-28  0:20 UTC (permalink / raw)
  To: iommu, Jean-Philippe Brucker, Joerg Roedel, Robin Murphy,
	virtualization, Will Deacon
  Cc: Eric Auger, patches

virtio-iommu has a mode where the IDENTITY domain is actually a paging
domain with a manually created identity mapping covering some of the
system address space.

To support this, add a new domain_alloc_identity() op that accepts
the struct device so that virtio can allocate and fully finalize a
paging domain to return.
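
As a rough sketch of the intended use (illustrative only, not part of this
patch; the foo_* names are hypothetical), a driver that needs per-device
logic would implement something like:

static struct iommu_domain *foo_domain_alloc_identity(struct device *dev)
{
	struct foo_endpoint *fdev = dev_iommu_priv_get(dev);

	/* HW bypass is available for this device: use the global static */
	if (fdev->iommu->has_bypass)
		return &foo_identity_domain;

	/* Otherwise build a paging domain pre-populated with 1:1 mappings */
	return foo_alloc_identity_mapped_domain(dev);
}

where foo_identity_domain is the driver's global static identity domain and
foo_alloc_identity_mapped_domain() stands in for whatever builds the 1:1
mapped paging domain.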

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c        | 20 ++++++++++++--------
 drivers/iommu/virtio-iommu.c | 11 ++---------
 include/linux/iommu.h        |  4 ++++
 3 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 870c3cdbd0f622..ee33d26dfcd40d 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1595,15 +1595,19 @@ static struct iommu_domain *__iommu_alloc_identity_domain(struct device *dev)
 	if (ops->identity_domain)
 		return ops->identity_domain;
 
-	/* Older drivers create the identity domain via ops->domain_alloc() */
-	if (!ops->domain_alloc)
+	if (ops->domain_alloc_identity) {
+		domain = ops->domain_alloc_identity(dev);
+		if (IS_ERR(domain))
+			return domain;
+	} else if (ops->domain_alloc) {
+		domain = ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);
+		if (!domain)
+			return ERR_PTR(-ENOMEM);
+		if (IS_ERR(domain))
+			return domain;
+	} else {
 		return ERR_PTR(-EOPNOTSUPP);
-
-	domain = ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);
-	if (IS_ERR(domain))
-		return domain;
-	if (!domain)
-		return ERR_PTR(-ENOMEM);
+	}
 
 	iommu_domain_init(domain, IOMMU_DOMAIN_IDENTITY, ops);
 	return domain;
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 55a2188197c621..2287b89967067d 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -1007,7 +1007,7 @@ static void viommu_get_resv_regions(struct device *dev, struct list_head *head)
 	iommu_dma_get_resv_regions(dev, head);
 }
 
-static struct iommu_ops viommu_ops;
+static const struct iommu_ops viommu_ops;
 static struct virtio_driver virtio_iommu_drv;
 
 static int viommu_match_node(struct device *dev, const void *data)
@@ -1095,8 +1095,7 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
 	}
 }
 
-static struct iommu_ops viommu_ops = {
-	.identity_domain	= &viommu_identity_domain.domain,
+static const struct iommu_ops viommu_ops = {
 	.capable		= viommu_capable,
 	.domain_alloc		= viommu_domain_alloc,
 	.probe_device		= viommu_probe_device,
@@ -1224,12 +1223,6 @@ static int viommu_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(viommu->vdev, VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
 		viommu->identity_domain_id = viommu->first_domain;
 		viommu->first_domain++;
-	} else {
-		/*
-		 * Assume the VMM is sensible and it either supports bypass on
-		 * all instances or no instances.
-		 */
-		viommu_ops.identity_domain = NULL;
 	}
 
 	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 38c65e92ecd091..6389d59178ba3f 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -557,6 +557,9 @@ iommu_copy_struct_from_full_user_array(void *kdst, size_t kdst_entry_size,
  * @domain_alloc: allocate and return an iommu domain if success. Otherwise
  *                NULL is returned. The domain is not fully initialized until
  *                the caller iommu_domain_alloc() returns.
+ * @domain_alloc_identity: allocate an IDENTITY domain. Drivers should prefer to
+ *                         use identity_domain instead. This should only be used
+ *                         if dynamic logic is necessary.
  * @domain_alloc_paging_flags: Allocate an iommu domain corresponding to the
  *                     input parameters as defined in
  *                     include/uapi/linux/iommufd.h. The @user_data can be
@@ -615,6 +618,7 @@ struct iommu_ops {
 
 	/* Domain allocation and freeing by the iommu driver */
 	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
+	struct iommu_domain *(*domain_alloc_identity)(struct device *dev);
 	struct iommu_domain *(*domain_alloc_paging_flags)(
 		struct device *dev, u32 flags,
 		const struct iommu_user_data *user_data);
-- 
2.43.0



* [PATCH v2 3/5] iommu/virtio: Move to domain_alloc_paging()
  2025-02-28  0:19 [PATCH v2 0/5] Convert virtio-iommu to domain_alloc_paging() Jason Gunthorpe
  2025-02-28  0:19 ` [PATCH v2 1/5] iommu/virtio: Break out bypass identity support into a global static Jason Gunthorpe
  2025-02-28  0:20 ` [PATCH v2 2/5] iommu: Add domain_alloc_identity() Jason Gunthorpe
@ 2025-02-28  0:20 ` Jason Gunthorpe
  2025-03-03 15:21   ` Jean-Philippe Brucker
  2025-02-28  0:20 ` [PATCH v2 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc() Jason Gunthorpe
  2025-02-28  0:20 ` [PATCH v2 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU Jason Gunthorpe
  4 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2025-02-28  0:20 UTC (permalink / raw)
  To: iommu, Jean-Philippe Brucker, Joerg Roedel, Robin Murphy,
	virtualization, Will Deacon
  Cc: Eric Auger, patches

virtio has the complication that it sometimes wants to return a paging
domain for IDENTITY, which makes this conversion a little different from
that of other drivers.

Add a viommu_domain_alloc_paging() that combines viommu_domain_alloc() and
viommu_domain_finalise() to always return a fully initialized and
finalized paging domain.

Use viommu_domain_alloc_identity() to implement the special non-bypass
IDENTITY flow by calling viommu_domain_alloc_paging() then
viommu_domain_map_identity().

Remove support for deferred finalize and the vdomain->mutex.

Remove core support for domain_alloc() IDENTITY as virtio was the last
driver using it.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c        |   6 --
 drivers/iommu/virtio-iommu.c | 114 ++++++++++++++++-------------------
 2 files changed, 53 insertions(+), 67 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index ee33d26dfcd40d..73a05b34de4768 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1599,12 +1599,6 @@ static struct iommu_domain *__iommu_alloc_identity_domain(struct device *dev)
 		domain = ops->domain_alloc_identity(dev);
 		if (IS_ERR(domain))
 			return domain;
-	} else if (ops->domain_alloc) {
-		domain = ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);
-		if (!domain)
-			return ERR_PTR(-ENOMEM);
-		if (IS_ERR(domain))
-			return domain;
 	} else {
 		return ERR_PTR(-EOPNOTSUPP);
 	}
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 2287b89967067d..cdb8034ece9bde 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -63,7 +63,6 @@ struct viommu_mapping {
 struct viommu_domain {
 	struct iommu_domain		domain;
 	struct viommu_dev		*viommu;
-	struct mutex			mutex; /* protects viommu pointer */
 	unsigned int			id;
 	u32				map_flags;
 
@@ -97,6 +96,8 @@ struct viommu_event {
 	};
 };
 
+static struct viommu_domain viommu_identity_domain;
+
 #define to_viommu_domain(domain)	\
 	container_of(domain, struct viommu_domain, domain)
 
@@ -653,65 +654,45 @@ static void viommu_event_handler(struct virtqueue *vq)
 
 /* IOMMU API */
 
-static struct iommu_domain *viommu_domain_alloc(unsigned type)
+static struct iommu_domain *viommu_domain_alloc_paging(struct device *dev)
 {
-	struct viommu_domain *vdomain;
-
-	if (type != IOMMU_DOMAIN_UNMANAGED &&
-	    type != IOMMU_DOMAIN_DMA &&
-	    type != IOMMU_DOMAIN_IDENTITY)
-		return NULL;
-
-	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
-	if (!vdomain)
-		return NULL;
-
-	mutex_init(&vdomain->mutex);
-	spin_lock_init(&vdomain->mappings_lock);
-	vdomain->mappings = RB_ROOT_CACHED;
-
-	return &vdomain->domain;
-}
-
-static int viommu_domain_finalise(struct viommu_endpoint *vdev,
-				  struct iommu_domain *domain)
-{
-	int ret;
-	unsigned long viommu_page_size;
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
 	struct viommu_dev *viommu = vdev->viommu;
-	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	unsigned long viommu_page_size;
+	struct viommu_domain *vdomain;
+	int ret;
 
 	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
 	if (viommu_page_size > PAGE_SIZE) {
 		dev_err(vdev->dev,
 			"granule 0x%lx larger than system page size 0x%lx\n",
 			viommu_page_size, PAGE_SIZE);
-		return -ENODEV;
+		return ERR_PTR(-ENODEV);
 	}
 
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return ERR_PTR(-ENOMEM);
+
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+
 	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
 			      viommu->last_domain, GFP_KERNEL);
-	if (ret < 0)
-		return ret;
-
-	vdomain->id		= (unsigned int)ret;
-
-	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
-	domain->geometry	= viommu->geometry;
-
-	vdomain->map_flags	= viommu->map_flags;
-	vdomain->viommu		= viommu;
-
-	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
-		ret = viommu_domain_map_identity(vdev, vdomain);
-		if (ret) {
-			ida_free(&viommu->domain_ids, vdomain->id);
-			vdomain->viommu = NULL;
-			return ret;
-		}
+	if (ret < 0) {
+		kfree(vdomain);
+		return ERR_PTR(ret);
 	}
 
-	return 0;
+	vdomain->id = (unsigned int)ret;
+
+	vdomain->domain.pgsize_bitmap = viommu->pgsize_bitmap;
+	vdomain->domain.geometry = viommu->geometry;
+
+	vdomain->map_flags = viommu->map_flags;
+	vdomain->viommu = viommu;
+
+	return &vdomain->domain;
 }
 
 static void viommu_domain_free(struct iommu_domain *domain)
@@ -727,6 +708,28 @@ static void viommu_domain_free(struct iommu_domain *domain)
 	kfree(vdomain);
 }
 
+static struct iommu_domain *viommu_domain_alloc_identity(struct device *dev)
+{
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+	struct iommu_domain *domain;
+	int ret;
+
+	if (virtio_has_feature(vdev->viommu->vdev,
+			       VIRTIO_IOMMU_F_BYPASS_CONFIG))
+		return &viommu_identity_domain.domain;
+
+	domain = viommu_domain_alloc_paging(dev);
+	if (IS_ERR(domain))
+		return domain;
+
+	ret = viommu_domain_map_identity(vdev, to_viommu_domain(domain));
+	if (ret) {
+		viommu_domain_free(domain);
+		return ERR_PTR(ret);
+	}
+	return domain;
+}
+
 static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
 	int ret = 0;
@@ -734,20 +737,8 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
-	mutex_lock(&vdomain->mutex);
-	if (!vdomain->viommu) {
-		/*
-		 * Properly initialize the domain now that we know which viommu
-		 * owns it.
-		 */
-		ret = viommu_domain_finalise(vdev, domain);
-	} else if (vdomain->viommu != vdev->viommu) {
-		ret = -EINVAL;
-	}
-	mutex_unlock(&vdomain->mutex);
-
-	if (ret)
-		return ret;
+	if (vdomain->viommu != vdev->viommu)
+		return -EINVAL;
 
 	/*
 	 * In the virtio-iommu device, when attaching the endpoint to a new
@@ -1097,7 +1088,8 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
 
 static const struct iommu_ops viommu_ops = {
 	.capable		= viommu_capable,
-	.domain_alloc		= viommu_domain_alloc,
+	.domain_alloc_identity	= viommu_domain_alloc_identity,
+	.domain_alloc_paging	= viommu_domain_alloc_paging,
 	.probe_device		= viommu_probe_device,
 	.release_device		= viommu_release_device,
 	.device_group		= viommu_device_group,
-- 
2.43.0



* [PATCH v2 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc()
  2025-02-28  0:19 [PATCH v2 0/5] Convert virtio-iommu to domain_alloc_paging() Jason Gunthorpe
                   ` (2 preceding siblings ...)
  2025-02-28  0:20 ` [PATCH v2 3/5] iommu/virtio: Move to domain_alloc_paging() Jason Gunthorpe
@ 2025-02-28  0:20 ` Jason Gunthorpe
  2025-02-28  2:03   ` Baolu Lu
  2025-02-28  0:20 ` [PATCH v2 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU Jason Gunthorpe
  4 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2025-02-28  0:20 UTC (permalink / raw)
  To: iommu, Jean-Philippe Brucker, Joerg Roedel, Robin Murphy,
	virtualization, Will Deacon
  Cc: Eric Auger, patches

No driver implements SVA under domain_alloc() anymore; this is dead
code.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu-sva.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index 503c5d23c1ea27..b1cd32e665b61c 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -299,15 +299,12 @@ static struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
 	const struct iommu_ops *ops = dev_iommu_ops(dev);
 	struct iommu_domain *domain;
 
-	if (ops->domain_alloc_sva) {
-		domain = ops->domain_alloc_sva(dev, mm);
-		if (IS_ERR(domain))
-			return domain;
-	} else {
-		domain = ops->domain_alloc(IOMMU_DOMAIN_SVA);
-		if (!domain)
-			return ERR_PTR(-ENOMEM);
-	}
+	if (!ops->domain_alloc_sva)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	domain = ops->domain_alloc_sva(dev, mm);
+	if (IS_ERR(domain))
+		return domain;
 
 	domain->type = IOMMU_DOMAIN_SVA;
 	mmgrab(mm);
-- 
2.43.0



* [PATCH v2 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU
  2025-02-28  0:19 [PATCH v2 0/5] Convert virtio-iommu to domain_alloc_paging() Jason Gunthorpe
                   ` (3 preceding siblings ...)
  2025-02-28  0:20 ` [PATCH v2 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc() Jason Gunthorpe
@ 2025-02-28  0:20 ` Jason Gunthorpe
  2025-02-28  2:04   ` Baolu Lu
  4 siblings, 1 reply; 13+ messages in thread
From: Jason Gunthorpe @ 2025-02-28  0:20 UTC (permalink / raw)
  To: iommu, Jean-Philippe Brucker, Joerg Roedel, Robin Murphy,
	virtualization, Will Deacon
  Cc: Eric Auger, patches

fsl_pamu is the last user of domain_alloc(), and it is using it to create
something weird that doesn't really fit into the iommu subsystem
architecture. It is not a paging domain since it doesn't have any
map/unmap ops. It may be some special kind of identity domain.

For now, just leave it as is. Wrap its definition in CONFIG_FSL_PAMU to
discourage any new drivers from attempting to use it.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 2 ++
 include/linux/iommu.h | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 73a05b34de4768..b81a82df9fed25 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1987,8 +1987,10 @@ __iommu_paging_domain_alloc_flags(struct device *dev, unsigned int type,
 		domain = ops->domain_alloc_paging(dev);
 	else if (ops->domain_alloc_paging_flags)
 		domain = ops->domain_alloc_paging_flags(dev, flags, NULL);
+#if IS_ENABLED(CONFIG_FSL_PAMU)
 	else if (ops->domain_alloc && !flags)
 		domain = ops->domain_alloc(IOMMU_DOMAIN_UNMANAGED);
+#endif
 	else
 		return ERR_PTR(-EOPNOTSUPP);
 
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 6389d59178ba3f..55cc87bc62f5d0 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -554,9 +554,7 @@ iommu_copy_struct_from_full_user_array(void *kdst, size_t kdst_entry_size,
  *           op is allocated in the iommu driver and freed by the caller after
  *           use. The information type is one of enum iommu_hw_info_type defined
  *           in include/uapi/linux/iommufd.h.
- * @domain_alloc: allocate and return an iommu domain if success. Otherwise
- *                NULL is returned. The domain is not fully initialized until
- *                the caller iommu_domain_alloc() returns.
+ * @domain_alloc: Do not use in new drivers
  * @domain_alloc_identity: allocate an IDENTITY domain. Drivers should prefer to
  *                         use identity_domain instead. This should only be used
  *                         if dynamic logic is necessary.
@@ -617,7 +615,9 @@ struct iommu_ops {
 	void *(*hw_info)(struct device *dev, u32 *length, u32 *type);
 
 	/* Domain allocation and freeing by the iommu driver */
+#if IS_ENABLED(CONFIG_FSL_PAMU)
 	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
+#endif
 	struct iommu_domain *(*domain_alloc_identity)(struct device *dev);
 	struct iommu_domain *(*domain_alloc_paging_flags)(
 		struct device *dev, u32 flags,
-- 
2.43.0



* Re: [PATCH v2 2/5] iommu: Add domain_alloc_identity()
  2025-02-28  0:20 ` [PATCH v2 2/5] iommu: Add domain_alloc_identity() Jason Gunthorpe
@ 2025-02-28  1:59   ` Baolu Lu
  2025-02-28 14:31     ` Jason Gunthorpe
  0 siblings, 1 reply; 13+ messages in thread
From: Baolu Lu @ 2025-02-28  1:59 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Jean-Philippe Brucker, Joerg Roedel,
	Robin Murphy, virtualization, Will Deacon
  Cc: Eric Auger, patches

On 2/28/25 08:20, Jason Gunthorpe wrote:
> virtio-iommu has a mode where the IDENTITY domain is actually a paging
> domain with a manually created identity mapping covering some of the
> system address space.
> 
> To support this, add a new domain_alloc_identity() op that accepts
> the struct device so that virtio can allocate and fully finalize a
> paging domain to return.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   drivers/iommu/iommu.c        | 20 ++++++++++++--------
>   drivers/iommu/virtio-iommu.c | 11 ++---------
>   include/linux/iommu.h        |  4 ++++
>   3 files changed, 18 insertions(+), 17 deletions(-)
> 

[...]

> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 38c65e92ecd091..6389d59178ba3f 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -557,6 +557,9 @@ iommu_copy_struct_from_full_user_array(void *kdst, size_t kdst_entry_size,
>    * @domain_alloc: allocate and return an iommu domain if success. Otherwise
>    *                NULL is returned. The domain is not fully initialized until
>    *                the caller iommu_domain_alloc() returns.
> + * @domain_alloc_identity: allocate an IDENTITY domain. Drivers should prefer to
> + *                         use identity_domain instead. This should only be used
> + *                         if dynamic logic is necessary.
>    * @domain_alloc_paging_flags: Allocate an iommu domain corresponding to the
>    *                     input parameters as defined in
>    *                     include/uapi/linux/iommufd.h. The @user_data can be
> @@ -615,6 +618,7 @@ struct iommu_ops {
>   
>   	/* Domain allocation and freeing by the iommu driver */
>   	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
> +	struct iommu_domain *(*domain_alloc_identity)(struct device *dev);
>   	struct iommu_domain *(*domain_alloc_paging_flags)(
>   		struct device *dev, u32 flags,
>   		const struct iommu_user_data *user_data);

This seems to be a paging domain with identity mappings for some
specific ranges. We make it an IDENTITY domain type because it should be
static and drivers are not allowed to change the mappings once it is
created.

So perhaps name the callback domain_alloc_paging_identity or
domain_alloc_paging_static?

Thanks,
baolu


* Re: [PATCH v2 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc()
  2025-02-28  0:20 ` [PATCH v2 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc() Jason Gunthorpe
@ 2025-02-28  2:03   ` Baolu Lu
  0 siblings, 0 replies; 13+ messages in thread
From: Baolu Lu @ 2025-02-28  2:03 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Jean-Philippe Brucker, Joerg Roedel,
	Robin Murphy, virtualization, Will Deacon
  Cc: Eric Auger, patches

On 2/28/25 08:20, Jason Gunthorpe wrote:
> No driver implements SVA under domain_alloc() anymore; this is dead
> code.
> 
> Signed-off-by: Jason Gunthorpe<jgg@nvidia.com>

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>


* Re: [PATCH v2 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU
  2025-02-28  0:20 ` [PATCH v2 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU Jason Gunthorpe
@ 2025-02-28  2:04   ` Baolu Lu
  0 siblings, 0 replies; 13+ messages in thread
From: Baolu Lu @ 2025-02-28  2:04 UTC (permalink / raw)
  To: Jason Gunthorpe, iommu, Jean-Philippe Brucker, Joerg Roedel,
	Robin Murphy, virtualization, Will Deacon
  Cc: Eric Auger, patches

On 2/28/25 08:20, Jason Gunthorpe wrote:
> fsl_pamu is the last user of domain_alloc(), and it is using it to create
> something weird that doesn't really fit into the iommu subsystem
> architecture. It is not a paging domain since it doesn't have any
> map/unmap ops. It may be some special kind of identity domain.
> 
> For now, just leave it as is. Wrap its definition in CONFIG_FSL_PAMU to
> discourage any new drivers from attempting to use it.
> 
> Signed-off-by: Jason Gunthorpe<jgg@nvidia.com>

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>


* Re: [PATCH v2 2/5] iommu: Add domain_alloc_identity()
  2025-02-28  1:59   ` Baolu Lu
@ 2025-02-28 14:31     ` Jason Gunthorpe
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Gunthorpe @ 2025-02-28 14:31 UTC (permalink / raw)
  To: Baolu Lu
  Cc: iommu, Jean-Philippe Brucker, Joerg Roedel, Robin Murphy,
	virtualization, Will Deacon, Eric Auger, patches

On Fri, Feb 28, 2025 at 09:59:53AM +0800, Baolu Lu wrote:

> This seems to be a paging domain with identity mappings for some
> specific ranges. 

Sometimes it's the global static too.

> So perhaps name the callback domain_alloc_paging_identity or
> domain_alloc_paging_static?

s390 and dart look like they will use this function to conditionally
support identity with the static domain, so I think we should not add
paging to the name.
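
A hypothetical sketch of that pattern (the bar_* names are made up), where
the op only gates whether the single static identity domain may be used:

static struct iommu_domain *bar_domain_alloc_identity(struct device *dev)
{
	/* Per-device/per-instance check, however the driver determines it */
	if (!bar_device_supports_bypass(dev))
		return ERR_PTR(-EOPNOTSUPP);

	return &bar_identity_domain;
}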

Jason


* Re: [PATCH v2 3/5] iommu/virtio: Move to domain_alloc_paging()
  2025-02-28  0:20 ` [PATCH v2 3/5] iommu/virtio: Move to domain_alloc_paging() Jason Gunthorpe
@ 2025-03-03 15:21   ` Jean-Philippe Brucker
  2025-03-03 17:02     ` Jason Gunthorpe
  0 siblings, 1 reply; 13+ messages in thread
From: Jean-Philippe Brucker @ 2025-03-03 15:21 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon,
	Eric Auger, patches

On Thu, Feb 27, 2025 at 08:20:01PM -0400, Jason Gunthorpe wrote:
> virtio has the complication that it sometimes wants to return a paging
> domain for IDENTITY, which makes this conversion a little different from
> that of other drivers.
> 
> Add a viommu_domain_alloc_paging() that combines viommu_domain_alloc() and
> viommu_domain_finalise() to always return a fully initialized and
> finalized paging domain.
> 
> Use viommu_domain_alloc_identity() to implement the special non-bypass
> IDENTITY flow by calling viommu_domain_alloc_paging() then
> viommu_domain_map_identity().
> 
> Remove support for deferred finalize and the vdomain->mutex.
> 
> Remove core support for domain_alloc() IDENTITY as virtio was the last
> driver using it.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>

And my tests still pass (after fixing the build issue on patch 2)

> ---
>  drivers/iommu/iommu.c        |   6 --
>  drivers/iommu/virtio-iommu.c | 114 ++++++++++++++++-------------------
>  2 files changed, 53 insertions(+), 67 deletions(-)
> 
> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
> index ee33d26dfcd40d..73a05b34de4768 100644
> --- a/drivers/iommu/iommu.c
> +++ b/drivers/iommu/iommu.c
> @@ -1599,12 +1599,6 @@ static struct iommu_domain *__iommu_alloc_identity_domain(struct device *dev)
>  		domain = ops->domain_alloc_identity(dev);
>  		if (IS_ERR(domain))
>  			return domain;
> -	} else if (ops->domain_alloc) {
> -		domain = ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);
> -		if (!domain)
> -			return ERR_PTR(-ENOMEM);
> -		if (IS_ERR(domain))
> -			return domain;
>  	} else {
>  		return ERR_PTR(-EOPNOTSUPP);
>  	}
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index 2287b89967067d..cdb8034ece9bde 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -63,7 +63,6 @@ struct viommu_mapping {
>  struct viommu_domain {
>  	struct iommu_domain		domain;
>  	struct viommu_dev		*viommu;
> -	struct mutex			mutex; /* protects viommu pointer */
>  	unsigned int			id;
>  	u32				map_flags;
>  
> @@ -97,6 +96,8 @@ struct viommu_event {
>  	};
>  };
>  
> +static struct viommu_domain viommu_identity_domain;
> +
>  #define to_viommu_domain(domain)	\
>  	container_of(domain, struct viommu_domain, domain)
>  
> @@ -653,65 +654,45 @@ static void viommu_event_handler(struct virtqueue *vq)
>  
>  /* IOMMU API */
>  
> -static struct iommu_domain *viommu_domain_alloc(unsigned type)
> +static struct iommu_domain *viommu_domain_alloc_paging(struct device *dev)
>  {
> -	struct viommu_domain *vdomain;
> -
> -	if (type != IOMMU_DOMAIN_UNMANAGED &&
> -	    type != IOMMU_DOMAIN_DMA &&
> -	    type != IOMMU_DOMAIN_IDENTITY)
> -		return NULL;
> -
> -	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> -	if (!vdomain)
> -		return NULL;
> -
> -	mutex_init(&vdomain->mutex);
> -	spin_lock_init(&vdomain->mappings_lock);
> -	vdomain->mappings = RB_ROOT_CACHED;
> -
> -	return &vdomain->domain;
> -}
> -
> -static int viommu_domain_finalise(struct viommu_endpoint *vdev,
> -				  struct iommu_domain *domain)
> -{
> -	int ret;
> -	unsigned long viommu_page_size;
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
>  	struct viommu_dev *viommu = vdev->viommu;
> -	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +	unsigned long viommu_page_size;
> +	struct viommu_domain *vdomain;
> +	int ret;
>  
>  	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
>  	if (viommu_page_size > PAGE_SIZE) {
>  		dev_err(vdev->dev,
>  			"granule 0x%lx larger than system page size 0x%lx\n",
>  			viommu_page_size, PAGE_SIZE);
> -		return -ENODEV;
> +		return ERR_PTR(-ENODEV);
>  	}
>  
> +	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
> +	if (!vdomain)
> +		return ERR_PTR(-ENOMEM);
> +
> +	spin_lock_init(&vdomain->mappings_lock);
> +	vdomain->mappings = RB_ROOT_CACHED;
> +
>  	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
>  			      viommu->last_domain, GFP_KERNEL);
> -	if (ret < 0)
> -		return ret;
> -
> -	vdomain->id		= (unsigned int)ret;
> -
> -	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
> -	domain->geometry	= viommu->geometry;
> -
> -	vdomain->map_flags	= viommu->map_flags;
> -	vdomain->viommu		= viommu;
> -
> -	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> -		ret = viommu_domain_map_identity(vdev, vdomain);
> -		if (ret) {
> -			ida_free(&viommu->domain_ids, vdomain->id);
> -			vdomain->viommu = NULL;
> -			return ret;
> -		}
> +	if (ret < 0) {
> +		kfree(vdomain);
> +		return ERR_PTR(ret);
>  	}
>  
> -	return 0;
> +	vdomain->id = (unsigned int)ret;
> +
> +	vdomain->domain.pgsize_bitmap = viommu->pgsize_bitmap;
> +	vdomain->domain.geometry = viommu->geometry;
> +
> +	vdomain->map_flags = viommu->map_flags;
> +	vdomain->viommu = viommu;
> +
> +	return &vdomain->domain;
>  }
>  
>  static void viommu_domain_free(struct iommu_domain *domain)
> @@ -727,6 +708,28 @@ static void viommu_domain_free(struct iommu_domain *domain)
>  	kfree(vdomain);
>  }
>  
> +static struct iommu_domain *viommu_domain_alloc_identity(struct device *dev)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +	struct iommu_domain *domain;
> +	int ret;
> +
> +	if (virtio_has_feature(vdev->viommu->vdev,
> +			       VIRTIO_IOMMU_F_BYPASS_CONFIG))
> +		return &viommu_identity_domain.domain;
> +
> +	domain = viommu_domain_alloc_paging(dev);
> +	if (IS_ERR(domain))
> +		return domain;
> +
> +	ret = viommu_domain_map_identity(vdev, to_viommu_domain(domain));
> +	if (ret) {
> +		viommu_domain_free(domain);
> +		return ERR_PTR(ret);
> +	}
> +	return domain;
> +}
> +
>  static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  {
>  	int ret = 0;
> @@ -734,20 +737,8 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
>  	struct viommu_domain *vdomain = to_viommu_domain(domain);
>  
> -	mutex_lock(&vdomain->mutex);
> -	if (!vdomain->viommu) {
> -		/*
> -		 * Properly initialize the domain now that we know which viommu
> -		 * owns it.
> -		 */
> -		ret = viommu_domain_finalise(vdev, domain);
> -	} else if (vdomain->viommu != vdev->viommu) {
> -		ret = -EINVAL;
> -	}
> -	mutex_unlock(&vdomain->mutex);
> -
> -	if (ret)
> -		return ret;
> +	if (vdomain->viommu != vdev->viommu)
> +		return -EINVAL;
>  
>  	/*
>  	 * In the virtio-iommu device, when attaching the endpoint to a new
> @@ -1097,7 +1088,8 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
>  
>  static const struct iommu_ops viommu_ops = {
>  	.capable		= viommu_capable,
> -	.domain_alloc		= viommu_domain_alloc,
> +	.domain_alloc_identity	= viommu_domain_alloc_identity,
> +	.domain_alloc_paging	= viommu_domain_alloc_paging,
>  	.probe_device		= viommu_probe_device,
>  	.release_device		= viommu_release_device,
>  	.device_group		= viommu_device_group,
> -- 
> 2.43.0
> 
> 


* Re: [PATCH v2 1/5] iommu/virtio: Break out bypass identity support into a global static
  2025-02-28  0:19 ` [PATCH v2 1/5] iommu/virtio: Break out bypass identity support into a global static Jason Gunthorpe
@ 2025-03-03 15:22   ` Jean-Philippe Brucker
  0 siblings, 0 replies; 13+ messages in thread
From: Jean-Philippe Brucker @ 2025-03-03 15:22 UTC (permalink / raw)
  To: Jason Gunthorpe
  Cc: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon,
	Eric Auger, patches

On Thu, Feb 27, 2025 at 08:19:59PM -0400, Jason Gunthorpe wrote:
> To make way for a domain_alloc_paging conversion, add the typical global
> static IDENTITY domain. This supports VMMs that implement
> VIRTIO_IOMMU_F_BYPASS_CONFIG.
> 
> If the VMM does not support it, the domain_alloc path is still used,
> which creates an IDENTITY domain out of a paging domain.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>

> ---
>  drivers/iommu/virtio-iommu.c | 86 ++++++++++++++++++++++++++++--------
>  1 file changed, 67 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
> index b85ce6310ddbda..55a2188197c621 100644
> --- a/drivers/iommu/virtio-iommu.c
> +++ b/drivers/iommu/virtio-iommu.c
> @@ -48,6 +48,7 @@ struct viommu_dev {
>  	u64				pgsize_bitmap;
>  	u32				first_domain;
>  	u32				last_domain;
> +	u32				identity_domain_id;
>  	/* Supported MAP flags */
>  	u32				map_flags;
>  	u32				probe_size;
> @@ -70,7 +71,6 @@ struct viommu_domain {
>  	struct rb_root_cached		mappings;
>  
>  	unsigned long			nr_endpoints;
> -	bool				bypass;
>  };
>  
>  struct viommu_endpoint {
> @@ -305,6 +305,22 @@ static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
>  	return ret;
>  }
>  
> +static int viommu_send_attach_req(struct viommu_dev *viommu, struct device *dev,
> +				  struct virtio_iommu_req_attach *req)
> +{
> +	int ret;
> +	unsigned int i;
> +	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
> +
> +	for (i = 0; i < fwspec->num_ids; i++) {
> +		req->endpoint = cpu_to_le32(fwspec->ids[i]);
> +		ret = viommu_send_req_sync(viommu, req, sizeof(*req));
> +		if (ret)
> +			return ret;
> +	}
> +	return 0;
> +}
> +
>  /*
>   * viommu_add_mapping - add a mapping to the internal tree
>   *
> @@ -687,12 +703,6 @@ static int viommu_domain_finalise(struct viommu_endpoint *vdev,
>  	vdomain->viommu		= viommu;
>  
>  	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
> -		if (virtio_has_feature(viommu->vdev,
> -				       VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
> -			vdomain->bypass = true;
> -			return 0;
> -		}
> -
>  		ret = viommu_domain_map_identity(vdev, vdomain);
>  		if (ret) {
>  			ida_free(&viommu->domain_ids, vdomain->id);
> @@ -719,10 +729,8 @@ static void viommu_domain_free(struct iommu_domain *domain)
>  
>  static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  {
> -	int i;
>  	int ret = 0;
>  	struct virtio_iommu_req_attach req;
> -	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
>  	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
>  	struct viommu_domain *vdomain = to_viommu_domain(domain);
>  
> @@ -761,16 +769,9 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  		.domain		= cpu_to_le32(vdomain->id),
>  	};
>  
> -	if (vdomain->bypass)
> -		req.flags |= cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS);
> -
> -	for (i = 0; i < fwspec->num_ids; i++) {
> -		req.endpoint = cpu_to_le32(fwspec->ids[i]);
> -
> -		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
> -		if (ret)
> -			return ret;
> -	}
> +	ret = viommu_send_attach_req(vdomain->viommu, dev, &req);
> +	if (ret)
> +		return ret;
>  
>  	if (!vdomain->nr_endpoints) {
>  		/*
> @@ -788,6 +789,40 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
>  	return 0;
>  }
>  
> +static int viommu_attach_identity_domain(struct iommu_domain *domain,
> +					 struct device *dev)
> +{
> +	int ret = 0;
> +	struct virtio_iommu_req_attach req;
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +	struct viommu_domain *vdomain = to_viommu_domain(domain);
> +
> +	req = (struct virtio_iommu_req_attach) {
> +		.head.type	= VIRTIO_IOMMU_T_ATTACH,
> +		.domain		= cpu_to_le32(vdev->viommu->identity_domain_id),
> +		.flags          = cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS),
> +	};
> +
> +	ret = viommu_send_attach_req(vdev->viommu, dev, &req);
> +	if (ret)
> +		return ret;
> +
> +	if (vdev->vdomain)
> +		vdev->vdomain->nr_endpoints--;
> +	vdomain->nr_endpoints++;
> +	vdev->vdomain = vdomain;
> +	return 0;
> +}
> +
> +static struct viommu_domain viommu_identity_domain = {
> +	.domain = {
> +		.type = IOMMU_DOMAIN_IDENTITY,
> +		.ops = &(const struct iommu_domain_ops) {
> +			.attach_dev = viommu_attach_identity_domain,
> +		},
> +	},
> +};
> +
>  static void viommu_detach_dev(struct viommu_endpoint *vdev)
>  {
>  	int i;
> @@ -1061,6 +1096,7 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
>  }
>  
>  static struct iommu_ops viommu_ops = {
> +	.identity_domain	= &viommu_identity_domain.domain,
>  	.capable		= viommu_capable,
>  	.domain_alloc		= viommu_domain_alloc,
>  	.probe_device		= viommu_probe_device,
> @@ -1184,6 +1220,18 @@ static int viommu_probe(struct virtio_device *vdev)
>  	if (virtio_has_feature(vdev, VIRTIO_IOMMU_F_MMIO))
>  		viommu->map_flags |= VIRTIO_IOMMU_MAP_F_MMIO;
>  
> +	/* Reserve an ID to use as the bypass domain */
> +	if (virtio_has_feature(viommu->vdev, VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
> +		viommu->identity_domain_id = viommu->first_domain;
> +		viommu->first_domain++;
> +	} else {
> +		/*
> +		 * Assume the VMM is sensible and it either supports bypass on
> +		 * all instances or no instances.
> +		 */
> +		viommu_ops.identity_domain = NULL;
> +	}
> +
>  	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
>  
>  	virtio_device_ready(vdev);
> -- 
> 2.43.0
> 
> 


* Re: [PATCH v2 3/5] iommu/virtio: Move to domain_alloc_paging()
  2025-03-03 15:21   ` Jean-Philippe Brucker
@ 2025-03-03 17:02     ` Jason Gunthorpe
  0 siblings, 0 replies; 13+ messages in thread
From: Jason Gunthorpe @ 2025-03-03 17:02 UTC (permalink / raw)
  To: Jean-Philippe Brucker
  Cc: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon,
	Eric Auger, patches

On Mon, Mar 03, 2025 at 03:21:43PM +0000, Jean-Philippe Brucker wrote:
> On Thu, Feb 27, 2025 at 08:20:01PM -0400, Jason Gunthorpe wrote:
> > virtio has the complication that it sometimes wants to return a paging
> > domain for IDENTITY, which makes this conversion a little different from
> > that of other drivers.
> > 
> > Add a viommu_domain_alloc_paging() that combines viommu_domain_alloc() and
> > viommu_domain_finalise() to always return a fully initialized and
> > finalized paging domain.
> > 
> > Use viommu_domain_alloc_identity() to implement the special non-bypass
> > IDENTITY flow by calling viommu_domain_alloc_paging() then
> > viommu_domain_map_identity().
> > 
> > Remove support for deferred finalize and the vdomain->mutex.
> > 
> > Remove core support for domain_alloc() IDENTITY as virtio was the last
> > driver using it.
> > 
> > Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> 
> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> 
> And my tests still pass (after fixing the build issue on patch 2)

Yeah, my mistake, my tree has a different fix for that in a followup
series :\ I will resend it

Thanks,
Jason

