* [PATCH v4 0/5] Convert virtio-iommu to domain_alloc_paging()
From: Jason Gunthorpe @ 2025-04-08 16:35 UTC
  To: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker, patches

virtio-iommu and fsl_pamu are the only two drivers left that still
rely on ops.domain_alloc().

Update virtio-iommu to have a global static identity domain, implement
domain_alloc_paging(), and finalize its domain during allocation instead
of on first attach.

As virtio-iommu was the last real iommu driver using domain_alloc(),
update the core code to isolate the remaining use to fsl_pamu.
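
For context, the two allocation styles differ roughly as follows; this is
a sketch of the struct iommu_ops members involved, not code from this
series:

	/* Legacy: one callback multiplexing every domain type, no device */
	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);

	/* Modern: per-type ops that see the device, so the domain can be
	 * fully finalized at allocation time
	 */
	struct iommu_domain *(*domain_alloc_paging)(struct device *dev);
	struct iommu_domain *identity_domain;	/* usually a global static */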

I tested this using qemu x86 7.0.0 and 9.2.0 with an mlx5 VFIO PCI
device. The kernel behaves the same after this series.

However, there seem to be some unrelated qemu bugs. 7.0.0 does not have a
working VIRTIO_IOMMU_F_BYPASS_CONFIG, and neither qemu version works with
an IDENTITY domain under !VIRTIO_IOMMU_F_BYPASS_CONFIG. It prints:

 qemu-system-x86_64: iommu has granularity incompatible with target AS
 qemu-system-x86_64: iommu map to non memory area 80000000
 qemu-system-x86_64: iommu map to non memory area c0000000
 qemu-system-x86_64: iommu map to non memory area e0000000
 qemu-system-x86_64: iommu map to non memory area f0000000
 qemu-system-x86_64: iommu map to non memory area f8000000
 qemu-system-x86_64: iommu map to non memory area fc000000
 qemu-system-x86_64: iommu map to non memory area fe000000
 qemu-system-x86_64: iommu map to non memory area fe800000
 qemu-system-x86_64: iommu map to non memory area 0
 qemu-system-x86_64: iommu map to non memory area fef00000
 qemu-system-x86_64: iommu map to non memory area ff000000
 qemu-system-x86_64: iommu has granularity incompatible with target AS
 qemu-system-x86_64: iommu map to non memory area 200000000
 qemu-system-x86_64: iommu map to non memory area 400000000
 qemu-system-x86_64: iommu map to non memory area 800000000
 qemu-system-x86_64: iommu map to non memory area 1000000000
 qemu-system-x86_64: iommu map to non memory area 2000000000
 qemu-system-x86_64: iommu map to non memory area 4000000000
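
For reference, the qemu invocation was along these lines (reconstructed
for illustration, not the exact command used; the VFIO BDF is a
placeholder):

 qemu-system-x86_64 -machine q35 \
	-device virtio-iommu-pci \
	-device vfio-pci,host=0000:01:00.0 ...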

v4:
 - Rebase on v6.15-rc1
v3: https://patch.msgid.link/r/0-v3-e005a650174b+53def-iommu_virtio_domains_jgg@nvidia.com
 - Fix rebasing error
 - Fix an incorrect const removal that came from another series
v2: https://patch.msgid.link/r/0-v2-a3b72fd4b99a+1b6-iommu_virtio_domains_jgg@nvidia.com
 - Remove the viommu_ops.identity_domain = NULL in the last patch and
   re-constify the ops
 - Fix incorrect tracking in the identity domain attach
 - Christmas tree variable order and some other formatting
v1: https://patch.msgid.link/r/0-v1-91eed9c8014a+53a37-iommu_virtio_domains_jgg@nvidia.com

Jason Gunthorpe (5):
  iommu/virtio: Break out bypass identity support into a global static
  iommu: Add domain_alloc_identity()
  iommu/virtio: Move to domain_alloc_paging()
  iommu: Do not call domain_alloc() in iommu_sva_domain_alloc()
  iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU

 drivers/iommu/iommu-sva.c    |  15 ++-
 drivers/iommu/iommu.c        |  16 +--
 drivers/iommu/virtio-iommu.c | 193 ++++++++++++++++++++---------------
 include/linux/iommu.h        |  10 +-
 4 files changed, 134 insertions(+), 100 deletions(-)


base-commit: 0af2f6be1b4281385b618cb86ad946eded089ac8
-- 
2.43.0



* [PATCH v4 1/5] iommu/virtio: Break out bypass identity support into a global static
From: Jason Gunthorpe @ 2025-04-08 16:35 UTC
  To: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker, patches

To make way for a domain_alloc_paging() conversion, add the typical global
static IDENTITY domain. This supports VMMs that implement
VIRTIO_IOMMU_F_BYPASS_CONFIG.

If the VMM does not support it then the domain_alloc() path is still used,
which creates an IDENTITY domain out of a paging domain.
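
With this in place the core's __iommu_alloc_identity_domain() effectively
picks between the two paths, roughly (a simplified sketch of the core
logic, not a verbatim quote):

	if (ops->identity_domain)	/* VIRTIO_IOMMU_F_BYPASS_CONFIG present */
		return ops->identity_domain;
	/* otherwise fall back to the legacy path */
	return ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);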

Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/virtio-iommu.c | 86 ++++++++++++++++++++++++++++--------
 1 file changed, 67 insertions(+), 19 deletions(-)

diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index b85ce6310ddbda..55a2188197c621 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -48,6 +48,7 @@ struct viommu_dev {
 	u64				pgsize_bitmap;
 	u32				first_domain;
 	u32				last_domain;
+	u32				identity_domain_id;
 	/* Supported MAP flags */
 	u32				map_flags;
 	u32				probe_size;
@@ -70,7 +71,6 @@ struct viommu_domain {
 	struct rb_root_cached		mappings;
 
 	unsigned long			nr_endpoints;
-	bool				bypass;
 };
 
 struct viommu_endpoint {
@@ -305,6 +305,22 @@ static int viommu_send_req_sync(struct viommu_dev *viommu, void *buf,
 	return ret;
 }
 
+static int viommu_send_attach_req(struct viommu_dev *viommu, struct device *dev,
+				  struct virtio_iommu_req_attach *req)
+{
+	int ret;
+	unsigned int i;
+	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+	for (i = 0; i < fwspec->num_ids; i++) {
+		req->endpoint = cpu_to_le32(fwspec->ids[i]);
+		ret = viommu_send_req_sync(viommu, req, sizeof(*req));
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
 /*
  * viommu_add_mapping - add a mapping to the internal tree
  *
@@ -687,12 +703,6 @@ static int viommu_domain_finalise(struct viommu_endpoint *vdev,
 	vdomain->viommu		= viommu;
 
 	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
-		if (virtio_has_feature(viommu->vdev,
-				       VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
-			vdomain->bypass = true;
-			return 0;
-		}
-
 		ret = viommu_domain_map_identity(vdev, vdomain);
 		if (ret) {
 			ida_free(&viommu->domain_ids, vdomain->id);
@@ -719,10 +729,8 @@ static void viommu_domain_free(struct iommu_domain *domain)
 
 static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
-	int i;
 	int ret = 0;
 	struct virtio_iommu_req_attach req;
-	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
@@ -761,16 +769,9 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 		.domain		= cpu_to_le32(vdomain->id),
 	};
 
-	if (vdomain->bypass)
-		req.flags |= cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS);
-
-	for (i = 0; i < fwspec->num_ids; i++) {
-		req.endpoint = cpu_to_le32(fwspec->ids[i]);
-
-		ret = viommu_send_req_sync(vdomain->viommu, &req, sizeof(req));
-		if (ret)
-			return ret;
-	}
+	ret = viommu_send_attach_req(vdomain->viommu, dev, &req);
+	if (ret)
+		return ret;
 
 	if (!vdomain->nr_endpoints) {
 		/*
@@ -788,6 +789,40 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	return 0;
 }
 
+static int viommu_attach_identity_domain(struct iommu_domain *domain,
+					 struct device *dev)
+{
+	int ret = 0;
+	struct virtio_iommu_req_attach req;
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+	struct viommu_domain *vdomain = to_viommu_domain(domain);
+
+	req = (struct virtio_iommu_req_attach) {
+		.head.type	= VIRTIO_IOMMU_T_ATTACH,
+		.domain		= cpu_to_le32(vdev->viommu->identity_domain_id),
+		.flags          = cpu_to_le32(VIRTIO_IOMMU_ATTACH_F_BYPASS),
+	};
+
+	ret = viommu_send_attach_req(vdev->viommu, dev, &req);
+	if (ret)
+		return ret;
+
+	if (vdev->vdomain)
+		vdev->vdomain->nr_endpoints--;
+	vdomain->nr_endpoints++;
+	vdev->vdomain = vdomain;
+	return 0;
+}
+
+static struct viommu_domain viommu_identity_domain = {
+	.domain = {
+		.type = IOMMU_DOMAIN_IDENTITY,
+		.ops = &(const struct iommu_domain_ops) {
+			.attach_dev = viommu_attach_identity_domain,
+		},
+	},
+};
+
 static void viommu_detach_dev(struct viommu_endpoint *vdev)
 {
 	int i;
@@ -1061,6 +1096,7 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
 }
 
 static struct iommu_ops viommu_ops = {
+	.identity_domain	= &viommu_identity_domain.domain,
 	.capable		= viommu_capable,
 	.domain_alloc		= viommu_domain_alloc,
 	.probe_device		= viommu_probe_device,
@@ -1184,6 +1220,18 @@ static int viommu_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(vdev, VIRTIO_IOMMU_F_MMIO))
 		viommu->map_flags |= VIRTIO_IOMMU_MAP_F_MMIO;
 
+	/* Reserve an ID to use as the bypass domain */
+	if (virtio_has_feature(viommu->vdev, VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
+		viommu->identity_domain_id = viommu->first_domain;
+		viommu->first_domain++;
+	} else {
+		/*
+		 * Assume the VMM is sensible and it either supports bypass on
+		 * all instances or no instances.
+		 */
+		viommu_ops.identity_domain = NULL;
+	}
+
 	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
 
 	virtio_device_ready(vdev);
-- 
2.43.0



* [PATCH v4 2/5] iommu: Add domain_alloc_identity()
From: Jason Gunthorpe @ 2025-04-08 16:35 UTC
  To: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker, patches

virtio-iommu has a mode where the IDENTITY domain is actually a paging
domain with a manually created identity mapping covering some of the
system address space.

To support this, add a new domain_alloc_identity() op that accepts the
struct device so that virtio-iommu can allocate and fully finalize a
paging domain to return.
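
The op can also serve drivers that want runtime detection rather than
emulation; a hypothetical sketch (every name below is invented for
illustration):

	static struct iommu_domain *my_domain_alloc_identity(struct device *dev)
	{
		/* hypothetical helper: does the HW have a bypass mode? */
		if (my_hw_supports_bypass(dev))
			return &my_identity_domain;	/* global static */
		/* hypothetical fallback: identity emulated by a paging domain */
		return my_alloc_identity_paging(dev);
	}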

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 20 ++++++++++++--------
 include/linux/iommu.h |  4 ++++
 2 files changed, 16 insertions(+), 8 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index c8033ca6637771..a61cceb706ed43 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1626,15 +1626,19 @@ static struct iommu_domain *__iommu_alloc_identity_domain(struct device *dev)
 	if (ops->identity_domain)
 		return ops->identity_domain;
 
-	/* Older drivers create the identity domain via ops->domain_alloc() */
-	if (!ops->domain_alloc)
+	if (ops->domain_alloc_identity) {
+		domain = ops->domain_alloc_identity(dev);
+		if (IS_ERR(domain))
+			return domain;
+	} else if (ops->domain_alloc) {
+		domain = ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);
+		if (!domain)
+			return ERR_PTR(-ENOMEM);
+		if (IS_ERR(domain))
+			return domain;
+	} else {
 		return ERR_PTR(-EOPNOTSUPP);
-
-	domain = ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);
-	if (IS_ERR(domain))
-		return domain;
-	if (!domain)
-		return ERR_PTR(-ENOMEM);
+	}
 
 	iommu_domain_init(domain, IOMMU_DOMAIN_IDENTITY, ops);
 	return domain;
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index ccce8a751e2a55..0f7544b4da9e93 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -572,6 +572,9 @@ iommu_copy_struct_from_full_user_array(void *kdst, size_t kdst_entry_size,
  * @domain_alloc: allocate and return an iommu domain if success. Otherwise
  *                NULL is returned. The domain is not fully initialized until
  *                the caller iommu_domain_alloc() returns.
+ * @domain_alloc_identity: allocate an IDENTITY domain. Drivers should prefer to
+ *                         use identity_domain instead. This should only be used
+ *                         if dynamic logic is necessary.
  * @domain_alloc_paging_flags: Allocate an iommu domain corresponding to the
  *                     input parameters as defined in
  *                     include/uapi/linux/iommufd.h. The @user_data can be
@@ -630,6 +633,7 @@ struct iommu_ops {
 
 	/* Domain allocation and freeing by the iommu driver */
 	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
+	struct iommu_domain *(*domain_alloc_identity)(struct device *dev);
 	struct iommu_domain *(*domain_alloc_paging_flags)(
 		struct device *dev, u32 flags,
 		const struct iommu_user_data *user_data);
-- 
2.43.0



* [PATCH v4 3/5] iommu/virtio: Move to domain_alloc_paging()
From: Jason Gunthorpe @ 2025-04-08 16:35 UTC
  To: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker, patches

virtio-iommu has the complication that it sometimes wants to return a
paging domain for IDENTITY, which makes this conversion a little
different from other drivers.

Add a viommu_domain_alloc_paging() that combines viommu_domain_alloc() and
viommu_domain_finalise() to always return a fully initialized and
finalized paging domain.

Add viommu_domain_alloc_identity() to implement the special non-bypass
IDENTITY flow by calling viommu_domain_alloc_paging() and then
viommu_domain_map_identity().

Remove support for deferred finalize and the vdomain->mutex.

Remove core support for domain_alloc() IDENTITY, as virtio-iommu was the
last driver using it.
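
The resulting lifecycle change, as a rough sketch of the call flow (not
the literal call sites):

	/* Before: finalized lazily on first attach, under vdomain->mutex */
	domain = viommu_domain_alloc(IOMMU_DOMAIN_DMA);	/* viommu unknown yet */
	viommu_attach_dev(domain, dev);			/* finalise happens here */

	/* After: fully initialized at allocation time */
	domain = viommu_domain_alloc_paging(dev);	/* viommu taken from dev */
	viommu_attach_dev(domain, dev);			/* only checks ownership */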

Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c        |   6 --
 drivers/iommu/virtio-iommu.c | 121 +++++++++++++++--------------------
 2 files changed, 53 insertions(+), 74 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index a61cceb706ed43..3c32ca2d1aef10 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -1630,12 +1630,6 @@ static struct iommu_domain *__iommu_alloc_identity_domain(struct device *dev)
 		domain = ops->domain_alloc_identity(dev);
 		if (IS_ERR(domain))
 			return domain;
-	} else if (ops->domain_alloc) {
-		domain = ops->domain_alloc(IOMMU_DOMAIN_IDENTITY);
-		if (!domain)
-			return ERR_PTR(-ENOMEM);
-		if (IS_ERR(domain))
-			return domain;
 	} else {
 		return ERR_PTR(-EOPNOTSUPP);
 	}
diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
index 55a2188197c621..ecd41fb03e5a51 100644
--- a/drivers/iommu/virtio-iommu.c
+++ b/drivers/iommu/virtio-iommu.c
@@ -63,7 +63,6 @@ struct viommu_mapping {
 struct viommu_domain {
 	struct iommu_domain		domain;
 	struct viommu_dev		*viommu;
-	struct mutex			mutex; /* protects viommu pointer */
 	unsigned int			id;
 	u32				map_flags;
 
@@ -97,6 +96,8 @@ struct viommu_event {
 	};
 };
 
+static struct viommu_domain viommu_identity_domain;
+
 #define to_viommu_domain(domain)	\
 	container_of(domain, struct viommu_domain, domain)
 
@@ -653,65 +654,45 @@ static void viommu_event_handler(struct virtqueue *vq)
 
 /* IOMMU API */
 
-static struct iommu_domain *viommu_domain_alloc(unsigned type)
+static struct iommu_domain *viommu_domain_alloc_paging(struct device *dev)
 {
-	struct viommu_domain *vdomain;
-
-	if (type != IOMMU_DOMAIN_UNMANAGED &&
-	    type != IOMMU_DOMAIN_DMA &&
-	    type != IOMMU_DOMAIN_IDENTITY)
-		return NULL;
-
-	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
-	if (!vdomain)
-		return NULL;
-
-	mutex_init(&vdomain->mutex);
-	spin_lock_init(&vdomain->mappings_lock);
-	vdomain->mappings = RB_ROOT_CACHED;
-
-	return &vdomain->domain;
-}
-
-static int viommu_domain_finalise(struct viommu_endpoint *vdev,
-				  struct iommu_domain *domain)
-{
-	int ret;
-	unsigned long viommu_page_size;
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
 	struct viommu_dev *viommu = vdev->viommu;
-	struct viommu_domain *vdomain = to_viommu_domain(domain);
+	unsigned long viommu_page_size;
+	struct viommu_domain *vdomain;
+	int ret;
 
 	viommu_page_size = 1UL << __ffs(viommu->pgsize_bitmap);
 	if (viommu_page_size > PAGE_SIZE) {
 		dev_err(vdev->dev,
 			"granule 0x%lx larger than system page size 0x%lx\n",
 			viommu_page_size, PAGE_SIZE);
-		return -ENODEV;
+		return ERR_PTR(-ENODEV);
 	}
 
+	vdomain = kzalloc(sizeof(*vdomain), GFP_KERNEL);
+	if (!vdomain)
+		return ERR_PTR(-ENOMEM);
+
+	spin_lock_init(&vdomain->mappings_lock);
+	vdomain->mappings = RB_ROOT_CACHED;
+
 	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
 			      viommu->last_domain, GFP_KERNEL);
-	if (ret < 0)
-		return ret;
-
-	vdomain->id		= (unsigned int)ret;
-
-	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
-	domain->geometry	= viommu->geometry;
-
-	vdomain->map_flags	= viommu->map_flags;
-	vdomain->viommu		= viommu;
-
-	if (domain->type == IOMMU_DOMAIN_IDENTITY) {
-		ret = viommu_domain_map_identity(vdev, vdomain);
-		if (ret) {
-			ida_free(&viommu->domain_ids, vdomain->id);
-			vdomain->viommu = NULL;
-			return ret;
-		}
+	if (ret < 0) {
+		kfree(vdomain);
+		return ERR_PTR(ret);
 	}
 
-	return 0;
+	vdomain->id = (unsigned int)ret;
+
+	vdomain->domain.pgsize_bitmap = viommu->pgsize_bitmap;
+	vdomain->domain.geometry = viommu->geometry;
+
+	vdomain->map_flags = viommu->map_flags;
+	vdomain->viommu = viommu;
+
+	return &vdomain->domain;
 }
 
 static void viommu_domain_free(struct iommu_domain *domain)
@@ -727,6 +708,28 @@ static void viommu_domain_free(struct iommu_domain *domain)
 	kfree(vdomain);
 }
 
+static struct iommu_domain *viommu_domain_alloc_identity(struct device *dev)
+{
+	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
+	struct iommu_domain *domain;
+	int ret;
+
+	if (virtio_has_feature(vdev->viommu->vdev,
+			       VIRTIO_IOMMU_F_BYPASS_CONFIG))
+		return &viommu_identity_domain.domain;
+
+	domain = viommu_domain_alloc_paging(dev);
+	if (IS_ERR(domain))
+		return domain;
+
+	ret = viommu_domain_map_identity(vdev, to_viommu_domain(domain));
+	if (ret) {
+		viommu_domain_free(domain);
+		return ERR_PTR(ret);
+	}
+	return domain;
+}
+
 static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 {
 	int ret = 0;
@@ -734,20 +737,8 @@ static int viommu_attach_dev(struct iommu_domain *domain, struct device *dev)
 	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
 	struct viommu_domain *vdomain = to_viommu_domain(domain);
 
-	mutex_lock(&vdomain->mutex);
-	if (!vdomain->viommu) {
-		/*
-		 * Properly initialize the domain now that we know which viommu
-		 * owns it.
-		 */
-		ret = viommu_domain_finalise(vdev, domain);
-	} else if (vdomain->viommu != vdev->viommu) {
-		ret = -EINVAL;
-	}
-	mutex_unlock(&vdomain->mutex);
-
-	if (ret)
-		return ret;
+	if (vdomain->viommu != vdev->viommu)
+		return -EINVAL;
 
 	/*
 	 * In the virtio-iommu device, when attaching the endpoint to a new
@@ -1096,9 +1087,9 @@ static bool viommu_capable(struct device *dev, enum iommu_cap cap)
 }
 
 static struct iommu_ops viommu_ops = {
-	.identity_domain	= &viommu_identity_domain.domain,
 	.capable		= viommu_capable,
-	.domain_alloc		= viommu_domain_alloc,
+	.domain_alloc_identity	= viommu_domain_alloc_identity,
+	.domain_alloc_paging	= viommu_domain_alloc_paging,
 	.probe_device		= viommu_probe_device,
 	.release_device		= viommu_release_device,
 	.device_group		= viommu_device_group,
@@ -1224,12 +1215,6 @@ static int viommu_probe(struct virtio_device *vdev)
 	if (virtio_has_feature(viommu->vdev, VIRTIO_IOMMU_F_BYPASS_CONFIG)) {
 		viommu->identity_domain_id = viommu->first_domain;
 		viommu->first_domain++;
-	} else {
-		/*
-		 * Assume the VMM is sensible and it either supports bypass on
-		 * all instances or no instances.
-		 */
-		viommu_ops.identity_domain = NULL;
 	}
 
 	viommu_ops.pgsize_bitmap = viommu->pgsize_bitmap;
-- 
2.43.0



* [PATCH v4 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc()
From: Jason Gunthorpe @ 2025-04-08 16:35 UTC
  To: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker, patches

No driver implements SVA under domain_alloc() anymore; this is dead
code.
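
SVA-capable drivers must provide the dedicated op instead; its signature
in struct iommu_ops is:

	struct iommu_domain *(*domain_alloc_sva)(struct device *dev,
						 struct mm_struct *mm);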

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu-sva.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
index ab18bc494eefdd..9dcf2014da4d95 100644
--- a/drivers/iommu/iommu-sva.c
+++ b/drivers/iommu/iommu-sva.c
@@ -299,15 +299,12 @@ static struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
 	const struct iommu_ops *ops = dev_iommu_ops(dev);
 	struct iommu_domain *domain;
 
-	if (ops->domain_alloc_sva) {
-		domain = ops->domain_alloc_sva(dev, mm);
-		if (IS_ERR(domain))
-			return domain;
-	} else {
-		domain = ops->domain_alloc(IOMMU_DOMAIN_SVA);
-		if (!domain)
-			return ERR_PTR(-ENOMEM);
-	}
+	if (!ops->domain_alloc_sva)
+		return ERR_PTR(-EOPNOTSUPP);
+
+	domain = ops->domain_alloc_sva(dev, mm);
+	if (IS_ERR(domain))
+		return domain;
 
 	domain->type = IOMMU_DOMAIN_SVA;
 	domain->cookie_type = IOMMU_COOKIE_SVA;
-- 
2.43.0



* [PATCH v4 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU
From: Jason Gunthorpe @ 2025-04-08 16:35 UTC
  To: iommu, Joerg Roedel, Robin Murphy, virtualization, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker, patches

fsl_pamu is the last user of domain_alloc(), and it is using it to create
something weird that doesn't really fit into the iommu subsystem
architecture. It is not a paging domain since it doesn't have any
map/unmap ops. It may be some special kind of identity domain.

For now, just leave it as is. Wrap its definition in CONFIG_FSL_PAMU to
discourage any new drivers from attempting to use it.

Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
---
 drivers/iommu/iommu.c | 2 ++
 include/linux/iommu.h | 6 +++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 3c32ca2d1aef10..990b932af9dabc 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -2020,8 +2020,10 @@ __iommu_paging_domain_alloc_flags(struct device *dev, unsigned int type,
 		domain = ops->domain_alloc_paging(dev);
 	else if (ops->domain_alloc_paging_flags)
 		domain = ops->domain_alloc_paging_flags(dev, flags, NULL);
+#if IS_ENABLED(CONFIG_FSL_PAMU)
 	else if (ops->domain_alloc && !flags)
 		domain = ops->domain_alloc(IOMMU_DOMAIN_UNMANAGED);
+#endif
 	else
 		return ERR_PTR(-EOPNOTSUPP);
 
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 0f7544b4da9e93..a28c66a038ce91 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -569,9 +569,7 @@ iommu_copy_struct_from_full_user_array(void *kdst, size_t kdst_entry_size,
  *           op is allocated in the iommu driver and freed by the caller after
  *           use. The information type is one of enum iommu_hw_info_type defined
  *           in include/uapi/linux/iommufd.h.
- * @domain_alloc: allocate and return an iommu domain if success. Otherwise
- *                NULL is returned. The domain is not fully initialized until
- *                the caller iommu_domain_alloc() returns.
+ * @domain_alloc: Do not use in new drivers
  * @domain_alloc_identity: allocate an IDENTITY domain. Drivers should prefer to
  *                         use identity_domain instead. This should only be used
  *                         if dynamic logic is necessary.
@@ -632,7 +630,9 @@ struct iommu_ops {
 	void *(*hw_info)(struct device *dev, u32 *length, u32 *type);
 
 	/* Domain allocation and freeing by the iommu driver */
+#if IS_ENABLED(CONFIG_FSL_PAMU)
 	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
+#endif
 	struct iommu_domain *(*domain_alloc_identity)(struct device *dev);
 	struct iommu_domain *(*domain_alloc_paging_flags)(
 		struct device *dev, u32 flags,
-- 
2.43.0



* RE: [PATCH v4 1/5] iommu/virtio: Break out bypass identity support into a global static
From: Tian, Kevin @ 2025-04-10  5:46 UTC
  To: Jason Gunthorpe, iommu@lists.linux.dev, Joerg Roedel,
	Robin Murphy, virtualization@lists.linux.dev, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker,
	patches@lists.linux.dev

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 9, 2025 12:36 AM
> 
> To make way for a domain_alloc_paging conversion add the typical global
> static IDENTITY domain. This supports VMMs that have a
> VIRTIO_IOMMU_F_BYPASS_CONFIG config.
> 
> If the VMM does not have support then the domain_alloc path is still used,
> which creates an IDENTITY domain out of a paging domain.
> 
> Reviewed-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* RE: [PATCH v4 2/5] iommu: Add domain_alloc_identity()
From: Tian, Kevin @ 2025-04-10  5:48 UTC
  To: Jason Gunthorpe, iommu@lists.linux.dev, Joerg Roedel,
	Robin Murphy, virtualization@lists.linux.dev, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker,
	patches@lists.linux.dev

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 9, 2025 12:36 AM
> 
> + * @domain_alloc_identity: allocate an IDENTITY domain. Drivers should prefer to
> + *                         use identity_domain instead. This should only be used
> + *                         if dynamic logic is necessary.

It's probably clearer to expand on what "dynamic logic" means, based on
the commit message.

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* RE: [PATCH v4 3/5] iommu/virtio: Move to domain_alloc_paging()
From: Tian, Kevin @ 2025-04-10  5:55 UTC
  To: Jason Gunthorpe, iommu@lists.linux.dev, Joerg Roedel,
	Robin Murphy, virtualization@lists.linux.dev, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker,
	patches@lists.linux.dev

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 9, 2025 12:36 AM
> 
> +static struct iommu_domain *viommu_domain_alloc_identity(struct device *dev)
> +{
> +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> +	struct iommu_domain *domain;
> +	int ret;
> +
> +	if (virtio_has_feature(vdev->viommu->vdev,
> +			       VIRTIO_IOMMU_F_BYPASS_CONFIG))
> +		return &viommu_identity_domain.domain;
> +

Does it make more sense to keep the @identity_domain field (same as all
other drivers) and rename this new op to @domain_alloc_sw_identity() so
it's clear that the new op is only used when identity is emulated using
paging?

Except that nit:

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* RE: [PATCH v4 4/5] iommu: Do not call domain_alloc() in iommu_sva_domain_alloc()
From: Tian, Kevin @ 2025-04-10  5:56 UTC
  To: Jason Gunthorpe, iommu@lists.linux.dev, Joerg Roedel,
	Robin Murphy, virtualization@lists.linux.dev, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker,
	patches@lists.linux.dev

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 9, 2025 12:36 AM
> 
> No driver implements SVA under domain_alloc() anymore, this is dead
> code.
> 
> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* RE: [PATCH v4 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU
From: Tian, Kevin @ 2025-04-10  5:57 UTC
  To: Jason Gunthorpe, iommu@lists.linux.dev, Joerg Roedel,
	Robin Murphy, virtualization@lists.linux.dev, Will Deacon
  Cc: Lu Baolu, Eric Auger, Jean-Philippe Brucker,
	patches@lists.linux.dev

> From: Jason Gunthorpe <jgg@nvidia.com>
> Sent: Wednesday, April 9, 2025 12:36 AM
> 
> +#if IS_ENABLED(CONFIG_FSL_PAMU)
>  	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
> +#endif

What about directly calling it domain_alloc_fsl(), given that no more
drivers can support it?

Reviewed-by: Kevin Tian <kevin.tian@intel.com>


* Re: [PATCH v4 5/5] iommu: Hide ops.domain_alloc behind CONFIG_FSL_PAMU
From: Jason Gunthorpe @ 2025-04-10 12:08 UTC
  To: Tian, Kevin
  Cc: iommu@lists.linux.dev, Joerg Roedel, Robin Murphy,
	virtualization@lists.linux.dev, Will Deacon, Lu Baolu, Eric Auger,
	Jean-Philippe Brucker, patches@lists.linux.dev

On Thu, Apr 10, 2025 at 05:57:44AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, April 9, 2025 12:36 AM
> > 
> > +#if IS_ENABLED(CONFIG_FSL_PAMU)
> >  	struct iommu_domain *(*domain_alloc)(unsigned iommu_domain_type);
> > +#endif
> 
> What about directly calling it domain_alloc_fsl(), given that no more
> drivers can support it?

That would be good, but I'd want to change all the call sites too, and
I'm not sure how to reliably find them...
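
One could probably start from something like

	git grep -nw domain_alloc -- drivers/iommu include/linux/iommu.h

but that is just a guess at a starting point; separating uses of the op
from drivers' own functions of the same name is the annoying part.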

Jason


* Re: [PATCH v4 3/5] iommu/virtio: Move to domain_alloc_paging()
From: Jason Gunthorpe @ 2025-04-10 12:10 UTC
  To: Tian, Kevin
  Cc: iommu@lists.linux.dev, Joerg Roedel, Robin Murphy,
	virtualization@lists.linux.dev, Will Deacon, Lu Baolu, Eric Auger,
	Jean-Philippe Brucker, patches@lists.linux.dev

On Thu, Apr 10, 2025 at 05:55:50AM +0000, Tian, Kevin wrote:
> > From: Jason Gunthorpe <jgg@nvidia.com>
> > Sent: Wednesday, April 9, 2025 12:36 AM
> > 
> > +static struct iommu_domain *viommu_domain_alloc_identity(struct device *dev)
> > +{
> > +	struct viommu_endpoint *vdev = dev_iommu_priv_get(dev);
> > +	struct iommu_domain *domain;
> > +	int ret;
> > +
> > +	if (virtio_has_feature(vdev->viommu->vdev,
> > +			       VIRTIO_IOMMU_F_BYPASS_CONFIG))
> > +		return &viommu_identity_domain.domain;
> > +
> 
> Does it make more sense to keep the @identity_domain field (same
> as all other drivers) 

I think having two ways to get an identity domain is more confusing, as
one will never be used.

> and rename this new op to @domain_alloc_sw_identity() so it's clear
> that the new op is only used when identity is emulated using paging?

I think no; we had some use cases to return real identity domains too.
For example, DART can use this op to detect at runtime whether the iommu
supports identity.

s390 had something similar

Thanks,
Jason

