* [PATCH v2 0/4] iommu: Isolate iova_cookie to actual owners
@ 2025-02-28 1:31 Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain Nicolin Chen
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 1:31 UTC (permalink / raw)
To: jgg, kevin.tian, robin.murphy, joro, will; +Cc: iommu, linux-kernel
iommufd now implements its own sw_msi function, which touches domain->iommufd_hwpt
(a domain-owner pointer) rather than domain->iova_cookie. Isolate the
iova_cookie from iommufd by moving it into the union where the
iommufd_hwpt is located.
This requires some preparation so that the iommu core knows which module
owns the domain's private data. Since there are only two implementations
of the sw_msi function pointer, take the simpler approach of calling the
two implementations directly, selected by a new private_data_owner tag.
This is a clean-up series for the sw_msi Part-1 core series, prior to
the Part-2/3 series. It's on github:
https://github.com/nicolinc/iommufd/commits/iommufd_msi_cleanup-v2
Changelog
v2
* Drop sw_msi function pointer
* Add a new private_data_owner tag in iommu_domain
* Let iommu core call the two sw_msi implementations directly
v1
https://lore.kernel.org/all/cover.1740600272.git.nicolinc@nvidia.com/
Thanks
Nicolin
Nicolin Chen (4):
iommu: Add private_data_owner to iommu_domain
iommufd: Move iommufd_sw_msi and related functions to driver.c
iommu: Drop sw_msi from iommu_domain
iommu: Turn iova_cookie to dma-iommu private pointer
drivers/iommu/dma-iommu.h | 9 ++
drivers/iommu/iommufd/iommufd_private.h | 5 +-
include/linux/iommu.h | 24 ++---
include/linux/iommufd.h | 9 ++
drivers/iommu/dma-iommu.c | 18 ++--
drivers/iommu/iommu.c | 16 ++-
drivers/iommu/iommufd/device.c | 122 -----------------------
drivers/iommu/iommufd/driver.c | 124 ++++++++++++++++++++++++
drivers/iommu/iommufd/hw_pagetable.c | 6 +-
9 files changed, 176 insertions(+), 157 deletions(-)
base-commit: 598749522d4254afb33b8a6c1bea614a95896868
--
2.43.0
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain
2025-02-28 1:31 [PATCH v2 0/4] iommu: Isolate iova_cookie to actual owners Nicolin Chen
@ 2025-02-28 1:31 ` Nicolin Chen
2025-02-28 3:13 ` Baolu Lu
2025-02-28 16:29 ` Robin Murphy
2025-02-28 1:31 ` [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c Nicolin Chen
` (2 subsequent siblings)
3 siblings, 2 replies; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 1:31 UTC (permalink / raw)
To: jgg, kevin.tian, robin.murphy, joro, will; +Cc: iommu, linux-kernel
Steal two bits from the 32-bit "type" in struct iommu_domain to store a
new tag for private data owned by either dma-iommu or iommufd.
Set the domain->private_data_owner in dma-iommu and iommufd. These will
be used to replace the sw_msi function pointer.
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
include/linux/iommu.h | 7 ++++++-
drivers/iommu/dma-iommu.c | 2 ++
drivers/iommu/iommufd/hw_pagetable.c | 3 +++
3 files changed, 11 insertions(+), 1 deletion(-)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index e93d2e918599..4f2774c08262 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -209,8 +209,13 @@ struct iommu_domain_geometry {
#define IOMMU_DOMAIN_PLATFORM (__IOMMU_DOMAIN_PLATFORM)
#define IOMMU_DOMAIN_NESTED (__IOMMU_DOMAIN_NESTED)
+#define IOMMU_DOMAIN_DATA_OWNER_NONE (0U)
+#define IOMMU_DOMAIN_DATA_OWNER_DMA (1U)
+#define IOMMU_DOMAIN_DATA_OWNER_IOMMUFD (2U)
+
struct iommu_domain {
- unsigned type;
+ u32 type : 30;
+ u32 private_data_owner : 2;
const struct iommu_domain_ops *ops;
const struct iommu_dirty_ops *dirty_ops;
const struct iommu_ops *owner; /* Whose domain_alloc we came from */
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 94263ed2c564..78915d74e8fa 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -403,6 +403,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain)
mutex_init(&domain->iova_cookie->mutex);
iommu_domain_set_sw_msi(domain, iommu_dma_sw_msi);
+ domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_DMA;
return 0;
}
@@ -435,6 +436,7 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
cookie->msi_iova = base;
domain->iova_cookie = cookie;
iommu_domain_set_sw_msi(domain, iommu_dma_sw_msi);
+ domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_DMA;
return 0;
}
EXPORT_SYMBOL(iommu_get_msi_cookie);
diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
index 7de6e914232e..5640444de475 100644
--- a/drivers/iommu/iommufd/hw_pagetable.c
+++ b/drivers/iommu/iommufd/hw_pagetable.c
@@ -157,6 +157,7 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
}
}
iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
+ hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
/*
* Set the coherency mode before we do iopt_table_add_domain() as some
@@ -253,6 +254,7 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
}
hwpt->domain->owner = ops;
iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
+ hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
rc = -EINVAL;
@@ -310,6 +312,7 @@ iommufd_viommu_alloc_hwpt_nested(struct iommufd_viommu *viommu, u32 flags,
}
hwpt->domain->owner = viommu->iommu_dev->ops;
iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
+ hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
rc = -EINVAL;
--
2.43.0
* [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c
2025-02-28 1:31 [PATCH v2 0/4] iommu: Isolate iova_cookie to actual owners Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain Nicolin Chen
@ 2025-02-28 1:31 ` Nicolin Chen
2025-02-28 17:32 ` Jason Gunthorpe
2025-02-28 1:31 ` [PATCH v2 3/4] iommu: Drop sw_msi from iommu_domain Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 4/4] iommu: Turn iova_cookie to dma-iommu private pointer Nicolin Chen
3 siblings, 1 reply; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 1:31 UTC (permalink / raw)
To: jgg, kevin.tian, robin.murphy, joro, will; +Cc: iommu, linux-kernel
To make iommufd_sw_msi() available to the iommu core, which is built under a
different Kconfig option, move it and its related functions to driver.c, and
add a stub for it in the public iommufd header. iommufd_sw_msi_install()
continues to be used internally by iommufd, so declare it in the private header.
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/iommufd/iommufd_private.h | 5 +-
include/linux/iommufd.h | 9 ++
drivers/iommu/iommufd/device.c | 122 -----------------------
drivers/iommu/iommufd/driver.c | 124 ++++++++++++++++++++++++
4 files changed, 136 insertions(+), 124 deletions(-)
diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
index 246297452a44..51da18672c74 100644
--- a/drivers/iommu/iommufd/iommufd_private.h
+++ b/drivers/iommu/iommufd/iommufd_private.h
@@ -32,8 +32,9 @@ struct iommufd_sw_msi_maps {
DECLARE_BITMAP(bitmap, 64);
};
-int iommufd_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
- phys_addr_t msi_addr);
+int iommufd_sw_msi_install(struct iommufd_ctx *ictx,
+ struct iommufd_hwpt_paging *hwpt_paging,
+ struct iommufd_sw_msi_map *msi_map);
struct iommufd_ctx {
struct file *file;
diff --git a/include/linux/iommufd.h b/include/linux/iommufd.h
index 11110c749200..2f272863a207 100644
--- a/include/linux/iommufd.h
+++ b/include/linux/iommufd.h
@@ -8,6 +8,7 @@
#include <linux/err.h>
#include <linux/errno.h>
+#include <linux/msi.h>
#include <linux/refcount.h>
#include <linux/types.h>
#include <linux/xarray.h>
@@ -187,6 +188,8 @@ struct iommufd_object *_iommufd_object_alloc(struct iommufd_ctx *ictx,
enum iommufd_object_type type);
struct device *iommufd_viommu_find_dev(struct iommufd_viommu *viommu,
unsigned long vdev_id);
+int iommufd_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
+ phys_addr_t msi_addr);
#else /* !CONFIG_IOMMUFD_DRIVER_CORE */
static inline struct iommufd_object *
_iommufd_object_alloc(struct iommufd_ctx *ictx, size_t size,
@@ -200,6 +203,12 @@ iommufd_viommu_find_dev(struct iommufd_viommu *viommu, unsigned long vdev_id)
{
return NULL;
}
+
+static inline int iommufd_sw_msi(struct iommu_domain *domain,
+ struct msi_desc *desc, phys_addr_t msi_addr)
+{
+ return -EOPNOTSUPP;
+}
#endif /* CONFIG_IOMMUFD_DRIVER_CORE */
/*
diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
index 6dccbf7217f5..9c1fe3170d16 100644
--- a/drivers/iommu/iommufd/device.c
+++ b/drivers/iommu/iommufd/device.c
@@ -294,128 +294,6 @@ u32 iommufd_device_to_id(struct iommufd_device *idev)
}
EXPORT_SYMBOL_NS_GPL(iommufd_device_to_id, "IOMMUFD");
-/*
- * Get an iommufd_sw_msi_map for the MSI physical address requested by the irq
- * layer. The mapping to IOVA is global to the iommufd file descriptor, every
- * domain that is attached to a device using the same MSI parameters will use
- * the same IOVA.
- */
-static __maybe_unused struct iommufd_sw_msi_map *
-iommufd_sw_msi_get_map(struct iommufd_ctx *ictx, phys_addr_t msi_addr,
- phys_addr_t sw_msi_start)
-{
- struct iommufd_sw_msi_map *cur;
- unsigned int max_pgoff = 0;
-
- lockdep_assert_held(&ictx->sw_msi_lock);
-
- list_for_each_entry(cur, &ictx->sw_msi_list, sw_msi_item) {
- if (cur->sw_msi_start != sw_msi_start)
- continue;
- max_pgoff = max(max_pgoff, cur->pgoff + 1);
- if (cur->msi_addr == msi_addr)
- return cur;
- }
-
- if (ictx->sw_msi_id >=
- BITS_PER_BYTE * sizeof_field(struct iommufd_sw_msi_maps, bitmap))
- return ERR_PTR(-EOVERFLOW);
-
- cur = kzalloc(sizeof(*cur), GFP_KERNEL);
- if (!cur)
- return ERR_PTR(-ENOMEM);
- cur->sw_msi_start = sw_msi_start;
- cur->msi_addr = msi_addr;
- cur->pgoff = max_pgoff;
- cur->id = ictx->sw_msi_id++;
- list_add_tail(&cur->sw_msi_item, &ictx->sw_msi_list);
- return cur;
-}
-
-static int iommufd_sw_msi_install(struct iommufd_ctx *ictx,
- struct iommufd_hwpt_paging *hwpt_paging,
- struct iommufd_sw_msi_map *msi_map)
-{
- unsigned long iova;
-
- lockdep_assert_held(&ictx->sw_msi_lock);
-
- iova = msi_map->sw_msi_start + msi_map->pgoff * PAGE_SIZE;
- if (!test_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap)) {
- int rc;
-
- rc = iommu_map(hwpt_paging->common.domain, iova,
- msi_map->msi_addr, PAGE_SIZE,
- IOMMU_WRITE | IOMMU_READ | IOMMU_MMIO,
- GFP_KERNEL_ACCOUNT);
- if (rc)
- return rc;
- __set_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap);
- }
- return 0;
-}
-
-/*
- * Called by the irq code if the platform translates the MSI address through the
- * IOMMU. msi_addr is the physical address of the MSI page. iommufd will
- * allocate a fd global iova for the physical page that is the same on all
- * domains and devices.
- */
-#ifdef CONFIG_IRQ_MSI_IOMMU
-int iommufd_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
- phys_addr_t msi_addr)
-{
- struct device *dev = msi_desc_to_dev(desc);
- struct iommufd_hwpt_paging *hwpt_paging;
- struct iommu_attach_handle *raw_handle;
- struct iommufd_attach_handle *handle;
- struct iommufd_sw_msi_map *msi_map;
- struct iommufd_ctx *ictx;
- unsigned long iova;
- int rc;
-
- /*
- * It is safe to call iommu_attach_handle_get() here because the iommu
- * core code invokes this under the group mutex which also prevents any
- * change of the attach handle for the duration of this function.
- */
- iommu_group_mutex_assert(dev);
-
- raw_handle =
- iommu_attach_handle_get(dev->iommu_group, IOMMU_NO_PASID, 0);
- if (IS_ERR(raw_handle))
- return 0;
- hwpt_paging = find_hwpt_paging(domain->iommufd_hwpt);
-
- handle = to_iommufd_handle(raw_handle);
- /* No IOMMU_RESV_SW_MSI means no change to the msi_msg */
- if (handle->idev->igroup->sw_msi_start == PHYS_ADDR_MAX)
- return 0;
-
- ictx = handle->idev->ictx;
- guard(mutex)(&ictx->sw_msi_lock);
- /*
- * The input msi_addr is the exact byte offset of the MSI doorbell; we
- * assume the caller has checked that it is contained within an MMIO
- * region that is secure to map at PAGE_SIZE.
- */
- msi_map = iommufd_sw_msi_get_map(handle->idev->ictx,
- msi_addr & PAGE_MASK,
- handle->idev->igroup->sw_msi_start);
- if (IS_ERR(msi_map))
- return PTR_ERR(msi_map);
-
- rc = iommufd_sw_msi_install(ictx, hwpt_paging, msi_map);
- if (rc)
- return rc;
- __set_bit(msi_map->id, handle->idev->igroup->required_sw_msi.bitmap);
-
- iova = msi_map->sw_msi_start + msi_map->pgoff * PAGE_SIZE;
- msi_desc_set_iommu_msi_iova(desc, iova, PAGE_SHIFT);
- return 0;
-}
-#endif
-
static int iommufd_group_setup_msi(struct iommufd_group *igroup,
struct iommufd_hwpt_paging *hwpt_paging)
{
diff --git a/drivers/iommu/iommufd/driver.c b/drivers/iommu/iommufd/driver.c
index 2d98b04ff1cb..999da79dfa36 100644
--- a/drivers/iommu/iommufd/driver.c
+++ b/drivers/iommu/iommufd/driver.c
@@ -49,5 +49,129 @@ struct device *iommufd_viommu_find_dev(struct iommufd_viommu *viommu,
}
EXPORT_SYMBOL_NS_GPL(iommufd_viommu_find_dev, "IOMMUFD");
+/*
+ * Get an iommufd_sw_msi_map for the MSI physical address requested by the irq
+ * layer. The mapping to IOVA is global to the iommufd file descriptor, every
+ * domain that is attached to a device using the same MSI parameters will use
+ * the same IOVA.
+ */
+static __maybe_unused struct iommufd_sw_msi_map *
+iommufd_sw_msi_get_map(struct iommufd_ctx *ictx, phys_addr_t msi_addr,
+ phys_addr_t sw_msi_start)
+{
+ struct iommufd_sw_msi_map *cur;
+ unsigned int max_pgoff = 0;
+
+ lockdep_assert_held(&ictx->sw_msi_lock);
+
+ list_for_each_entry(cur, &ictx->sw_msi_list, sw_msi_item) {
+ if (cur->sw_msi_start != sw_msi_start)
+ continue;
+ max_pgoff = max(max_pgoff, cur->pgoff + 1);
+ if (cur->msi_addr == msi_addr)
+ return cur;
+ }
+
+ if (ictx->sw_msi_id >=
+ BITS_PER_BYTE * sizeof_field(struct iommufd_sw_msi_maps, bitmap))
+ return ERR_PTR(-EOVERFLOW);
+
+ cur = kzalloc(sizeof(*cur), GFP_KERNEL);
+ if (!cur)
+ return ERR_PTR(-ENOMEM);
+ cur->sw_msi_start = sw_msi_start;
+ cur->msi_addr = msi_addr;
+ cur->pgoff = max_pgoff;
+ cur->id = ictx->sw_msi_id++;
+ list_add_tail(&cur->sw_msi_item, &ictx->sw_msi_list);
+ return cur;
+}
+
+int iommufd_sw_msi_install(struct iommufd_ctx *ictx,
+ struct iommufd_hwpt_paging *hwpt_paging,
+ struct iommufd_sw_msi_map *msi_map)
+{
+ unsigned long iova;
+
+ lockdep_assert_held(&ictx->sw_msi_lock);
+
+ iova = msi_map->sw_msi_start + msi_map->pgoff * PAGE_SIZE;
+ if (!test_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap)) {
+ int rc;
+
+ rc = iommu_map(hwpt_paging->common.domain, iova,
+ msi_map->msi_addr, PAGE_SIZE,
+ IOMMU_WRITE | IOMMU_READ | IOMMU_MMIO,
+ GFP_KERNEL_ACCOUNT);
+ if (rc)
+ return rc;
+ __set_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap);
+ }
+ return 0;
+}
+EXPORT_SYMBOL_NS_GPL(iommufd_sw_msi_install, "IOMMUFD");
+
+/*
+ * Called by the irq code if the platform translates the MSI address through the
+ * IOMMU. msi_addr is the physical address of the MSI page. iommufd will
+ * allocate a fd global iova for the physical page that is the same on all
+ * domains and devices.
+ */
+#ifdef CONFIG_IRQ_MSI_IOMMU
+int iommufd_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
+ phys_addr_t msi_addr)
+{
+ struct device *dev = msi_desc_to_dev(desc);
+ struct iommufd_hwpt_paging *hwpt_paging;
+ struct iommu_attach_handle *raw_handle;
+ struct iommufd_attach_handle *handle;
+ struct iommufd_sw_msi_map *msi_map;
+ struct iommufd_ctx *ictx;
+ unsigned long iova;
+ int rc;
+
+ /*
+ * It is safe to call iommu_attach_handle_get() here because the iommu
+ * core code invokes this under the group mutex which also prevents any
+ * change of the attach handle for the duration of this function.
+ */
+ iommu_group_mutex_assert(dev);
+
+ raw_handle =
+ iommu_attach_handle_get(dev->iommu_group, IOMMU_NO_PASID, 0);
+ if (IS_ERR(raw_handle))
+ return 0;
+ hwpt_paging = find_hwpt_paging(domain->iommufd_hwpt);
+
+ handle = to_iommufd_handle(raw_handle);
+ /* No IOMMU_RESV_SW_MSI means no change to the msi_msg */
+ if (handle->idev->igroup->sw_msi_start == PHYS_ADDR_MAX)
+ return 0;
+
+ ictx = handle->idev->ictx;
+ guard(mutex)(&ictx->sw_msi_lock);
+ /*
+ * The input msi_addr is the exact byte offset of the MSI doorbell; we
+ * assume the caller has checked that it is contained within an MMIO
+ * region that is secure to map at PAGE_SIZE.
+ */
+ msi_map = iommufd_sw_msi_get_map(handle->idev->ictx,
+ msi_addr & PAGE_MASK,
+ handle->idev->igroup->sw_msi_start);
+ if (IS_ERR(msi_map))
+ return PTR_ERR(msi_map);
+
+ rc = iommufd_sw_msi_install(ictx, hwpt_paging, msi_map);
+ if (rc)
+ return rc;
+ __set_bit(msi_map->id, handle->idev->igroup->required_sw_msi.bitmap);
+
+ iova = msi_map->sw_msi_start + msi_map->pgoff * PAGE_SIZE;
+ msi_desc_set_iommu_msi_iova(desc, iova, PAGE_SHIFT);
+ return 0;
+}
+EXPORT_SYMBOL_NS_GPL(iommufd_sw_msi, "IOMMUFD");
+#endif
+
MODULE_DESCRIPTION("iommufd code shared with builtin modules");
MODULE_LICENSE("GPL");
--
2.43.0
* [PATCH v2 3/4] iommu: Drop sw_msi from iommu_domain
2025-02-28 1:31 [PATCH v2 0/4] iommu: Isolate iova_cookie to actual owners Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c Nicolin Chen
@ 2025-02-28 1:31 ` Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 4/4] iommu: Turn iova_cookie to dma-iommu private pointer Nicolin Chen
3 siblings, 0 replies; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 1:31 UTC (permalink / raw)
To: jgg, kevin.tian, robin.murphy, joro, will; +Cc: iommu, linux-kernel
There are only two sw_msi implementations in the entire system, so there is
little need for a per-domain sw_msi function pointer.
Instead, check domain->private_data_owner and call the two implementations
directly from the core code.
Suggested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
drivers/iommu/dma-iommu.h | 9 +++++++++
include/linux/iommu.h | 15 ---------------
drivers/iommu/dma-iommu.c | 13 +++----------
drivers/iommu/iommu.c | 16 ++++++++++++++--
drivers/iommu/iommufd/hw_pagetable.c | 3 ---
5 files changed, 26 insertions(+), 30 deletions(-)
diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h
index c12d63457c76..97f1da1efbb4 100644
--- a/drivers/iommu/dma-iommu.h
+++ b/drivers/iommu/dma-iommu.h
@@ -18,6 +18,9 @@ int iommu_dma_init_fq(struct iommu_domain *domain);
void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
+int iommu_dma_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
+ phys_addr_t msi_addr);
+
extern bool iommu_dma_forcedac;
#else /* CONFIG_IOMMU_DMA */
@@ -44,5 +47,11 @@ static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_he
{
}
+static inline int iommu_dma_sw_msi(struct iommu_domain *domain,
+ struct msi_desc *desc, phys_addr_t msi_addr)
+{
+ return -ENODEV;
+}
+
#endif /* CONFIG_IOMMU_DMA */
#endif /* __DMA_IOMMU_H */
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 4f2774c08262..29400060d648 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -224,11 +224,6 @@ struct iommu_domain {
struct iommu_dma_cookie *iova_cookie;
int (*iopf_handler)(struct iopf_group *group);
-#if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
- int (*sw_msi)(struct iommu_domain *domain, struct msi_desc *desc,
- phys_addr_t msi_addr);
-#endif
-
union { /* Pointer usable by owner of the domain */
struct iommufd_hw_pagetable *iommufd_hwpt; /* iommufd */
};
@@ -249,16 +244,6 @@ struct iommu_domain {
};
};
-static inline void iommu_domain_set_sw_msi(
- struct iommu_domain *domain,
- int (*sw_msi)(struct iommu_domain *domain, struct msi_desc *desc,
- phys_addr_t msi_addr))
-{
-#if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
- domain->sw_msi = sw_msi;
-#endif
-}
-
static inline bool iommu_is_dma_domain(struct iommu_domain *domain)
{
return domain->type & __IOMMU_DOMAIN_DMA_API;
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 78915d74e8fa..7ee71b9c53bd 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -103,9 +103,6 @@ static int __init iommu_dma_forcedac_setup(char *str)
}
early_param("iommu.forcedac", iommu_dma_forcedac_setup);
-static int iommu_dma_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
- phys_addr_t msi_addr);
-
/* Number of entries per flush queue */
#define IOVA_DEFAULT_FQ_SIZE 256
#define IOVA_SINGLE_FQ_SIZE 32768
@@ -402,7 +399,6 @@ int iommu_get_dma_cookie(struct iommu_domain *domain)
return -ENOMEM;
mutex_init(&domain->iova_cookie->mutex);
- iommu_domain_set_sw_msi(domain, iommu_dma_sw_msi);
domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_DMA;
return 0;
}
@@ -435,7 +431,6 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
cookie->msi_iova = base;
domain->iova_cookie = cookie;
- iommu_domain_set_sw_msi(domain, iommu_dma_sw_msi);
domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_DMA;
return 0;
}
@@ -451,10 +446,8 @@ void iommu_put_dma_cookie(struct iommu_domain *domain)
struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iommu_dma_msi_page *msi, *tmp;
-#if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
- if (domain->sw_msi != iommu_dma_sw_msi)
+ if (domain->private_data_owner != IOMMU_DOMAIN_DATA_OWNER_DMA)
return;
-#endif
if (!cookie)
return;
@@ -1813,8 +1806,8 @@ static struct iommu_dma_msi_page *iommu_dma_get_msi_page(struct device *dev,
return NULL;
}
-static int iommu_dma_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
- phys_addr_t msi_addr)
+int iommu_dma_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
+ phys_addr_t msi_addr)
{
struct device *dev = msi_desc_to_dev(desc);
const struct iommu_dma_msi_page *msi_page;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index 022bf96a18c5..462d7bc0d47a 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -18,6 +18,7 @@
#include <linux/errno.h>
#include <linux/host1x_context_bus.h>
#include <linux/iommu.h>
+#include <linux/iommufd.h>
#include <linux/idr.h>
#include <linux/err.h>
#include <linux/pci.h>
@@ -3619,8 +3620,19 @@ int iommu_dma_prepare_msi(struct msi_desc *desc, phys_addr_t msi_addr)
return 0;
mutex_lock(&group->mutex);
- if (group->domain && group->domain->sw_msi)
- ret = group->domain->sw_msi(group->domain, desc, msi_addr);
+ if (group->domain) {
+ switch (group->domain->private_data_owner) {
+ case IOMMU_DOMAIN_DATA_OWNER_DMA:
+ ret = iommu_dma_sw_msi(group->domain, desc, msi_addr);
+ break;
+ case IOMMU_DOMAIN_DATA_OWNER_IOMMUFD:
+ ret = iommufd_sw_msi(group->domain, desc, msi_addr);
+ break;
+ default:
+ ret = -EOPNOTSUPP;
+ break;
+ }
+ }
mutex_unlock(&group->mutex);
return ret;
}
diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
index 5640444de475..ba46f9c0a81f 100644
--- a/drivers/iommu/iommufd/hw_pagetable.c
+++ b/drivers/iommu/iommufd/hw_pagetable.c
@@ -156,7 +156,6 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
goto out_abort;
}
}
- iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
/*
@@ -253,7 +252,6 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
goto out_abort;
}
hwpt->domain->owner = ops;
- iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
@@ -311,7 +309,6 @@ iommufd_viommu_alloc_hwpt_nested(struct iommufd_viommu *viommu, u32 flags,
goto out_abort;
}
hwpt->domain->owner = viommu->iommu_dev->ops;
- iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
--
2.43.0
* [PATCH v2 4/4] iommu: Turn iova_cookie to dma-iommu private pointer
2025-02-28 1:31 [PATCH v2 0/4] iommu: Isolate iova_cookie to actual owners Nicolin Chen
` (2 preceding siblings ...)
2025-02-28 1:31 ` [PATCH v2 3/4] iommu: Drop sw_msi from iommu_domain Nicolin Chen
@ 2025-02-28 1:31 ` Nicolin Chen
3 siblings, 0 replies; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 1:31 UTC (permalink / raw)
To: jgg, kevin.tian, robin.murphy, joro, will; +Cc: iommu, linux-kernel
Now that iommufd no longer relies on dma-iommu.c for any purpose, the
dma-iommu.c iova_cookie and the iommufd_hwpt can be combined under the same
union. This union is effectively 'owner data' and can be used by the
entity that allocated the domain. Note that legacy vfio type1 flows
continue to use dma-iommu.c for sw_msi and still need the iova_cookie.
Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
---
include/linux/iommu.h | 2 +-
drivers/iommu/dma-iommu.c | 3 ++-
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/linux/iommu.h b/include/linux/iommu.h
index 29400060d648..5c6bca1ace27 100644
--- a/include/linux/iommu.h
+++ b/include/linux/iommu.h
@@ -221,10 +221,10 @@ struct iommu_domain {
const struct iommu_ops *owner; /* Whose domain_alloc we came from */
unsigned long pgsize_bitmap; /* Bitmap of page sizes in use */
struct iommu_domain_geometry geometry;
- struct iommu_dma_cookie *iova_cookie;
int (*iopf_handler)(struct iopf_group *group);
union { /* Pointer usable by owner of the domain */
+ struct iommu_dma_cookie *iova_cookie; /* dma-iommu */
struct iommufd_hw_pagetable *iommufd_hwpt; /* iommufd */
};
union { /* Fault handler */
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7ee71b9c53bd..ee2fcc058aa3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -443,12 +443,13 @@ EXPORT_SYMBOL(iommu_get_msi_cookie);
*/
void iommu_put_dma_cookie(struct iommu_domain *domain)
{
- struct iommu_dma_cookie *cookie = domain->iova_cookie;
struct iommu_dma_msi_page *msi, *tmp;
+ struct iommu_dma_cookie *cookie;
if (domain->private_data_owner != IOMMU_DOMAIN_DATA_OWNER_DMA)
return;
+ cookie = domain->iova_cookie;
if (!cookie)
return;
--
2.43.0
* Re: [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain
2025-02-28 1:31 ` [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain Nicolin Chen
@ 2025-02-28 3:13 ` Baolu Lu
2025-02-28 3:23 ` Nicolin Chen
2025-02-28 16:29 ` Robin Murphy
1 sibling, 1 reply; 14+ messages in thread
From: Baolu Lu @ 2025-02-28 3:13 UTC (permalink / raw)
To: Nicolin Chen, jgg, kevin.tian, robin.murphy, joro, will
Cc: iommu, linux-kernel
On 2/28/25 09:31, Nicolin Chen wrote:
> Steal two bits from the 32-bit "type" in struct iommu_domain to store a
> new tag for private data owned by either dma-iommu or iommufd.
>
> Set the domain->private_data_owner in dma-iommu and iommufd. These will
> be used to replace the sw_msi function pointer.
>
> Suggested-by: Jason Gunthorpe<jgg@nvidia.com>
> Signed-off-by: Nicolin Chen<nicolinc@nvidia.com>
> ---
> include/linux/iommu.h | 7 ++++++-
> drivers/iommu/dma-iommu.c | 2 ++
> drivers/iommu/iommufd/hw_pagetable.c | 3 +++
> 3 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index e93d2e918599..4f2774c08262 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -209,8 +209,13 @@ struct iommu_domain_geometry {
> #define IOMMU_DOMAIN_PLATFORM (__IOMMU_DOMAIN_PLATFORM)
> #define IOMMU_DOMAIN_NESTED (__IOMMU_DOMAIN_NESTED)
>
> +#define IOMMU_DOMAIN_DATA_OWNER_NONE (0U)
> +#define IOMMU_DOMAIN_DATA_OWNER_DMA (1U)
> +#define IOMMU_DOMAIN_DATA_OWNER_IOMMUFD (2U)
> +
> struct iommu_domain {
> - unsigned type;
> + u32 type : 30;
> + u32 private_data_owner : 2;
Is there any special consideration for reserving only 2 bits for the
private data owner? Is it possible to allocate more bits so that it
could be more extensible for the future?
For example, current iommu core allows a kernel device driver to
allocate a paging domain and replace the default domain for kernel DMA.
This suggests the private data owner bits may be needed beyond iommu-dma
and iommufd.
Thanks,
baolu
* Re: [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain
2025-02-28 3:13 ` Baolu Lu
@ 2025-02-28 3:23 ` Nicolin Chen
2025-02-28 3:29 ` Baolu Lu
0 siblings, 1 reply; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 3:23 UTC (permalink / raw)
To: Baolu Lu; +Cc: jgg, kevin.tian, robin.murphy, joro, will, iommu, linux-kernel
On Fri, Feb 28, 2025 at 11:13:23AM +0800, Baolu Lu wrote:
> On 2/28/25 09:31, Nicolin Chen wrote:
> > Steal two bits from the 32-bit "type" in struct iommu_domain to store a
> > new tag for private data owned by either dma-iommu or iommufd.
> >
> > Set the domain->private_data_owner in dma-iommu and iommufd. These will
> > be used to replace the sw_msi function pointer.
> >
> > Suggested-by: Jason Gunthorpe<jgg@nvidia.com>
> > Signed-off-by: Nicolin Chen<nicolinc@nvidia.com>
> > ---
> > include/linux/iommu.h | 7 ++++++-
> > drivers/iommu/dma-iommu.c | 2 ++
> > drivers/iommu/iommufd/hw_pagetable.c | 3 +++
> > 3 files changed, 11 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> > index e93d2e918599..4f2774c08262 100644
> > --- a/include/linux/iommu.h
> > +++ b/include/linux/iommu.h
> > @@ -209,8 +209,13 @@ struct iommu_domain_geometry {
> > #define IOMMU_DOMAIN_PLATFORM (__IOMMU_DOMAIN_PLATFORM)
> > #define IOMMU_DOMAIN_NESTED (__IOMMU_DOMAIN_NESTED)
> > +#define IOMMU_DOMAIN_DATA_OWNER_NONE (0U)
> > +#define IOMMU_DOMAIN_DATA_OWNER_DMA (1U)
> > +#define IOMMU_DOMAIN_DATA_OWNER_IOMMUFD (2U)
> > +
> > struct iommu_domain {
> > - unsigned type;
> > + u32 type : 30;
> > + u32 private_data_owner : 2;
>
> Is there any special consideration for reserving only 2 bits for the
> private data owner? Is it possible to allocate more bits so that it
> could be more extensible for the future?
It could. This "2" is copied from Jason's suggestion:
https://lore.kernel.org/linux-iommu/20250227194749.GJ39591@nvidia.com/
> For example, current iommu core allows a kernel device driver to
> allocate a paging domain and replace the default domain for kernel DMA.
> This suggests the private data owner bits may be needed beyond iommu-dma
> and iommufd.
What's the potential "private data" in this case?
Thanks
Nicolin
* Re: [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain
2025-02-28 3:23 ` Nicolin Chen
@ 2025-02-28 3:29 ` Baolu Lu
0 siblings, 0 replies; 14+ messages in thread
From: Baolu Lu @ 2025-02-28 3:29 UTC (permalink / raw)
To: Nicolin Chen
Cc: jgg, kevin.tian, robin.murphy, joro, will, iommu, linux-kernel
On 2/28/25 11:23, Nicolin Chen wrote:
> On Fri, Feb 28, 2025 at 11:13:23AM +0800, Baolu Lu wrote:
>> On 2/28/25 09:31, Nicolin Chen wrote:
>>> Steal two bits from the 32-bit "type" in struct iommu_domain to store a
>>> new tag for private data owned by either dma-iommu or iommufd.
>>>
>>> Set the domain->private_data_owner in dma-iommu and iommufd. These will
>>> be used to replace the sw_msi function pointer.
>>>
>>> Suggested-by: Jason Gunthorpe<jgg@nvidia.com>
>>> Signed-off-by: Nicolin Chen<nicolinc@nvidia.com>
>>> ---
>>> include/linux/iommu.h | 7 ++++++-
>>> drivers/iommu/dma-iommu.c | 2 ++
>>> drivers/iommu/iommufd/hw_pagetable.c | 3 +++
>>> 3 files changed, 11 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>>> index e93d2e918599..4f2774c08262 100644
>>> --- a/include/linux/iommu.h
>>> +++ b/include/linux/iommu.h
>>> @@ -209,8 +209,13 @@ struct iommu_domain_geometry {
>>> #define IOMMU_DOMAIN_PLATFORM (__IOMMU_DOMAIN_PLATFORM)
>>> #define IOMMU_DOMAIN_NESTED (__IOMMU_DOMAIN_NESTED)
>>> +#define IOMMU_DOMAIN_DATA_OWNER_NONE (0U)
>>> +#define IOMMU_DOMAIN_DATA_OWNER_DMA (1U)
>>> +#define IOMMU_DOMAIN_DATA_OWNER_IOMMUFD (2U)
>>> +
>>> struct iommu_domain {
>>> - unsigned type;
>>> + u32 type : 30;
>>> + u32 private_data_owner : 2;
>> Is there any special consideration for reserving only 2 bits for the
>> private data owner? Is it possible to allocate more bits so that it
>> could be more extensible for the future?
> It could. This "2" is copied from Jason's suggestion:
> https://lore.kernel.org/linux-iommu/20250227194749.GJ39591@nvidia.com/
>
>> For example, current iommu core allows a kernel device driver to
>> allocate a paging domain and replace the default domain for kernel DMA.
>> This suggests the private data owner bits may be needed beyond iommu-dma
>> and iommufd.
> What's the potential "private data" in this case?
I have no idea; just a feeling that we could leave room for tomorrow.
* Re: [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain
2025-02-28 1:31 ` [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain Nicolin Chen
2025-02-28 3:13 ` Baolu Lu
@ 2025-02-28 16:29 ` Robin Murphy
2025-02-28 17:40 ` Nicolin Chen
1 sibling, 1 reply; 14+ messages in thread
From: Robin Murphy @ 2025-02-28 16:29 UTC (permalink / raw)
To: Nicolin Chen, jgg, kevin.tian, joro, will; +Cc: iommu, linux-kernel
On 28/02/2025 1:31 am, Nicolin Chen wrote:
> Steal two bits from the 32-bit "type" in struct iommu_domain to store a
> new tag for private data owned by either dma-iommu or iommufd.
>
> Set the domain->private_data_owner in dma-iommu and iommufd. These will
> be used to replace the sw_msi function pointer.
>
> Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> include/linux/iommu.h | 7 ++++++-
> drivers/iommu/dma-iommu.c | 2 ++
> drivers/iommu/iommufd/hw_pagetable.c | 3 +++
> 3 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index e93d2e918599..4f2774c08262 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -209,8 +209,13 @@ struct iommu_domain_geometry {
> #define IOMMU_DOMAIN_PLATFORM (__IOMMU_DOMAIN_PLATFORM)
> #define IOMMU_DOMAIN_NESTED (__IOMMU_DOMAIN_NESTED)
>
> +#define IOMMU_DOMAIN_DATA_OWNER_NONE (0U)
> +#define IOMMU_DOMAIN_DATA_OWNER_DMA (1U)
> +#define IOMMU_DOMAIN_DATA_OWNER_IOMMUFD (2U)
> +
> struct iommu_domain {
> - unsigned type;
> + u32 type : 30;
> + u32 private_data_owner : 2;
> const struct iommu_domain_ops *ops;
> const struct iommu_dirty_ops *dirty_ops;
> const struct iommu_ops *owner; /* Whose domain_alloc we came from */
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 94263ed2c564..78915d74e8fa 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -403,6 +403,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain)
>
> mutex_init(&domain->iova_cookie->mutex);
> iommu_domain_set_sw_msi(domain, iommu_dma_sw_msi);
> + domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_DMA;
> return 0;
> }
>
> @@ -435,6 +436,7 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
> cookie->msi_iova = base;
> domain->iova_cookie = cookie;
> iommu_domain_set_sw_msi(domain, iommu_dma_sw_msi);
> + domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_DMA;
This doesn't help all that much when it can still be called on any old
unmanaged domain and silently overwrite whatever the previous user
thought they owned...
And really the "owner" of MSI cookies is VFIO, it just happens that all
the code for them still lives in iommu-dma because it was written to
piggyback off the same irqchip hooks. TBH I've long been keen on
separating them further from "real" IOVA cookies, and generalising said
hooks really removes any remaining reason to keep the implementations
tied together at all, should they have cause to diverge further (e.g.
with making the MSI cookie window programmable). What I absolutely want
to avoid is a notion of having two types of MSI-mapping cookies, one of
which is two types of MSI-mapping cookies.
Hopefully that is all clear in the patch I proposed.
Thanks,
Robin.
> return 0;
> }
> EXPORT_SYMBOL(iommu_get_msi_cookie);
> diff --git a/drivers/iommu/iommufd/hw_pagetable.c b/drivers/iommu/iommufd/hw_pagetable.c
> index 7de6e914232e..5640444de475 100644
> --- a/drivers/iommu/iommufd/hw_pagetable.c
> +++ b/drivers/iommu/iommufd/hw_pagetable.c
> @@ -157,6 +157,7 @@ iommufd_hwpt_paging_alloc(struct iommufd_ctx *ictx, struct iommufd_ioas *ioas,
> }
> }
> iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
> + hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
>
> /*
> * Set the coherency mode before we do iopt_table_add_domain() as some
> @@ -253,6 +254,7 @@ iommufd_hwpt_nested_alloc(struct iommufd_ctx *ictx,
> }
> hwpt->domain->owner = ops;
> iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
> + hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
>
> if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
> rc = -EINVAL;
> @@ -310,6 +312,7 @@ iommufd_viommu_alloc_hwpt_nested(struct iommufd_viommu *viommu, u32 flags,
> }
> hwpt->domain->owner = viommu->iommu_dev->ops;
> iommu_domain_set_sw_msi(hwpt->domain, iommufd_sw_msi);
> + hwpt->domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_IOMMUFD;
>
> if (WARN_ON_ONCE(hwpt->domain->type != IOMMU_DOMAIN_NESTED)) {
> rc = -EINVAL;
* Re: [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c
2025-02-28 1:31 ` [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c Nicolin Chen
@ 2025-02-28 17:32 ` Jason Gunthorpe
2025-02-28 17:54 ` Nicolin Chen
0 siblings, 1 reply; 14+ messages in thread
From: Jason Gunthorpe @ 2025-02-28 17:32 UTC (permalink / raw)
To: Nicolin Chen; +Cc: kevin.tian, robin.murphy, joro, will, iommu, linux-kernel
On Thu, Feb 27, 2025 at 05:31:16PM -0800, Nicolin Chen wrote:
> @@ -187,6 +188,8 @@ struct iommufd_object *_iommufd_object_alloc(struct iommufd_ctx *ictx,
> enum iommufd_object_type type);
> struct device *iommufd_viommu_find_dev(struct iommufd_viommu *viommu,
> unsigned long vdev_id);
> +int iommufd_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
> + phys_addr_t msi_addr);
This should probably go into drivers/iommu/iommu-priv.h ?
> +int iommufd_sw_msi_install(struct iommufd_ctx *ictx,
> + struct iommufd_hwpt_paging *hwpt_paging,
> + struct iommufd_sw_msi_map *msi_map)
> +{
> + unsigned long iova;
> +
> + lockdep_assert_held(&ictx->sw_msi_lock);
> +
> + iova = msi_map->sw_msi_start + msi_map->pgoff * PAGE_SIZE;
> + if (!test_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap)) {
> + int rc;
> +
> + rc = iommu_map(hwpt_paging->common.domain, iova,
> + msi_map->msi_addr, PAGE_SIZE,
> + IOMMU_WRITE | IOMMU_READ | IOMMU_MMIO,
> + GFP_KERNEL_ACCOUNT);
> + if (rc)
> + return rc;
> + __set_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap);
> + }
> + return 0;
> +}
> +EXPORT_SYMBOL_NS_GPL(iommufd_sw_msi_install, "IOMMUFD");
Stubbed out too if CONFIG_IRQ_MSI_IOMMU ?
I'm still wondering if we should use a function pointer, how big was
this compiled anyhow?
Jason
* Re: [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain
2025-02-28 16:29 ` Robin Murphy
@ 2025-02-28 17:40 ` Nicolin Chen
0 siblings, 0 replies; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 17:40 UTC (permalink / raw)
To: Robin Murphy; +Cc: jgg, kevin.tian, joro, will, iommu, linux-kernel
On Fri, Feb 28, 2025 at 04:29:17PM +0000, Robin Murphy wrote:
> On 28/02/2025 1:31 am, Nicolin Chen wrote:
> > @@ -435,6 +436,7 @@ int iommu_get_msi_cookie(struct iommu_domain *domain, dma_addr_t base)
> > cookie->msi_iova = base;
> > domain->iova_cookie = cookie;
> > iommu_domain_set_sw_msi(domain, iommu_dma_sw_msi);
> > + domain->private_data_owner = IOMMU_DOMAIN_DATA_OWNER_DMA;
>
> This doesn't help all that much when it can still be called on any old
> unmanaged domain and silently overwrite whatever the previous user thought
> they owned...
We could add another validation on top of this function to reject
private_data_owner != IOMMU_DOMAIN_DATA_OWNER_NONE?
> And really the "owner" of MSI cookies is VFIO, it just happens that all the
> code for them still lives in iommu-dma because it was written to piggyback
> off the same irqchip hooks. TBH I've long been keen on separating them
> further from "real" IOVA cookies, and generalising said hooks really removes
> any remaining reason to keep the implementations tied together at all,
> should they have cause to diverge further (e.g. with makign the MSI cookie
> window programmable). What I absolutely want to avoid is a notion of having
> two types of MSI-mapping cookies, one of which is two types of MSI-mapping
> cookies.
>
> Hopefully that is all clear in the patch I proposed.
Yea, decoupling the MSI cookie from the iova_cookie makes sense
to me. I had the same feeling last week when I was preparing the
cleanup patch that added an iommu_put_msi_cookie(), but hesitated
after foreseeing a big rework that I was not sure you would like.
Glad that you did it.
With your patch now we even merged the two unions, which is nicer
from my point of view.
Thanks
Nicolin
* Re: [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c
2025-02-28 17:32 ` Jason Gunthorpe
@ 2025-02-28 17:54 ` Nicolin Chen
2025-02-28 18:02 ` Jason Gunthorpe
0 siblings, 1 reply; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 17:54 UTC (permalink / raw)
To: Jason Gunthorpe; +Cc: kevin.tian, robin.murphy, joro, will, iommu, linux-kernel
On Fri, Feb 28, 2025 at 01:32:12PM -0400, Jason Gunthorpe wrote:
> On Thu, Feb 27, 2025 at 05:31:16PM -0800, Nicolin Chen wrote:
> > @@ -187,6 +188,8 @@ struct iommufd_object *_iommufd_object_alloc(struct iommufd_ctx *ictx,
> > enum iommufd_object_type type);
> > struct device *iommufd_viommu_find_dev(struct iommufd_viommu *viommu,
> > unsigned long vdev_id);
> > +int iommufd_sw_msi(struct iommu_domain *domain, struct msi_desc *desc,
> > + phys_addr_t msi_addr);
>
> This should probably go into drivers/iommu/iommu-priv.h ?
Oh right, I forgot we had that.
> > +int iommufd_sw_msi_install(struct iommufd_ctx *ictx,
> > + struct iommufd_hwpt_paging *hwpt_paging,
> > + struct iommufd_sw_msi_map *msi_map)
> > +{
> > + unsigned long iova;
> > +
> > + lockdep_assert_held(&ictx->sw_msi_lock);
> > +
> > + iova = msi_map->sw_msi_start + msi_map->pgoff * PAGE_SIZE;
> > + if (!test_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap)) {
> > + int rc;
> > +
> > + rc = iommu_map(hwpt_paging->common.domain, iova,
> > + msi_map->msi_addr, PAGE_SIZE,
> > + IOMMU_WRITE | IOMMU_READ | IOMMU_MMIO,
> > + GFP_KERNEL_ACCOUNT);
> > + if (rc)
> > + return rc;
> > + __set_bit(msi_map->id, hwpt_paging->present_sw_msi.bitmap);
> > + }
> > + return 0;
> > +}
> > +EXPORT_SYMBOL_NS_GPL(iommufd_sw_msi_install, "IOMMUFD");
>
> Stubbed out too if CONFIG_IRQ_MSI_IOMMU ?
In that case, the other caller iommufd_group_setup_msi() could be
{
#if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
....
iommufd_sw_msi_install();
...
#endif
return 0;
}
?
> I'm still wondering if we should use a function pointer, how big was
> this compiled anyhow?
Hmm, you mean the size of driver.o? It's 192K (before) vs 212K
(after).
Thanks
Nicolin
* Re: [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c
2025-02-28 17:54 ` Nicolin Chen
@ 2025-02-28 18:02 ` Jason Gunthorpe
2025-02-28 18:10 ` Nicolin Chen
0 siblings, 1 reply; 14+ messages in thread
From: Jason Gunthorpe @ 2025-02-28 18:02 UTC (permalink / raw)
To: Nicolin Chen; +Cc: kevin.tian, robin.murphy, joro, will, iommu, linux-kernel
On Fri, Feb 28, 2025 at 09:54:28AM -0800, Nicolin Chen wrote:
> > Stubbed out too if CONFIG_IRQ_MSI_IOMMU ?
>
> In that case, the other caller iommufd_group_setup_msi() could be
> {
> #if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
> ....
> iommufd_sw_msi_install();
> ...
> #endif
> return 0;
> }
> ?
Or an empty static inline
> > I'm still wondering if we should use a function pointer, how big was
> > this compiled anyhow?
>
> Hmm, you mean the size of driver.o? It's 192K (before) vs 212K
> (after).
Yes, but use the 'size' command to measure before/after
Jason
* Re: [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c
2025-02-28 18:02 ` Jason Gunthorpe
@ 2025-02-28 18:10 ` Nicolin Chen
0 siblings, 0 replies; 14+ messages in thread
From: Nicolin Chen @ 2025-02-28 18:10 UTC (permalink / raw)
To: Jason Gunthorpe; +Cc: kevin.tian, robin.murphy, joro, will, iommu, linux-kernel
On Fri, Feb 28, 2025 at 02:02:15PM -0400, Jason Gunthorpe wrote:
> On Fri, Feb 28, 2025 at 09:54:28AM -0800, Nicolin Chen wrote:
> > > Stubbed out too if CONFIG_IRQ_MSI_IOMMU ?
> >
> > In that case, the other caller iommufd_group_setup_msi() could be
> > {
> > #if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
> > ....
> > iommufd_sw_msi_install();
> > ...
> > #endif
> > return 0;
> > }
> > ?
>
> Or a empty static inline
>
> > > I'm still wondering if we should use a function pointer, how big was
> > > this compiled anyhow?
> >
> > Hmm, you mean the size of driver.o? It's 192K (before) vs 212K
> > (after).
>
> Yes, but use the 'size' command to measure before/after
Before:
text data bss dec hex filename
722 44 0 766 2fe drivers/iommu/iommufd/driver.o
After:
text data bss dec hex filename
1888 100 0 1988 7c4 drivers/iommu/iommufd/driver.o
Thanks
Nicolin
end of thread, other threads:[~2025-02-28 18:11 UTC | newest]
Thread overview: 14+ messages
2025-02-28 1:31 [PATCH v2 0/4] iommu: Isolate iova_cookie to actual owners Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 1/4] iommu: Add private_data_owner to iommu_domain Nicolin Chen
2025-02-28 3:13 ` Baolu Lu
2025-02-28 3:23 ` Nicolin Chen
2025-02-28 3:29 ` Baolu Lu
2025-02-28 16:29 ` Robin Murphy
2025-02-28 17:40 ` Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 2/4] iommufd: Move iommufd_sw_msi and related functions to driver.c Nicolin Chen
2025-02-28 17:32 ` Jason Gunthorpe
2025-02-28 17:54 ` Nicolin Chen
2025-02-28 18:02 ` Jason Gunthorpe
2025-02-28 18:10 ` Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 3/4] iommu: Drop sw_msi from iommu_domain Nicolin Chen
2025-02-28 1:31 ` [PATCH v2 4/4] iommu: Turn iova_cookie to dma-iommu private pointer Nicolin Chen