From: Baolu Lu <baolu.lu@linux.intel.com>
To: Nicolin Chen <nicolinc@nvidia.com>,
jgg@nvidia.com, kevin.tian@intel.com, robin.murphy@arm.com,
joro@8bytes.org, will@kernel.org
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 1/3] iommu: Sort out domain user data
Date: Fri, 7 Mar 2025 10:28:20 +0800
Message-ID: <fabd6483-af48-4893-b539-2835640c5316@linux.intel.com>
In-Reply-To: <da7cc6d365ec6a77f6e6007f898eb3de2e581f80.1741294235.git.nicolinc@nvidia.com>
On 3/7/25 05:00, Nicolin Chen wrote:
> From: Robin Murphy <robin.murphy@arm.com>
>
> When DMA/MSI cookies were made first-class citizens back in commit
> 46983fcd67ac ("iommu: Pull IOVA cookie management into the core"), there
> was no real need to further expose the two different cookie types.
> However, now that IOMMUFD wants to add a third type of MSI-mapping
> cookie, we do have a nicely compelling reason to properly disambiguate
> things at the domain level beyond just vaguely guessing from the domain
> type.
>
> Meanwhile, we also effectively have another "cookie" in the form of the
> anonymous union for other user data, which isn't much better in terms of
> being vague and unenforced. The fact is that all these cookie types are
> mutually exclusive, in the sense that combining them makes zero sense
> and/or would be catastrophic (iommu_set_fault_handler() on an SVA
> domain, anyone?) - the only combination which *might* be reasonable is
> perhaps a fault handler and an MSI cookie, but nobody's doing that at
> the moment, so let's rule it out as well for the sake of being clear and
> robust. To that end, we pull DMA and MSI cookies apart a little more,
> mostly to clear up the ambiguity at domain teardown, then for clarity
> (and to save a little space), move them into the union, whose ownership
> we can then properly describe and enforce entirely unambiguously.
>
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> [nicolinc: rebase on latest tree; use prefix IOMMU_COOKIE_; merge unions
> in iommu_domain; add IOMMU_COOKIE_IOMMUFD for iommufd_hwpt]
> Signed-off-by: Nicolin Chen <nicolinc@nvidia.com>
> ---
> drivers/iommu/dma-iommu.h | 5 +
> include/linux/iommu.h | 20 ++-
> drivers/iommu/dma-iommu.c | 194 ++++++++++++++-------------
> drivers/iommu/iommu-sva.c | 1 +
> drivers/iommu/iommu.c | 18 ++-
> drivers/iommu/iommufd/hw_pagetable.c | 3 +
> 6 files changed, 143 insertions(+), 98 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h
> index c12d63457c76..9cca11806e5d 100644
> --- a/drivers/iommu/dma-iommu.h
> +++ b/drivers/iommu/dma-iommu.h
> @@ -13,6 +13,7 @@ void iommu_setup_dma_ops(struct device *dev);
>
> int iommu_get_dma_cookie(struct iommu_domain *domain);
> void iommu_put_dma_cookie(struct iommu_domain *domain);
> +void iommu_put_msi_cookie(struct iommu_domain *domain);
>
> int iommu_dma_init_fq(struct iommu_domain *domain);
>
> @@ -40,6 +41,10 @@ static inline void iommu_put_dma_cookie(struct iommu_domain *domain)
> {
> }
>
> +static inline void iommu_put_msi_cookie(struct iommu_domain *domain)
> +{
> +}
> +
> static inline void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list)
> {
> }
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index e93d2e918599..06cc14e9993d 100644
> --- a/include/linux/iommu.h
> +++ b/include/linux/iommu.h
> @@ -41,6 +41,7 @@ struct iommu_dirty_ops;
> struct notifier_block;
> struct iommu_sva;
> struct iommu_dma_cookie;
> +struct iommu_dma_msi_cookie;
> struct iommu_fault_param;
> struct iommufd_ctx;
> struct iommufd_viommu;
> @@ -165,6 +166,15 @@ struct iommu_domain_geometry {
> bool force_aperture; /* DMA only allowed in mappable range? */
> };
>
> +enum iommu_domain_cookie_type {
> + IOMMU_COOKIE_NONE,
> + IOMMU_COOKIE_DMA_IOVA,
> + IOMMU_COOKIE_DMA_MSI,
> + IOMMU_COOKIE_FAULT_HANDLER,
> + IOMMU_COOKIE_SVA,
> + IOMMU_COOKIE_IOMMUFD,
> +};
> +
> /* Domain feature flags */
> #define __IOMMU_DOMAIN_PAGING (1U << 0) /* Support for iommu_map/unmap */
> #define __IOMMU_DOMAIN_DMA_API (1U << 1) /* Domain for use in DMA-API
> @@ -211,12 +221,12 @@ struct iommu_domain_geometry {
>
> struct iommu_domain {
> unsigned type;
> + enum iommu_domain_cookie_type cookie_type;
> const struct iommu_domain_ops *ops;
> const struct iommu_dirty_ops *dirty_ops;
> const struct iommu_ops *owner; /* Whose domain_alloc we came from */
> unsigned long pgsize_bitmap; /* Bitmap of page sizes in use */
> struct iommu_domain_geometry geometry;
> - struct iommu_dma_cookie *iova_cookie;
> int (*iopf_handler)(struct iopf_group *group);
>
> #if IS_ENABLED(CONFIG_IRQ_MSI_IOMMU)
> @@ -224,10 +234,10 @@ struct iommu_domain {
> phys_addr_t msi_addr);
> #endif
>
> - union { /* Pointer usable by owner of the domain */
> - struct iommufd_hw_pagetable *iommufd_hwpt; /* iommufd */
> - };
> - union { /* Fault handler */
> + union { /* cookie */
> + struct iommu_dma_cookie *iova_cookie;
> + struct iommu_dma_msi_cookie *msi_cookie;
> + struct iommufd_hw_pagetable *iommufd_hwpt;
> struct {
> iommu_fault_handler_t handler;
> void *handler_token;
My feeling is that IOMMU_COOKIE_FAULT_HANDLER isn't mutually exclusive
with IOMMU_COOKIE_DMA_IOVA; both might be needed for kernel DMA with a
paging domain.

I am afraid that iommu_set_fault_handler() no longer works in that
case, as the domain's cookie type has already been set to
IOMMU_COOKIE_DMA_IOVA by the time it is called:
void iommu_set_fault_handler(struct iommu_domain *domain,
			     iommu_fault_handler_t handler,
			     void *token)
{
	if (WARN_ON(!domain || domain->cookie_type != IOMMU_COOKIE_NONE))
		return;

	domain->cookie_type = IOMMU_COOKIE_FAULT_HANDLER;
	domain->handler = handler;
	domain->handler_token = token;
}
EXPORT_SYMBOL_GPL(iommu_set_fault_handler);
Has anybody had a chance to test whether the above WARN_ON is triggered?
Thanks,
baolu