From: Baolu Lu <baolu.lu@linux.intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
	David Woodhouse <dwmw2@infradead.org>,
	iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Will Deacon <will@kernel.org>
Cc: patches@lists.linux.dev, Wei Wang <wei.w.wang@intel.com>
Subject: Re: [PATCH 7/7] iommu/vtd: Split paging_domain_compatible()
Date: Tue, 10 Jun 2025 15:12:43 +0800
Message-ID: <39400661-9f18-4ba3-8cb8-d56ef548c9b0@linux.intel.com>
In-Reply-To: <7-v1-20c73f153f4c+1895-vtd_prep_jgg@nvidia.com>

On 6/10/25 03:58, Jason Gunthorpe wrote:
> Make first/second stage specific functions that follow the same pattern
> as intel_iommu_domain_alloc_first/second_stage() for computing
> EOPNOTSUPP. This makes the code easier to understand: if we couldn't
> create a domain with these parameters for this IOMMU instance, then we
> are certainly not compatible with it.
> 
> Check superpage support directly against the per-stage cap bits and the
> pgsize_bitmap.
> 
> Add a note that force_snooping is read without locking. The locking
> needs to cover both the compatible check and the addition of the device
> to the list.
> 
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
>   drivers/iommu/intel/iommu.c | 66 ++++++++++++++++++++++++++++++-------
>   1 file changed, 54 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index ab2e9fef75293c..a482d1b77d1203 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -3416,33 +3416,75 @@ static void intel_iommu_domain_free(struct iommu_domain *domain)
>   	kfree(dmar_domain);
>   }
>   
> +static int paging_domain_compatible_first_stage(struct dmar_domain *dmar_domain,
> +						struct intel_iommu *iommu)
> +{
> +	if (WARN_ON(dmar_domain->domain.dirty_ops ||
> +		    dmar_domain->nested_parent))
> +		return -EINVAL;
> +
> +	/* Only SL is available in legacy mode */
> +	if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
> +		return -EINVAL;
> +
> +	/* Same page size support */
> +	if (!cap_fl1gp_support(iommu->cap) &&
> +	    (dmar_domain->domain.pgsize_bitmap & SZ_1G))
> +		return -EINVAL;
> +	return 0;
> +}
> +
> +static int
> +paging_domain_compatible_second_stage(struct dmar_domain *dmar_domain,
> +				      struct intel_iommu *iommu)
> +{
> +	unsigned int sslps = cap_super_page_val(iommu->cap);
> +
> +	if (dmar_domain->domain.dirty_ops && !ssads_supported(iommu))
> +		return -EINVAL;
> +	if (dmar_domain->nested_parent && !nested_supported(iommu))
> +		return -EINVAL;
> +
> +	/* Legacy mode always supports second stage */
> +	if (sm_supported(iommu) && !ecap_slts(iommu->ecap))
> +		return -EINVAL;
> +
> +	/* Same page size support */
> +	if (!(sslps & BIT(0)) && (dmar_domain->domain.pgsize_bitmap & SZ_2M))
> +		return -EINVAL;
> +	if (!(sslps & BIT(1)) && (dmar_domain->domain.pgsize_bitmap & SZ_1G))
> +		return -EINVAL;
> +	return 0;
> +}
> +
>   int paging_domain_compatible(struct iommu_domain *domain, struct device *dev)
>   {
>   	struct device_domain_info *info = dev_iommu_priv_get(dev);
>   	struct dmar_domain *dmar_domain = to_dmar_domain(domain);
>   	struct intel_iommu *iommu = info->iommu;
> +	int ret = -EINVAL;
>   	int addr_width;
>   
> -	if (WARN_ON_ONCE(!(domain->type & __IOMMU_DOMAIN_PAGING)))
> -		return -EPERM;
> +	if (domain->ops == &intel_fs_paging_domain_ops)
> +		ret = paging_domain_compatible_first_stage(dmar_domain, iommu);
> +	else if (domain->ops == &intel_ss_paging_domain_ops)
> +		ret = paging_domain_compatible_second_stage(dmar_domain, iommu);
> +	else if (WARN_ON(true))
> +		ret = -EINVAL;
> +	if (ret)
> +		return ret;
>   
> +	/*
> +	 * FIXME this is locked wrong, it needs to be under the
> +	 * dmar_domain->lock
> +	 */
>   	if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap))
>   		return -EINVAL;
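
The FIXME would presumably end up looking something like the sketch
below (hypothetical only, not part of this patch): hold
dmar_domain->lock across both the force_snooping check and the addition
of the device to dmar_domain->devices, so that
intel_iommu_enforce_cache_coherency() cannot flip force_snooping in
between:

	spin_lock_irqsave(&dmar_domain->lock, flags);
	if (dmar_domain->force_snooping && !ecap_sc_support(iommu->ecap)) {
		spin_unlock_irqrestore(&dmar_domain->lock, flags);
		return -EINVAL;
	}
	/* ... any remaining checks that must see a stable domain ... */
	list_add(&info->link, &dmar_domain->devices);
	spin_unlock_irqrestore(&dmar_domain->lock, flags);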

Perhaps we can use group->mutex to fix this in the future?

paging_domain_compatible() is in the domain attach path, which is
already serialized by group->mutex. We could further expose an iommu
interface for cache coherency enforcement, which would also take
group->mutex.
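
For example, a minimal sketch of such an interface, assuming a new core
helper (iommu_group_enforce_cache_coherency() is a made-up name, and
group->mutex is currently private to the iommu core):

	int iommu_group_enforce_cache_coherency(struct iommu_group *group)
	{
		int ret = -ENODEV;

		mutex_lock(&group->mutex);
		if (group->domain &&
		    group->domain->ops->enforce_cache_coherency)
			ret = group->domain->ops->enforce_cache_coherency(
					group->domain) ? 0 : -EINVAL;
		mutex_unlock(&group->mutex);
		return ret;
	}

That would put this compatibility check (already run under group->mutex
in the attach path) and any force_snooping update under the same lock.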

Thanks,
baolu
