From: Jason Gunthorpe <jgg@nvidia.com>
To: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Cc: iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
robin.murphy@arm.com, will@kernel.org, joro@8bytes.org,
ryan.roberts@arm.com, kevin.tian@intel.com, nicolinc@nvidia.com,
mshavit@google.com, eric.auger@redhat.com,
joao.m.martins@oracle.com, jiangkunkun@huawei.com,
zhukeqian1@huawei.com, linuxarm@huawei.com
Subject: Re: [PATCH v4 3/7] iommu/arm-smmu-v3: Add support for domain_alloc_user fn
Date: Sat, 1 Jun 2024 18:08:25 -0300 [thread overview]
Message-ID: <ZluNyUDmTzNIWjPw@nvidia.com> (raw)
In-Reply-To: <20240528071831.17560-4-shameerali.kolothum.thodi@huawei.com>
On Tue, May 28, 2024 at 08:18:27AM +0100, Shameer Kolothum wrote:
> @@ -2715,6 +2717,34 @@ static struct iommu_domain arm_smmu_blocked_domain = {
> .ops = &arm_smmu_blocked_ops,
> };
>
> +static struct iommu_domain *
> +arm_smmu_domain_alloc_user(struct device *dev, u32 flags,
> +			   struct iommu_domain *parent,
> +			   const struct iommu_user_data *user_data)
> +{
> +	struct arm_smmu_master *master = dev_iommu_priv_get(dev);
> +	struct arm_smmu_domain *smmu_domain;
> +	int ret;
> +
> +	if (flags || parent || user_data)
> +		return ERR_PTR(-EINVAL);
This should be EOPNOTSUPP, and the same applies in the following patch
that touches this check.
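I.e., something along these lines (just a sketch; only the error code
differs from the hunk above):

	/* Unsupported request -> "not supported", not "invalid argument" */
	if (flags || parent || user_data)
		return ERR_PTR(-EOPNOTSUPP);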
Otherwise looks good.
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Jason