From: Jason Gunthorpe <jgg@nvidia.com>
To: Lu Baolu <baolu.lu@linux.intel.com>
Cc: Joerg Roedel <joro@8bytes.org>,
Christoph Hellwig <hch@infradead.org>,
Kevin Tian <kevin.tian@intel.com>,
Ashok Raj <ashok.raj@intel.com>, Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Jean-Philippe Brucker <jean-philippe@linaro.com>,
Eric Auger <eric.auger@redhat.com>, Liu Yi L <yi.l.liu@intel.com>,
Jacob jun Pan <jacob.jun.pan@intel.com>,
iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC v2 04/11] iommu: Add attach/detach_dev_pasid domain ops
Date: Wed, 30 Mar 2022 16:08:52 -0300 [thread overview]
Message-ID: <20220330190852.GC2120790@nvidia.com> (raw)
In-Reply-To: <20220329053800.3049561-5-baolu.lu@linux.intel.com>
On Tue, Mar 29, 2022 at 01:37:53PM +0800, Lu Baolu wrote:
> Attaching an IOMMU domain to a PASID of a device is a generic operation
> for modern IOMMU drivers which support PASID-granular DMA address
> translation. Currently visible usage scenarios include (but are not limited to):
>
> - SVA (Shared Virtual Address)
> - kernel DMA with PASID
> - hardware-assisted mediated devices
>
> This adds a pair of common domain ops for this purpose and adds some
> common helpers to attach/detach a domain to/from a {device, PASID} and
> get/put the domain attached to {device, PASID}.
>
> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
> include/linux/iommu.h | 36 ++++++++++++++++++
> drivers/iommu/iommu.c | 88 +++++++++++++++++++++++++++++++++++++++++++
> 2 files changed, 124 insertions(+)
>
> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
> index 29c4c2edd706..a46285488a57 100644
> +++ b/include/linux/iommu.h
> @@ -269,6 +269,8 @@ struct iommu_ops {
> * struct iommu_domain_ops - domain specific operations
> * @attach_dev: attach an iommu domain to a device
> * @detach_dev: detach an iommu domain from a device
> + * @attach_dev_pasid: attach an iommu domain to a pasid of device
> + * @detach_dev_pasid: detach an iommu domain from a pasid of device
> * @map: map a physically contiguous memory region to an iommu domain
> * @map_pages: map a physically contiguous set of pages of the same size to
> * an iommu domain.
> @@ -286,6 +288,10 @@ struct iommu_ops {
> struct iommu_domain_ops {
> int (*attach_dev)(struct iommu_domain *domain, struct device *dev);
> void (*detach_dev)(struct iommu_domain *domain, struct device *dev);
> + int (*attach_dev_pasid)(struct iommu_domain *domain,
> + struct device *dev, ioasid_t id);
> + void (*detach_dev_pasid)(struct iommu_domain *domain,
> + struct device *dev, ioasid_t id);
The parameter 'id' should be named 'pasid' here, for consistency with the
rest of the API.
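i.e. something like (only illustrating the rename, untested):

	int (*attach_dev_pasid)(struct iommu_domain *domain,
				struct device *dev, ioasid_t pasid);
	void (*detach_dev_pasid)(struct iommu_domain *domain,
				 struct device *dev, ioasid_t pasid);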
> +int iommu_attach_device_pasid(struct iommu_domain *domain,
> + struct device *dev, ioasid_t pasid)
> +{
> + struct iommu_group *group;
> + int ret = -EINVAL;
> + void *curr;
> +
> + if (!domain->ops->attach_dev_pasid)
> + return -EINVAL;
> +
> + group = iommu_group_get(dev);
> + if (!group)
> + return -ENODEV;
> +
> + mutex_lock(&group->mutex);
> + /*
> + * To keep things simple, we currently don't support IOMMU groups
> + * with more than one device. Existing SVA-capable systems are not
> + * affected by the problems that required IOMMU groups (lack of ACS
> + * isolation, device ID aliasing and other hardware issues).
> + */
> + if (!iommu_group_singleton_lockdown(group))
> + goto out_unlock;
> +
> + xa_lock(&group->pasid_array);
> + curr = __xa_cmpxchg(&group->pasid_array, pasid, NULL,
> + domain, GFP_KERNEL);
> + xa_unlock(&group->pasid_array);
Why the xa_lock/unlock? Just call the normal xa_cmpxchg?
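i.e. just (untested sketch; xa_cmpxchg() already takes the xa_lock
internally, so the open-coded lock/unlock buys nothing):

	curr = xa_cmpxchg(&group->pasid_array, pasid, NULL, domain,
			  GFP_KERNEL);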
> +void iommu_detach_device_pasid(struct iommu_domain *domain,
> + struct device *dev, ioasid_t pasid)
> +{
> + struct iommu_group *group;
> +
> + group = iommu_group_get(dev);
> + if (WARN_ON(!group))
> + return;
This group_get stuff really needs some cleaning; it makes no sense at
all.

If the kref on the group could go to zero within this function then the
initial access of the kref is already buggy:

	if (group)
		kobject_get(group->devices_kobj);

because it would crash or trip the WARN_ON.
We don't hit this because a group cannot be destroyed while a struct
device has a driver bound, and all of these paths are driver-bound
paths.

So none of these group_get/put patterns serve any purpose at all, and
now we are adding impossible WARN_ONs on top of them.
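For example, the detach path could presumably just be (rough sketch; I'm
guessing at the rest of the body since it isn't quoted here, and it relies
on dev->iommu_group being usable directly inside iommu.c, the same way
iommu_group_get() reads it):

	struct iommu_group *group = dev->iommu_group;

	/* A bound driver keeps the group alive, no get/put needed */
	mutex_lock(&group->mutex);
	domain->ops->detach_dev_pasid(domain, dev, pasid);
	xa_erase(&group->pasid_array, pasid);
	mutex_unlock(&group->mutex);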
> +struct iommu_domain *
> +iommu_get_domain_for_dev_pasid(struct device *dev, ioasid_t pasid)
> +{
> + struct iommu_domain *domain;
> + struct iommu_group *group;
> +
> + group = iommu_group_get(dev);
> + if (!group)
> + return NULL;
And now we are doing useless things on a performance path!
> + mutex_lock(&group->mutex);
> + domain = xa_load(&group->pasid_array, pasid);
> + if (domain && domain->type == IOMMU_DOMAIN_SVA)
> + iommu_sva_domain_get_user(domain);
> + mutex_unlock(&group->mutex);
> + iommu_group_put(group);
Why do we need so much locking on a performance path? Read it out of the
xarray under RCU instead.
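Something like (sketch only; it assumes whatever reference the caller
needs can be taken safely under the RCU read lock, and that entries are
only freed after a grace period):

	rcu_read_lock();
	domain = xa_load(&group->pasid_array, pasid);
	/* take the needed reference here, before dropping the RCU lock */
	rcu_read_unlock();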
Not sure I see how this get_user refcounting can work?
Jason