From: Jason Gunthorpe <jgg@nvidia.com>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: Yi Liu <yi.l.liu@intel.com>,
"Giani, Dhaval" <Dhaval.Giani@amd.com>,
Vasant Hegde <vasant.hegde@amd.com>,
Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
joro@8bytes.org, alex.williamson@redhat.com,
kevin.tian@intel.com, robin.murphy@arm.com,
baolu.lu@linux.intel.com, cohuck@redhat.com,
eric.auger@redhat.com, kvm@vger.kernel.org,
mjrosato@linux.ibm.com, chao.p.peng@linux.intel.com,
yi.y.sun@linux.intel.com, peterx@redhat.com, jasowang@redhat.com,
shameerali.kolothum.thodi@huawei.com, lulu@redhat.com,
iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
linux-kselftest@vger.kernel.org, zhenzhong.duan@intel.com,
joao.m.martins@oracle.com, xin.zeng@intel.com,
yan.y.zhao@intel.com
Subject: Re: [PATCH v6 0/6] iommufd: Add nesting infrastructure (part 2/2)
Date: Tue, 12 Dec 2023 15:21:00 -0400
Message-ID: <20231212192100.GP3014157@nvidia.com>
In-Reply-To: <ZXiw4XK/1+ZdsFV1@Asurada-Nvidia>
On Tue, Dec 12, 2023 at 11:13:37AM -0800, Nicolin Chen wrote:
> On Tue, Dec 12, 2023 at 10:44:21AM -0400, Jason Gunthorpe wrote:
> > On Mon, Dec 11, 2023 at 11:30:00PM -0800, Nicolin Chen wrote:
> >
> > > > > Could the structure just look like this?
> > > > > struct iommu_dev_assign_virtual_id {
> > > > > __u32 size;
> > > > > __u32 dev_id;
> > > > > __u32 id_type;
> > > > > __u32 id;
> > > > > };
> > > >
> > > > It needs to take in the viommu_id also, and I'd make the id 64 bits
> > > > just for good luck.
> > >
> > > What is viommu_id required for in this context? I thought we
> > > already know which SMMU instance to issue commands to via dev_id?
> >
> > The viommu_id would be the container that holds the xarray that maps
> > the vRID to pRID
> >
> > Logically we could have multiple mappings per iommufd as we could have
> > multiple iommu instances working here.
>
> I see. This is the object to hold a shared stage-2 HWPT/domain then.
It could be done like that, yes. I wasn't thinking about linking the
stage two so tightly but perhaps? If we can avoid putting the hwpt
here that might be more general.
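
Going back to the assign ioctl itself, folding in the viommu_id and the
64-bit id, it could roughly look like the below -- just a sketch, the
field names are illustrative and nothing here is settled uAPI:

struct iommu_dev_assign_virtual_id {
	__u32 size;
	__u32 dev_id;
	__u32 viommu_id;	/* container holding the vRID -> pRID xarray */
	__u32 id_type;
	__aligned_u64 id;	/* 64 bits "for good luck" */
};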
> // iommufd_private.h
>
> enum iommufd_object_type {
> ...
> + IOMMUFD_OBJ_VIOMMU,
> ...
> };
>
> +struct iommufd_viommu {
> + struct iommufd_object obj;
> + struct iommufd_hwpt_paging *hwpt;
> + struct xarray devices;
> +};
>
> struct iommufd_hwpt_paging {
> ...
> + struct list_head viommu_list;
> ...
> };
I'd probably first try to go backwards and link the hwpt to the
viommu.
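
I.e. one reading of that, as a sketch only: keep the stage-2 reference
(and the vRID -> pRID xarray) on the viommu object and drop the
viommu_list from the hwpt side. Names are illustrative, nothing final:

struct iommufd_viommu {
	struct iommufd_object obj;
	struct iommufd_hwpt_paging *hwpt;	/* shared stage-2 */
	struct xarray devices;			/* vRID -> pRID */
};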
> struct iommufd_group {
> ...
> + struct iommufd_viommu *viommu; // should we attach to viommu instead of hwpt?
> ...
> };
No. Attach is a statement of translation so you still attach to the HWPT.
> Question to finalize how we map vRID to pRID in the xarray:
> how should IOMMUFD_DEV_INVALIDATE work? The ioctl structure has
> a dev_id and a list of commands that belong to the device. So,
> it forwards the struct device pointer to the driver along with
> the commands. Then, doesn't the driver already know the pRID
> from the dev pointer without looking up a vRID-pRID table?
The first version of DEV_INVALIDATE should have no xarray. The
invalidate commands are stripped of the SID and executed on the given
dev_id, period. The VMM splits up the invalidate command list.
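
For illustration, the VMM side of that first version could be as simple
as the sketch below. Every type and helper name here is made up; none
of this is real uAPI yet:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical VMM helpers - stand-ins for illustration only */
extern uint32_t vmm_vsid_to_dev_id(uint32_t vsid);
extern void vmm_dev_invalidate(uint32_t dev_id, const void *cmd, size_t len);

struct vcmd {
	uint32_t vsid;		/* guest stream/requester ID in the command */
	uint8_t body[60];	/* rest of the command, opaque here */
};

/*
 * Walk the guest's command list, strip the SID, and issue each command
 * against the iommufd dev_id that the vSID maps to. No vRID -> pRID
 * xarray in the kernel; the VMM does all the splitting.
 */
static void replay_guest_invalidations(const struct vcmd *cmds, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		uint32_t dev_id = vmm_vsid_to_dev_id(cmds[i].vsid);

		vmm_dev_invalidate(dev_id, cmds[i].body, sizeof(cmds[i].body));
	}
}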
For the second version maybe we have the xarray, or maybe we just push the
xarray to the eventual viommu series.
> struct iommu_hwpt_alloc {
> ...
> + __u32 viommu_id;
> };
>
> +enum iommu_dev_virtual_id_type {
> + IOMMU_DEV_VIRTUAL_ID_TYPE_AMD_VIOMMU_DID, // not sure how this fits the xarray in viommu obj.
> + IOMMU_DEV_VIRTUAL_ID_TYPE_AMD_VIOMMU_RID,
It is just DID. In both cases the ID is the index to the "STE" radix
tree, whatever the driver happens to call it.
> Then, I think that we also need an iommu_viommu_alloc structure
> and ioctl to allocate an object, and that the VMM should know if it
> needs to allocate multiple viommu objects -- this probably needs
> the hw_info ioctl to return a piommu_id so the VMM gets the list of
> piommus from the attached devices?
Yes and yes
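
Roughly, for the first part, something like this -- purely a sketch,
every field here is a guess:

struct iommu_viommu_alloc {
	__u32 size;
	__u32 flags;
	__u32 dev_id;		/* or whatever identifies the physical IOMMU */
	__u32 out_viommu_id;
};

And for the second part, hw_info (or similar) reporting which physical
IOMMU the device sits behind, so the VMM knows how many viommu objects
to create.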
> Another question: introducing the viommu obj complicates things
> a lot. Do we want it to come with iommu_dev_assign_virtual_id,
> or maybe put it in a later series? We could stage the xarray in the
> iommufd_hwpt_paging struct for now, so a single-IOMMU system could
> still work with that.
All this would be in its own series to enable HW-accelerated viommu
support on ARM & AMD, as we've been doing so far.
I imagine it coming after we get the basic invalidation done.
> > > And should we rename the "cache_invalidate_user"? Would VT-d
> > > still use it for device cache?
> >
> > I think vt-d will not implement it
>
> Then should we "s/cache_invalidate_user/iotlb_sync_user"?
I think cache_invalidate is still a fine name. VT-d will generate ATC
invalidations under that function too.
Jason