From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: <kevin.tian@intel.com>, <will@kernel.org>, <joro@8bytes.org>,
<suravee.suthikulpanit@amd.com>, <robin.murphy@arm.com>,
<dwmw2@infradead.org>, <baolu.lu@linux.intel.com>,
<shuah@kernel.org>, <linux-kernel@vger.kernel.org>,
<iommu@lists.linux.dev>, <linux-arm-kernel@lists.infradead.org>,
<linux-kselftest@vger.kernel.org>, <eric.auger@redhat.com>,
<jean-philippe@linaro.org>, <mdf@kernel.org>,
<mshavit@google.com>, <shameerali.kolothum.thodi@huawei.com>,
<smostafa@google.com>, <yi.l.liu@intel.com>
Subject: Re: [PATCH v1 04/10] iommufd/viommu: Allow drivers to control vdev_id lifecycle
Date: Tue, 8 Oct 2024 10:39:43 -0700
Message-ID: <ZwVuX1MB3Zr4UDdm@Asurada-Nvidia>
In-Reply-To: <20240905180119.GY1358970@nvidia.com>

Sorry for the late reply. Just sat down and started to look at
this series.

On Thu, Sep 05, 2024 at 03:01:19PM -0300, Jason Gunthorpe wrote:
> On Tue, Aug 27, 2024 at 10:02:06AM -0700, Nicolin Chen wrote:
> > The iommufd core provides a lookup helper for an IOMMU driver to find a
> > device pointer by device's per-viommu virtual ID. Yet a driver may need
> > an inverted lookup to find a device's per-viommu virtual ID by a device
> > pointer, e.g. when reporting virtual IRQs/events back to the user space.
> > In this case, it'd be unsafe for iommufd core to do an inverted lookup,
> > as the driver can't track the lifecycle of a viommu object or a vdev_id
> > object.
> >
> > Meanwhile, some HW can even support virtual device ID lookup by its HW-
> > accelerated virtualization feature. E.g. Tegra241 CMDQV HW can
> > execute vanilla guest-issued SMMU commands containing a virtual Stream ID
> > but requires software to configure a link between virtual Stream ID and
> > physical Stream ID via HW registers. So not only the iommufd core needs
> > a vdev_id lookup table, drivers will want one too.
> >
> > Given the two justifications above, it's the best practice to provide
> > a pair of set_vdev_id/unset_vdev_id ops in the viommu ops, so a driver
> > can implement them to control a vdev_id's lifecycle, and configure the
> > HW properly if required.
>
> I think the lifecycle rules should be much simpler.
>
> If a nested domain is attached to a STE/RID/device then the vIOMMU
> affiliated with that nested domain is pinned while the STE is in place
>
> So the driver only needs to provide locking around attach changing the
> STE's vIOMMU vs async operations translating from a STE to a
> vIOMMU. This can be a simple driver lock of some kind, ie a rwlock
> across the STE table.

I was worried about the async between a vdev link (idev<=>vIOMMU)
and a regular attach link (idev<=>nested_domain).
Though practically the vdev link wouldn't break until the regular
attach link breaks first, nothing guaranteed that ordering. So,
having a master->lock to protect master->vdev could ensure it.
Now, with the vdev being an object refcounting idev/vIOMMU objs,
I think we can simply pin the vdev in iommufd_hw_pagetable_attach
to ensure the vdev won't disappear. Then, a driver wouldn't need
to worry about that. [1]
Meanwhile, a nested_domain pins the vIOMMU object in the iommufd
level upon allocation. [2]
So, what's missing seems to be the attach between the master->dev
and the nested_domain itself. [3]
idev <----------- vdev [1] ---------> vIOMMU
(dev) ---[3]--> nested_domain ----[2]-----^
A lock protecting the attach(), running in parallel with the iommu
core's group->mutex, could help, I think?
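The pinning scheme in [1]-[3] could be modeled as below, with a toy
refcount in plain C (obj/vdev/idev here are illustrative placeholders,
not the actual iommufd objects or helpers):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy refcounted object; "freed" stands in for the actual kfree(). */
struct obj { int refcount; bool freed; };

static void obj_get(struct obj *o) { o->refcount++; }
static void obj_put(struct obj *o)
{
	if (--o->refcount == 0)
		o->freed = true;
}

struct vdev { struct obj obj; };
struct idev { struct vdev *vdev; bool attached; };

/* [1]: attach pins the vdev so it can't disappear under the driver. */
static void hw_pagetable_attach(struct idev *i)
{
	obj_get(&i->vdev->obj);
	i->attached = true;
}

/* Detach drops the pin; only then can the vdev's refcount reach zero. */
static void hw_pagetable_detach(struct idev *i)
{
	i->attached = false;
	obj_put(&i->vdev->obj);
}
```

With this, even if userspace drops its own reference to the vdev while
the device is still attached, the object stays alive until detach.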
> Generally that is how all the async events should work, go from the
> STE to the VIOMMU to a iommufd callback to the iommufd event
> queue. iommufd will translate the struct device from the driver to an
> idev_id (or maybe even a vdevid) the same basic way the PRI code works

My first attempt was putting the vdev into the attach_handle that
was created for the PRI code. Yet later on I found the links were too
long and some of them weren't safe (perhaps with the new vdev and
those pins mentioned above, it's worth another try). So letting the
driver hold the vdev itself became much simpler.
Thanks
Nicolin
2024-08-27 17:02 [PATCH v1 00/10] iommufd: Add VIOMMU infrastructure (Part-2 VIRQ) Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 01/10] iommufd: Rename IOMMUFD_OBJ_FAULT to IOMMUFD_OBJ_EVENT_IOPF Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 02/10] iommufd: Rename fault.c to event.c Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 03/10] iommufd: Add IOMMUFD_OBJ_EVENT_VIRQ and IOMMUFD_CMD_VIRQ_ALLOC Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 04/10] iommufd/viommu: Allow drivers to control vdev_id lifecycle Nicolin Chen
2024-09-05 18:01 ` Jason Gunthorpe
2024-10-08 17:39 ` Nicolin Chen [this message]
2024-10-23 7:22 ` Nicolin Chen
2024-10-23 16:59 ` Jason Gunthorpe
2024-10-23 18:54 ` Nicolin Chen
2024-10-28 12:58 ` Jason Gunthorpe
2024-08-27 17:02 ` [PATCH v1 05/10] iommufd/viommu: Add iommufd_vdev_id_to_dev helper Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 06/10] iommufd/viommu: Add iommufd_viommu_report_irq helper Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 07/10] iommufd/selftest: Implement mock_viommu_set/unset_vdev_id Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 08/10] iommufd/selftest: Add IOMMU_TEST_OP_TRIGGER_VIRQ for VIRQ coverage Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 09/10] iommufd/selftest: Add EVENT_VIRQ test coverage Nicolin Chen
2024-08-27 17:02 ` [PATCH v1 10/10] iommu/arm-smmu-v3: Report virtual IRQ for device in user space Nicolin Chen