From: Jason Gunthorpe <jgg@nvidia.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
Cornelia Huck <cohuck@redhat.com>,
iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
kvm@vger.kernel.org, Will Deacon <will@kernel.org>,
Qian Cai <cai@lca.pw>, Joerg Roedel <jroedel@suse.de>,
Marek Szyprowski <m.szyprowski@samsung.com>,
Matthew Rosato <mjrosato@linux.ibm.com>
Subject: Re: [PATCH 4/4] iommu: Fix ordering of iommu_release_device()
Date: Fri, 9 Sep 2022 15:30:24 -0300 [thread overview]
Message-ID: <YxuGQDCzDsvKV2W8@nvidia.com> (raw)
In-Reply-To: <e0ff6dc1-91b3-2e41-212c-c83a2bf2b3a8@arm.com>
On Fri, Sep 09, 2022 at 06:57:59PM +0100, Robin Murphy wrote:
> > > That then only leaves the issue that that domain may still become
> > > invalid at any point after the group mutex has been dropped.
> >
> > So that is this race:
> >
> >         CPU0                                CPU1
> >  iommu_release_device(a)
> >   __iommu_group_remove_device(a)
> >                                      iommu_device_use_default_domain(b)
> >                                       iommu_domain_free(domain)
> >                                      iommu_release_device(b)
> >                                       ops->release_device(b)
> >   ops->release_device(a)
> >   // Boom, a is still attached to domain :(
> >
> > I can't think of how to solve this other than holding the group mutex
> > across release_device. See below.
>
> I see a few possibilities:
>
> - Backtrack slightly on its removal, and instead repurpose detach_dev
> into a specialised domain cleanup callback, called before or during
> iommu_group_remove_device(), with the group mutex held.
See below for why that is somewhat troublesome..
> - Drivers that hold any kind of internal per-device references to
> domains - which is generally the root of this issue in the first place -
> can implement proper reference counting, so even if a domain is "freed"
> with a device still attached as above, it doesn't actually go away until
> release_device(a) cleans up the final dangling reference. I suggested
> the core doing this generically, but on reflection I think it's actually
> a lot more straightforward as a driver-internal thing.
Isn't this every driver though? Like every single driver implementing
an UNMANAGED/DMA/DMA_FQ domain has a hidden reference to the
iommu_domain - minimally to point the HW to the IOPTEs it stores.
> - Drivers that basically just keep a list of devices in the domain and
> need to do a list_del() in release_device, can also list_del_init() any
> still-attached devices in domain_free, with a simple per-instance or
> global lock to serialise the two.
Compared to just locking ops->release_device(), these all seem more
complicated?
IMHO the core code should try to protect the driver from racing
release with anything else.
Do you know a reason not to hold the group mutex across
release_device()? I think that is the most straightforward and
future-proof option.
Arguably all the device ops should be serialized under the group
mutex.
> @@ -1022,6 +1030,14 @@ void iommu_group_remove_device(struct device *dev)
> dev_info(dev, "Removing from iommu group %d\n", group->id);
> mutex_lock(&group->mutex);
> + if (WARN_ON(group->domain != group->default_domain &&
> + group->domain != group->blocking_domain)) {
This will falsely trigger: if there are two VFIO devices then the group
will remain owned when we unplug just one of them, but the group's
domain will be a VFIO-owned domain.
It is why I put the list_empty() protection, as the test only works if
it is the last device.
> + if (group->default_domain)
> + __iommu_attach_device(group->default_domain, dev);
> + else
> + __iommu_detach_device(group->domain, dev);
> + }
This was very appealing, but I rejected it because it is too difficult
to support multi-device groups that share the RID.
In that case we expect that the first attach/detach of a device on the
shared RID will reconfigure the iommu and the attach/detach of all the
other devices in the group with the same parameters will be a NOP.
So in a VFIO configuration where two drivers are bound to a single
group with shared RID and we unplug one device, this will rebind the
shared RID and thus the entire group to blocking/default and break the
still running VFIO on the second device.
The device-centric interface only works if we always apply the
operation to every device in the group..
This is why I kept it as ops->release_device() with an implicit detach
of the current domain inside the driver: release_device() has the
special meaning of 'detach the domain but do not change a shared RID'.
And it misses the logic to WARN_ON if a domain is set and an external
entity wrongly uses iommu_group_remove_device()..
Jason