Linux IOMMU Development
From: Robin Murphy <robin.murphy@arm.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	kvm@vger.kernel.org, Will Deacon <will@kernel.org>,
	Cornelia Huck <cohuck@redhat.com>,
	iommu@lists.linux-foundation.org,
	Alex Williamson <alex.williamson@redhat.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v2 0/4] Make the iommu driver no-snoop block feature consistent
Date: Fri, 8 Apr 2022 18:44:37 +0100	[thread overview]
Message-ID: <a8371724-2864-a316-439d-5aa7a8bb5739@arm.com> (raw)
In-Reply-To: <20220408133523.GX2120790@nvidia.com>

On 2022-04-08 14:35, Jason Gunthorpe wrote:
> On Fri, Apr 08, 2022 at 02:11:10PM +0100, Robin Murphy wrote:
> 
>>> However, this creates an oddball situation where the vfio_device and
>>> its struct device could become unplugged from the system while the
>>> domain that the struct device spawned continues to exist and remains
>>> attached to other devices in the same group, i.e. the iommu driver has
>>> to be careful not to retain the struct device input.
>>
>> Oh, I rather assumed that VFIO might automatically tear down the
>> container/domain when the last real user disappears.
> 
> It does, that isn't quite what I mean..
> 
> Let's say a simple case with two groups and two devices.
> 
> Open a VFIO container FD
> 
> We open group A and SET_CONTAINER it. This results in a
>     domain_A = iommu_domain_alloc(device_A)
>     iommu_attach_group(domain_A, device_A->group)
> 
> We open group B and SET_CONTAINER it. Using the sharing logic we end
> up doing
>     iommu_attach_group(domain_A, device_B->group)
> 
> Now we close group A's FD, detach device_A->group from domain_A, and the
> driver core hot-unplugs device A completely from the system.
> 
> However, domain_A remains in the system used by group B's open FD.
> 
> It is a bit funny at least.. I think it is just something to document
> and be aware of for iommu driver writers that they probably shouldn't
> try to store the allocation device in their domain struct.
> 
> IMHO the only purpose of the allocation device is to crystallize the
> configuration of the iommu_domain at allocation time.

Oh, for sure. When I implement the API switch, I can certainly try to 
document it as clearly as possible that the device argument is only for 
resolving the correct IOMMU ops and target instance, and the resulting 
domain is still not in any way tied to that specific device.

I hadn't thought about how it might look to future developers who aren't 
already familiar with all the history here, so thanks for the nudge!

>> as long as we take care not to release DMA ownership until that point also.
>> As you say, it just looks a bit funny.
> 
> The DMA ownership should be OK as we take ownership on each group FD
> open
>   
>>> I suppose that is inevitable to have sharing of domains across
>>> devices, so the iommu drivers will have to accommodate this.
>>
>> I think domain lifecycle management is already entirely up to the users and
>> not something that IOMMU drivers need to worry about. Drivers should only
>> need to look at per-device data in attach/detach (and, once I've finished,
>> alloc) from the device argument which can be assumed to be valid at that
>> point. Otherwise, all the relevant internal data for domain ops should
>> belong to the domain already.
> 
> Making attach/detach take a struct device would be nice - but I would
> expect the attach/detach to use a strictly paired struct device and I
> don't think this trick of selecting an arbitrary vfio_device will
> achieve that.
> 
> So, I suppose VFIO would want to attach/detach on every vfio_device
> individually and it would iterate over the group instead of doing a
> list_first_entry() like above. This would not be hard to do in VFIO.

It feels like we've already beaten that discussion to death in other 
threads; regardless of what exact argument the iommu_attach/detach 
operations end up taking, they have to operate on the whole (explicit or 
implicit) iommu_group at once, because doing anything else would defeat 
the point of isolation groups, and be impossible for alias groups.

> Not sure what the iommu layer would have to do to accommodate this..

If it's significantly easier for VFIO to just run through a whole list 
of devices and attach each one without having to keep track of whether 
they might share an iommu_group which has already been attached, then we 
can probably relax the API a little such that attaching to a domain 
which is already the current domain becomes a no-op instead of returning 
-EBUSY, but I'd rather not create an expectation that anyone *has* to do 
that. For any other callers that would be forcing *more* iommu_group 
implementation details onto them, when we all want less.

Cheers,
Robin.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu

Thread overview: 42+ messages
2022-04-07 15:23 [PATCH v2 0/4] Make the iommu driver no-snoop block feature consistent Jason Gunthorpe via iommu
2022-04-07 15:23 ` [PATCH v2 1/4] iommu: Introduce the domain op enforce_cache_coherency() Jason Gunthorpe via iommu
2022-04-08  8:05   ` Tian, Kevin
2022-04-09 12:44     ` Lu Baolu
2022-04-11 14:11     ` Jason Gunthorpe via iommu
2022-04-08  8:27   ` Tian, Kevin
2022-04-07 15:23 ` [PATCH v2 2/4] vfio: Move the Intel no-snoop control off of IOMMU_CACHE Jason Gunthorpe via iommu
2022-04-08  8:16   ` Tian, Kevin
2022-04-09 12:50     ` Lu Baolu
2022-04-12  7:44       ` Tian, Kevin
2022-04-12 13:13         ` Lu Baolu
2022-04-12 13:20           ` Jason Gunthorpe via iommu
2022-04-12 23:04             ` Tian, Kevin
2022-04-13 11:37               ` Lu Baolu
2022-04-08 15:47   ` Alex Williamson
2022-04-11 14:13     ` Jason Gunthorpe via iommu
2022-04-07 15:23 ` [PATCH v2 3/4] iommu: Redefine IOMMU_CAP_CACHE_COHERENCY as the cap flag for IOMMU_CACHE Jason Gunthorpe via iommu
2022-04-08  8:21   ` Tian, Kevin
2022-04-08 12:21     ` Jason Gunthorpe via iommu
2022-04-09 12:51   ` Lu Baolu
2022-04-07 15:23 ` [PATCH v2 4/4] vfio: Require that devices support DMA cache coherence Jason Gunthorpe via iommu
2022-04-08  8:26   ` Tian, Kevin
2022-04-08 12:22     ` Jason Gunthorpe via iommu
2022-04-08 13:28       ` Robin Murphy
2022-04-08 13:37         ` Jason Gunthorpe via iommu
2022-04-08 15:48   ` Alex Williamson
2022-07-01  4:57   ` Alexey Kardashevskiy
2022-07-01  6:07     ` Tian, Kevin
2022-07-01  6:24       ` Alexey Kardashevskiy
2022-04-07 17:03 ` [PATCH v2 0/4] Make the iommu driver no-snoop block feature consistent Robin Murphy
2022-04-07 17:43   ` Jason Gunthorpe via iommu
2022-04-07 18:02     ` Robin Murphy
2022-04-07 19:08       ` Jason Gunthorpe via iommu
2022-04-07 19:27         ` Robin Murphy
2022-04-08 12:18           ` Jason Gunthorpe via iommu
2022-04-08 13:11             ` Robin Murphy
2022-04-08 13:35               ` Jason Gunthorpe via iommu
2022-04-08 17:44                 ` Robin Murphy [this message]
2022-04-12  2:51                   ` Tian, Kevin
2022-04-08  9:08         ` Tian, Kevin
2022-04-08 10:11           ` Robin Murphy
2022-04-12  2:49             ` Tian, Kevin
