public inbox for kvm@vger.kernel.org
From: Robin Murphy <robin.murphy@arm.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
	Lu Baolu <baolu.lu@linux.intel.com>,
	Cornelia Huck <cohuck@redhat.com>,
	David Woodhouse <dwmw2@infradead.org>,
	iommu@lists.linux-foundation.org, Joerg Roedel <joro@8bytes.org>,
	kvm@vger.kernel.org,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Will Deacon <will@kernel.org>, Christoph Hellwig <hch@lst.de>,
	"Tian, Kevin" <kevin.tian@intel.com>
Subject: Re: [PATCH v2 0/4] Make the iommu driver no-snoop block feature consistent
Date: Fri, 8 Apr 2022 14:11:10 +0100	[thread overview]
Message-ID: <4f93d16d-9606-bd1c-a82b-e4b00ae2364f@arm.com> (raw)
In-Reply-To: <20220408121845.GT2120790@nvidia.com>

On 2022-04-08 13:18, Jason Gunthorpe wrote:
> On Thu, Apr 07, 2022 at 08:27:05PM +0100, Robin Murphy wrote:
>> On 2022-04-07 20:08, Jason Gunthorpe wrote:
>>> On Thu, Apr 07, 2022 at 07:02:03PM +0100, Robin Murphy wrote:
>>>> On 2022-04-07 18:43, Jason Gunthorpe wrote:
>>>>> On Thu, Apr 07, 2022 at 06:03:37PM +0100, Robin Murphy wrote:
>>>>>> At a glance, this all looks about the right shape to me now, thanks!
>>>>>
>>>>> Thanks!
>>>>>
>>>>>> Ideally I'd hope patch #4 could go straight to device_iommu_capable() from
>>>>>> my Thunderbolt series, but we can figure that out in a couple of weeks once
>>>>>
>>>>> Yes, this does help that, because now the only iommu_capable call is
>>>>> in a context where a device is available :)
>>>>
>>>> Derp, of course I have *two* VFIO patches waiting, the other one touching
>>>> the iommu_capable() calls (there's still IOMMU_CAP_INTR_REMAP, which, much
>>>> as I hate it and would love to boot all that stuff over to
>>>> drivers/irqchip,
>>>
>>> Oh me too...
>>>
>>>> it's not in my way so I'm leaving it be for now). I'll have to rebase that
>>>> anyway, so merging this as-is is absolutely fine!
>>>
>>> This might help your effort - after this series and this below there
>>> are no 'bus' users of iommu_capable left at all.
>>
>> Thanks, but I still need a device for the iommu_domain_alloc() as well, so
>> at that point the interrupt check is OK to stay where it is.
> 
> It is a simple enough change that could avoid introducing the
> device_iommu_capable() at all perhaps.
> 
>> I figured out a locking strategy per my original idea that seems
>> pretty clean, it just needs vfio_group_viable() to go away first:
> 
> I think this should be more like:
> 
> 		struct vfio_device *vdev;
> 
> 		mutex_lock(&group->device_lock);
> 		vdev = list_first_entry(&group->device_list,
> 					struct vfio_device, group_next);
> 		ret = driver->ops->attach_group(data, group->iommu_group,
> 						group->type, vdev->dev);
> 		mutex_unlock(&group->device_lock);
> 
> Then don't do the iommu_group_for_each_dev() at all.
> 
> The problem with iommu_group_for_each_dev() is that it may select a
> struct device that does not have a vfio_device bound to it, so we
> would be using a random struct device that is not protected by any
> VFIO device_driver.

Yeah, I was just looking at the final puzzle piece of making sure to nab 
the actual VFIO device rather than some unbound device that's just along 
for the ride... If I can't come up with anything more self-contained 
I'll take this suggestion, thanks.

> However, this creates an oddball situation where the vfio_device and
> its struct device could become unplugged from the system while the
> domain that the struct device spawned continues to exist and remains
> attached to other devices in the same group. ie the iommu driver has
> to be careful not to retain the struct device input.

Oh, I rather assumed that VFIO might automatically tear down the 
container/domain when the last real user disappears. I don't see there 
being an obvious problem if the domain does hang around with residual 
non-VFIO devices attached until userspace explicitly closes all the 
relevant handles, as long as we take care not to release DMA ownership 
until that point also. As you say, it just looks a bit funny.
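
As a rough userspace model of that ordering (plain C, all names 
hypothetical, not the kernel API): DMA ownership is tied to the last 
userspace handle on the container, not to the lifetime of any 
individual device.

```c
#include <assert.h>

/* Hypothetical model: the container keeps group DMA ownership until
 * the last userspace handle on it is closed, even if the device that
 * spawned the domain has already been unplugged. */
struct container {
	int open_handles;	/* userspace fds still referencing it */
	int owns_dma;		/* group DMA ownership still held */
};

static void container_open(struct container *c)
{
	c->open_handles++;
	c->owns_dma = 1;
}

static void container_close(struct container *c)
{
	/* Release DMA ownership only when the last handle goes away. */
	if (--c->open_handles == 0)
		c->owns_dma = 0;
}
```

The point of the shape is that device unplug never touches `owns_dma` 
directly; only the final close does.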

> I suppose that is inevitable to have sharing of domains across
> devices, so the iommu drivers will have to accommodate this.

I think domain lifecycle management is already entirely up to the users 
and not something that IOMMU drivers need to worry about. Drivers should 
only need to look at per-device data in attach/detach (and, once I've 
finished, alloc), taken from the device argument, which can be assumed 
to be valid at that point. Otherwise, all the relevant internal data for 
domain ops should belong to the domain already.
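
To illustrate that contract (a standalone sketch, not the kernel API; 
all names hypothetical): the domain copies whatever it needs from the 
device at alloc/attach time and never keeps the struct device pointer, 
so a later unplug cannot leave it with a dangling reference.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical per-device IOMMU configuration. */
struct device {
	int coherent;		/* device can enforce cache coherency */
	unsigned long ias;	/* input address size in bits */
};

/* The domain stores copies of the parameters it needs, never a
 * pointer back to the struct device that spawned it. */
struct iommu_domain {
	int enforce_coherency;
	unsigned long ias;
};

static struct iommu_domain *domain_alloc_for_dev(const struct device *dev)
{
	struct iommu_domain *dom = calloc(1, sizeof(*dom));

	if (!dom)
		return NULL;
	/* Snapshot per-device data; dev may disappear later. */
	dom->enforce_coherency = dev->coherent;
	dom->ias = dev->ias;
	return dom;
}
```

With this shape, freeing the device after allocation leaves the domain 
fully usable, which is exactly the "domain outlives the device that 
spawned it" situation discussed above.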

Cheers,
Robin.


Thread overview: 42+ messages
2022-04-07 15:23 [PATCH v2 0/4] Make the iommu driver no-snoop block feature consistent Jason Gunthorpe
2022-04-07 15:23 ` [PATCH v2 1/4] iommu: Introduce the domain op enforce_cache_coherency() Jason Gunthorpe
2022-04-08  8:05   ` Tian, Kevin
2022-04-09 12:44     ` Lu Baolu
2022-04-11 14:11     ` Jason Gunthorpe
2022-04-08  8:27   ` Tian, Kevin
2022-04-07 15:23 ` [PATCH v2 2/4] vfio: Move the Intel no-snoop control off of IOMMU_CACHE Jason Gunthorpe
2022-04-08  8:16   ` Tian, Kevin
2022-04-09 12:50     ` Lu Baolu
2022-04-12  7:44       ` Tian, Kevin
2022-04-12 13:13         ` Lu Baolu
2022-04-12 13:20           ` Jason Gunthorpe
2022-04-12 23:04             ` Tian, Kevin
2022-04-13 11:37               ` Lu Baolu
2022-04-08 15:47   ` Alex Williamson
2022-04-11 14:13     ` Jason Gunthorpe
2022-04-07 15:23 ` [PATCH v2 3/4] iommu: Redefine IOMMU_CAP_CACHE_COHERENCY as the cap flag for IOMMU_CACHE Jason Gunthorpe
2022-04-08  8:21   ` Tian, Kevin
2022-04-08 12:21     ` Jason Gunthorpe
2022-04-09 12:51   ` Lu Baolu
2022-04-07 15:23 ` [PATCH v2 4/4] vfio: Require that devices support DMA cache coherence Jason Gunthorpe
2022-04-08  8:26   ` Tian, Kevin
2022-04-08 12:22     ` Jason Gunthorpe
2022-04-08 13:28       ` Robin Murphy
2022-04-08 13:37         ` Jason Gunthorpe
2022-04-08 15:48   ` Alex Williamson
2022-07-01  4:57   ` Alexey Kardashevskiy
2022-07-01  6:07     ` Tian, Kevin
2022-07-01  6:24       ` Alexey Kardashevskiy
2022-04-07 17:03 ` [PATCH v2 0/4] Make the iommu driver no-snoop block feature consistent Robin Murphy
2022-04-07 17:43   ` Jason Gunthorpe
2022-04-07 18:02     ` Robin Murphy
2022-04-07 19:08       ` Jason Gunthorpe
2022-04-07 19:27         ` Robin Murphy
2022-04-08 12:18           ` Jason Gunthorpe
2022-04-08 13:11             ` Robin Murphy [this message]
2022-04-08 13:35               ` Jason Gunthorpe
2022-04-08 17:44                 ` Robin Murphy
2022-04-12  2:51                   ` Tian, Kevin
2022-04-08  9:08         ` Tian, Kevin
2022-04-08 10:11           ` Robin Murphy
2022-04-12  2:49             ` Tian, Kevin
