From: Lu Baolu <baolu.lu@linux.intel.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: baolu.lu@linux.intel.com, Jason Gunthorpe <jgg@nvidia.com>,
Joerg Roedel <joro@8bytes.org>,
"Tian, Kevin" <kevin.tian@intel.com>,
"Alex Williamson (alex.williamson@redhat.com)"
<alex.williamson@redhat.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Jason Wang <jasowang@redhat.com>,
"parav@mellanox.com" <parav@mellanox.com>,
"Enrico Weigelt, metux IT consult" <lkml@metux.net>,
Paolo Bonzini <pbonzini@redhat.com>,
Shenming Lu <lushenming@huawei.com>,
Eric Auger <eric.auger@redhat.com>,
Jonathan Corbet <corbet@lwn.net>,
"Raj, Ashok" <ashok.raj@intel.com>,
"Liu, Yi L" <yi.l.liu@intel.com>, "Wu, Hao" <hao.wu@intel.com>,
"Jiang, Dave" <dave.jiang@intel.com>,
Jacob Pan <jacob.jun.pan@linux.intel.com>,
Kirti Wankhede <kwankhede@nvidia.com>,
Robin Murphy <robin.murphy@arm.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"iommu@lists.linux-foundation.org"
<iommu@lists.linux-foundation.org>,
David Woodhouse <dwmw2@infradead.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: Plan for /dev/ioasid RFC v2
Date: Thu, 24 Jun 2021 21:42:36 +0800 [thread overview]
Message-ID: <b77b9ffc-166e-3019-0328-59d20a437fd5@linux.intel.com> (raw)
In-Reply-To: <YNQEClb1nptFBIRB@yekko>
On 2021/6/24 12:03, David Gibson wrote:
> On Fri, Jun 18, 2021 at 01:21:47PM +0800, Lu Baolu wrote:
>> Hi David,
>>
>> On 6/17/21 1:22 PM, David Gibson wrote:
>>>> The iommu_group can guarantee the isolation among different physical
>>>> devices (represented by RIDs). But when it comes to sub-devices (ex. mdev or
>>>> vDPA devices represented by RID + SSID), we have to rely on the
>>>> device driver for isolation. The devices which are able to generate sub-
>>>> devices should either use their own on-device mechanisms or use the
>>>> platform features like Intel Scalable IOV to isolate the sub-devices.
>>> This seems like a misunderstanding of groups. Groups are not tied to
>>> any PCI meaning. Groups are the smallest unit of isolation, no matter
>>> what is providing that isolation.
>>>
>>> If mdevs are isolated from each other by clever software, even though
>>> they're on the same PCI device they are in different groups from each
>>> other *by definition*. They are also in a different group from their
>>> parent device (however the mdevs only exist when the mdev driver is
>>> active, which implies that the parent device's group is owned by the
>>> kernel).
>>
>> You are right. This is also my understanding of an "isolation group".
>>
>> But, as I understand it, iommu_group is only the isolation group visible
>> to the IOMMU. When we talk about sub-devices (sw-mdev or mdev w/ pasid),
>> only the device and its driver know the details of isolation, hence
>> iommu_group cannot be extended to cover them. The device drivers
>> should define their own isolation groups.
> So, "iommu group" isn't a perfect name. It came about because
> originally the main mechanism for isolation was the IOMMU, so it was
> typically the IOMMU's capabilities that determined if devices were
> isolated. However it was always known that there could be other
> reasons for failure of isolation. To simplify the model we decided
> that we'd put things into the same group if they were non-isolated for
> any reason.
Yes.
>
> The kernel has no notion of "isolation group" as distinct from "iommu
> group". What are called iommu groups in the kernel now*are*
> "isolation groups" and that was always the intention - it's just not a
> great name.
Fair enough.
>
>> Otherwise, the device driver has to fake an iommu_group and add hacky
>> code to link the related IOMMU elements (iommu device, domain, group
>> etc.) together. Actually this is part of the problem that this proposal
>> tries to solve.
> Yeah, that's not ideal.
>
>>>> Under the above conditions, different sub-devices of the same RID device
>>>> could use different IOASIDs. This seems to mean that we can't support a
>>>> mixed mode where, for example, two RIDs share an iommu_group and one (or
>>>> both) of them have sub-devices.
>>> That doesn't necessarily follow. mdevs which can be successfully
>>> isolated by their mdev driver are in a different group from their
>>> parent device, and therefore need not be affected by whether the
>>> parent device shares a group with some other physical device. They
>>> *might* be, but that's up to the mdev driver to determine based on
>>> what it can safely isolate.
>>>
>> If we understand it as multiple levels of isolation, can we classify the
>> devices into the following categories?
>>
>> 1) Legacy devices
>> - devices without device-level isolation
>> - multiple devices could sit in a single iommu_group
>> - only a single I/O address space could be bound to IOMMU
> I'm not really clear on what that last statement means.
I mean a single iommu_domain should be used by all devices sharing a
single iommu_group.
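To make that concrete, here is a minimal sketch against the existing
in-kernel API (error handling trimmed, and the helper name is made up
for illustration); VFIO type1 does the real version of this:

#include <linux/device.h>
#include <linux/iommu.h>

/*
 * Sketch only: attach one iommu_domain to every device in the group
 * at once.
 */
static struct iommu_domain *attach_whole_group(struct device *dev)
{
	struct iommu_group *group = iommu_group_get(dev);
	struct iommu_domain *domain;

	if (!group)
		return NULL;

	domain = iommu_domain_alloc(dev->bus);

	/*
	 * Attaching the domain to the group attaches every device in
	 * the group; no member can get a different I/O address space.
	 */
	if (domain && iommu_attach_group(domain, group)) {
		iommu_domain_free(domain);
		domain = NULL;
	}

	iommu_group_put(group);
	return domain;
}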
>
>> 2) Modern devices
>> - devices capable of device-level isolation
> This will *typically* be true of modern devices, but I don't think we
> can really make it a hard API distinction. Legacy or buggy bridges
> can force modern devices into the same group as each other. Modern
> devices are not immune from bugs which would force lack of isolation
> (e.g. forgotten debug registers on function 0 which affect other
> functions).
>
Yes.
I am wondering whether it's feasible to change "bind/attach a device to
an IOASID" to "bind/attach an isolated unit to an IOASID". An isolated
unit could be

1) an iommu_group containing one or more devices;
2) a physical device sitting in a 1-device iommu_group, plus a device ID
   (PASID/sub-stream ID) which represents an isolated sub-device inside
   the physical one;
3) anything else we might have in the future.

A handle which represents the connection between the device and the
IOMMU is returned on any successful binding. This handle could then be
used for GET_INFO and attach/detach.
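Purely as an illustration (none of these structure, flag or ioctl names
are in the RFC; they are assumptions to show the flow), the uAPI side
could look roughly like:

#include <linux/types.h>

/* Hypothetical, for illustration only -- not part of the RFC. */
struct ioasid_bind_unit {
	__u32	argsz;
	__u32	flags;
#define IOASID_BIND_UNIT_GROUP		(1 << 0) /* case 1): a whole iommu_group */
#define IOASID_BIND_UNIT_DEV_PASID	(1 << 1) /* case 2): RID + PASID/sub-stream ID */
	__s32	device_fd;	/* e.g. a VFIO or vDPA device fd */
	__u32	pasid;		/* used with IOASID_BIND_UNIT_DEV_PASID */
	__u32	handle;		/* returned on success */
};

/*
 * Imagined flow (all names hypothetical):
 *	bind   -> ioctl(ioasid_fd, IOASID_BIND_UNIT, &bind);  fills bind.handle
 *	query  -> GET_INFO on bind.handle
 *	attach -> attach/detach bind.handle to/from an IOASID
 */

Whatever the final names turn out to be, the point is that the handle
returned by the bind step, not the raw device, is what GET_INFO and
attach/detach would operate on.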
Best regards,
baolu