From: Peter Kay
Subject: Re: Determining iommu groups in Xen?
Date: Fri, 29 Aug 2014 01:35:30 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

On 28 August 2014 19:45, Peter Kay wrote:
>
> On 28 August 2014 19:02:47 BST, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >On 28/08/14 18:53, Peter Kay wrote:
> >>
> >> On 28 August 2014 18:13:07 BST, Andrew Cooper wrote:
> >>
> >> An iommu group, as far as I'm aware, is the group of devices that are
> >not protected from each other. In KVM, you must pass through the entire
> >group to a VM at once, unless a 'don't go crying to me if it stomps
> >over your memory space or worse' patch is applied to the kernel
> >claiming that everything is fine.
> >
> >I have googled the term in the meantime, and it is what I initially
> >thought.
> >
> >All PCI devices passed through to the same domain share the same single
> >"iommu group" per Kernel/KVM terminology. There is not currently any
> >support for multiple iommu contexts within a single VM.
> >
> >~Andrew

See http://lxr.free-electrons.com/source/drivers/iommu/iommu.c and
intel-iommu.c (or amd-iommu.c). Group assignment is based on the ACS
capability of the upstream device.
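As an aside, whether a given upstream port actually advertises ACS shows up in `lspci -vv` output as an "Access Control Services" capability with ACSCap/ACSCtl lines. A small illustrative parser sketch (the SAMPLE text below is made up for demonstration, and `acs_flags` is a hypothetical helper, not anything from the kernel or pciutils):

```python
import re

# Hypothetical sample of the ACS capability block as lspci -vv prints it.
SAMPLE = """\
Capabilities: [140 v1] Access Control Services
\tACSCap:\tSrcValid+ TransBlk+ ReqRedir+ CmplRedir+ UpstreamFwd+ EgressCtrl- DirectTrans+
\tACSCtl:\tSrcValid+ TransBlk- ReqRedir+ CmplRedir+ UpstreamFwd+ EgressCtrl- DirectTrans-
"""

def acs_flags(lspci_vv_text, which="ACSCtl"):
    """Parse the ACSCap or ACSCtl line from `lspci -vv` output into a
    dict of flag name -> state ('+' means set, '-' means clear).
    Returns None if the device exposes no ACS capability at all."""
    m = re.search(which + r":\s*(.*)", lspci_vv_text)
    if not m:
        return None
    return {f[:-1]: f.endswith("+") for f in m.group(1).split()}

print(acs_flags(SAMPLE))           # currently enabled ACS controls
print(acs_flags(SAMPLE, "ACSCap")) # what the port is capable of
```

A device with no ACS capability at all (the common case for older PCH root ports, hence the quirks table mentioned below) simply has no such lines, which is why the helper distinguishes None from an all-minus dict.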
See in particular intel_iommu_add_device().

From https://www.kernel.org/doc/Documentation/vfio.txt:

'Therefore, while for the most part an IOMMU may have device level
granularity, any system is susceptible to reduced granularity. The
IOMMU API therefore supports a notion of IOMMU groups. A group is
a set of devices which is isolatable from all other devices in the
system. Groups are therefore the unit of ownership used by VFIO'

As far as reliable quirks for ACS protection go, see drivers/pci/quirks.c
(static const u16 pci_quirk_intel_pch_acs_ids[]) and Red Hat bugzilla 1037684.

I'll have to do some more testing to see whether lspci -t is a reasonable
indication of iommu groups, or whether I can write some code to figure
them out.

Obviously, returning the information from the Linux source is ultimately
not really a good idea(*), because the dom0 may not be Linux. It is in my
case, because NetBSD is (unfortunately) not yet functional enough for my
needs and I don't want to use a Solaris-derived OS, but that doesn't help
everyone else.

(*) Assuming it's possible at all, as the Linux dom0 is running on top of
Xen and is therefore restricted in some ways.

PK
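P.S. For the "write some code" option on a Linux dom0: vfio.txt describes the groups the kernel has already computed as being exported under /sys/kernel/iommu_groups (one numbered directory per group, each with a devices/ subdirectory). A minimal sketch assuming that sysfs layout — it only reflects what the Linux kernel decided, not necessarily what Xen itself enforces:

```python
import os
from collections import defaultdict

def iommu_groups(sysfs_root="/sys/kernel/iommu_groups"):
    """Map IOMMU group number -> list of device addresses, as reported
    by the Linux kernel under sysfs. Returns {} if the directory is
    absent (no IOMMU enabled, or not running on a Linux dom0)."""
    groups = defaultdict(list)
    if not os.path.isdir(sysfs_root):
        return {}
    for grp in sorted(os.listdir(sysfs_root), key=int):
        devdir = os.path.join(sysfs_root, grp, "devices")
        for dev in sorted(os.listdir(devdir)):
            groups[int(grp)].append(dev)
    return dict(groups)

for grp, devs in iommu_groups().items():
    print("group %d: %s" % (grp, " ".join(devs)))
```

Comparing that dump against `lspci -t` output would show quickly whether the bus topology alone is a good predictor of group boundaries.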