From: Jason Gunthorpe <jgg@nvidia.com>
To: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>,
linux-s390@vger.kernel.org, cohuck@redhat.com,
schnelle@linux.ibm.com, farman@linux.ibm.com,
pmorel@linux.ibm.com, borntraeger@linux.ibm.com,
hca@linux.ibm.com, gor@linux.ibm.com,
gerald.schaefer@linux.ibm.com, agordeev@linux.ibm.com,
svens@linux.ibm.com, frankja@linux.ibm.com, david@redhat.com,
imbrenda@linux.ibm.com, vneethv@linux.ibm.com,
oberpar@linux.ibm.com, freude@linux.ibm.com, thuth@redhat.com,
pasic@linux.ibm.com, joro@8bytes.org, will@kernel.org,
pbonzini@redhat.com, corbet@lwn.net, kvm@vger.kernel.org,
linux-kernel@vger.kernel.org, iommu@lists.linux-foundation.org,
linux-doc@vger.kernel.org
Subject: Re: [PATCH v4 15/32] vfio: introduce KVM-owned IOMMU type
Date: Tue, 15 Mar 2022 11:55:20 -0300 [thread overview]
Message-ID: <20220315145520.GZ11336@nvidia.com> (raw)
In-Reply-To: <9618afae-2a91-6e4e-e8c3-cb83e2f5c3d9@linux.ibm.com>
On Tue, Mar 15, 2022 at 09:36:08AM -0400, Matthew Rosato wrote:
> > If we do try to stick this into VFIO it should probably use the
> > VFIO_TYPE1_NESTING_IOMMU instead - however, we would like to delete
> > that flag entirely as it was never fully implemented, was never used,
> > and isn't part of what we are proposing for IOMMU nesting on ARM
> > anyhow. (So far I've found nobody to explain what the plan here was..)
> >
>
> I'm open to suggestions on how better to tie this into vfio. The scenario
> basically plays out that:
Ideally I would like it to follow the same 'user space page table'
design that Eric and Kevin are working on for HW iommu.
You have a 1st iommu_domain that maps and pins the entire guest physical
address space.
You have a nested iommu_domain that represents the user page table
(the ioat in your language, I think).
When the guest says it wants to set a user page table, you create
the nested iommu_domain representing that user page table and pass in
the anchor (the guest address of the root IOPTE) to the kernel to do the
work.
The rule for all other HW is that the user space page table is
translated by the top level kernel page table. So when you traverse it
you fetch the CPU page storing the guest's IOPTE by doing an IOVA
translation through the first level page table - not through KVM.
Since the first level page table and the KVM GPA space should be 1:1,
this is an equivalent operation.
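As a rough sketch of what I mean (illustrative only - the helper and the
guest IOPTE layout are invented here, iommu_iova_to_phys() is the real
kernel API, and a real version needs error checking):

/*
 * Resolve the guest IO page table page through the 1st level
 * (GPA-identity) iommu_domain instead of asking KVM.
 */
static u64 fetch_guest_iopte(struct iommu_domain *parent, phys_addr_t gpa)
{
	/* The 1st level domain maps GPA 1:1, so IOVA == GPA here */
	phys_addr_t hpa = iommu_iova_to_phys(parent, gpa);
	u64 *table = phys_to_virt(hpa & PAGE_MASK);

	return table[(gpa & ~PAGE_MASK) / sizeof(u64)];
}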
> 1) the iommu will be domain_alloc'd once VFIO_SET_IOMMU is issued -- so at
> that time (or earlier) we have to make the decision on whether to use the
> standard IOMMU or this alternate KVM/nested IOMMU.
So in terms of iommufd I would see this being an iommufd 'create a
device specific iommu_domain' IOCTL, and you can pass in an S390
specific data blob to make it into this special mode.
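i.e. something along these lines - hand-wavy, the struct and field names
are invented just to show the shape, this is not a settled uAPI:

/* Invented example of an s390-specific alloc blob, not existing uAPI */
struct iommu_s390_kvm_domain_data {
	__u64 ioat_anchor;	/* GPA of the root of the guest IO page table */
	__u32 flags;
	__u32 reserved;
};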
> > This is why I said the second level should be an explicit iommu_domain
> > all on its own that is explicitly coupled to the KVM to read the page
> > tables, if necessary.
>
> Maybe I misunderstood this. Are you proposing 2 layers of IOMMU that
> interact with each other within host kernel space?
>
> A second level runs the guest tables, pins the appropriate pieces from the
> guest to get the resulting phys_addr(s) which are then passed via iommu to a
> first level via map (or unmap)?
The first level iommu_domain has the 'type1' map and unmap and pins
the pages. This is the 1:1 map with the GPA, and it ends up pinning all
guest memory because the point is you don't want to take a memory pin
on your performance path.
The second level iommu_domain points to a single IO page table in GPA
and is created/destroyed whenever the guest traps to the hypervisor to
manipulate the anchor (i.e. the GPA of the guest IO page table).
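Something like this lifecycle, very roughly - s390_nested_domain_alloc()
is hypothetical, iommu_attach_device()/iommu_domain_free() are the real
APIs, and this ignores detaching whatever domain was attached before:

static int handle_guest_set_ioat(struct device *dev,
				 struct iommu_domain *parent, u64 anchor_gpa)
{
	struct iommu_domain *nested;
	int rc;

	/* Guest trapped with a new anchor: build the nested domain for it */
	nested = s390_nested_domain_alloc(parent, anchor_gpa);
	if (IS_ERR(nested))
		return PTR_ERR(nested);

	rc = iommu_attach_device(nested, dev);
	if (rc)
		iommu_domain_free(nested);
	return rc;
}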
> > But I'm not sure that reading the userspace io page tables with KVM is
> > even the best thing to do - the iommu driver already has the pinned
> > memory, it would be faster and more modular to traverse the io page
> > tables through the pfns in the root iommu_domain than by having KVM do
> > the translations. Lets see what Matthew says..
>
> OK, you lost me a bit here. And this may be associated with the above.
>
> So, what the current implementation is doing is reading the guest DMA tables
> (which we must pin the first time we access them) and then map the PTEs of
> the associated guest DMA entries into the associated host DMA table (so,
> again pin and place the address, or unpin and invalidate). Basically we are
> shadowing the first level DMA table as a copy of the second level DMA table
> with the host address(es) of the pinned guest page(s).
You can't pin/unpin in this path, there is no real way to handle errors
and ulimit accounting here, plus it is really slow. I didn't notice any
of this in your patches, so what do you mean by 'pin' above?
To be like other IOMMU nesting drivers the pages should already be
pinned and stored in the 1st iommu_domain, let's say in an xarray. This
xarray is populated by the type1 map/unmap calls like any iommu_domain.
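e.g. roughly like this at type1 map time (the xarray field is invented,
the pages themselves were already pinned by the normal type1 path):

/* Record a pinned page keyed by its GPA pfn for later GPA -> phys lookup */
static int record_pinned_page(struct xarray *pinned_pages,
			      unsigned long gpa_pfn, struct page *page)
{
	return xa_err(xa_store(pinned_pages, gpa_pfn, page, GFP_KERNEL));
}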
A nested iommu_domain should create the real HW IO page table and
associate it with the real HW IOMMU and record the parent 1st level iommu_domain.
When you do the shadowing you use the xarray of the 1st level
iommu_domain to translate from GPA to host physical and there is no
pinning/etc involved. After walking the guest table for a vIOVA and
learning the GPA it maps to, that GPA is translated through the xarray
to a CPU physical address and then programmed into the real HW IO page
table.
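The inner step of that shadow walk could look roughly like this -
everything here is illustrative, the entry format, struct s390_hw_iopt
and zpci_write_hw_iopte() are invented, only xa_load()/page_to_phys()
are real:

/* Shadow one guest IOPTE: GPA -> host phys via the xarray, no pinning */
static int shadow_one_iopte(struct xarray *pinned_pages, dma_addr_t viova,
			    u64 guest_iopte, struct s390_hw_iopt *hw_iopt)
{
	phys_addr_t gpa = guest_iopte & PAGE_MASK;	/* invented entry format */
	struct page *page = xa_load(pinned_pages, gpa >> PAGE_SHIFT);

	if (!page)
		return -EFAULT;	/* GPA was never mapped via type1 */

	/* zpci_write_hw_iopte() is hypothetical: program the real HW table */
	return zpci_write_hw_iopte(hw_iopt, viova, page_to_phys(page));
}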
There is no reason to use KVM to do any of this, and it is actively
wrong to place CPU pages from KVM into an IOPTE that did not come
through the type1 map/unmap calls that do all the proper validation and
accounting.
Jason