From: Neo Jia <cjia@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: "Ruan, Shuai" <shuai.ruan@intel.com>,
"Tian, Kevin" <kevin.tian@intel.com>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"Song, Jike" <jike.song@intel.com>,
Kirti Wankhede <kwankhede@nvidia.com>,
"kraxel@redhat.com" <kraxel@redhat.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"Lv, Zhiyuan" <zhiyuan.lv@intel.com>
Subject: Re: [Qemu-devel] [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU
Date: Fri, 11 Mar 2016 08:55:44 -0800
Message-ID: <20160311165544.GA20782@nvidia.com>
In-Reply-To: <20160311091315.669084cf@t450s.home>
On Fri, Mar 11, 2016 at 09:13:15AM -0700, Alex Williamson wrote:
> On Fri, 11 Mar 2016 04:46:23 +0000
> "Tian, Kevin" <kevin.tian@intel.com> wrote:
>
> > > From: Neo Jia [mailto:cjia@nvidia.com]
> > > Sent: Friday, March 11, 2016 12:20 PM
> > >
> > > On Thu, Mar 10, 2016 at 11:10:10AM +0800, Jike Song wrote:
> > > >
> > > > >> Is it supposed to be the caller who should set
> > > > >> up the IOMMU via the DMA API, such as dma_map_page(), after calling
> > > > >> vgpu_dma_do_translate()?
> > > > >>
> > > > >
> > > > > Don't think you need to call dma_map_page here. Once you have the pfn available
> > > > > to your GPU kernel driver, you can just go ahead and set up the mapping as you
> > > > > normally do, such as by calling pci_map_sg and its friends.
> > > > >
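(To be concrete, the pci_map_sg path mentioned above could look roughly like
the sketch below. This is illustrative only: the vendor_map_pfns() helper and
the assumption that the translate call hands back an array of host pfns are
not part of this RFC.)

  #include <linux/errno.h>
  #include <linux/mm.h>
  #include <linux/pci.h>
  #include <linux/scatterlist.h>

  /* Illustrative sketch: map host pfns (as returned by the vgpu translate
   * interface) for DMA through the parent physical device. */
  static int vendor_map_pfns(struct pci_dev *pdev, unsigned long *pfns,
                             int npages, struct scatterlist *sgl)
  {
          int i, nents;

          sg_init_table(sgl, npages);
          for (i = 0; i < npages; i++)
                  sg_set_page(&sgl[i], pfn_to_page(pfns[i]), PAGE_SIZE, 0);

          /* With an IOMMU present this also installs the IOVA mappings. */
          nents = pci_map_sg(pdev, sgl, npages, PCI_DMA_BIDIRECTIONAL);
          return nents ? 0 : -EIO;
  }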
> > > >
> > > > Technically it's definitely OK to call the DMA API from the caller rather than here;
> > > > however, personally I think it is a bit counter-intuitive: IOMMU page tables
> > > > should be constructed within the VFIO IOMMU driver.
> > > >
> > >
> > > Hi Jike,
> > >
> > > For vGPU, what we have is just a virtual device and a fake IOMMU group, therefore
> > > the actual interaction with the real GPU should be managed by the GPU vendor driver.
> > >
> >
> > Hi, Neo,
> >
> > Seems we have a different thought on this. Regardless of whether it's a virtual/physical
> > device, imo, VFIO should manage the IOMMU configuration. The only difference is:
> >
> > - for a physical device, VFIO directly invokes the IOMMU API to set the IOMMU entry (GPA->HPA);
> > - for a virtual device, VFIO invokes the kernel DMA APIs, which indirectly cause the IOMMU
> > entry to be set if CONFIG_IOMMU is enabled in the kernel (GPA->IOVA);
> >
> > This would provide a unified way to manage the translation in VFIO; the vendor-specific
> > driver then only needs to query and use the returned IOVA corresponding to a GPA.
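(For reference, the two paths described above would look roughly like the
sketch below; "domain", "parent_pdev", and "page" are placeholders, not this
RFC's code.)

  /* Physical device: VFIO programs the IOMMU directly (GPA -> HPA). */
  iommu_map(domain, gpa, hpa, size, IOMMU_READ | IOMMU_WRITE);

  /* Virtual device: VFIO would call the kernel DMA API against the parent
   * physical device; with an IOMMU enabled this returns an IOVA for the
   * guest page (GPA -> IOVA). */
  dma_addr_t iova = dma_map_page(&parent_pdev->dev, page, 0,
                                 PAGE_SIZE, DMA_BIDIRECTIONAL);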
> >
> > Doing so has another benefit: it makes the underlying vGPU driver VMM-agnostic. For KVM,
> > yes, we can use pci_map_sg. However, for Xen it's different (today Dom0 doesn't see an
> > IOMMU; in the future there'll be a PVIOMMU implementation), so a different code path is
> > required. It's better to abstract such VMM-specific knowledge out of the vGPU driver, which
> > then just uses whatever dma_addr is returned by the other agent (VFIO here, or another
> > Xen-specific agent) in a centralized way.
> >
> > Alex, what's your opinion on this?
>
> The sticky point is how vfio, which is only handling the vGPU, has a
> reference to the physical GPU on which to call DMA API operations. If
> that reference is provided by the vendor vGPU driver, for example
> vgpu_dma_do_translate_for_pci(gpa, pci_dev), I don't see any reason to
> be opposed to such an API. I would not condone vfio deriving or owning
> a reference to the physical device on its own, though; that's in the
> realm of the vendor vGPU driver. It does seem a bit cleaner and should
> reduce duplicate code if the vfio vGPU iommu interface could handle the
> iommu mapping for the vendor vgpu driver when necessary. Thanks,
Hi Alex,
Since we don't want to allow the vfio iommu to derive or own a reference to the
physical device, I think it is still better not to provide such a pci_dev to the
vfio iommu type1 driver.
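To be concrete, the kind of interface being declined here, following Alex's
example above, would be something like the prototype below (the exact
signature is illustrative):

  /* Illustrative prototype only: the vendor driver would pass in the
   * physical device so that vfio type1 could run the DMA API on its
   * behalf and hand back IOVAs. */
  int vgpu_dma_do_translate_for_pci(dma_addr_t *gpa_buffer, uint32_t count,
                                    struct pci_dev *pdev);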
Also, I need to point out that if the vfio iommu is going to set up the iommu page
tables for the real underlying physical device, then, given the single RID we all
share here, the iommu mapping code has to return the new "IOVA" that is mapped to
the HPA, which the GPU vendor driver will have to program into its DMA engine. This
is very different from the current VFIO IOMMU mapping logic. And we would still have
to provide another interface to translate the GPA to HPA for CPU mapping.
In the current RFC, we only need a single interface that provides the most basic
information to the GPU vendor driver, without taking the risk of leaking a device
reference into the VFIO IOMMU.
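(Roughly, the shape of that single interface is the following; the signature
is paraphrased, not copied from the patch:)

  /* Sketch: translate an array of guest frame addresses to host frame
   * addresses in place; no device reference changes hands between vfio
   * and the vendor driver. */
  int vgpu_dma_do_translate(dma_addr_t *gfn_buffer, uint32_t count);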
Thanks,
Neo
>
> Alex