Date: Fri, 11 Mar 2016 10:18:14 -0800
From: Neo Jia
To: Alex Williamson
Cc: "Ruan, Shuai", "Tian, Kevin", "kvm@vger.kernel.org",
 "qemu-devel@nongnu.org", "Song, Jike", Kirti Wankhede, "kraxel@redhat.com",
 "pbonzini@redhat.com", "Lv, Zhiyuan"
Message-ID: <20160311181814.GA21896@nvidia.com>
In-Reply-To: <20160311105624.3811eac6@t450s.home>
Subject: Re: [Qemu-devel] [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU

On Fri, Mar 11, 2016 at 10:56:24AM -0700, Alex Williamson wrote:
> On Fri, 11 Mar 2016 08:55:44 -0800
> Neo Jia wrote:
>
> > > > Alex, what's your opinion on this?
> > >
> > > The sticky point is how vfio, which is only handling the vGPU, has a
> > > reference to the physical GPU on which to call DMA API operations.  If
> > > that reference is provided by the vendor vGPU driver, for example
> > > vgpu_dma_do_translate_for_pci(gpa, pci_dev), I don't see any reason to
> > > be opposed to such an API.  I would not condone vfio deriving or owning
> > > a reference to the physical device on its own though, that's in the
> > > realm of the vendor vGPU driver.  It does seem a bit cleaner and should
> > > reduce duplicate code if the vfio vGPU iommu interface could handle the
> > > iommu mapping for the vendor vGPU driver when necessary.  Thanks,
> >
> > Hi Alex,
> >
> > Since we don't want to allow the vfio iommu to derive or own a reference
> > to the physical device, I think it is still better not to provide such a
> > pci_dev to the vfio iommu type1 driver.
> >
> > Also, I need to point out that if the vfio iommu is going to set up the
> > iommu page table for the real underlying physical device, given that we
> > all share a single RID here, the iommu mapping code has to return the
> > new "IOVA" that is mapped to the HPA, which the GPU vendor driver will
> > have to put on its DMA engine.  This is very different from the current
> > VFIO IOMMU mapping logic.
> >
> > And we still have to provide another interface to translate the GPA to
> > HPA for CPU mapping.
> >
> > In the current RFC, we only need a single interface to provide the most
> > basic information to the GPU vendor driver, without taking the risk of
> > leaking a reference to the VFIO IOMMU.
>
> I don't see this as some fundamental difference of opinion; it's really
> just whether vfio provides a "pin this GFN and return the HPA" function
> or whether that function could be extended to include "... and also map
> it through the DMA API for the provided device and return the host
> IOVA".  It might even still be a single function to vfio for CPU vs
> device mapping where the device and IOVA return pointer are NULL when
> only pinning is required for CPU access (though maybe there are better
> ways to provide CPU access than pinning).  A wrapper could even give the
> appearance that those are two separate functions.
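
To make sure we are reading your proposal the same way, here is a minimal
sketch of such a combined entry point.  Every name below is purely
illustrative (none of this is an existing VFIO or vendor interface),
gfn_to_user_va() is a stand-in for whatever GFN-to-user-VA lookup the type1
iommu already does internally, and real code would also have to handle the
locked-page accounting you mention:

#include <linux/mm.h>
#include <linux/io.h>
#include <linux/dma-mapping.h>

struct vfio_iommu;                  /* private to the type1 backend */

/* Illustrative stand-in for the iommu's existing GFN-to-user-VA lookup. */
unsigned long gfn_to_user_va(struct vfio_iommu *iommu, unsigned long gfn);

/*
 * Pin the page backing @gfn and return its HPA in @hpa.  If @dev and
 * @iova are both non-NULL, additionally map the page through the DMA
 * API for @dev (which is supplied by the vendor vGPU driver, never
 * derived by vfio itself) and return the resulting host IOVA.  With
 * @dev and @iova NULL, only the CPU-access pinning half happens.
 */
int vfio_vgpu_pin_and_map(struct vfio_iommu *iommu, unsigned long gfn,
			  struct device *dev, unsigned long *hpa,
			  dma_addr_t *iova)
{
	struct page *page;
	int ret;

	/* Pin one page; real code would also do locked-page accounting. */
	ret = get_user_pages_fast(gfn_to_user_va(iommu, gfn), 1, 1, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	*hpa = page_to_phys(page);

	if (dev && iova) {
		/* DMA-API mapping done on behalf of the vendor driver. */
		*iova = dma_map_page(dev, page, 0, PAGE_SIZE,
				     DMA_BIDIRECTIONAL);
		if (dma_mapping_error(dev, *iova)) {
			put_page(page);
			return -EFAULT;
		}
	}

	return 0;
}

The wrapper you mention would then just be a pin-only call,
vfio_vgpu_pin_and_map(iommu, gfn, NULL, &hpa, NULL), versus a pin-plus-map
call passing the vendor driver's &pdev->dev and &iova.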
>
> So long as vfio isn't owning or deriving the device for the DMA API
> calls and we don't introduce some complication in page accounting, this
> really just seems like a question of whether moving the DMA API
> handling into vfio is common between the vendor vGPU drivers, and
> whether we reduce the overall amount and complexity of code by giving
> the vendor drivers the opportunity to do both operations with one
> interface.

Hi Alex,

OK, I will look into adding such a facility and will probably include it in
a later revision of the vGPU IOMMU, provided we don't run into any surprises
or hit the issues you mentioned above.

Thanks,
Neo

> If, as Kevin suggests, it also provides some additional abstractions
> for Xen vs KVM, even better.  Thanks,
>
> Alex