From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 10 Mar 2016 22:10:59 -0800
From: Neo Jia <cjia@nvidia.com>
Message-ID: <20160311061059.GA13023@nvidia.com>
References: <1456244666-25369-1-git-send-email-kwankhede@nvidia.com>
 <1456244666-25369-3-git-send-email-kwankhede@nvidia.com>
 <56D6A68A.50004@intel.com>
 <20160304070025.GA32070@nvidia.com>
 <20160308003139.GA22106@nvidia.com>
 <56E0E592.20008@intel.com>
 <20160311041947.GA12035@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
Subject: Re: [Qemu-devel] [RFC PATCH v2 3/3] VFIO: Type1 IOMMU mapping support for vGPU
To: "Tian, Kevin"
Cc: "Ruan, Shuai", "Song, Jike", "kvm@vger.kernel.org", "qemu-devel@nongnu.org",
 Kirti Wankhede, Alex Williamson, "kraxel@redhat.com", "pbonzini@redhat.com",
 "Lv, Zhiyuan"

On Fri, Mar 11, 2016 at 04:46:23AM +0000, Tian, Kevin wrote:
> > From: Neo Jia [mailto:cjia@nvidia.com]
> > Sent: Friday, March 11, 2016 12:20 PM
> >
> > On Thu, Mar 10, 2016 at 11:10:10AM +0800, Jike Song wrote:
> > >
> > > >> Is it supposed to be the caller who should set
> > > >> up IOMMU by DMA api such as dma_map_page(), after calling
> > > >> vgpu_dma_do_translate()?
> > > >>
> > > >
> > > > Don't think you need to call dma_map_page here. Once you have the pfn available
> > > > to your GPU kernel driver, you can just go ahead to setup the mapping as you
> > > > normally do such as calling pci_map_sg and its friends.
> > > >
> > >
> > > Technically it's definitely OK to call DMA API from the caller rather than here,
> > > however personally I think it is a bit counter-intuitive: IOMMU page tables
> > > should be constructed within the VFIO IOMMU driver.
> > >
> >
> > Hi Jike,
> >
> > For vGPU, what we have is just a virtual device and a fake IOMMU group, therefore
> > the actual interaction with the real GPU should be managed by the GPU vendor driver.
> >
>
> Hi, Neo,
>
> Seems we have a different thought on this. Regardless of whether it's a virtual/physical
> device, imo, VFIO should manage IOMMU configuration. The only difference is:
>
> - for physical device, VFIO directly invokes IOMMU API to set IOMMU entry (GPA->HPA);
> - for virtual device, VFIO invokes kernel DMA APIs which indirectly lead to IOMMU entry
>   set if CONFIG_IOMMU is enabled in kernel (GPA->IOVA);

How does it make any sense for us to do a dma_map_page for a physical device that we
don't have any direct interaction with?

>
> This would provide an unified way to manage the translation in VFIO, and then vendor
> specific driver only needs to query and use returned IOVA corresponding to a GPA.
>
> Doing so has another benefit, to make underlying vGPU driver VMM agnostic. For KVM,
> yes we can use pci_map_sg. However for Xen it's different (today Dom0 doesn't see
> IOMMU. In the future there'll be a PVIOMMU implementation) so different code path is
> required.
> It's better to abstract such specific knowledge out of vGPU driver, which just
> uses whatever dma_addr returned by other agent (VFIO here, or another Xen specific
> agent) in a centralized way.
>
> Alex, what's your opinion on this?
>
> Thanks
> Kevin
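
For reference, below is a minimal sketch (not taken from the RFC series) of the flow
being debated: the vendor GPU driver building its own DMA mapping from host pfns that
the vGPU TYPE1 IOMMU backend has pinned and handed back. The function name
vendor_map_guest_pfns() and the pfn hand-off are assumptions for illustration only;
the real interface is whatever vgpu_dma_do_translate() exposes in the patch.

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/* Hypothetical vendor-driver helper: map already-pinned guest pages for DMA. */
static int vendor_map_guest_pfns(struct device *dev, unsigned long *pfns,
                                 int npages, struct sg_table *sgt)
{
        struct scatterlist *sg;
        int i, nents;

        /* One sg entry per pinned guest page; the pfns are assumed to come
         * from the vGPU IOMMU backend (e.g. vgpu_dma_do_translate()). */
        if (sg_alloc_table(sgt, npages, GFP_KERNEL))
                return -ENOMEM;

        for_each_sg(sgt->sgl, sg, npages, i)
                sg_set_page(sg, pfn_to_page(pfns[i]), PAGE_SIZE, 0);

        /* Standard DMA API; when an IOMMU is enabled this call also programs
         * the IOMMU entries, which is exactly the step under discussion:
         * should it live here in the vendor driver, or inside VFIO? */
        nents = dma_map_sg(dev, sgt->sgl, npages, DMA_BIDIRECTIONAL);
        if (!nents) {
                sg_free_table(sgt);
                return -ENOMEM;
        }

        /* sg_dma_address() on each entry now yields the bus/IO virtual
         * address the GPU can be programmed with. */
        return nents;
}

Whether this mapping step sits in the vendor driver (as sketched) or behind a
VFIO-managed translation that already returns IOVAs is the open question in this
thread.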