From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jike Song
Subject: Re: [RFC PATCH v3 3/3] VFIO Type1 IOMMU change: to support with iommu and without iommu
Date: Tue, 10 May 2016 15:52:27 +0800
Message-ID: <5731933B.90508@intel.com>
References: <1462214441-3732-1-git-send-email-kwankhede@nvidia.com> <1462214441-3732-4-git-send-email-kwankhede@nvidia.com> <20160503164306.6a699fe3@t450s.home> <572AEE72.90008@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: "Ruan, Shuai", Neo Jia, "kvm@vger.kernel.org", "qemu-devel@nongnu.org", Kirti Wankhede, Alex Williamson, "kraxel@redhat.com", "pbonzini@redhat.com", "Lv, Zhiyuan"
To: "Tian, Kevin"
List-Id: kvm.vger.kernel.org

On 05/05/2016 05:27 PM, Tian, Kevin wrote:
>> From: Song, Jike
>>
>> IIUC, an api-only domain is a VFIO domain *without* underlying IOMMU
>> hardware. It just, as you said in another mail, "rather than
>> programming them into an IOMMU for a device, it simply stores the
>> translations for use by later requests".
>>
>> That imposes a constraint on the gfx driver: the hardware IOMMU must
>> be disabled. Otherwise, if an IOMMU is present, the gfx driver
>> eventually programs the hardware IOMMU with the IOVA returned by
>> pci_map_page or dma_map_page; meanwhile, the IOMMU backend for vgpu
>> only maintains GPA <-> HPA translations, without any knowledge of the
>> hardware IOMMU. How, then, is the device model supposed to get an
>> IOVA for a given GPA (and thereby an HPA, via the IOMMU backend
>> here)?
>>
>> If things go as guessed above, since vfio_pin_pages() pins and
>> translates vaddr to PFN, it will be very difficult for the device
>> model to figure out:
>>
>> 1. For a given GPA, how to avoid calling dma_map_page multiple times?
>> 2. For which page to call dma_unmap_page?
>>
>> --
>
> We have to support both the w/ iommu and w/o iommu cases, since
> that fact is out of the GPU driver's control. A simple way is to use
> dma_map_page, which internally copes with the w/ and w/o iommu
> cases gracefully, i.e. it returns an HPA w/o an iommu and an IOVA
> w/ an iommu. Then in this file we only need to cache GPA to whatever
> dma_addr_t is returned by dma_map_page.
>

Hi Alex, Kirti and Neo, any thoughts on the IOMMU compatibility here?

--
Thanks,
Jike
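For what it's worth, a userspace sketch of the GPA -> dma_addr_t cache idea, with a refcount per entry so that (1) a repeated map of the same GPA reuses the cached translation instead of mapping again, and (2) the cache records exactly which entry to unmap when the last user goes away. All names here are hypothetical, and the stub stands in for what would be dma_map_page()/dma_unmap_page() in real kernel code:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define CACHE_SLOTS 64

typedef uint64_t dma_addr_sim;     /* stand-in for kernel dma_addr_t */

struct gpa_dma_entry {
    uint64_t     gpa;      /* guest physical address (page-aligned)   */
    dma_addr_sim dma;      /* whatever dma_map_page() would return    */
    int          refcnt;   /* how many users reference this mapping   */
    int          used;
};

static struct gpa_dma_entry cache[CACHE_SLOTS];
static int stub_map_calls;         /* counts "dma_map_page" invocations */

/* Stub for dma_map_page(): w/o an IOMMU it would return the HPA,
 * w/ an IOMMU an IOVA; here we just fabricate a distinct value. */
static dma_addr_sim stub_dma_map(uint64_t gpa)
{
    stub_map_calls++;
    return gpa ^ 0xABCD0000ull;
}

/* Map a GPA, reusing the cached translation when one exists
 * (this answers question 1: no second dma_map_page for the same GPA). */
dma_addr_sim gpa_cache_map(uint64_t gpa)
{
    for (size_t i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].used && cache[i].gpa == gpa) {
            cache[i].refcnt++;
            return cache[i].dma;
        }
    }
    for (size_t i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].used) {
            cache[i].gpa = gpa;
            cache[i].dma = stub_dma_map(gpa);
            cache[i].refcnt = 1;
            cache[i].used = 1;
            return cache[i].dma;
        }
    }
    return (dma_addr_sim)-1;       /* cache full */
}

/* Unmap by GPA; only when the refcount drops to zero would the real
 * code call dma_unmap_page() on the cached dma address (question 2). */
int gpa_cache_unmap(uint64_t gpa)
{
    for (size_t i = 0; i < CACHE_SLOTS; i++) {
        if (cache[i].used && cache[i].gpa == gpa) {
            if (--cache[i].refcnt == 0)
                cache[i].used = 0; /* dma_unmap_page(cache[i].dma) here */
            return 0;
        }
    }
    return -1;                     /* unknown GPA */
}
```

The real lookup structure would presumably be a tree or hash keyed by GPA rather than a linear scan, but the bookkeeping (cache the dma_addr_t, refcount it, unmap on last release) is the part that matters here.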