From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <57288023.2020604@intel.com>
Date: Tue, 03 May 2016 18:40:35 +0800
From: Jike Song
MIME-Version: 1.0
References: <1462214441-3732-1-git-send-email-kwankhede@nvidia.com> <1462214441-3732-4-git-send-email-kwankhede@nvidia.com>
In-Reply-To: <1462214441-3732-4-git-send-email-kwankhede@nvidia.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [RFC PATCH v3 3/3] VFIO Type1 IOMMU change: to support with iommu and without iommu
To: Kirti Wankhede
Cc: alex.williamson@redhat.com, pbonzini@redhat.com, kraxel@redhat.com, cjia@nvidia.com, qemu-devel@nongnu.org, kvm@vger.kernel.org, kevin.tian@intel.com, shuai.ruan@intel.com, zhiyuan.lv@intel.com

On 05/03/2016 02:40 AM, Kirti Wankhede wrote:
> The VFIO Type1 IOMMU driver is designed for devices which are IOMMU capable.
> vGPU devices use only the IOMMU TYPE1 API; the underlying hardware can be
> managed by an IOMMU domain. To reuse most of the IOMMU driver code for vGPU
> devices, the type1 IOMMU driver is modified to support vGPU devices. This
> change exports functions to pin and unpin pages for vGPU devices.
> It maintains data about pinned pages for the vGPU domain. This data is used
> to verify unpinning requests and also to unpin pages from detach_group().
>
> Tested by assigning the below combinations of devices to a single VM:
> - GPU pass-through only
> - vGPU device only
> - One GPU pass-through and one vGPU device
> - Two GPU pass-throughs

{the patch trimmed}

Hi Kirti,

I have a question: in the scenario above, how many PCI BDFs do your vGPUs consume?

Per my understanding, you take the GPA of a KVM guest as the IOVA of the IOMMU domain, so if there are multiple guests with vGPUs assigned, the vGPUs must belong to different IOMMU domains (and thereby have different BDFs). Do I miss anything?

--
Thanks,
Jike
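
P.S. To make the GPA-as-IOVA point above concrete, below is a minimal sketch (not taken from the trimmed patch) of the standard VFIO Type1 path, where userspace such as QEMU maps guest RAM into the container using the guest-physical address as the IOVA. The function name map_guest_ram is hypothetical; only the VFIO UAPI structures and ioctl are real:

    /* Illustrative sketch only -- not from the trimmed patch.
     * Maps one region of guest RAM into a VFIO Type1 container,
     * using the guest physical address (GPA) as the IOVA. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int map_guest_ram(int container_fd, uint64_t gpa,
                             void *host_vaddr, uint64_t size)
    {
            struct vfio_iommu_type1_dma_map map;

            memset(&map, 0, sizeof(map));
            map.argsz = sizeof(map);
            map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
            map.iova  = gpa;                              /* GPA used as IOVA */
            map.vaddr = (uint64_t)(uintptr_t)host_vaddr;  /* host VA backing the GPA */
            map.size  = size;

            return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map);
    }

For a vGPU with no IOMMU behind it, I assume the modified Type1 driver would only record this IOVA-to-vaddr translation at map time and pin pages later through the exported pin/unpin helpers mentioned in the patch description, rather than programming a hardware IOMMU domain.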