Message-ID: <5735A269.5080909@intel.com>
Date: Fri, 13 May 2016 17:46:17 +0800
From: Jike Song
Subject: Re: [Qemu-devel] [RFC PATCH v3 3/3] VFIO Type1 IOMMU change: to
 support with iommu and without iommu
To: Neo Jia
Cc: Alex Williamson, "Tian, Kevin", Kirti Wankhede,
 "pbonzini@redhat.com", "kraxel@redhat.com", "qemu-devel@nongnu.org",
 "kvm@vger.kernel.org", "Ruan, Shuai", "Lv, Zhiyuan"
References: <1462214441-3732-4-git-send-email-kwankhede@nvidia.com>
 <20160503164306.6a699fe3@t450s.home> <572AEE72.90008@intel.com>
 <5731933B.90508@intel.com> <20160510160257.GA4125@nvidia.com>
 <5732F823.3090409@intel.com> <20160511160628.690876f9@t450s.home>
 <20160512130552.08974076@t450s.home> <20160512201258.GB24334@nvidia.com>
In-Reply-To: <20160512201258.GB24334@nvidia.com>

On 05/13/2016 04:12 AM, Neo Jia wrote:
> On Thu, May 12, 2016 at 01:05:52PM -0600, Alex Williamson wrote:
>>
>> If you're trying to equate the scale of what we need to track vs what
>> type1 currently tracks, they're significantly different.  Possible
>> things we need to track include the pfn, the iova, and possibly a
>> reference count or some sort of pinned page map.  In the pin-all model
>> we can assume that every page is pinned on map and unpinned on unmap,
>> so a reference count or map is unnecessary.  We can also assume that we
>> can always regenerate the pfn with get_user_pages() from the vaddr, so
>> we don't need to track that.
>
> Hi Alex,
>
> Thanks for pointing this out, we will not track those in our next rev and
> get_user_pages will be used from the vaddr as you suggested to handle the
> single VM with both passthru + mediated device case.
>

Just a gut feeling:

Calling GUP every time for a particular vaddr means taking mm->mmap_sem
every time for that process.  If the VM has dozens of vCPUs, which is not
rare, that semaphore is likely to become the bottleneck.
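To illustrate (a minimal sketch only, against the 4.6-era
get_user_pages() signature; the function name mdev_vaddr_to_pfn is made
up for illustration, not something from this series), every access would
end up doing roughly:

#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Hypothetical per-access path: re-resolve the pfn from the vaddr on
 * every guest access instead of caching a pinned translation.  The
 * caller must put_page(*pagep) once it is done with the pfn.
 */
static int mdev_vaddr_to_pfn(unsigned long vaddr, unsigned long *pfn,
                             struct page **pagep)
{
        struct page *page;
        long ret;

        /* every vCPU doing an access serializes here */
        down_read(&current->mm->mmap_sem);
        ret = get_user_pages(vaddr, 1, 1 /* write */, 0 /* force */,
                             &page, NULL);
        up_read(&current->mm->mmap_sem);

        if (ret != 1)
                return ret < 0 ? ret : -EFAULT;

        *pfn = page_to_pfn(page);
        *pagep = page;
        return 0;
}

With dozens of vCPUs doing this concurrently, the down_read()/up_read()
pair dominates, not the translation itself.  Caching the pfn after the
first lookup would keep the hot path off mmap_sem, but then we are back
to tracking something.

--
Thanks,
Jike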