From mboxrd@z Thu Jan 1 00:00:00 1970
From: Neo Jia
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed
Date: Mon, 4 Jul 2016 00:03:20 -0700
Message-ID: <20160704070314.GA13291@nvidia.com>
References: <1467291711-3230-1-git-send-email-pbonzini@redhat.com>
 <577A049A.4000402@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Cc: Paolo Bonzini, Kirti Wankhede, Andrea Arcangeli, Radim Krčmář
To: Xiao Guangrong
Return-path:
Content-Disposition: inline
In-Reply-To: <577A049A.4000402@linux.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Mon, Jul 04, 2016 at 02:39:22PM +0800, Xiao Guangrong wrote:
>
> On 06/30/2016 09:01 PM, Paolo Bonzini wrote:
> > The vGPU folks would like to trap the first access to a BAR by setting
> > vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault handler
> > then can use remap_pfn_range to place some non-reserved pages in the VMA.
>
> Why does it require fetching the pfn when the fault is triggered rather
> than when mmap() is called?

Hi Guangrong,

The mapping between virtual MMIO and physical MMIO is only known at
runtime, so the pfn cannot be resolved at mmap() time.

> Why is the memory mapped by this mmap() not a portion of MMIO from the
> underlying physical device? If it is valid system memory, does this
> interface really need to be implemented in vfio? (You at least need to
> set VM_MIXEDMAP if it mixes system memory with MMIO.)

It actually is a portion of the physical MMIO, which is set up by the
vfio mmap.

> IIUC, the kernel assumes that VM_PFNMAP covers contiguous memory, e.g.,
> as current KVM and vaddr_get_pfn() in vfio do, but it seems nvidia's
> patchset breaks this semantic as ops->validate_map_request() can adjust
> the physical address arbitrarily.
> (Again, the name 'validate' should be changed to match what it is
> really doing.)

The vgpu API will allow you to adjust the target MMIO address and the
size via validate_map_request(), but the result is still physically
contiguous.

Thanks,
Neo