Date: Mon, 4 Jul 2016 02:16:09 -0700
From: Neo Jia
To: Xiao Guangrong
CC: Paolo Bonzini, Kirti Wankhede, Andrea Arcangeli, Radim Krčmář
Subject: Re: [PATCH 0/2] KVM: MMU: support VMAs that got remap_pfn_range-ed
Message-ID: <20160704091609.GA14913@nvidia.com>
In-Reply-To: <577A2211.2030906@linux.intel.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jul 04, 2016 at 04:45:05PM +0800, Xiao Guangrong wrote:
> 
> 
> On 07/04/2016 04:41 PM, Neo Jia wrote:
> >On Mon, Jul 04, 2016 at 04:19:20PM +0800, Xiao Guangrong wrote:
> >>
> >>
> >>On 07/04/2016 03:53 PM, Neo Jia wrote:
> >>>On Mon, Jul 04, 2016 at 03:37:35PM +0800, Xiao Guangrong wrote:
> >>>>
> >>>>
> >>>>On 07/04/2016 03:03 PM, Neo Jia wrote:
> >>>>>On Mon, Jul 04, 2016 at 02:39:22PM +0800, Xiao Guangrong wrote:
> >>>>>>
> >>>>>>
> >>>>>>On 06/30/2016 09:01 PM, Paolo Bonzini wrote:
> >>>>>>>The vGPU folks would like to trap the first access to a BAR by setting
> >>>>>>>vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault handler
> >>>>>>>then can use remap_pfn_range to place some non-reserved pages in the VMA.
> >>>>>>
> >>>>>>Why does it require fetching the pfn when the fault is triggered rather
> >>>>>>than when mmap() is called?
> >>>>>
> >>>>>Hi Guangrong,
> >>>>>
> >>>>>Because the mapping between virtual MMIO and physical MMIO is only
> >>>>>available at runtime.
> >>>>
> >>>>Sorry, I do not see the difference between mmap() time and the time the
> >>>>VM actually accesses the memory in your case. Could you please give more
> >>>>detail?
> >>>
> >>>Hi Guangrong,
> >>>
> >>>Sure. mmap() gets called by QEMU, or any userspace consumer of the VFIO
> >>>API, when setting up the virtual MMIO; at that moment nobody has any
> >>>knowledge about how the physical MMIO gets virtualized.
> >>>
> >>>When the VM (or application, if we don't want to limit ourselves to the
> >>>VMM term) starts, the virtual and physical MMIO get mapped by the mpci
> >>>kernel module, with help from the vendor-supplied mediated host driver,
> >>>according to the hw resources assigned to this VM / application.
> >>
> >>Thanks for your explanation.
> >>
> >>It sounds like a resource-allocation strategy: you delay the allocation
> >>until the VM really accesses it, right?
> >
> >Yes, that is where the fault handler inside the mpci code comes into the
> >picture.
> 
> I am not sure this strategy is good.
> The instance is successfully created, and it is started successfully, but
> then the VM crashes because that instance does not have enough resources.
> That sounds unreasonable.

Sorry, I think I misread the "allocation" as "mapping". We only delay the
CPU mapping, not the allocation.

Thanks,
Neo
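
For readers following the thread, here is a minimal sketch of the
deferred-mapping scheme being discussed: mmap() only installs vm_ops, and
the first CPU access faults into a handler that resolves the physical PFN
and backs the VMA with remap_pfn_range(). It uses the 2016-era
fault-handler signature; mdev_resolve_pfn() is a hypothetical stand-in for
the vendor-supplied mediated host driver, not a real kernel API, and the
mdev_* names are illustrative only.

#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: the vendor-supplied mediated host driver resolves
 * the virtual-MMIO -> physical-MMIO mapping, which only exists at runtime.
 */
extern unsigned long mdev_resolve_pfn(void *mdev_state, unsigned long pgoff);

static int mdev_mmio_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	unsigned long size = vma->vm_end - vma->vm_start;
	unsigned long pfn;

	/* Only resolvable now: at mmap() time this mapping was unknown. */
	pfn = mdev_resolve_pfn(vma->vm_private_data, vma->vm_pgoff);
	if (!pfn)
		return VM_FAULT_SIGBUS;

	/* Back the whole VMA with physical MMIO pages on first access. */
	if (remap_pfn_range(vma, vma->vm_start, pfn, size, vma->vm_page_prot))
		return VM_FAULT_SIGBUS;

	/* PTEs are installed; no struct page is handed back to the core MM. */
	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct mdev_mmio_ops = {
	.fault = mdev_mmio_fault,
};

/* mmap() only installs vm_ops; the CPU mapping itself is deferred. */
static int mdev_mmap(struct file *file, struct vm_area_struct *vma)
{
	vma->vm_ops = &mdev_mmio_ops;
	vma->vm_private_data = file->private_data;
	return 0;
}

Returning VM_FAULT_NOPAGE tells the core MM that the PTEs were installed by
the handler itself, which produces exactly the remap_pfn_range-ed VMAs that
this patch series teaches the KVM MMU to handle.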