From: Neo Jia
Subject: Re: VFIO based vGPU (was Re: [Announcement] 2015-Q3 release of XenGT - a Mediated ...)
Date: Wed, 27 Jan 2016 13:48:37 -0800
Message-ID: <20160127214837.GA6182@nvidia.com>
References: <56A6083E.10703@intel.com>
 <1453757426.32741.614.camel@redhat.com>
 <20160126102003.GA14400@nvidia.com>
 <1453838773.15515.1.camel@redhat.com>
 <20160126222830.GB21927@nvidia.com>
 <1453851038.18049.9.camel@redhat.com>
 <20160127091432.GA10936@nvidia.com>
 <1453911016.6261.3.camel@redhat.com>
In-Reply-To: <1453911016.6261.3.camel@redhat.com>
To: Alex Williamson
Cc: "Tian, Kevin", "Song, Jike", Gerd Hoffmann, Paolo Bonzini,
 "Lv, Zhiyuan", "Ruan, Shuai", kvm@vger.kernel.org, qemu-devel,
 igvt-g@lists.01.org, Kirti Wankhede

On Wed, Jan 27, 2016 at 09:10:16AM -0700, Alex Williamson wrote:
> On Wed, 2016-01-27 at 01:14 -0800, Neo Jia wrote:
> > On Tue, Jan 26, 2016 at 04:30:38PM -0700, Alex Williamson wrote:
> > > On Tue, 2016-01-26 at 14:28 -0800, Neo Jia wrote:
> > > > On Tue, Jan 26, 2016 at 01:06:13PM -0700, Alex Williamson wrote:
> > > > > > 1.1 Under per-physical device sysfs:
> > > > > > ----------------------------------------------------------------------------------
> > > > > >
> > > > > > vgpu_supported_types - RO, list the currently supported virtual GPU types and
> > > > > > their VGPU_IDs. VGPU_ID - a vGPU type identifier returned from reads of
> > > > > > "vgpu_supported_types".
> > > > > >
> > > > > > vgpu_create - WO, input syntax <VM_UUID:idx:VGPU_ID>, create a virtual
> > > > > > gpu device on a target physical GPU. idx: virtual device index inside a VM
> > > > > >
> > > > > > vgpu_destroy - WO, input syntax <VM_UUID:idx>, destroy a virtual gpu
> > > > > > device on a target physical GPU
> > > > >
> > > > >
> > > > > I've noted in previous discussions that we need to separate user policy
> > > > > from kernel policy here; the kernel policy should not require a "VM
> > > > > UUID".  A UUID simply represents a set of one or more devices and an
> > > > > index picks the device within the set.  Whether that UUID matches a VM
> > > > > or is independently used is up to the user policy when creating the
> > > > > device.
> > > > >
> > > > > Personally I'd also prefer to get rid of the concept of indexes within a
> > > > > UUID set of devices and instead have each device be independent.  This
> > > > > seems to be an imposition of the nvidia implementation onto the kernel
> > > > > interface design.
> > > > >
> > > >
> > > > Hi Alex,
> > > >
> > > > I agree with you that we should not put the UUID concept into a kernel API.
> > > > At this point (without any prototyping), I am thinking of using a list of
> > > > virtual devices instead of a UUID.
> > >
> > > Hi Neo,
> > >
> > > A UUID is a perfectly fine name, so long as we let it be just a UUID and
> > > not the UUID matching some specific use case.
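(Interjecting here to make the proposed usage concrete: below is a minimal
userspace sketch against the sysfs nodes quoted above. Illustrative only --
the PCI device path, UUID and VGPU_ID are made-up values, and the node names
are still up for discussion.)

/* Illustrative only: create, then destroy, one vGPU through the
 * proposed per-physical-device sysfs nodes.  The device path, UUID
 * and VGPU_ID below are examples, not real values. */
#include <stdio.h>
#include <stdlib.h>

static void sysfs_write(const char *path, const char *val)
{
    FILE *f = fopen(path, "w");

    if (!f) {
        perror(path);
        exit(1);
    }
    fprintf(f, "%s\n", val);
    fclose(f);
}

int main(void)
{
    const char *dev = "/sys/bus/pci/devices/0000:86:00.0";
    char path[256];

    /* <VM_UUID:idx:VGPU_ID> - device set name, index within the set,
     * and a vGPU type as reported by vgpu_supported_types */
    snprintf(path, sizeof(path), "%s/vgpu_create", dev);
    sysfs_write(path, "a3f02452-8812-4a52-9b02-c5f3a2b7dbf1:0:11");

    /* <VM_UUID:idx> - tear the same virtual device down again */
    snprintf(path, sizeof(path), "%s/vgpu_destroy", dev);
    sysfs_write(path, "a3f02452-8812-4a52-9b02-c5f3a2b7dbf1:0");

    return 0;
}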
> > > > > >
> > > > > > int vgpu_map_virtual_bar
> > > > > > (
> > > > > >     uint64_t virt_bar_addr,
> > > > > >     uint64_t phys_bar_addr,
> > > > > >     uint32_t len,
> > > > > >     uint32_t flags
> > > > > > )
> > > > > >
> > > > > > EXPORT_SYMBOL(vgpu_map_virtual_bar);
> > > > >
> > > > >
> > > > > Per the implementation provided, this needs to be implemented in the
> > > > > vfio device driver, not in the iommu interface.  Finding the DMA mapping
> > > > > of the device and replacing it is wrong.  It should be remapped at the
> > > > > vfio device file interface using vm_ops.
> > > > >
> > > >
> > > > So you are basically suggesting that we are going to take an mmap fault
> > > > and, within that fault handler, go into the vendor driver to look up the
> > > > "pre-registered" mapping and remap there.
> > > >
> > > > Is my understanding correct?
> > >
> > > Essentially, hopefully the vendor driver will have already registered
> > > the backing for the mmap prior to the fault, but either way could work.
> > > I think the key though is that you want to remap it onto the vma
> > > accessing the vfio device file, not scanning it out of an IOVA mapping
> > > that might be dynamic and doing a vma lookup based on the point-in-time
> > > mapping of the BAR.  The latter doesn't give me much confidence that
> > > mappings couldn't change, while the former should be a one-time fault.
> >
> > Hi Alex,
> >
> > The fact is that the vendor driver can only prevent such an mmap fault by
> > looking up the mapping table that we have saved from the IOMMU memory
> > listener
>
> Why do we need to prevent the fault?  We need to handle the fault when
> it occurs.
>
> > when the guest region gets programmed. Also, like you have mentioned below,
> > such a mapping between iova and hva shouldn't change as long as the SBIOS
> > and guest OS are done with their job.
>
> But you don't know they're done with their job.
>
> > Yes, you are right that it is a one-time fault, but the gpu work is heavily
> > pipelined.
>
> Why does that matter?  We're talking about the first time the VM
> accesses the range of the BAR that will be direct-mapped to the physical
> GPU.  This isn't going to happen in the middle of a benchmark, it's
> going to happen during driver initialization in the guest.
>
> > Probably we should just limit this interface to the guest MMIO region, and
> > we can have some crosscheck with the VFIO driver, which has monitored the
> > config space accesses, to make sure nothing gets moved around?
>
> No, the solution for the bar is very clear, map on fault to the vma
> accessing the mmap and be done with it for the remainder of this
> instance of the VM.
>

Hi Alex,

I totally get your points; my previous comments were just trying to explain
the reasoning behind our current implementation. I think I have found a way
to hide the latency of the mmap fault, which might happen in the middle of
running a benchmark.

I will add a new registration interface to allow the vendor driver to provide
a fault handler callback, and pages will be installed from there.
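Something along these lines -- just a sketch of the shape of it; everything
other than the standard vm_ops plumbing is a placeholder name, not a final
interface:

/* Sketch only: the vendor driver registers a callback that resolves a
 * virtual BAR offset to the host pfn currently backing it; the vfio
 * vgpu mmap fault handler then installs the page on first access.
 * Assumes the vma was set up with VM_PFNMAP at mmap() time. */
#include <linux/mm.h>

struct vgpu_mmap_ops {
	int (*bar_offset_to_pfn)(void *vgpu, unsigned long offset,
				 unsigned long *pfn);
};

struct vgpu_vma_private {
	void *vgpu;                      /* vendor device handle */
	const struct vgpu_mmap_ops *ops; /* provided at registration */
};

/* Proposed registration entry point (name and signature illustrative). */
extern int vgpu_register_mmap_ops(void *vgpu, const struct vgpu_mmap_ops *ops);

static int vgpu_mmio_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct vgpu_vma_private *priv = vma->vm_private_data;
	unsigned long addr = (unsigned long)vmf->virtual_address;
	unsigned long pfn;

	/* Ask the vendor driver where this piece of the virtual BAR
	 * lives on the physical GPU, then map it into the faulting vma. */
	if (priv->ops->bar_offset_to_pfn(priv->vgpu, addr - vma->vm_start, &pfn))
		return VM_FAULT_SIGBUS;

	if (vm_insert_pfn(vma, addr, pfn))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}

static const struct vm_operations_struct vgpu_mmio_vm_ops = {
	.fault = vgpu_mmio_fault,
};

This way the remap happens on the vma accessing the vfio device file, as you
suggested, and it remains a one-time fault per mapping.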
> > > In case it's not clear to folks at Intel, the purpose of this is that a
> > > vGPU may directly map a segment of the physical GPU MMIO space, but we
> > > may not know what segment that is at setup time, when QEMU does an mmap
> > > of the vfio device file descriptor.  The thought is that we can create
> > > an invalid mapping when QEMU calls mmap(), knowing that it won't be
> > > accessed until later, then we can fault in the real mmap on demand.  Do
> > > you need anything similar?
> > >
> > > > >
> > > > > > int vgpu_dma_do_translate(dma_addr_t *gfn_buffer, uint32_t count)
> > > > > >
> > > > > > EXPORT_SYMBOL(vgpu_dma_do_translate);
> > > > > >
> > > > > > Still a lot to be added and modified, such as supporting multiple VMs
> > > > > > and multiple virtual devices, tracking the mapped / pinned regions
> > > > > > within the VGPU IOMMU kernel driver, error handling, roll-back, locked
> > > > > > memory size per user, etc.
> > > > >
> > > > > Particularly, handling of mapping changes is completely missing.  This
> > > > > cannot be a point-in-time translation, the user is free to remap
> > > > > addresses whenever they wish and device translations need to be updated
> > > > > accordingly.
> > > > >
> > > >
> > > > When you say "user", do you mean QEMU?
> > >
> > > vfio is a generic userspace driver interface, QEMU is a very, very
> > > important user of the interface, but not the only user.  So for this
> > > conversation, we're mostly talking about QEMU as the user, but we should
> > > be careful about assuming QEMU is the only user.
> > >
> >
> > Understood. I have to say that our focus at this moment is to support QEMU
> > and KVM, but I know the VFIO interface is much more than that, and that is
> > why I think it is right to leverage this framework so we can together
> > explore future use cases in the userland.
> >
> >
> > > > Here, whatever DMA the guest driver is going to launch will first be
> > > > pinned within the VM, then registered to QEMU and therefore to the IOMMU
> > > > memory listener; eventually the pages will be pinned by the GPU or DMA
> > > > engine (a sketch of this flow follows below).
> > > >
> > > > Since we are keeping the upper-level code the same, thinking about the
> > > > passthru case, where the GPU has already put the real IOVA into its PTEs,
> > > > I don't know how QEMU can change that mapping without causing an IOMMU
> > > > fault on an active DMA device.
> > >
> > > For the virtual BAR mapping above, it's easy to imagine that mapping a
> > > BAR to a given address is at the guest's discretion; it may be mapped and
> > > unmapped, it may be mapped to different addresses at different points in
> > > time, the guest BIOS may choose to map it at yet another address, etc.
> > > So if somehow we were trying to set up a mapping for peer-to-peer, there
> > > are lots of ways that IOVA could change.  But even with RAM, we can
> > > support memory hotplug in a VM.  What was once a DMA target may be
> > > removed or may now be backed by something else.  Chipset configuration
> > > on the emulated platform may change how guest physical memory appears
> > > and that might change between VM boots.
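(To make the pin-and-translate flow I described above concrete, here is
roughly how the vendor driver would use vgpu_dma_do_translate() before
kicking off a DMA. Only the vgpu_dma_do_translate() prototype comes from our
proposal; every other name in this sketch is invented for illustration.)

/* Sketch: prepare a guest DMA buffer.  The buffer is filled with guest
 * pfns; on successful return each entry has been pinned and replaced
 * in place with a dma address the GPU's DMA engine can use directly. */
#include <linux/slab.h>

extern int vgpu_dma_do_translate(dma_addr_t *gfn_buffer, uint32_t count);

/* Hypothetical vendor hook that writes the translated addresses into
 * the GPU page tables. */
extern void vendor_write_gpu_ptes(void *vgpu, dma_addr_t *addrs, uint32_t count);

static int vgpu_prepare_dma(void *vgpu, unsigned long *guest_pfns,
			    uint32_t count)
{
	dma_addr_t *buf;
	uint32_t i;
	int ret;

	buf = kmalloc_array(count, sizeof(*buf), GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* in: gfns from the guest driver's pinned pages */
	for (i = 0; i < count; i++)
		buf[i] = guest_pfns[i];

	/* in-place translate: gfn -> pinned host dma address */
	ret = vgpu_dma_do_translate(buf, count);
	if (!ret)
		vendor_write_gpu_ptes(vgpu, buf, count);

	kfree(buf);
	return ret;
}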
> > >
> > > Currently with physical device assignment the memory listener watches
> > > for both maps and unmaps and updates the iotlb to match.  Just like real
> > > hardware doing these same sorts of things, we rely on the guest to stop
> > > using memory that's going to be moved as a DMA target prior to moving
> > > it.
> >
> > Right, you can only do that when the device is quiescent.
> >
> > As long as this will be notified to the guest, I think we should be able
> > to support it, although the real implementation will depend on how the
> > device gets into a quiescent state.
> >
> > This is definitely a very interesting feature we should explore, but I
> > hope we can first focus on the most basic functionality.
>
> If we only do a point-in-time translation and assume it never changes,
> that's good enough for a proof of concept, but it's not a complete
> solution.  I think this is a practical problem, not just an academic
> problem.  There needs to be a mechanism for mappings to be invalidated
> based on VM memory changes.  Thanks,
>
> Alex

Sorry, probably my previous comment was not very clear. I highly value your
input and the information related to the memory hotplug scenarios, and I
never excluded support for such a feature. The only question is when; that
is why I would like to defer the VM memory hotplug feature to phase 2, after
the initial official launch.

Thanks,
Neo