From: Alex Williamson
Date: Thu, 28 Jan 2016 08:23:06 -0700
Subject: Re: [Qemu-devel] VFIO based vGPU (was Re: [Announcement] 2015-Q3 release of XenGT - a Mediated ...)
To: Jike Song
Cc: Yang Zhang, "Ruan, Shuai", "Tian, Kevin", Neo Jia, "kvm@vger.kernel.org", "igvt-g@lists.01.org", qemu-devel, Gerd Hoffmann, Paolo Bonzini, "Lv, Zhiyuan"

On Thu, 2016-01-28 at 14:00 +0800, Jike Song wrote:
> On 01/28/2016 12:19 AM, Alex Williamson wrote:
> > On Wed, 2016-01-27 at 13:43 +0800, Jike Song wrote:
> {snip}
>
> > > Had a look at eventfd, I would say yes, technically we are able to
> > > achieve the goal: introduce an fd, with fop->{read|write} defined in KVM,
> > > call into the vgpu device-model, also an iodev registered for an MMIO GPA
> > > range to invoke the fop->{read|write}.  I just didn't understand why
> > > userspace can't register an iodev via an API directly.
> >
> > Please elaborate on how it would work via iodev.
> >
>
> QEMU forwards the BAR0 write to the bus driver; in the bus driver, if
> it finds that the MEM bit is enabled, it registers an iodev with KVM,
> with these ops:
>
> 	const struct kvm_io_device_ops trap_mmio_ops = {
> 		.read  = kvmgt_guest_mmio_read,
> 		.write = kvmgt_guest_mmio_write,
> 	};
>
> I may not be able to illustrate it clearly in words, but this should
> not be a problem; thanks to your explanation, I can understand and
> adopt it for KVMGT.

You're still crossing modules with direct callbacks, right?  What's the
advantage versus using the file descriptor + offset approach, which
could offer the same performance and improve KVM overall by creating a
new option for generically handling MMIO?

> > > Besides, this doesn't necessarily require another thread, right?
> > > I guess it can be within the VCPU thread?
> >
> > I would think so too; the vcpu is blocked on the MMIO access, we
> > should be able to service it in that context.  I hope.
>
> Thanks for the confirmation.
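To make the fd + offset alternative above concrete, here is a minimal
sketch of what a generic KVM-side handler could look like.  Nothing like
this exists in KVM today; struct kvm_fd_mmio_dev and the fd_mmio_*
functions are invented for illustration, and only kvm_io_device,
kvm_io_device_ops and kernel_read()/kernel_write() are existing kernel
interfaces.  The point is that KVM would service the MMIO exit with an
ordinary positioned read/write on a file descriptor (e.g. the vfio
device fd at the BAR0 region offset), so neither module calls directly
into the other:

	#include <linux/kvm_host.h>
	#include <linux/fs.h>
	#include <kvm/iodev.h>

	/* Hypothetical: an MMIO GPA range backed by an fd + offset. */
	struct kvm_fd_mmio_dev {
		struct kvm_io_device dev;	/* registered on KVM_MMIO_BUS */
		struct file *file;		/* e.g. the vfio device fd */
		loff_t offset;			/* region offset within that fd */
		gpa_t base;			/* guest-physical base of the range */
	};

	static int fd_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
				 gpa_t addr, int len, void *val)
	{
		struct kvm_fd_mmio_dev *d =
			container_of(dev, struct kvm_fd_mmio_dev, dev);
		loff_t pos = d->offset + (addr - d->base);

		/* positioned read on the backing fd (modern kernel_read()) */
		return kernel_read(d->file, val, len, &pos) == len ? 0 : -EOPNOTSUPP;
	}

	static int fd_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev,
				  gpa_t addr, int len, const void *val)
	{
		struct kvm_fd_mmio_dev *d =
			container_of(dev, struct kvm_fd_mmio_dev, dev);
		loff_t pos = d->offset + (addr - d->base);

		return kernel_write(d->file, val, len, &pos) == len ? 0 : -EOPNOTSUPP;
	}

	static const struct kvm_io_device_ops fd_mmio_ops = {
		.read  = fd_mmio_read,
		.write = fd_mmio_write,
	};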
> > > And this brought another question: except the vfio bus driver and
> > > iommu backend (and the page_track utility used for guest memory
> > > write-protection), is KVMGT allowed to call into kvm.ko (or modify
> > > it)? Though we are becoming less and less willing to do that with
> > > VFIO, it's still better to know that before going wrong.
> >
> > kvm and vfio are separate modules; for the most part, they know nothing
> > about each other and have no hard dependencies between them.  We do have
> > various accelerations we can use to avoid paths through userspace, but
> > these are all via APIs that are agnostic of the party on the other end.
> > For example, vfio signals interrupts through eventfds and has no concept
> > of whether that eventfd terminates in userspace or into an irqfd in KVM.
> > vfio supports direct access to device MMIO regions via mmaps, but vfio
> > has no idea if that mmap gets directly mapped into a VM address space.
> > Even with posted interrupts, we've introduced an irq bypass manager
> > allowing interrupt producers and consumers to register independently to
> > form a connection without directly knowing anything about the other
> > module.  That sort of proper software layering needs to continue.  It
> > would be wrong for a vfio bus driver to assume KVM is the user and
> > directly call into KVM interfaces.  Thanks,
> >
>
> I understand and agree with your point; it's bad if the bus driver
> assumes KVM is the user and/or calls into KVM interfaces.
>
> However, the vgpu device-model, in Intel's case also a part of the i915
> driver, will always need to call some hypervisor-specific interfaces.

No, think differently.

> For example, when a guest gfx driver submits GPU commands, the
> device-model may want to scan them for security or whatever-else purpose:
>
> 	- get a GPA (from the GPU page tables)
> 	- want to read 16 bytes from that GPA
> 	- call a hypervisor-specific read_gpa() method
> 		- for Xen, the GPA belongs to a foreign domain, it must find
> 		  a way to map & read it - beyond our scope here;
> 		- for KVM, the GPA can be converted to an HVA, then
> 		  copy_from_user (if called from the vcpu thread) or
> 		  access_remote_vm (if called from other threads);
>
> Please note that this is not from the vfio bus driver, but from the vgpu
> device-model; also this is not a DMA address from the GPU tables, but a
> real GPA.

This is exactly why we're proposing that the vfio IOMMU interface be
used as a database of guest translations.  The type1 IOMMU model in QEMU
maps all of guest memory through the IOMMU; in the vGPU model, type1 is
simply collecting these mappings, and they map GPA to process virtual
memory.  When the GPU driver wants to get a GPA, it does so from this
database.  If it wants to read from it, it could get the mm and read
from the virtual memory, or pin the page for a GPA to HPA translation
and read from the HPA.  There is no reason to poke directly through to
the hypervisor here.  Let's design what you need into the vgpu version
of the type1 IOMMU instead.  Thanks,

Alex
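To make the closing suggestion concrete, here is a hedged sketch of how
a vgpu device-model could read guest memory purely through such a
translation database, without calling into kvm.ko.  The lookup helper
vgpu_type1_gpa_to_va() is hypothetical; it stands in for whatever
interface the vgpu flavour of the type1 IOMMU would export to return
the QEMU mm and process virtual address backing a GPA.  copy_from_user()
and access_remote_vm() are stock kernel primitives, exactly as in the
read_gpa() example quoted above:

	#include <linux/mm.h>
	#include <linux/sched/mm.h>
	#include <linux/uaccess.h>

	/* Hypothetical: look up the QEMU mm + process VA mapping a guest GPA,
	 * taking a reference on the returned mm. */
	int vgpu_type1_gpa_to_va(struct device *dev, u64 gpa,
				 struct mm_struct **mm, unsigned long *va);

	/* Read @len bytes of guest memory at @gpa into @buf, no KVM involved. */
	static int vgpu_read_gpa(struct device *dev, u64 gpa, void *buf, size_t len)
	{
		struct mm_struct *mm;
		unsigned long va;
		int ret;

		ret = vgpu_type1_gpa_to_va(dev, gpa, &mm, &va);	/* hypothetical */
		if (ret)
			return ret;

		if (mm == current->mm) {
			/* vcpu/QEMU thread context: the VA is directly addressable */
			if (copy_from_user(buf, (void __user *)va, len))
				ret = -EFAULT;
		} else {
			/* other threads: read through the remote mm */
			if (access_remote_vm(mm, va, buf, len, 0) != len)
				ret = -EFAULT;
		}

		mmput(mm);
		return ret;
	}

The pin-for-GPA-to-HPA variant mentioned above would replace the copy
with a get_user_pages()-style pin on the same process virtual address.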