From: Paolo Bonzini
Date: Fri, 2 Sep 2016 19:25:18 +0200
Subject: Re: [Qemu-devel] [PATCH v7 0/4] Add Mediated device support
To: Kirti Wankhede, Michal Privoznik, Alex Williamson
Cc: "Song, Jike", cjia@nvidia.com, kvm@vger.kernel.org,
 libvir-list@redhat.com, "Tian, Kevin", qemu-devel@nongnu.org,
 kraxel@redhat.com, Laine Stump, bjsdjshi@linux.vnet.ibm.com

On 02/09/2016 19:15, Kirti Wankhede wrote:
> On 9/2/2016 3:35 PM, Paolo Bonzini wrote:
>>
>>    <device>
>>      <name>my-vgpu</name>
>>      <parent>pci_0000_86_00_0</parent>
>>      <capability type='mdev'>
>>        <type id='11'/>
>>        <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>      </capability>
>>    </device>
>>
>> After creating the vGPU, if required by the host driver, all the other
>> type ids would disappear from "virsh nodedev-dumpxml pci_0000_86_00_0" too.
>
> Thanks Paolo for the details.
> 'nodedev-create' parses the XML file and accordingly writes to the
> 'create' file in sysfs to create the mdev device. Right?
> At this moment, does libvirt know which VM this device would be
> associated with?

No, the VM will associate to the nodedev through the UUID.  The nodedev
is created separately from the VM.
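Concretely, I would expect nodedev-create to boil down to something like
the following on the sysfs side.  This is only a sketch: the exact
sysfs layout is whatever this series settles on, and the type id and
parent path below are just the ones from the example above:

    # create an mdev of type 11 under the physical GPU; the new device
    # would then show up as /sys/bus/mdev/devices/$UUID
    UUID=0695d332-7831-493f-9e71-1c85c8911a08
    echo $UUID > \
      /sys/bus/pci/devices/0000:86:00.0/mdev_supported_types/11/create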
>> When dumping the mdev with nodedev-dumpxml, it could show more complete
>> info, again taken from sysfs:
>>
>>    <device>
>>      <name>my-vgpu</name>
>>      <parent>pci_0000_86_00_0</parent>
>>      <capability type='mdev'>
>>        <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>        <!-- only the chosen type -->
>>        <type id='11'/>
>>        <capability type='pci'>
>>          <!-- no domain/bus/slot/function, of course; could show
>>               whatever PCI IDs are seen by the guest: -->
>>          <product id='...'>...</product>
>>          <vendor id='0x10de'>NVIDIA</vendor>
>>        </capability>
>>      </capability>
>>    </device>
>>
>> Notice how the parent has mdev inside pci; the vGPU, if it has to have
>> pci at all, would have it inside mdev.  This represents the difference
>> between the mdev provider and the mdev device.
>
> The parent of an mdev device might not always be a PCI device. I think
> we shouldn't consider it as a PCI capability.

The <capability type='pci'> in the vGPU means that it _will_ be exposed
as a PCI device by VFIO.  The <capability type='pci'> in the physical
GPU means that the GPU is a PCI device.

>> Random proposal for the domain XML too:
>>
>>    <hostdev mode='subsystem' type='pci' managed='no'>
>>      <source type='mdev'>
>>        <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>      </source>
>>      <address type='pci' bus='0' slot='2' function='0'/>
>>    </hostdev>
>
> When a user wants to assign two mdev devices to one VM, does the user
> have to add two such entries, or group the two devices in one entry?

Two entries, one per UUID, each with its own PCI address in the guest
(see the sketch in the P.S. below).

> On another mail thread with the same subject we are thinking of
> creating groups of mdev devices to assign multiple mdev devices to
> one VM.

What is the advantage in managing mdev groups?  (Sorry, I didn't follow
the other thread.)

Paolo
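P.S. For concreteness, the two-device case would then look along these
lines.  This is a sketch following the proposal above; the second UUID
and both guest addresses are made up for the example:

    <hostdev mode='subsystem' type='pci' managed='no'>
      <source type='mdev'>
        <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
      </source>
      <address type='pci' bus='0' slot='2' function='0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='no'>
      <source type='mdev'>
        <uuid>aa618089-8b16-4d01-a136-25a0f3c73123</uuid>
      </source>
      <address type='pci' bus='0' slot='3' function='0'/>
    </hostdev>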