From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <5A5704B4.5090502@intel.com>
Date: Thu, 11 Jan 2018 14:31:16 +0800
From: Wei Wang
References: <20180110161438.GA28096@stefanha-x1.localdomain>
In-Reply-To: <20180110161438.GA28096@stefanha-x1.localdomain>
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
To: Stefan Hajnoczi
Cc: qemu-devel@nongnu.org

On 01/11/2018 12:14 AM, Stefan Hajnoczi wrote:
> Hi Wei,
> I wanted to summarize the differences between the vhost-pci and
> virtio-vhost-user approaches because previous discussions may have
> been confusing.
>
> vhost-pci defines a new virtio device type for each vhost device type
> (net, scsi, blk). It therefore requires a virtio device driver for
> each device type inside the slave VM.
>
> Adding a new device type requires:
> 1. Defining a new virtio device type in the VIRTIO specification.
> 2. Implementing a new QEMU device model.
> 3. Implementing a new virtio driver.
>
> virtio-vhost-user is a single virtio device that acts as a vhost-user
> protocol transport for any vhost device type. It requires one virtio
> driver inside the slave VM, and device types are implemented using
> existing vhost-user slave libraries (librte_vhost in DPDK and
> libvhost-user in QEMU).
>
> Adding a new device type to virtio-vhost-user involves:
> 1. Adding any new vhost-user protocol messages to the QEMU
> virtio-vhost-user device model.
> 2. Adding any new vhost-user protocol messages to the vhost-user
> slave library.
> 3. Implementing the new device slave.
>
> The simplest case is when no new vhost-user protocol messages are
> required for the new device. Then all that's needed for
> virtio-vhost-user is a device slave implementation (#3). That slave
> implementation will also work with AF_UNIX because the vhost-user
> slave library hides the transport (AF_UNIX vs virtio-vhost-user).
> Even better, if another person has already implemented that device
> slave for use with AF_UNIX, then no new code is needed for
> virtio-vhost-user support at all!
>
> If you compare this to vhost-pci, it would be necessary to design a
> new virtio device, implement it in QEMU, and implement the virtio
> driver. Much of the virtio driver is more or less the same thing as
> the vhost-user device slave, but it cannot be reused because the
> vhost-user protocol isn't being used by the virtio device. The result
> is a lot of duplication in DPDK and other codebases that implement
> vhost-user slaves.
>
> The way that vhost-pci is designed means that anyone wishing to
> support a new device type has to become a virtio device designer.
> They need to map vhost-user protocol concepts to a new virtio device
> type. This will be time-consuming for everyone involved (e.g. the
> developer, the VIRTIO community, etc.).
>
> The virtio-vhost-user approach stays at the vhost-user protocol level
> as much as possible. This way there are fewer concepts that need to
> be mapped by people adding new device types. As a result,
> virtio-vhost-user will be able to keep up with AF_UNIX vhost-user and
> grow because it's easier to work with.
>
> What do you think?

Thanks Stefan for the clarification. I agree with the idea of making
one single device work for all device types. Do you think that is also
possible with vhost-pci? (Fundamentally, the duty of the device is to
use a BAR to expose the master guest's memory and to pass the master's
vring address info and memory region info to the slave, none of which
depends on the device type.)
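For reference, that type-independent info boils down to a few
fixed-layout structures. A rough sketch, paraphrasing the vhost-user
spec (field names approximate, message payload union abbreviated):

#include <stdint.h>

/* One entry of the master's memory layout. Over AF_UNIX the backing
 * fds arrive via SCM_RIGHTS; with vhost-pci the same memory is exposed
 * to the slave guest through a device BAR. */
typedef struct VhostUserMemoryRegion {
    uint64_t guest_phys_addr;   /* GPA in the master guest */
    uint64_t memory_size;
    uint64_t userspace_addr;    /* master QEMU's mmap address */
    uint64_t mmap_offset;
} VhostUserMemoryRegion;

/* Addresses of one master vring. */
typedef struct VhostUserVringAddr {
    uint32_t index;
    uint32_t flags;
    uint64_t desc_user_addr;    /* descriptor table */
    uint64_t used_user_addr;    /* used ring */
    uint64_t avail_user_addr;   /* avail ring */
    uint64_t log_guest_addr;    /* dirty log, for migration */
} VhostUserVringAddr;

/* The message envelope every vhost-user transport carries. */
typedef struct VhostUserMsg {
    uint32_t request;           /* e.g. VHOST_USER_SET_VRING_ADDR */
    uint32_t flags;             /* protocol version + reply bits */
    uint32_t size;              /* payload bytes that follow */
    union {
        uint64_t u64;           /* feature bits etc. */
        VhostUserVringAddr addr;
        /* memory table and other payloads elided */
    } payload;
} VhostUserMsg;

Nothing above depends on whether the device is net, scsi, or blk.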
If you agree with the above, I think the main difference is what to
pass to the driver. I think vhost-pci is simpler because it passes
only the info mentioned above, which is sufficient. Relaying needs to
1) pass all the vhost-user messages to the driver, and 2) have the
driver join the vhost-user negotiation. The solution already works
well without those two, so I'm not sure why we would need them from a
functionality point of view.

Finally, whether we choose vhost-pci or virtio-vhost-user, future
developers will need to study the vhost-user protocol and the virtio
spec (one device either way). This wouldn't make much difference,
right?

Best,
Wei
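P.S. For anyone following along, here is roughly what 1) and 2) amount
to on the slave side: the same dispatch loop that already consumes the
structures sketched above also answers the feature-negotiation
requests. Illustrative shape only; slave_t, reply_u64(),
map_master_memory() and setup_vring() are made-up helper names, not
the librte_vhost or libvhost-user API:

static void handle_msg(slave_t *s, VhostUserMsg *m)
{
    switch (m->request) {
    case VHOST_USER_GET_FEATURES:    /* negotiation, i.e. 2) */
        reply_u64(s, s->supported_features);
        break;
    case VHOST_USER_SET_FEATURES:    /* negotiation, i.e. 2) */
        s->acked_features = m->payload.u64;
        break;
    case VHOST_USER_SET_MEM_TABLE:   /* master memory layout */
        map_master_memory(s, m);
        break;
    case VHOST_USER_SET_VRING_ADDR:  /* master vring addresses */
        setup_vring(s, &m->payload.addr);
        break;
    /* remaining requests elided */
    }
}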