From: Wei Wang
Date: Tue, 23 May 2017 19:09:05 +0800
To: "Michael S. Tsirkin"
Cc: Jason Wang, stefanha@gmail.com, marcandre.lureau@gmail.com, pbonzini@redhat.com, virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication

On 05/20/2017 04:44 AM, Michael S. Tsirkin wrote:
> On Fri, May 19, 2017 at 05:00:37PM +0800, Wei Wang wrote:
>>>> That being said, we compared to vhost-user, instead of vhost_net,
>>>> because vhost-user is the one used in NFV, which we think is a major
>>>> use case for vhost-pci.
>>> If this is true, why not draft a pmd driver instead of a kernel one?
>> Yes, that's right. There are actually two directions for the vhost-pci
>> driver implementation - a kernel driver and a DPDK pmd. The QEMU-side
>> device patches were posted first for discussion, because once the
>> device part is ready, we will be able to have the related team work on
>> the pmd driver as well. As usual, the pmd driver would give much better
>> throughput.
> For a PMD to work, though, the protocol will need to support vIOMMU.
> I'm not asking you to add it right now, since it's still work in
> progress for vhost-user at this point, but it is something you will
> have to keep in mind. Further, reviewing the vhost-user iommu patches
> might be a good idea for you.

For the DPDK pmd case, I'm not sure vIOMMU is necessary. Since vhost-pci
only needs to share a piece of memory between the two VMs, we can send
just that piece of memory's info for sharing, instead of sending the
entire VM's memory and relying on vIOMMU to expose the accessible
portion.

Best,
Wei
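
To make the idea concrete, here is a minimal sketch in C of what a
per-region descriptor could look like. The struct name and fields below
are illustrative assumptions only (loosely modelled on vhost-user's
memory-region layout), not part of the vhost-pci patches or the
vhost-user spec; the point is simply that the master would pass the
metadata (and an fd) for just the region to be shared, rather than a
table covering all of guest memory.

    /*
     * Hypothetical sketch -- names and fields are illustrative, not taken
     * from the patch series. One descriptor covers only the region the two
     * VMs agree to share.
     */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    struct shared_region_info {
        uint64_t guest_phys_addr; /* GPA of the shared region in the sending VM */
        uint64_t size;            /* length of the shared region in bytes       */
        uint64_t mmap_offset;     /* offset into the fd passed over the socket  */
    };

    int main(void)
    {
        /* Example: advertise a single 16 MB region at GPA 0x100000000. */
        struct shared_region_info info = {
            .guest_phys_addr = 0x100000000ULL,
            .size            = 16ULL << 20,
            .mmap_offset     = 0,
        };

        printf("share GPA 0x%" PRIx64 ", %" PRIu64 " bytes\n",
               info.guest_phys_addr, info.size);
        return 0;
    }

With something like this, only the advertised range would be mapped on
the peer side, so no vIOMMU would be needed to fence off the rest of the
guest's memory.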