Date: Wed, 9 Sep 2015 10:06:41 +0300
From: "Michael S. Tsirkin"
Message-ID: <20150909095258-mutt-send-email-mst@redhat.com>
References: <20150831160655-mutt-send-email-mst@redhat.com> <55ED854A.1080804@huawei.com>
In-Reply-To: <55ED854A.1080804@huawei.com>
Subject: Re: [Qemu-devel] rfc: vhost user enhancements for vm2vm communication
To: Claudio Fontana
Cc: opnfv-tech-discuss@lists.opnfv.org, virtio-dev@lists.oasis-open.org,
 Jan Kiszka, qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org

On Mon, Sep 07, 2015 at 02:38:34PM +0200, Claudio Fontana wrote:
> Coming late to the party,
>
> On 31.08.2015 16:11, Michael S. Tsirkin wrote:
> > Hello!
> > During the KVM forum, we discussed supporting virtio on top
> > of ivshmem. I have considered it and come up with an alternative
> > that has several advantages over it - please see below.
> > Comments welcome.
>
> as Jan mentioned, we actually discussed a virtio-shmem device which
> would incorporate the advantages of ivshmem (so no need for a separate
> ivshmem device), and which would use the well-known virtio interface,
> taking advantage of the new virtio-1 virtqueue layout to split r/w and
> read-only rings as seen from the two sides, and also making use of
> BAR0, which has been freed up for use by the device.
>
> This way it would be possible to share the rings and the actual memory
> for the buffers in the PCI BARs. The guest VMs could decide to use the
> shared memory regions directly as prepared by the hypervisor (in the
> Jailhouse case) or QEMU/KVM, or perform their own validation on the
> input, depending on the use case.
>
> Of course the communication between VMs needs in this case to be
> pre-configured, and is quite static (which is actually beneficial in
> our use case).
>
> But still, in your proposed solution, each VM needs to be
> pre-configured to communicate with a specific other VM using a
> separate device, right?
>
> But I wonder if we are addressing the same problem: in your case you
> are looking at having a shared memory pool potentially visible to all
> VMs (the vhost-user case), while in the virtio-shmem proposal we
> discussed we were assuming specific, distinct regions for every
> channel.
>
> Ciao,
>
> Claudio

The problem, as I see it, is to allow inter-VM communication with
polling (to get very low latencies), but with polling within the VMs
only, without the need to run a host thread (which, when polling, uses
up a host CPU).

What was proposed was to simply change virtio to pass an "offset within
BAR" instead of a physical address. This would allow VM-to-VM
communication if there are only two VMs, but if data needs to be sent
to multiple VMs, you must copy it. Additionally, it's a single-purpose
feature: you can use it from a userspace PMD, but Linux will never use
it.
My proposal is a superset: don't require that BAR memory is used;
instead, use IOMMU translation tables. This way, data can be sent to
multiple VMs by sharing the same memory with them all. It is still
possible to put data in some device BAR if that's what the guest wants
to do: just program the IOMMU to limit virtio to the memory range that
is within this BAR. Another advantage here is that the feature is more
generally useful.

> >
> > -----
> >
> > Existing solutions to userspace switching between VMs on the
> > same host are vhost-user and ivshmem.
> >
> > vhost-user works by mapping the memory of all VMs being bridged
> > into the switch's memory space.
> >
> > By comparison, ivshmem works by exposing a shared region of memory
> > to all VMs. VMs are required to use this region to store packets.
> > The switch only needs access to this region.
> >
> > Another difference between vhost-user and ivshmem surfaces when
> > polling is used. With vhost-user, the switch is required to handle
> > data movement between VMs; if polling is used, this means that one
> > host CPU needs to be sacrificed for this task.
> >
> > This is easiest to understand when one of the VMs is
> > used with VF pass-through. This can be shown schematically below:
> >
> > +-- VM1 --------------+             +---VM2-----------+
> > | virtio-pci +-vhost-user-+ virtio-pci -- VF | -- VFIO -- IOMMU -- NIC
> > +---------------------+             +-----------------+
> >
> >
> > With ivshmem, in theory, communication can happen directly, with
> > two VMs polling the shared memory region.
> >
> >
> > I won't spend time listing advantages of vhost-user over ivshmem.
> > Instead, having identified two advantages of ivshmem over
> > vhost-user, below is a proposal to extend vhost-user to gain the
> > advantages of ivshmem.
> >
> >
> > 1: virtio in the guest can be extended to allow support
> > for IOMMUs.
> > This provides the guest with full flexibility
> > about which memory is readable or writable by each device.
> > By setting up a virtio device for each other VM we need to
> > communicate with, the guest gets full control of its security: from
> > mapping all memory (like with current vhost-user), to only
> > mapping buffers used for networking (like ivshmem), to
> > transient mappings for the duration of data transfer only.
> > This also allows use of VFIO within guests, for improved
> > security.
> >
> > vhost-user would need to be extended to send the
> > mappings programmed by the guest IOMMU.
> >
> > 2. qemu can be extended to serve as a vhost-user client:
> > receive remote VM mappings over the vhost-user protocol, and
> > map them into another VM's memory.
> > This mapping can take, for example, the form of
> > a BAR of a PCI device, which I'll call here vhost-pci -
> > with bus addresses allowed
> > by VM1's IOMMU mappings being translated into
> > offsets within this BAR within VM2's physical
> > memory space.
> >
> > Since the translation can be a simple one, VM2
> > can perform it within its vhost-pci device driver.
> >
> > While this setup would be most useful with polling,
> > VM1's ioeventfd can also be mapped to
> > another VM2's irqfd, and vice versa, so that VMs
> > can trigger interrupts for each other without the need
> > for a helper thread on the host.
> >
> >
> > The resulting channel might look something like the following:
> >
> > +-- VM1 --------------+      +---VM2-----------+
> > | virtio-pci -- iommu +--+ vhost-pci -- VF | -- VFIO -- IOMMU -- NIC
> > +---------------------+      +-----------------+
> >
> > Comparing the two diagrams, a vhost-user thread on the host is
> > no longer required, reducing host CPU utilization when
> > polling is active. At the same time, VM2 cannot access all of VM1's
> > memory - it is limited by the IOMMU configuration set up by VM1.
> >
> >
> > Advantages over ivshmem:
> >
> > - More flexibility: endpoint VMs do not have to place data at any
> >   specific locations to use the device; in practice this likely
> >   means fewer data copies.
> > - Better standardization/code reuse:
> >   virtio changes within guests would be fairly easy to implement
> >   and would also benefit other backends besides vhost-user;
> >   standard hotplug interfaces can be used to add and remove these
> >   channels as VMs are added or removed.
> > - Migration support:
> >   it's easy to implement since ownership of memory is well defined.
> >   For example, during migration VM2 can notify the hypervisor of
> >   VM1 by updating a dirty bitmap each time it writes into VM1's
> >   memory.
> >
> > Thanks,
> >
>
>
> --
> Claudio Fontana
> Server Virtualization Architect
> Huawei Technologies Duesseldorf GmbH
> Riesstraße 25 - 80992 München