From: Jason Wang <jasowang@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: wei.w.wang@intel.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Mon, 15 Jan 2018 14:56:31 +0800
Message-ID: <a45c1da8-03e8-140f-725c-af0f31e3da62@redhat.com>
In-Reply-To: <20180112101807.GE7356@stefanha-x1.localdomain>
On 2018-01-12 18:18, Stefan Hajnoczi wrote:
>>>> From what I understand, vhost-pci tries to build a scalable VM-to-VM
>>>> private datapath. But according to what you describe here,
>>>> virtio-vhost-user tries to make it possible to implement the device
>>>> inside another VM. I understand the goal of vhost-pci could be
>>>> achieved on top of that, but it then looks rather similar to the
>>>> design of a Xen driver domain, so I cannot figure out how it can be
>>>> done in a high-performance way.
>>> vhost-pci and virtio-vhost-user both have the same goal. They allow
>>> a VM to implement a vhost device (net, scsi, blk, etc).
>> It looks like they don't: if I read the code correctly, vhost-pci has
>> a device implementation in QEMU, and the slave VM only has a
>> vhost-pci-net driver.
> You are right that the current "[PATCH v3 0/7] Vhost-pci for inter-VM
> communication" does not reach this goal yet. The patch series focusses
> on a subset of vhost-user-net for poll mode drivers.
>
> But the goal is to eventually let VMs implement any vhost device type.
> Even if Wei, you, or I don't implement scsi, for example, someone else
> should be able to do it based on vhost-pci or virtio-vhost-user.
>
> Wei: Do you agree?
>
>>> This allows
>>> software-defined network or storage appliances running inside a VM to
>>> provide I/O services to other VMs.
>> Well, I think we can do that even with the existing virtio devices, or
>> with any other emulated device; this should not be bound to any
>> specific kind of device.
> Please explain the approach you have in mind.
I just fail to understand why we can't do software-defined networking or
storage with the existing virtio devices/drivers (or are there
shortcomings that force us to invent new infrastructure?).
>
>> And what's more important, according to the KVM Forum 2016 slides on
>> vhost-pci, the motivation of vhost-pci is not building SDN but a chain
>> of VNFs, so bypassing the central vswitch through a private VM2VM path
>> does make sense. (Though whether vhost-pci is the best choice is still
>> questionable.)
> This is probably my fault. Maybe my networking terminology is wrong. I
> consider "virtual network functions" to be part of "software-defined
> networking" use cases. I'm not implying there must be a central virtual
> switch.
>
> To rephrase: vhost-pci enables exitless VM2VM communication.
The problem is that exitless operation is not something vhost-pci
invents; it can already be achieved today when both sides do busy
polling.
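
To make this concrete, here is a minimal sketch (mine, not code from
either patch series) of such a polling loop over a vring placed in
shared memory; the handle() callback is a placeholder for whatever the
backend does with a completed buffer:

#include <stdint.h>
#include <stdatomic.h>
#include <linux/virtio_ring.h>   /* struct vring, vring_used_elem */

/* Busy-poll the used ring of a vring that lives in memory shared with
 * the peer.  Both sides spin on shared memory, so no kick/call
 * notification (and hence no vmexit) is needed while traffic flows. */
static void poll_used_ring(struct vring *vr, uint16_t *last_seen,
                           void (*handle)(struct vring_desc *d, uint32_t len))
{
    for (;;) {
        /* used->idx is written by the peer; acquire ordering makes the
         * descriptor data written before it visible here. */
        uint16_t used_idx = atomic_load_explicit(
            (_Atomic uint16_t *)&vr->used->idx, memory_order_acquire);

        while (*last_seen != used_idx) {
            struct vring_used_elem *e =
                &vr->used->ring[*last_seen % vr->num];
            handle(&vr->desc[e->id], e->len);
            (*last_seen)++;
        }
        /* A real poll-mode driver would put a cpu_relax()/rte_pause()
         * style hint here. */
    }
}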
>
>>> To the other VMs the devices look
>>> like regular virtio devices.
>>>
>>> I'm not sure I understand your reference to the Xen driver domain or
>>> performance.
>> So what is proposed here is basically memory sharing plus event
>> notification through eventfd; this model has been used by Xen for many
>> years through the grant table and event channels. Xen uses this to
>> move the backend implementation from dom0 into a driver domain which
>> has direct access to some hardware. Considering the case of
>> networking, it can then implement xen-netback inside the driver
>> domain, which can access the hardware NIC directly.
>>
>> This makes sense for Xen and for performance, since the driver domain
>> (backend) can access hardware directly and events are triggered
>> through lower-overhead hypercalls (or it can do busy polling). But for
>> virtio-vhost-user, unless you want an SR-IOV based solution inside the
>> slave VM, I believe we don't want to go back to the Xen model, since
>> hardware virtualization can bring extra overhead.
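
For comparison, the eventfd signalling that vhost-user (and
virtio-vhost-user) builds on is a very small primitive. A self-contained
sketch of the signal/consume pattern, independent of any of the patches
discussed here:

#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>

/* vhost-user passes eventfds (kickfd/callfd) over the UNIX socket; one
 * side signals by writing a counter, the other consumes it with a read.
 * KVM can bind the same fds as ioeventfd/irqfd so that a guest's notify
 * write or a device's interrupt travels over this path. */
int main(void)
{
    int fd = eventfd(0, EFD_NONBLOCK);
    if (fd < 0)
        return 1;

    uint64_t one = 1;
    write(fd, &one, sizeof(one));     /* "kick": signal the other side */

    uint64_t count;
    if (read(fd, &count, sizeof(count)) == sizeof(count)) {
        /* count == number of signals accumulated since the last read */
    }

    close(fd);
    return 0;
}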
> Okay, this point is about the NFV use case. I can't answer that because
> I'm not familiar with it.
>
> Even if the NFV use case is not ideal for VMs, there are many other use
> cases for VMs implementing vhost devices. In the cloud the VM is the
> first-class object that users can manage. They do not have the ability
> to run vhost-user processes on the host. Therefore I/O appliances need
> to be able to run as VMs, and vhost-pci (or virtio-vhost-user) solves
> that problem.
The question is why we must use vhost-user. E.g. in the case of SDN, you
can easily deploy an OVS instance with OpenFlow inside a VM and it works
like a charm.
>
>>> Both vhost-pci and virtio-vhost-user work using shared
>>> memory access to the guest RAM of the other VM. Therefore they can poll
>>> virtqueues and avoid vmexits. They also support cross-VM interrupts,
>>> thanks to QEMU setting up irqfd/ioeventfd appropriately on the host.
>>>
>>> Stefan
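
(For readers unfamiliar with that last point: the irqfd/ioeventfd wiring
is plain KVM API. A rough sketch of how a VMM could register both, with
the vm fd, the guest notify address and the GSI as placeholders and
error handling trimmed:)

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* ioeventfd: a guest write to notify_addr signals kick_fd without a
 * heavyweight exit to userspace.  irqfd: signalling call_fd injects the
 * interrupt 'gsi' into the guest.  This is how QEMU can forward
 * virtqueue kicks and calls between two VMs as plain eventfd signals. */
static int wire_notifications(int vm_fd, uint64_t notify_addr,
                              int kick_fd, int call_fd, uint32_t gsi)
{
    struct kvm_ioeventfd io = {
        .addr = notify_addr,  /* placeholder: virtqueue notify register */
        .len  = 2,            /* legacy virtio-pci notify writes are 16 bit */
        .fd   = kick_fd,
    };
    if (ioctl(vm_fd, KVM_IOEVENTFD, &io) < 0)
        return -1;

    struct kvm_irqfd irq = {
        .fd  = call_fd,
        .gsi = gsi,           /* placeholder: interrupt routed to the guest */
    };
    return ioctl(vm_fd, KVM_IRQFD, &irq);
}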
>> So in conclusion, considering the complexity, I would suggest figuring
>> out whether this (either vhost-pci or virtio-vhost-user) is really
>> required before moving ahead. E.g. for a direct VM2VM network path,
>> this looks like simply a question of network topology rather than a
>> device problem, and there are plenty of tricks available: with
>> vhost-user one can easily imagine writing an application (or using
>> testpmd) to build a zero-copy VM2VM datapath. Isn't that sufficient
>> for this case?
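
(To illustrate that last point: a host application on top of DPDK's
librte_vhost can splice two vhost-user ports together in a few calls.
A rough sketch with placeholder socket paths, one forwarding direction
only and no error handling; an actual zero-copy dequeue would also need
the dequeue zero-copy registration flag:)

#include <stdint.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_vhost.h>

/* Minimal VM2VM forwarder: two vhost-user sockets, packets dequeued
 * from one guest are enqueued straight into the other.  Both QEMUs
 * connect with -netdev vhost-user and need shared guest memory
 * (e.g. memory-backend-file,share=on). */
static volatile int vid_a = -1, vid_b = -1;  /* set by new_device() */

static int new_device(int vid)
{
    if (vid_a < 0) vid_a = vid; else vid_b = vid;
    return 0;
}

static void destroy_device(int vid)
{
    if (vid == vid_a) vid_a = -1;
    if (vid == vid_b) vid_b = -1;
}

static const struct vhost_device_ops ops = {
    .new_device = new_device,
    .destroy_device = destroy_device,
};

int main(int argc, char **argv)
{
    rte_eal_init(argc, argv);
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "mbufs", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    const char *socks[2] = { "/tmp/vhost-a.sock", "/tmp/vhost-b.sock" };
    for (int i = 0; i < 2; i++) {
        rte_vhost_driver_register(socks[i], 0);
        rte_vhost_driver_callback_register(socks[i], &ops);
        rte_vhost_driver_start(socks[i]);
    }

    struct rte_mbuf *pkts[32];
    for (;;) {
        if (vid_a < 0 || vid_b < 0)
            continue;  /* wait until both guests are connected */
        /* queue 1 is the guest's TX ring, queue 0 its RX ring;
         * a real application would forward both directions. */
        uint16_t n = rte_vhost_dequeue_burst(vid_a, 1, pool, pkts, 32);
        uint16_t sent = rte_vhost_enqueue_burst(vid_b, 0, pkts, n);
        while (sent < n)
            rte_pktmbuf_free(pkts[sent++]);
    }
}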
> See above, I described the general cloud I/O appliance use case.
>
> Stefan
So I understand that vhost-user could be used to build an I/O appliance.
What I don't understand is the advantage of using vhost-user, or why we
must use it inside a guest.
Thanks