From: Jason Wang <jasowang@redhat.com>
To: "Michael S. Tsirkin" <mst@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, zhiyong.yang@intel.com,
Maxime Coquelin <maxime.coquelin@redhat.com>,
Wei Wang <wei.w.wang@intel.com>
Subject: Re: [Qemu-devel] [RFC 0/2] virtio-vhost-user: add virtio-vhost-user device
Date: Tue, 23 Jan 2018 18:01:15 +0800
Message-ID: <b0568f2d-928c-5124-757e-fe5b706c4df7@redhat.com>
In-Reply-To: <20180122215348-mutt-send-email-mst@kernel.org>
On 2018-01-23 04:04, Michael S. Tsirkin wrote:
> On Mon, Jan 22, 2018 at 12:17:51PM +0000, Stefan Hajnoczi wrote:
>> On Mon, Jan 22, 2018 at 11:33:46AM +0800, Jason Wang wrote:
>>> On 2018-01-19 21:06, Stefan Hajnoczi wrote:
>>>> These patches implement the virtio-vhost-user device design that I have
>>>> described here:
>>>> https://stefanha.github.io/virtio/vhost-user-slave.html#x1-2830007
>>> Thanks for the patches; this looks rather interesting and similar to the
>>> split device model used by Xen.
>>>
>>>> The goal is to let the guest act as the vhost device backend for other guests.
>>>> This allows virtual networking and storage appliances to run inside guests.
>>> So the question remains: what kind of protocol do you want to run on top? If
>>> it is Ethernet based, virtio-net works pretty well and it can even do
>>> migration.
>>>
>>>> This device is particularly interesting for poll mode drivers where exitless
>>>> VM-to-VM communication is possible, completely bypassing the hypervisor in the
>>>> data path.
>>> It's better to clarify the reason for bypassing the hypervisor (performance,
>>> security or scalability).
>> Performance - yes, definitely. Exitless VM-to-VM is the fastest
>> possible way to communicate between VMs. Today it can only be done
>> using ivshmem. This patch series allows virtio devices to take
>> advantage of it and will encourage people to use virtio instead of
>> non-standard ivshmem devices.
>>
>> Security - I don't think this feature is a security improvement. It
>> reduces isolation because VM1 has full shared memory access to VM2. In
>> fact, this is a reason for users to consider carefully whether they
>> even want to use this feature.
> True without an IOMMU; however, using a vIOMMU within VM2
> can protect VM2, can't it?
It's not clear to me how to do this. E.g. we need a way to report failures
back to VM2, or to raise a #PF?
>
>> Scalability - much for the same reasons as the Performance section
>> above. Bypassing the hypervisor eliminates scalability bottlenecks
>> (e.g. host network stack and bridge).
>>
>>> Probably not for the following cases:
>>>
>>> 1) kick/call
>> I disagree here because kick/call is actually very efficient!
>>
>> VM1's irqfd is the ioeventfd for VM2. When VM2 writes to the ioeventfd
>> there is a single lightweight vmexit which injects an interrupt into
>> VM1. QEMU is not involved and the host kernel scheduler is not involved
>> so this is a low-latency operation.
Right, it looks like I was wrong. But consider that irqfd may still need to do
a wakeup, which means the scheduler is still involved.
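For reference, here is a rough sketch of how I read that wiring through the raw
KVM API (illustration only, not code from the patches: the helper, the kick
address and the GSI are placeholders, and real code would presumably go through
QEMU's existing ioeventfd/irqfd plumbing rather than raw ioctls):

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Wire one eventfd as VM2's ioeventfd for its virtqueue kick register
     * and as VM1's irqfd, so a kick in VM2 becomes an interrupt in VM1.
     * vm1_fd/vm2_fd are KVM VM file descriptors. */
    static int wire_kick_to_irq(int vm2_fd, int vm1_fd,
                                __u64 kick_gpa, __u32 vm1_gsi)
    {
        int efd = eventfd(0, EFD_CLOEXEC);
        if (efd < 0)
            return -1;

        /* VM2 side: an MMIO write to kick_gpa signals efd via a
         * lightweight exit handled entirely inside KVM. */
        struct kvm_ioeventfd io = {
            .addr = kick_gpa,
            .len  = 4,
            .fd   = efd,
        };
        if (ioctl(vm2_fd, KVM_IOEVENTFD, &io) < 0)
            return -1;

        /* VM1 side: signalling efd injects interrupt vm1_gsi through
         * KVM's irqfd path - although, as noted above, delivery may
         * still have to wake a blocked vCPU. */
        struct kvm_irqfd irq = {
            .fd  = efd,
            .gsi = vm1_gsi,
        };
        return ioctl(vm1_fd, KVM_IRQFD, &irq);
    }
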
>> I haven't tested this yet but the ioeventfd code looks like this will
>> work.
>>
>>> 2) device IOTLB / IOMMU transaction (or any other case where the backend
>>> needs metadata from QEMU).
>> Yes, this is the big weakness of vhost-user in general. The IOMMU
>> feature doesn't offer good isolation
> I think that's an implementation issue, not a protocol issue.
>
>
>> and even when it does, performance
>> will be an issue.
> If the IOMMU mappings are dynamic - but they are mostly
> static with e.g. dpdk, right?
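To make the cost concrete: with the vhost-user IOMMU feature, every device
IOTLB miss in the backend is a message round trip through QEMU before the
buffer can be touched. A minimal backend-side sketch, assuming hypothetical
iotlb_lookup()/send_slave_msg()/wait_for_update() helpers (only struct
vhost_iotlb_msg and the MISS type come from <linux/vhost.h>):

    #include <linux/vhost.h>
    #include <stdint.h>

    /* Hypothetical helpers, declared only to keep the sketch self-contained. */
    extern void *iotlb_lookup(uint64_t iova, uint64_t size, uint8_t perm);
    extern void send_slave_msg(const struct vhost_iotlb_msg *msg);
    extern void wait_for_update(uint64_t iova);

    /* Translate a guest IOVA before touching a descriptor or buffer.
     * The slow path below is the round trip being discussed. */
    static void *iova_to_va(uint64_t iova, uint64_t size, uint8_t perm)
    {
        void *va = iotlb_lookup(iova, size, perm);
        if (va)
            return va;                 /* hit: no QEMU involvement */

        struct vhost_iotlb_msg miss = {
            .iova = iova,
            .size = size,
            .perm = perm,
            .type = VHOST_IOTLB_MISS,
        };
        send_slave_msg(&miss);         /* slave -> QEMU (miss)   */
        wait_for_update(iova);         /* QEMU -> slave (update) */
        return iotlb_lookup(iova, size, perm);
    }

With static mappings (e.g. DPDK) this would mostly happen once per region;
with dynamic mappings it lands on the data path.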
>
>
>>>> * Implement "Additional Device Resources over PCI" for shared memory,
>>>> doorbells, and notifications instead of hardcoding a BAR with magic
>>>> offsets into virtio-vhost-user:
>>>> https://stefanha.github.io/virtio/vhost-user-slave.html#x1-2920007
>>> Does this mean we need to standardize the vhost-user protocol first?
>> Currently the draft spec says:
>>
>> This section relies on definitions from the Vhost-user Protocol [1].
>>
>> [1] https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/interop/vhost-user.txt;hb=HEAD
>>
>> Michael: Is it okay to simply include this link?
>
> It is OK to include normative and non-normative references;
> they go in the introduction, and then you can refer to them
> anywhere in the document.
>
>
> I'm still reviewing the draft. At some level, this is a general tunnel
> feature: it can tunnel any protocol. That would be one way to
> isolate it.
Right, but that should not be the main motivation; consider that we can tunnel
any protocol on top of Ethernet too.
>
>>>> * Implement the VRING_KICK eventfd - currently vhost-user slaves must be poll
>>>> mode drivers.
>>>> * Optimize VRING_CALL doorbell with ioeventfd to avoid QEMU exit.
>>> The performance implications need to be measured. It looks to me that both
>>> kick and call will introduce more latency from the guest's point of view.
>> I described the irqfd + ioeventfd approach above. It should be faster
>> than virtio-net + bridge today.
>>
>>>> * vhost-user log feature
>>>> * UUID config field for stable device identification regardless of PCI
>>>> bus addresses.
>>>> * vhost-user IOMMU and SLAVE_REQ_FD feature
>>> So an assumption is that the VM implementing the vhost backend should be at
>>> least as secure as a vhost-user backend process on the host. Can we draw
>>> this conclusion?
>> Yes.
>>
>> Sadly the vhost-user IOMMU protocol feature does not provide isolation.
>> At the moment IOMMU is basically a layer of indirection (mapping) but
>> the vhost-user backend process still has full access to guest RAM :(.
> An important feature would be to do the isolation in QEMU.
> So we trust the QEMU running VM2, but not VM2 itself.
Agreed, we'd better not consider the VM to be as secure as QEMU.
>
>
>>> Btw, it's better to have some early numbers, e.g. what testpmd reports
>>> during forwarding.
>> I need to rely on others to do this (and many other things!) because
>> virtio-vhost-user isn't the focus of my work.
>>
>> These patches were written to demonstrate my suggestions for vhost-pci.
>> They were written at work but also on weekends, early mornings, and late
>> nights to avoid delaying Wei and Zhiyong's vhost-pci work too much.
Thanks a lot for the effort! If anyone wants to benchmark, I would expect a
comparison of the following three solutions:
1) vhost-pci
2) virtio-vhost-user
3) testpmd with two vhost-user ports (see the sketch below)
Performance numbers are really important to show the advantages of new ideas.
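For 3), something along these lines should be a reasonable host-side baseline
(core list, socket paths, memory size and the exact vdev syntax are
placeholders and depend on the DPDK version); each VM then attaches a
virtio-net device to one of the sockets via -netdev vhost-user:

    testpmd -l 0-3 -n 4 --no-pci --socket-mem 1024 \
            --vdev 'net_vhost0,iface=/tmp/vhost-user0' \
            --vdev 'net_vhost1,iface=/tmp/vhost-user1' \
            -- -i
    testpmd> set fwd io
    testpmd> start
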
>>
>> If this approach has merit then I hope others will take over and I'll
>> play a smaller role addressing some of the todo items and cleanups.
It looks to me that the advantages are 1) a generic virtio layer (vhost-pci can
achieve this too if necessary) and 2) some code reuse (the vhost PMD). And I'd
expect them to show similar performance results, considering there are no major
differences between them.
Thanks
>> Stefan
>