From: Stefan Hajnoczi <stefanha@gmail.com>
To: Wei Wang <wei.w.wang@intel.com>
Cc: "Stefan Hajnoczi" <stefanha@redhat.com>,
"Marc-André Lureau" <marcandre.lureau@gmail.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>
Subject: Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0
Date: Wed, 19 Apr 2017 16:24:05 +0100
Message-ID: <CAJSP0QWUFNH4ac7-1H5PSp_psByCvXH7gq_vzuqmXRmzXGo_ww@mail.gmail.com>
In-Reply-To: <58F73F22.50108@intel.com>
On Wed, Apr 19, 2017 at 11:42 AM, Wei Wang <wei.w.wang@intel.com> wrote:
> On 04/19/2017 05:57 PM, Stefan Hajnoczi wrote:
>> On Wed, Apr 19, 2017 at 06:38:11AM +0000, Wang, Wei W wrote:
>>>
>>> We made some design changes to the original vhost-pci design, and want
>>> to open a discussion about the latest design (labelled 2.0) and its
>>> extension (2.1).
>>> 2.0 design: One VM shares the entire memory of another VM.
>>> 2.1 design: One VM uses an intermediate memory shared with another VM
>>> for packet transmission.
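For concreteness, here is a minimal host-side sketch (not taken from the RFC;
the region name and size are made up) of the kind of intermediate shared-memory
region the 2.1 design describes, using plain POSIX shared memory:

/* Minimal sketch: create and map a shared region of the kind the 2.1
 * design would place between two VMs.  Both QEMU instances would map
 * the same region; the sender copies packets in and the receiver copies
 * them out (one copy per direction, unlike the zero-copy 2.0 design). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *name = "/vhost-pci-demo";      /* hypothetical name */
    const size_t size = 16 * 1024 * 1024;      /* 16 MiB, illustrative */

    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    memset(p, 0, size);                        /* region is now usable */
    printf("mapped %zu bytes at %p\n", size, p);

    munmap(p, size);
    close(fd);
    shm_unlink(name);
    return 0;
}

In QEMU terms the same effect comes from backing guest RAM with a shareable
memory backend (e.g. memory-backend-file with share=on), which is how
vhost-user already exposes guest memory today.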
>>
>> Hi,
>> Can you talk a bit about the motivation for the 2.x design and major
>> changes compared to 1.x?
>
>
> 1.x refers to the design we presented at the KVM Forum before. The major
> changes include:
> 1) inter-VM notification support
> 2) a TX engine and an RX engine, which are structures built in the driver.
> From the device's point of view, the local rings of the engines need to be
> registered.
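A purely illustrative C sketch of what such a driver-side engine with a local
ring might look like; none of these names or fields come from the RFC:

#include <stdint.h>

/* Hypothetical descriptor: where a buffer lives and how long it is.
 * In the 2.0 design the address could be a guest-physical address in
 * the peer VM; in 2.1 it would be an offset into the shared region. */
struct vpci_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;
    uint16_t next;              /* index of a chained descriptor, if any */
};

/* Hypothetical engine: the local ring the driver builds and then
 * registers with the device so the peer side can find and walk it. */
struct vpci_engine {
    uint16_t size;              /* number of descriptors */
    uint16_t head;              /* producer index */
    uint16_t tail;              /* consumer index */
    struct vpci_desc *ring;     /* the local ring to be registered */
};

Inter-VM notification would then be a doorbell telling the peer that the
producer index has moved, e.g. built on the existing eventfd/irqfd mechanisms.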
It would be great to support any virtio device type.
The use case I'm thinking of is networking and storage appliances in
cloud environments (e.g. OpenStack). vhost-user doesn't fit nicely
because users may not be allowed to run host userspace processes. VMs
are first-class objects in compute clouds. It would be natural to
deploy networking and storage appliances as VMs using vhost-pci.
In order to achieve this, vhost-pci needs to be a virtio transport and
not a virtio-net-specific PCI device. It would extend the VIRTIO 1.x
spec alongside virtio-pci, virtio-mmio, and virtio-ccw.
When you say TX and RX, I'm not sure whether the design supports only
virtio-net devices?
> The motivation is to build a common design for 2.0 and 2.1.
>
>>
>> What is the relationship between 2.0 and 2.1? Do you plan to upstream
>> both?
>
> 2.0 and 2.1 use different ways to share memory.
>
> 2.0: VM1 shares the entire memory of VM2, which achieves zero-copy
> transmission between VMs but is less secure.
> 2.1: VM1 and VM2 use an intermediate shared memory region to transmit
> packets, which requires one copy between VMs but is more secure.
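To make the zero-copy vs. one-copy distinction concrete, a rough sketch
(illustrative only; the function names are made up, not part of either design):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* 2.0: the vhost-pci VM has the peer's entire RAM mapped, so a packet
 * can be read in place at its offset -- no copy. */
const void *rx_zero_copy(const uint8_t *peer_mem, size_t pkt_off)
{
    return peer_mem + pkt_off;
}

/* 2.1: only an intermediate region is shared, so the sender must first
 * copy the packet into it -- one copy, but the peer's RAM stays private. */
void tx_one_copy(uint8_t *shared, size_t off, const void *pkt, size_t len)
{
    memcpy(shared + off, pkt, len);
}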
>
> Yes, we plan to upstream both. Since the only difference is the way memory
> is shared, upstreaming 2.1 shouldn't require many patches once 2.0 is ready
> (or the other way around, if the order changes).
Okay. "Asymmetric" (vhost-pci <-> virtio-pci) and "symmetric"
(vhost-pci <-> vhost-pci) mode might be a clearer way to distinguish
between the two. Or even "compatibility" mode and "native" mode since
existing guests only work in vhost-pci <-> virtio-pci mode. Using
version numbers to describe two different modes of operation could be
confusing.
Stefan
Thread overview: 27+ messages
2017-04-19 6:38 [Qemu-devel] Vhost-pci RFC2.0 Wang, Wei W
2017-04-19 7:31 ` Marc-André Lureau
2017-04-19 8:33 ` Wei Wang
2017-04-19 7:35 ` Jan Kiszka
2017-04-19 8:42 ` Wei Wang
2017-04-19 8:49 ` [Qemu-devel] [virtio-dev] " Jan Kiszka
2017-04-19 9:09 ` Wei Wang
2017-04-19 9:31 ` Jan Kiszka
2017-04-19 10:02 ` Wei Wang
2017-04-19 10:36 ` Jan Kiszka
2017-04-19 11:11 ` Wei Wang
2017-04-19 11:21 ` Jan Kiszka
2017-04-19 14:33 ` Wang, Wei W
2017-04-19 14:52 ` Jan Kiszka
2017-04-20 6:51 ` Wei Wang
2017-04-20 7:05 ` Jan Kiszka
2017-04-20 8:58 ` Wei Wang
2017-04-19 9:57 ` [Qemu-devel] [virtio-dev] " Stefan Hajnoczi
2017-04-19 10:42 ` Wei Wang
2017-04-19 15:24 ` Stefan Hajnoczi [this message]
2017-04-20 5:51 ` Wei Wang
2017-05-02 12:48 ` Stefan Hajnoczi
2017-05-03 6:02 ` Wei Wang
2017-05-05 4:05 ` Jason Wang
2017-05-05 6:18 ` Wei Wang
2017-05-05 9:18 ` Jason Wang
2017-05-08 1:39 ` Wei Wang