From: Stefan Hajnoczi <stefanha@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: wei.w.wang@intel.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Thu, 11 Jan 2018 15:23:45 +0000
Message-ID: <20180111152345.GA7353@stefanha-x1.localdomain>
In-Reply-To: <dcb849d4-82fb-de8a-5e81-23d47026c9eb@redhat.com>
On Thu, Jan 11, 2018 at 06:57:03PM +0800, Jason Wang wrote:
>
>
> > On 2018-01-11 00:14, Stefan Hajnoczi wrote:
> > Hi Wei,
> > I wanted to summarize the differences between the vhost-pci and
> > virtio-vhost-user approaches because previous discussions may have been
> > confusing.
> >
> > vhost-pci defines a new virtio device type for each vhost device type
> > (net, scsi, blk). It therefore requires a virtio device driver for each
> > device type inside the slave VM.
> >
> > Adding a new device type requires:
> > 1. Defining a new virtio device type in the VIRTIO specification.
> > 2. Implementing a new QEMU device model.
> > 3. Implementing a new virtio driver.
> >
> > virtio-vhost-user is a single virtio device that acts as a vhost-user
> > protocol transport for any vhost device type. It requires one virtio
> > driver inside the slave VM and device types are implemented using
> > existing vhost-user slave libraries (librte_vhost in DPDK and
> > libvhost-user in QEMU).
> >
> > Adding a new device type to virtio-vhost-user involves:
> > 1. Adding any new vhost-user protocol messages to the QEMU
> > virtio-vhost-user device model.
> > 2. Adding any new vhost-user protocol messages to the vhost-user slave
> > library.
> > 3. Implementing the new device slave.
> >
> > The simplest case is when no new vhost-user protocol messages are
> > required for the new device. Then all that's needed for
> > virtio-vhost-user is a device slave implementation (#3). That slave
> > implementation will also work with AF_UNIX because the vhost-user slave
> > library hides the transport (AF_UNIX vs virtio-vhost-user). Even
> > better, if another person has already implemented that device slave to
> > use with AF_UNIX then no new code is needed for virtio-vhost-user
> > support at all!
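
As a rough illustration (a minimal sketch, not a complete slave): a
skeleton written against DPDK's librte_vhost looks like the code below.
The datapath is a stub, error handling is minimal, and how the transport
is selected is outside the sketch, but note that nothing in the code
mentions AF_UNIX or virtio-vhost-user -- the library hides that.

  /* Minimal vhost-user slave skeleton using DPDK's librte_vhost.
   * Illustrative only: the callbacks just log; a real slave would
   * start/stop its datapath threads here.
   */
  #include <stdio.h>
  #include <unistd.h>
  #include <rte_eal.h>
  #include <rte_vhost.h>

  static int new_device(int vid)
  {
      printf("vhost device %d is ready\n", vid);
      return 0;
  }

  static void destroy_device(int vid)
  {
      printf("vhost device %d went away\n", vid);
  }

  static const struct vhost_device_ops ops = {
      .new_device     = new_device,
      .destroy_device = destroy_device,
  };

  int main(int argc, char **argv)
  {
      const char *path = "/tmp/vhost-user-net.sock"; /* illustrative */

      if (rte_eal_init(argc, argv) < 0)
          return 1;

      if (rte_vhost_driver_register(path, 0) != 0 ||
          rte_vhost_driver_callback_register(path, &ops) != 0 ||
          rte_vhost_driver_start(path) != 0) {
          fprintf(stderr, "failed to set up vhost-user slave\n");
          return 1;
      }

      /* The library runs the vhost-user protocol; the application
       * only sees the new_device/destroy_device callbacks. */
      pause();
      return 0;
  }
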
> >
> > If you compare this to vhost-pci, it would be necessary to design a new
> > virtio device, implement it in QEMU, and implement the virtio driver.
> > Much of the virtio driver is more or less the same thing as the vhost-user
> > device slave but it cannot be reused because the vhost-user protocol
> > isn't being used by the virtio device. The result is a lot of
> > duplication in DPDK and other codebases that implement vhost-user
> > slaves.
> >
> > The way that vhost-pci is designed means that anyone wishing to support
> > a new device type has to become a virtio device designer. They need to
> > map vhost-user protocol concepts to a new virtio device type. This will
> > be time-consuming for everyone involved (e.g. the developer, the VIRTIO
> > community, etc).
> >
> > The virtio-vhost-user approach stays at the vhost-user protocol level as
> > much as possible. This way there are fewer concepts that need to be
> > mapped by people adding new device types. As a result, it will allow
> > virtio-vhost-user to keep up with AF_UNIX vhost-user and grow because
> > it's easier to work with.
> >
> > What do you think?
> >
> > Stefan
>
> So a question is what's the motivation here?
>
> From what I understand, vhost-pci tries to build a scalable V2V private
> datapath. But according to what you describe here, virtio-vhost-user tries
> to make it possible to implement the device inside another VM. I understand
> the goal of vhost-pci could be done on top, but it then looks rather similar
> to the design of the Xen driver domain. So I cannot figure out how it can be
> done in a high-performance way.
vhost-pci and virtio-vhost-user have the same goal: they allow a VM to
implement a vhost device (net, scsi, blk, etc.). This lets software-defined
network or storage appliances running inside a VM provide I/O services to
other VMs. To the other VMs, the devices look like regular virtio devices.
I'm not sure I understand your reference to the Xen driver domain or to
performance. Both vhost-pci and virtio-vhost-user work via shared memory
access to the other VM's guest RAM. Therefore they can poll virtqueues and
avoid vmexits. They also support cross-VM interrupts, thanks to QEMU setting
up irqfd/ioeventfd appropriately on the host.
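
To make the polling point concrete, here is a rough sketch (the names and
the stop flag are illustrative; it assumes the other VM's split-virtqueue
memory has already been mapped into the slave's address space, which is
what vhost-pci/virtio-vhost-user arrange):

  /* Busy-poll the avail index of a split virtqueue that lives in the
   * other VM's RAM.  No doorbell write, so no vmexit on the hot path;
   * descriptor/used-ring handling is omitted.
   */
  #include <stdint.h>
  #include <stdatomic.h>
  #include <stdbool.h>

  struct vring_avail {
      uint16_t flags;
      uint16_t idx;     /* written by the driver in the other VM */
      uint16_t ring[];
  };

  static void poll_virtqueue(volatile struct vring_avail *avail,
                             uint16_t *last_seen_idx, const bool *running)
  {
      while (*running) {
          uint16_t idx = avail->idx;        /* plain shared-memory read */

          if (idx == *last_seen_idx)
              continue;                     /* nothing new, keep spinning */

          /* Make sure we read idx before the descriptors it covers. */
          atomic_thread_fence(memory_order_acquire);

          /* Process avail->ring[*last_seen_idx % queue_size] up to idx-1,
           * fill in the used ring, and only signal the irqfd if the
           * driver asked for an interrupt. */
          *last_seen_idx = idx;
      }
  }
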
Stefan