From: Jason Wang <jasowang@redhat.com>
To: "Wang, Wei W" <wei.w.wang@intel.com>,
"Marc-André Lureau" <marcandre.lureau@gmail.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Stefan Hajnoczi" <stefanha@gmail.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>
Subject: Re: [Qemu-devel] [virtio-dev] Vhost-pci RFC2.0
Date: Fri, 5 May 2017 12:05:06 +0800
Message-ID: <5ec930ef-82e1-85ee-71bd-2d3f1b554a68@redhat.com>
In-Reply-To: <286AC319A985734F985F78AFA26841F7391EF490@shsmsx102.ccr.corp.intel.com>
On 04/19/2017 14:38, Wang, Wei W wrote:
> Hi,
> We made some design changes to the original vhost-pci design, and want
> to open a discussion about the latest design (labelled 2.0) and its
> extension (2.1).
> 2.0 design: One VM shares the entire memory of another VM
> 2.1 design: One VM uses an intermediate memory shared with another VM for
> packet transmission.
> For the convenience of discussion, I have some pictures presented at
> this link:
> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost-pci-rfc2.0.pdf
Hi, is there any doc or pointer that describes the design in detail?
E.g. patch 4 in v1:
https://lists.gnu.org/archive/html/qemu-devel/2016-05/msg05163.html
Thanks
> Fig. 1 shows the common driver frame that we want to use to build the
> 2.0 and 2.1 designs. A TX/RX engine consists of a local ring and an
> exotic ring.
> Local ring:
> 1) allocated by the driver itself;
> 2) registered with the device (i.e., via virtio_add_queue()).
> Exotic ring:
> 1) ring memory comes from outside the driver, and is exposed to the
> driver via a BAR MMIO;
> 2) not registered with the device, so no ioeventfd/irqfd or
> configuration registers are allocated in the device.
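For my own understanding, here is a minimal sketch of how I picture the
two ring types on the device side in QEMU. Everything named here
(VhostPciNet, the exotic_* fields, vpnet_handle_rx, the queue size) is
an illustrative placeholder, not taken from your draft patches:

    #include "qemu/osdep.h"
    #include "hw/pci/pci.h"
    #include "hw/virtio/virtio.h"

    static void vpnet_setup_rings(VhostPciNet *vpnet)
    {
        VirtIODevice *vdev = VIRTIO_DEVICE(vpnet);

        /* Local ring: allocated/registered the normal virtio way, so
         * the device sets up ioeventfd/irqfd and per-queue registers
         * for it. */
        vpnet->local_rx = virtio_add_queue(vdev, 256, vpnet_handle_rx);

        /* Exotic ring: the memory comes from outside the driver; the
         * device only maps it into a BAR. There is no
         * virtio_add_queue() call, hence no ioeventfd/irqfd or
         * per-queue registers. */
        memory_region_init_ram_ptr(&vpnet->exotic_bar, OBJECT(vpnet),
                                   "vhost-pci-exotic-ring",
                                   vpnet->exotic_size,
                                   vpnet->exotic_mem);
        pci_register_bar(PCI_DEVICE(vpnet), 2,
                         PCI_BASE_ADDRESS_SPACE_MEMORY |
                         PCI_BASE_ADDRESS_MEM_PREFETCH,
                         &vpnet->exotic_bar);
    }

Is that roughly the intended split?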
> Fig. 2 shows how the driver frame is used to build the 2.0 design.
> 1) Asymmetric: vhost-pci-net <-> virtio-net
> 2) VM1 shares the entire memory of VM2, and the exotic rings are the rings
> from VM2.
> 3) Performance (in terms of copies between VMs):
> TX: 0-copy (packets are put into VM2's RX ring directly)
> RX: 1-copy (the green arrow line in the VM1’s RX engine)
> Fig. 3 shows how the driver frame is used to build the 2.1 design.
> 1) Symmetric: vhost-pci-net <-> vhost-pci-net
> 2) The two VMs share an intermediate memory, allocated by VM1's
> vhost-pci device, for data exchange; the exotic rings are built on
> this shared memory
> 3) Performance:
> TX: 1-copy
> RX: 1-copy
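If I read Fig. 3 correctly, the intermediate memory could be backed by
something like a memfd that VM1's QEMU allocates and whose fd is then
passed to VM2's QEMU over a unix socket. A rough host-side sketch of
what I have in mind (my assumption, not taken from your draft code):

    #include <stddef.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* VM1 side: allocate the shared region that will back the exotic
     * rings; the returned fd is what would be handed to VM2's QEMU. */
    static int alloc_intermediate_region(size_t size, void **map)
    {
        int fd = syscall(SYS_memfd_create, "vhost-pci-2.1", 0);

        if (fd < 0) {
            return -1;
        }
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return -1;
        }
        *map = mmap(NULL, size, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 0);
        return *map == MAP_FAILED ? -1 : fd;
    }

Or do you plan to reuse the existing memory-backend machinery for this?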
> Fig. 4 shows the inter-VM notification path for 2.0 (2.1 is similar).
> The four eventfds are allocated by virtio-net, and shared with
> vhost-pci-net:
> virtio-net's TX/RX kickfd is used as vhost-pci-net's RX/TX callfd;
> virtio-net's TX/RX callfd is used as vhost-pci-net's RX/TX kickfd.
> Example of how it works:
> After packets are put into vhost-pci-net's TX ring, the driver kicks
> TX, which causes an interrupt associated with fd3 to be injected into
> virtio-net.
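Just to check my understanding of the wiring: only fd3's role is
spelled out above, so the fd0-fd2 assignments below are my guess, and
the code is purely illustrative:

    #include <sys/eventfd.h>

    int main(void)
    {
        /* All four eventfds are allocated on the virtio-net side. */
        int fd0 = eventfd(0, 0); /* virtio-net TX kickfd */
        int fd1 = eventfd(0, 0); /* virtio-net RX kickfd */
        int fd2 = eventfd(0, 0); /* virtio-net TX callfd */
        int fd3 = eventfd(0, 0); /* virtio-net RX callfd */

        /* The same fds are handed to vhost-pci-net with the roles
         * swapped:
         *   vhost-pci-net RX callfd = fd0 (virtio-net TX kickfd)
         *   vhost-pci-net TX callfd = fd1 (virtio-net RX kickfd)
         *   vhost-pci-net RX kickfd = fd2 (virtio-net TX callfd)
         *   vhost-pci-net TX kickfd = fd3 (virtio-net RX callfd)
         *
         * So a kick on vhost-pci-net's TX signals fd3, which fires
         * the irqfd that virtio-net registered for its RX queue. */
        (void)fd0; (void)fd1; (void)fd2; (void)fd3;
        return 0;
    }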
> The draft code of the 2.0 design is ready, and can be found here:
> Qemu: https://github.com/wei-w-wang/vhost-pci-device
> Guest driver: https://github.com/wei-w-wang/vhost-pci-driver
> We tested the 2.0 implementation using the Spirent packet
> generator to transmit 64B packets; the results show that the
> throughput of vhost-pci reaches around 1.8 Mpps, roughly twice
> that of the legacy OVS+DPDK setup.
Does this mean OVS+DPDK only reaches ~0.9 Mpps? It's a bit surprising
that the number is so low (I can get a similar result with a kernel
bridge).
Thanks
> Also, vhost-pci shows
> better scalability than OVS+DPDK.
> Best,
> Wei