From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: Wei Wang <wei.w.wang@intel.com>,
stefanha@gmail.com, marcandre.lureau@gmail.com,
pbonzini@redhat.com, virtio-dev@lists.oasis-open.org,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Fri, 19 May 2017 19:49:46 +0300
Message-ID: <20170519194544-mutt-send-email-mst@kernel.org>
In-Reply-To: <d7df5484-7e08-4e1c-9e35-69d73858c736@redhat.com>
On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>
>
> > On 2017-05-18 11:03, Wei Wang wrote:
> > On 05/17/2017 02:22 PM, Jason Wang wrote:
> > >
> > >
> > > > On 2017-05-17 14:16, Jason Wang wrote:
> > > >
> > > >
> > > > > On 2017-05-16 15:12, Wei Wang wrote:
> > > > > > >
> > > > > >
> > > > > > Hi:
> > > > > >
> > > > > > Care to post the driver code too?
> > > > > >
> > > > > OK. It may take some time to clean up the driver code before
> > > > > posting it. In the meantime, you can take a look at the draft
> > > > > in the repo here:
> > > > > https://github.com/wei-w-wang/vhost-pci-driver
> > > > >
> > > > > Best,
> > > > > Wei
> > > >
> > > > Interesting, looks like there's one copy on the tx side. We used
> > > > to have zerocopy support in tun for VM2VM traffic. Could you
> > > > please try to compare it with your vhost-pci-net by:
> > > >
> > We can analyze the whole data path - from VM1's network stack
> > sending packets to VM2's network stack receiving them. The number of
> > copies is actually the same for both.
>
> That's why I'm asking you to compare the performance. The only reason for
> vhost-pci is performance. You should prove it.
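>
> A rough sketch of one way to get numbers (the tool names here are only
> examples; this assumes the two VMs already have connectivity over the
> datapath under test):
>
>   # in VM2: start a server
>   iperf3 -s
>
>   # in VM1: measure bulk throughput for 30 seconds
>   iperf3 -c <VM2 address> -t 30
>
>   # in VM1: measure request/response latency
>   netperf -H <VM2 address> -t TCP_RR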
>
> >
> > vhost-pci: 1 copy happens in VM1's driver xmit(), which copies
> > packets from its network stack to VM2's RX ring buffer. (We call it
> > "zerocopy" because there is no intermediate copy between the VMs.)
> > zerocopy-enabled vhost-net: 1 copy happens in tun's recvmsg, which
> > copies packets from VM1's TX ring buffer to VM2's RX ring buffer.
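> >
> > To spell out the two paths (one copy each, as described above):
> >
> >   vhost-pci:          VM1 stack --[copy in xmit()]--> VM2 RX ring --> VM2 stack
> >   zerocopy vhost-net: VM1 TX ring --[copy in tun recvmsg]--> VM2 RX ring --> VM2 stack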
>
> Actually, there's a major difference here. You do the copy in the
> guest, which consumes time slices of the vcpu thread on the host.
> Vhost_net does the copy in its own host thread. So I feel vhost_net
> may even be faster here, but maybe I'm wrong.
Yes, but only if you have enough host CPUs. The point of vhost-pci is
to put the switch in a VM and scale better with the number of VMs.
> >
> > That being said, we compared to vhost-user, instead of vhost_net,
> > because vhost-user is the one
> > that is used in NFV, which we think is a major use case for vhost-pci.
>
> If this is true, why not draft a PMD driver instead of a kernel one?
> And did you use the virtio-net kernel driver to compare the
> performance? If yes, has OVS-DPDK been optimized for the kernel driver
> (I think not)?
>
> More importantly, if vhost-pci is faster, I think its kernel driver
> should also be faster than virtio-net, no?
If you have a vhost CPU per VCPU and can give a host CPU to each, using
that will be faster. But not everyone has that many host CPUs.
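
As an aside, "giving a host CPU" to vhost can be approximated by
pinning the vhost worker kthread (named vhost-<qemu pid>) to a
dedicated core. A rough sketch, with the pid and core number as
placeholders:

  QEMU_PID=<qemu pid>                    # placeholder
  VHOST_TID=$(pgrep "vhost-$QEMU_PID")   # the vhost worker kthread
  taskset -pc 3 "$VHOST_TID"             # dedicate host CPU 3 to it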
> >
> >
> > > > - make sure zerocopy is enabled for vhost_net
> > > > - comment skb_orphan_frags() in tun_net_xmit()
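> > > >
> > > > For the first step, something like the following should work
> > > > (assuming vhost_net is built as a module; the parameter is
> > > > read-only at runtime, so it must be set at load time):
> > > >
> > > >   modprobe -r vhost_net
> > > >   modprobe vhost_net experimental_zcopytx=1
> > > >   # verify:
> > > >   cat /sys/module/vhost_net/parameters/experimental_zcopytx
> > > >
> > > > The second step is a one-line source edit: comment out the
> > > > skb_orphan_frags() call in tun_net_xmit() in drivers/net/tun.c
> > > > and rebuild the tun module.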
> > > >
> > > > Thanks
> > > >
> > >
> > > You can even enable tx batching for tun via "ethtool -C tap0
> > > rx-frames N". This greatly improves performance according to my
> > > tests.
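> > >
> > > For example (N=64 is only an example value; tune it for your
> > > workload, and set rx-frames back to 0 to disable batching):
> > >
> > >   ethtool -C tap0 rx-frames 64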
> > >
> >
> > Thanks, but would this hurt latency?
> >
> > Best,
> > Wei
>
> I didn't see any latency impact in my tests.
>
> Thanks
Thread overview: 52+ messages
2017-05-12 8:35 [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 01/16] vhost-user: share the vhost-user protocol related structures Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 02/16] vl: add the vhost-pci-slave command line option Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 03/16] vhost-pci-slave: create a vhost-user slave to support vhost-pci Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 04/16] vhost-pci-net: add vhost-pci-net Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 05/16] vhost-pci-net-pci: add vhost-pci-net-pci Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 06/16] virtio: add inter-vm notification support Wei Wang
2017-05-15 0:21 ` [Qemu-devel] [virtio-dev] " Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 07/16] vhost-user: send device id to the slave Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 08/16] vhost-user: send guest physical address of virtqueues " Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 09/16] vhost-user: send VHOST_USER_SET_VHOST_PCI_START/STOP Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 10/16] vhost-pci-net: send the negotiated feature bits to the master Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 11/16] vhost-user: add asynchronous read for the vhost-user master Wei Wang
2017-05-12 8:51 ` Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 12/16] vhost-user: handling VHOST_USER_SET_FEATURES Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 13/16] vhost-pci-slave: add "reset_virtio" Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 14/16] vhost-pci-slave: add support to delete a vhost-pci device Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 15/16] vhost-pci-net: tell the driver that it is ready to send packets Wei Wang
2017-05-12 8:35 ` [Qemu-devel] [PATCH v2 16/16] vl: enable vhost-pci-slave Wei Wang
2017-05-12 9:30 ` [Qemu-devel] [PATCH v2 00/16] Vhost-pci for inter-VM communication no-reply
2017-05-16 15:21 ` Michael S. Tsirkin
2017-05-16 6:46 ` Jason Wang
2017-05-16 7:12 ` [Qemu-devel] [virtio-dev] " Wei Wang
2017-05-17 6:16 ` Jason Wang
2017-05-17 6:22 ` Jason Wang
2017-05-18 3:03 ` Wei Wang
2017-05-19 3:10 ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-19 9:00 ` Wei Wang
2017-05-19 9:53 ` Jason Wang
2017-05-19 20:44 ` Michael S. Tsirkin
2017-05-23 11:09 ` Wei Wang
2017-05-23 15:15 ` Michael S. Tsirkin
2017-05-19 15:33 ` Stefan Hajnoczi
2017-05-22 2:27 ` Jason Wang
2017-05-22 11:46 ` Wang, Wei W
2017-05-23 2:08 ` Jason Wang
2017-05-23 5:47 ` Wei Wang
2017-05-23 6:32 ` Jason Wang
2017-05-23 10:48 ` Wei Wang
2017-05-24 3:24 ` Jason Wang
2017-05-24 8:31 ` Wei Wang
2017-05-25 7:59 ` Jason Wang
2017-05-25 12:01 ` Wei Wang
2017-05-25 12:22 ` Jason Wang
2017-05-25 12:31 ` [Qemu-devel] [virtio-dev] " Jason Wang
2017-05-25 17:57 ` Michael S. Tsirkin
2017-06-04 10:34 ` Wei Wang
2017-06-05 2:21 ` Michael S. Tsirkin
2017-05-25 14:35 ` [Qemu-devel] " Eric Blake
2017-05-26 4:26 ` Jason Wang
2017-05-19 16:49 ` Michael S. Tsirkin [this message]
2017-05-22 2:22 ` Jason Wang