From: Jason Wang <jasowang@redhat.com>
To: "Yang, Zhiyong" <zhiyong.yang@intel.com>,
	Stefan Hajnoczi <stefanha@redhat.com>
Cc: "Wang, Wei W" <wei.w.wang@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Mon, 15 Jan 2018 11:09:46 +0800
Message-ID: <a92a5335-005e-0d54-869d-0f01106c37df@redhat.com>
In-Reply-To: <E182254E98A5DA4EB1E657AC7CB9BD2A8B023E3B@BGSMSX101.gar.corp.intel.com>



On 2018-01-12 13:20, Yang, Zhiyong wrote:
>>> Both vhost-pci and virtio-vhost-user work using shared memory access
>>> to the guest RAM of the other VM.  Therefore they can poll virtqueues
>>> and avoid vmexits.  They also support cross-VM interrupts, thanks to
>>> QEMU setting up irqfd/ioeventfd appropriately on the host.
>>>
>>> Stefan
>> So in conclusion, considering the complexity, I would suggest figuring
>> out whether this (either vhost-pci or virtio-vhost-user) is really
>> required before moving ahead. E.g. for a VM2VM direct network path,
>> this looks simply like a question of network topology rather than a
>> problem of the device, so there are plenty of tricks available: with
>> vhost-user one can easily imagine writing an application (or using
>> testpmd) to build a zero-copied VM2VM datapath. Isn't that sufficient
>> for this case?
> As far as I know, the dequeue zero-copy feature of the vhost-user PMD can't help improve throughput for small packets such as 64 bytes.
> On the contrary, it causes a perf drop.  The feature mainly helps throughput for large packets.

Can you explain why? And what are the numbers for:

1) 64B/1500B zerocopy
2) 64B/1500B datacopy
3) 64B/1500B vhost-pci

This makes me feel that vhost-pci is mainly aimed at small packets. We 
probably don't want a solution for just one specific packet size.
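
For 1) and 2), I guess comparable numbers could be collected by simply 
toggling the dequeue-zero-copy devarg of the vhost PMD on the host side 
(assuming a DPDK build that exposes it); something along these lines, 
where socket paths, core list and queue count are just placeholders:

  # host side: testpmd bridges the two VMs' vhost-user ports on one core;
  # drop dequeue-zero-copy=1 to get the data-copy case
  testpmd -l 1-2 -n 4 --no-pci --socket-mem 1024 \
      --vdev 'net_vhost0,iface=/tmp/vhost-user0,queues=1,dequeue-zero-copy=1' \
      --vdev 'net_vhost1,iface=/tmp/vhost-user1,queues=1,dequeue-zero-copy=1' \
      -- -i --nb-cores=1 --forward-mode=io

and then measuring 64B and 1500B traffic between the virtio PMDs in the 
two guests. For 3) the numbers would have to come from the vhost-pci 
setup itself.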

>     
> Vhost-pci can bring the following advantages compared to the traditional solution (vhost/virtio PMD pairs):
> 1. Higher throughput for the two VMs. Consider the following case: much as with NIC passthrough to the two VMs, vhost-pci RX or TX is handled on a single core in VM1, and the virtio PMD on VM2 is similar;
> each single core only handles RX or TX.
> For the traditional solution, besides the virtio PMD running inside each VM, at least one extra core is needed for vhost-user RX and TX as a mediator.
> In this case, the bottleneck lies in the two vhost-user ports running on one single core, which carries double the workload.

Does this still hold for packet sizes other than 64 bytes (e.g. 1500B)?
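
Just to be sure we are comparing the same topology: in the traditional 
setup I have in mind, each VM attaches to one of the host-side vhost-user 
ports through shared guest memory, e.g. for VM1 something like this 
(memory size, IDs and socket path are just placeholders):

  qemu-system-x86_64 -m 2G \
      -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=char0,path=/tmp/vm1.sock \
      -netdev type=vhost-user,id=net0,chardev=char0 \
      -device virtio-net-pci,netdev=net0

plus the usual machine/disk options, while the host runs OVS or testpmd 
with the two vhost-user ports, which is where the extra core you mention 
goes.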

>   
>
> 2. Lower latency (a shorter data path than the traditional solution; packets no longer need to pass through the host OS via vhost-user).

Is this still true if you do busy polling on both sides?

>
> 3. Nearly 50% fewer cores, because OVS is no longer involved if we apply vhost-pci/virtio to the VM-chaining case.

Well, the difference to me is copy in the guest vs. copy in the host.

- vhost-pci moves the copy from the host process to the PMD in the guest; it 
probably saves cores but sacrifices the performance of the PMD, which now 
has to do the copy
- the existing OVS may occupy more cores in the host, but it preserves the 
capacity of the guest PMD

From the performance point of view, it looks to me like copying in the host 
is faster since it has less overhead, e.g. fewer vmexits. Vhost-pci probably 
needs more vCPUs to compete with the current solution.

Thanks

>
> Thanks
> Zhiyong
>
