From: Wei Wang <wei.w.wang@intel.com>
To: Jason Wang <jasowang@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Wed, 17 Jan 2018 16:44:45 +0800 [thread overview]
Message-ID: <5A5F0CFD.5030204@intel.com> (raw)
In-Reply-To: <dffa9693-18d8-6bb9-e024-0f137624494d@redhat.com>
On 01/16/2018 01:33 PM, Jason Wang wrote:
>
>
> On 2018年01月15日 18:43, Wei Wang wrote:
>> On 01/15/2018 04:34 PM, Jason Wang wrote:
>>>
>>>
>>> On 2018年01月15日 15:59, Wei Wang wrote:
>>>> On 01/15/2018 02:56 PM, Jason Wang wrote:
>>>>>
>>>>>
>>>>> On 2018年01月12日 18:18, Stefan Hajnoczi wrote:
>>>>>>
>>>>>
>>>>> I just fail to understand why we can't do software-defined network or
>>>>> storage with existing virtio devices/drivers (or are there any
>>>>> shortcomings that force us to invent new infrastructure).
>>>>>
>>>>
>>>> Existing virtio-net works through a centralized vSwitch on the host,
>>>> which has the following disadvantages:
>>>> 1) a long code/data path;
>>>> 2) poor scalability; and
>>>> 3) host CPU overhead
>>>
>>> Please show me the numbers.
>>
>> Sure. For 64B packet transmission between two VMs: vhost-user reports
>> ~6.8 Mpps, while vhost-pci reports ~11 Mpps, i.e. ~1.62x faster.
>>
>
> This result is kind of incomplete. So still many questions left:
>
> - What's the configuration of the vhost-user?
> - What's the result for, e.g., 1500-byte packets?
> - You said it improves scalability, but I can't draw that
> conclusion just from what you provide here
> - You point to a long code/data path, but provide no latency numbers
> to prove it
>
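On the configuration question, a typical vhost-user guest setup looks like the sketch below. This is an illustrative example only (socket path, memory sizes, and IDs are hypothetical), not the exact configuration used for the measurements in this thread. vhost-user requires guest memory to be in a shared, file-backed region so the backend (e.g. ovs-dpdk or testpmd) can map the vrings:

```shell
# Illustrative vhost-user guest configuration: QEMU connects as the
# client to a unix socket owned by the DPDK backend. Paths are examples.
qemu-system-x86_64 \
    -machine accel=kvm -cpu host -smp 2 -m 2G \
    -object memory-backend-file,id=mem0,size=2G,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem0 \
    -chardev socket,id=char0,path=/tmp/vhost-user0.sock \
    -netdev type=vhost-user,id=net0,chardev=char0 \
    -device virtio-net-pci,netdev=net0
```

The share=on memory backend is what lets the vhost-user backend map guest RAM directly; without it the backend cannot see the descriptor rings.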
Had an offline meeting with Jason. The future discussion will focus
on the design.
Here is a summary of additional results we collected for 64B packet
transmission, compared to ovs-dpdk (although we compare against ovs-dpdk
here, vhost-pci isn't meant to replace it; vhost-pci is for inter-VM
communication, and packets going to the outside world still go through a
traditional backend such as ovs-dpdk):
1) 2-VM communication: over 1.6x higher throughput;
2) 22% lower latency;
3) in the 5-VM chain communication test, vhost-pci shows ~6.5x higher
throughput, thanks to its better scalability.
We'll provide 1500B test results later.
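As a quick sanity check, the 2-VM speedup follows directly from the Mpps figures quoted earlier in this thread; the script below only reproduces that arithmetic (the measured figures themselves come from our tests, not from this script):

```python
# Derive the reported speedups from the measured 64B figures.
vhost_user_mpps = 6.8   # VM-to-VM via the host vSwitch (vhost-user)
vhost_pci_mpps = 11.0   # direct VM-to-VM (vhost-pci)

speedup = vhost_pci_mpps / vhost_user_mpps
print(f"2-VM throughput speedup: {speedup:.2f}x")  # ~1.62x

# "22% shorter latency" means vhost-pci latency is 0.78x the baseline.
latency_ratio = 1.0 - 0.22
print(f"latency ratio vs. vhost-user: {latency_ratio:.2f}")
```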
Best,
Wei
Thread overview: 27+ messages
2018-01-10 16:14 [Qemu-devel] vhost-pci and virtio-vhost-user Stefan Hajnoczi
2018-01-11 6:31 ` Wei Wang
2018-01-11 9:56 ` Stefan Hajnoczi
2018-01-12 6:44 ` Wei Wang
2018-01-12 10:37 ` Stefan Hajnoczi
2018-01-14 3:36 ` Wang, Wei W
2018-01-15 14:02 ` Stefan Hajnoczi
2018-01-11 10:57 ` Jason Wang
2018-01-11 15:23 ` Stefan Hajnoczi
2018-01-12 3:32 ` Jason Wang
2018-01-12 5:20 ` Yang, Zhiyong
2018-01-15 3:09 ` Jason Wang
2018-01-12 10:18 ` Stefan Hajnoczi
2018-01-15 6:56 ` Jason Wang
2018-01-15 7:59 ` Wei Wang
2018-01-15 8:34 ` Jason Wang
2018-01-15 10:43 ` Wei Wang
2018-01-16 5:33 ` Jason Wang
2018-01-17 8:44 ` Wei Wang [this message]
2018-01-15 13:56 ` Stefan Hajnoczi
2018-01-16 5:41 ` Jason Wang
2018-01-18 10:51 ` Stefan Hajnoczi
2018-01-18 11:51 ` Jason Wang
2018-01-19 17:20 ` Stefan Hajnoczi
2018-01-22 3:54 ` Jason Wang
2018-01-23 11:52 ` Stefan Hajnoczi
2018-01-15 7:56 ` Wei Wang