From: Jason Wang <jasowang@redhat.com>
To: Wei Wang <wei.w.wang@intel.com>, Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] vhost-pci and virtio-vhost-user
Date: Tue, 16 Jan 2018 13:33:13 +0800 [thread overview]
Message-ID: <dffa9693-18d8-6bb9-e024-0f137624494d@redhat.com> (raw)
In-Reply-To: <5A5C85DD.10705@intel.com>
On 2018-01-15 18:43, Wei Wang wrote:
> On 01/15/2018 04:34 PM, Jason Wang wrote:
>>
>>
>> On 2018-01-15 15:59, Wei Wang wrote:
>>> On 01/15/2018 02:56 PM, Jason Wang wrote:
>>>>
>>>>
>>>> On 2018-01-12 18:18, Stefan Hajnoczi wrote:
>>>>>
>>>>
>>>> I just fail to understand why we can't do software-defined
>>>> networking or storage with existing virtio devices/drivers (or are
>>>> there any shortcomings that force us to invent new infrastructure).
>>>>
>>>
>>> Existing virtio-net works through a central vSwitch on the host,
>>> which has the following disadvantages:
>>> 1) a long code/data path;
>>> 2) poor scalability; and
>>> 3) host CPU cycles sacrificed to switching.
>>
>> Please show me the numbers.
>
> Sure. For 64B packet transmission between two VMs: vhost-user reports
> ~6.8Mpps, and vhost-pci reports ~11Mpps, which is ~1.62x faster.
>
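As a quick sanity check of the ratio quoted above (an illustrative sketch; the Mpps figures are the ones reported in this thread, everything else is assumed):

```shell
# Check that ~11 Mpps vs ~6.8 Mpps really is a ~1.62x speedup, and what
# each rate means as a per-packet time budget for a 64B packet.
awk 'BEGIN {
    vhost_user = 6.8;   # reported vhost-user throughput, Mpps
    vhost_pci  = 11.0;  # reported vhost-pci throughput, Mpps
    printf "speedup: %.2fx\n", vhost_pci / vhost_user;
    # 1 Mpps = one packet per microsecond, so ns/packet = 1000 / Mpps
    printf "vhost-user: %.0f ns/pkt\n", 1000 / vhost_user;
    printf "vhost-pci:  %.0f ns/pkt\n", 1000 / vhost_pci;
}'
```

At these rates the backend has roughly 147 ns per packet with vhost-user and 91 ns with vhost-pci, which is why even small per-packet path-length differences show up so strongly at 64B.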
This result is incomplete. Many questions are still left open:
- What's the configuration of the vhost-user setup?
- What's the result for, e.g., 1500-byte packets?
- You said it improves scalability, but I can't draw that conclusion
from what you provide here.
- You suspect the long code/data path is the problem, but give no
latency numbers to prove it.
>
>>
>>>
>>> Vhost-pci solves the above issues by providing point-to-point
>>> communication between VMs. No matter what the control path finally
>>> looks like, the key point is that the data path is P2P between VMs.
>>>
>>> Best,
>>> Wei
>>>
>>>
>>
>> Well, I think I've pointed this out several times in replies to
>> previous versions. Both vhost-pci-net and virtio-net are Ethernet
>> devices, and an Ethernet device is certainly not tied to a central
>> vswitch. There are many methods and tricks that can be used to build
>> a point-to-point data path.
>
>
> Could you please show an existing example that makes virtio-net work
> without a host vswitch/bridge?
For vhost-user, it's as simple as a testpmd instance doing io forwarding
between two vhost ports. For the kernel data path, you can do even more
tricks: tc, bpf, or others.
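For illustration, the two setups mentioned above could look roughly like this (a sketch, not taken from the thread: the socket paths, core list, memory sizing, and tap device names are all placeholder assumptions, and the exact `--vdev` syntax varies across DPDK versions):

```shell
# testpmd with two vhost-user ports in io forwarding mode: packets
# entering one port are pushed straight out of the other, with no
# vswitch in between. Each VM's virtio-net device attaches to one socket.
testpmd -l 0-2 -n 4 --socket-mem 1024 --no-pci \
    --vdev 'net_vhost0,iface=/tmp/vhost-user0.sock' \
    --vdev 'net_vhost1,iface=/tmp/vhost-user1.sock' \
    -- -i --forward-mode=io

# A kernel-side variant of the same idea with tc: redirect everything
# arriving on one VM's tap device straight onto the other VM's tap.
tc qdisc add dev tap0 ingress
tc filter add dev tap0 parent ffff: matchall \
    action mirred egress redirect dev tap1
```

testpmd's io forwarding mode moves packets between the two ports without inspecting them, which is why it can stand in for a vswitch in a pure point-to-point topology.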
> Could you also share other p2p data path solutions that you have in
> mind? Thanks.
>
>
> Best,
> Wei
>
So my point still stands: both vhost-pci-net and virtio-net are Ethernet
devices, and any Ethernet device can connect to another directly,
without a switch. Saying that virtio-net cannot connect to another
device directly without a switch obviously makes no sense; it's purely a
network topology issue. Even if that is not a typical setup or
configuration, extending the existing backends is the first choice
unless you can prove there are design limitations in the existing
solutions.
Thanks