From: Wei Wang <wei.w.wang@intel.com>
To: Jason Wang <jasowang@redhat.com>, Stefan Hajnoczi <stefanha@gmail.com>
Cc: "virtio-dev@lists.oasis-open.org"
<virtio-dev@lists.oasis-open.org>,
"pbonzini@redhat.com" <pbonzini@redhat.com>,
"marcandre.lureau@gmail.com" <marcandre.lureau@gmail.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
"mst@redhat.com" <mst@redhat.com>
Subject: Re: [Qemu-devel] [virtio-dev] Re: [virtio-dev] Re: [PATCH v2 00/16] Vhost-pci for inter-VM communication
Date: Thu, 25 May 2017 20:01:48 +0800
Message-ID: <5926C7AC.4080603@intel.com>
In-Reply-To: <23dac05e-ba3d-df6d-4831-feab9be1c6d2@redhat.com>

On 05/25/2017 03:59 PM, Jason Wang wrote:
>
>
> On 2017年05月24日 16:31, Wei Wang wrote:
>> On 05/24/2017 11:24 AM, Jason Wang wrote:
>>>
>>>
>>> On 2017年05月23日 18:48, Wei Wang wrote:
>>>> On 05/23/2017 02:32 PM, Jason Wang wrote:
>>>>>
>>>>>
>>>>> On 2017年05月23日 13:47, Wei Wang wrote:
>>>>>> On 05/23/2017 10:08 AM, Jason Wang wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 2017年05月22日 19:46, Wang, Wei W wrote:
>>>>>>>> On Monday, May 22, 2017 10:28 AM, Jason Wang wrote:
>>>>>>>>> On 2017年05月19日 23:33, Stefan Hajnoczi wrote:
>>>>>>>>>> On Fri, May 19, 2017 at 11:10:33AM +0800, Jason Wang wrote:
>>>>>>>>>>> On 2017年05月18日 11:03, Wei Wang wrote:
>>>>>>>>>>>> On 05/17/2017 02:22 PM, Jason Wang wrote:
>>>>>>>>>>>>> On 2017年05月17日 14:16, Jason Wang wrote:
>>>>>>>>>>>>>> On 2017年05月16日 15:12, Wei Wang wrote:
>>>>>>>>>>>>>>>> Hi:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Care to post the driver codes too?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> OK. It may take some time to clean up the driver code
>>>>>>>>>>>>>>> before posting it out. You can first take a look at the
>>>>>>>>>>>>>>> draft in the repo here:
>>>>>>>>>>>>>>> https://github.com/wei-w-wang/vhost-pci-driver
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Best,
>>>>>>>>>>>>>>> Wei
>>>>>>>>>>>>>> Interesting, looks like there's one copy on tx side. We
>>>>>>>>>>>>>> used to
>>>>>>>>>>>>>> have zerocopy support for tun for VM2VM traffic. Could
>>>>>>>>>>>>>> you please
>>>>>>>>>>>>>> try to compare it with your vhost-pci-net by:
>>>>>>>>>>>>>>
>>>>>>>>>>>> We can analyze the whole data path - from VM1's network
>>>>>>>>>>>> stack sending packets to VM2's network stack receiving
>>>>>>>>>>>> them. The number of copies is actually the same for both.
>>>>>>>>>>> That's why I'm asking you to compare the performance. The
>>>>>>>>>>> only reason
>>>>>>>>>>> for vhost-pci is performance. You should prove it.
>>>>>>>>>> There is another reason for vhost-pci besides maximum
>>>>>>>>>> performance:
>>>>>>>>>>
>>>>>>>>>> vhost-pci makes it possible for end-users to run networking
>>>>>>>>>> or storage
>>>>>>>>>> appliances in compute clouds. Cloud providers do not allow
>>>>>>>>>> end-users
>>>>>>>>>> to run custom vhost-user processes on the host so you need
>>>>>>>>>> vhost-pci.
>>>>>>>>>>
>>>>>>>>>> Stefan
>>>>>>>>> Then it has non-NFV use cases, and the question goes back to
>>>>>>>>> the performance comparison between vhost-pci and zerocopy
>>>>>>>>> vhost_net. If it does not perform better, it is less
>>>>>>>>> interesting, at least in this case.
>>>>>>>>>
>>>>>>>> Probably I can share what we got about vhost-pci and vhost-user:
>>>>>>>> https://github.com/wei-w-wang/vhost-pci-discussion/blob/master/vhost_pci_vs_vhost_user.pdf
>>>>>>>>
>>>>>>>> Right now, I don’t have the environment to add the vhost_net test.
>>>>>>>
>>>>>>> Thanks, the number looks good. But I have some questions:
>>>>>>>
>>>>>>> - Is the number measured through your vhost-pci kernel driver code?
>>>>>>
>>>>>> Yes, the kernel driver code.
>>>>>
>>>>> Interesting, in the above link, "l2fwd" was used in the vhost-pci
>>>>> testing. I want to know more about the test configuration: if
>>>>> l2fwd is the one that dpdk has, I want to know how you made it
>>>>> work with a kernel driver (maybe a packet socket, I think?). If
>>>>> not, I want to know how you configured it (e.g. through a bridge,
>>>>> act_mirred, or others). And in the OVS-dpdk case, is dpdk l2fwd +
>>>>> pmd used in the testing?
>>>>>
>>>>
>>>> Oh, that l2fwd is a kernel module from OPNFV vsperf
>>>> (http://artifacts.opnfv.org/vswitchperf/docs/userguide/quickstart.html)
>>>>
>>>> Both the legacy and vhost-pci cases use the same l2fwd module.
>>>> No bridge is used; the module itself works at L2, forwarding
>>>> packets between the two net devices.
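
(For context here: conceptually, that module simply bounces every frame
received on one net device out of the other one. Below is a minimal,
purely illustrative sketch of the idea - this is not the actual vsperf
l2fwd source, and the device names are placeholders.)

/* Illustrative sketch only -- not the real OPNFV vsperf l2fwd module. */
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/if_ether.h>
#include <linux/rtnetlink.h>
#include <linux/errno.h>
#include <net/net_namespace.h>

static struct net_device *in_dev;   /* device we receive frames on  */
static struct net_device *out_dev;  /* device we transmit frames on */

static rx_handler_result_t l2fwd_handle_frame(struct sk_buff **pskb)
{
	struct sk_buff *skb = *pskb;

	skb_push(skb, ETH_HLEN);        /* restore the header pulled on rx */
	skb->dev = out_dev;
	dev_queue_xmit(skb);            /* send it out the other device    */
	return RX_HANDLER_CONSUMED;     /* the local stack never sees it   */
}

static int __init l2fwd_init(void)
{
	int err;

	/* Placeholder device names; ref/error cleanup trimmed for brevity. */
	in_dev  = dev_get_by_name(&init_net, "eth1");
	out_dev = dev_get_by_name(&init_net, "eth2");
	if (!in_dev || !out_dev)
		return -ENODEV;

	rtnl_lock();
	err = netdev_rx_handler_register(in_dev, l2fwd_handle_frame, NULL);
	rtnl_unlock();
	return err;
}
module_init(l2fwd_init);
MODULE_LICENSE("GPL");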
>>>
>>> Thanks for the pointer. Just to confirm, I think the virtio-net
>>> kernel driver is used in the OVS-dpdk test?
>>
>> Yes. In both cases, the guests are using kernel drivers.
>>
>>>
>>> Another question is, can we manage to remove the copy on the tx
>>> side? If not, is it a limitation of the virtio protocol?
>>>
>>
>> No, we can't. Take this example: with VM1's vhost-pci <-> VM2's
>> virtio-net, VM1 sees VM2's memory, but VM2 only sees its own memory.
>> What this copy achieves is to get the data from VM1's memory into
>> VM2's memory, so that VM2 can deliver its own buffers to its network
>> stack.
>
> Then, as has been pointed out, should we consider a vhost-pci to
> vhost-pci peer?
I think that's another direction, or a future extension.
We already have the vhost-pci to virtio-net model on the way, so I think
it would be better to start from there.
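
Just to make the tx copy we discussed above concrete, it boils down to
something like the sketch below. This is purely illustrative - the
vpnet_peer_* helpers and types are made-up placeholders, not the actual
vhost-pci driver code - and it assumes only what was said above: VM2's
memory is mapped into VM1, so VM1 can write into the rx buffers that VM2
posted on its virtio-net rx queue.

/* Conceptual sketch only -- not the real vhost-pci driver. */
#include <linux/skbuff.h>
#include <linux/string.h>
#include <linux/errno.h>

static int vpnet_tx_one(struct vpnet_peer_rxq *rxq, struct sk_buff *skb)
{
	/* Next rx buffer VM2 made available on its virtio-net rx queue. */
	struct vpnet_peer_desc *d = vpnet_peer_next_avail(rxq);
	void *dst;

	if (!d || d->len < skb->len)
		return -ENOSPC;                 /* VM2 has no usable rx buffer */

	/* Translate VM2's guest-physical buffer address into VM1's mapping. */
	dst = vpnet_peer_translate(rxq, d->addr);

	/*
	 * The copy in question: the packet has to move from VM1's memory
	 * (skb->data) into VM2's memory, because VM2's stack can only
	 * consume buffers that live in VM2's own memory.
	 */
	memcpy(dst, skb->data, skb->len);

	vpnet_peer_push_used(rxq, d, skb->len); /* publish the filled buffer */
	vpnet_peer_kick(rxq);                   /* notify VM2's rx side      */
	return 0;
}

However the real driver structures it, that memcpy is the copy that
cannot go away in the vhost-pci to virtio-net model.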
>
> Even with the vhost-pci to virtio-net configuration, I think rx
> zerocopy could be achieved, but it is not implemented in your driver
> (probably easier in a pmd).
>
Yes, it would be easier with a dpdk pmd. But I think it would not be
that important in the NFV use case, since the data flow there usually
goes in one direction.
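
To spell out why a pmd makes rx zerocopy easier: a userspace driver can
hand its application a pointer straight into the peer's already-mapped
memory instead of copying into a local skb. Again a purely illustrative
sketch with made-up names, not real code:

/* Illustrative sketch only -- placeholder types, not a real pmd. */
#include <stdint.h>
#include <errno.h>

struct vpnet_pkt_view {
	void     *data;     /* points straight into VM2's mapped buffer */
	uint32_t  len;
	uint16_t  desc_id;  /* handed back to VM2 once we are done      */
};

static int vpnet_rx_zerocopy(struct vpnet_peer_txq *txq,
			     struct vpnet_pkt_view *pv)
{
	/* Next packet that VM2 placed on its virtio-net tx queue. */
	struct vpnet_peer_desc *d = vpnet_peer_next_avail(txq);

	if (!d)
		return -EAGAIN;

	pv->data    = vpnet_peer_translate(txq, d->addr);  /* no memcpy */
	pv->len     = d->len;
	pv->desc_id = d->id;
	return 0;   /* the descriptor is returned to VM2 only after use */
}

In a kernel driver, the received data still has to end up in an skb,
which is why the copy is harder to avoid there.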
Best,
Wei