From: Jason Wang <jasowang@redhat.com>
To: jiangyiwen <jiangyiwen@huawei.com>, stefanha@redhat.com
Cc: netdev@vger.kernel.org, kvm@vger.kernel.org,
	virtualization@lists.linux-foundation.org
Subject: Re: [RFC] VSOCK: The performance problem of vhost_vsock.
Date: Wed, 17 Oct 2018 17:39:51 +0800	[thread overview]
Message-ID: <d16d2052-bfb1-7861-e210-b53b4ea3260c@redhat.com> (raw)
In-Reply-To: <5BC70069.4000600@huawei.com>


On 2018/10/17 5:27 PM, jiangyiwen wrote:
> On 2018/10/15 14:12, jiangyiwen wrote:
>> On 2018/10/15 10:33, Jason Wang wrote:
>>>
>>> On 2018/10/15 09:43, jiangyiwen wrote:
>>>> Hi Stefan & All:
>>>>
>>>> I find vhost-vsock has two performance problems, even though it
>>>> is not designed for performance.
>>>>
>>>> First, I think vhost-vsock should be faster than vhost-net because it
>>>> has no TCP/IP stack, but in real tests vhost-net is 5~10 times faster
>>>> than vhost-vsock; currently I am looking for the reason.
>>> TCP/IP is not a must for vhost-net.
>>>
>>> How do you test and compare the performance?
>>>
>>> Thanks
>>>
>> I tested the performance using my own test tool, as follows:
>>
>> Server                    Client
>> socket()
>> bind()
>> listen()
>>
>>                           socket(AF_VSOCK) or socket(AF_INET)
>> accept() <--------------> connect()
>>                           *====== start recording time ======*
>>                           sendfile() syscall
>> recv()
>>                           send completes
>> receive completes
>> send(file_size)
>>                           recv(file_size)
>>                           *====== end recording time ======*
>>
>> The test results: vhost-vsock reaches about 500MB/s, while vhost-net reaches about 2500MB/s.
>>
>> By the way, vhost-net uses a single queue.
>>
>> Thanks.
>>
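(For concreteness, a minimal sketch of what such a client could look
like on the vsock side is below. The CID, port, and file path are
placeholders; the actual test tool was not posted.)

#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <linux/vm_sockets.h>

int main(void)
{
	/* placeholder address: VMADDR_CID_HOST (2) is the host side,
	 * the port number is arbitrary */
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_HOST,
		.svm_port = 1234,
	};
	struct stat st;
	off_t off = 0;
	uint64_t file_size;
	int fd = open("/tmp/testfile", O_RDONLY);  /* placeholder path */
	int s = socket(AF_VSOCK, SOCK_STREAM, 0);

	if (fd < 0 || s < 0 ||
	    connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("setup");
		return 1;
	}
	fstat(fd, &st);

	/* ====== start recording time ====== */
	while (off < st.st_size)       /* sendfile() may send less */
		if (sendfile(s, fd, &off, st.st_size - off) < 0) {
			perror("sendfile");
			return 1;
		}
	/* the server sends the received byte count back when done */
	if (recv(s, &file_size, sizeof(file_size), MSG_WAITALL) < 0)
		perror("recv");
	/* ====== end recording time ====== */

	close(s);
	close(fd);
	return 0;
}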
>>>> Second, vhost-vsock only supports two vqs (tx and rx), which means
>>>> multiple sockets in the guest share the same vq to transmit
>>>> messages and receive responses. So if there are multiple applications
>>>> in the guest, we should support a "multiqueue" feature for virtio-vsock.
>>>>
>>>> Stefan, have you encountered these problems?
>>>>
>>>> Thanks,
>>>> Yiwen.
>>>>
>>>
>>> .
>>>
>>
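(For context on the multiqueue point quoted above: the virtio-vsock
device has exactly one rx/tx pair for data, plus an event queue, so
traffic for all guest sockets is serialized onto the same pair.
Roughly, from include/linux/virtio_vsock.h:)

enum {
	VSOCK_VQ_RX	= 0, /* for host to guest data */
	VSOCK_VQ_TX	= 1, /* for guest to host data */
	VSOCK_VQ_EVENT	= 2,
	VSOCK_VQ_MAX	= 3,
};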
> Hi Jason and Stefan,
>
> Maybe I have found the reason for the bad performance.
>
> I found that pkt_len is limited to VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE (4K),
> which limits the bandwidth to 500~600MB/s. Once I increase it to
> 64K, throughput improves about 3 times (~1500MB/s).
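(The limit in question is presumably the clamp on the send path in
net/vmw_vsock/virtio_transport_common.c, which at the time looked
roughly like this:)

	/* approximate excerpt: every send is chopped into packets of
	 * at most the default rx buffer size, so one 64K write becomes
	 * sixteen 4K packets, each paying the full virtqueue
	 * signalling cost
	 */
	u32 pkt_len = info->pkt_len;

	/* we can send less than pkt_len bytes */
	if (pkt_len > VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE)
		pkt_len = VIRTIO_VSOCK_DEFAULT_RX_BUF_SIZE;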


It looks like the value was chosen as a balance between rx buffer size
and performance. Always allocating 64K, even for small packets, is
wasteful and puts pressure on guest memory. Virtio-net avoids this with
mergeable rx buffers, which allow a big packet to be scattered across
several smaller buffers. We can reuse this idea, or revisit the idea of
using virtio-net/vhost-net as a transport for vsock.
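(For reference, the mergeable rx buffer scheme is virtio-net's
VIRTIO_NET_F_MRG_RXBUF feature: the guest keeps posting small rx
buffers, and the device scatters a large packet across several of
them, reporting how many were used in the per-packet header. From
include/uapi/linux/virtio_net.h:)

struct virtio_net_hdr_mrg_rxbuf {
	struct virtio_net_hdr hdr;
	__virtio16 num_buffers;	/* number of merged rx buffers */
};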

What is interesting is that the performance is still behind vhost-net.

Thanks

>
> By the way, I now send 64K from the application at a time, and I no
> longer use sg_init_one(); I rewrote the function to pack an sg list,
> because pkt_len can now cover multiple pages.
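(In other words, a multi-page payload needs a scatterlist built with
sg_init_table()/sg_set_page() rather than a single-entry sg_init_one().
A minimal sketch of the idea, with made-up names, assuming the payload
is already held in a page array:)

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/scatterlist.h>

/* hypothetical helper, not the actual patch */
static void pkt_to_sg(struct scatterlist *sg, struct page **pages,
		      unsigned int npages, size_t len)
{
	unsigned int i;

	sg_init_table(sg, npages);
	for (i = 0; i < npages; i++) {
		size_t chunk = min_t(size_t, len, PAGE_SIZE);

		/* one entry per page; the last entry may be partial */
		sg_set_page(&sg[i], pages[i], chunk, 0);
		len -= chunk;
	}
}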
>
> Thanks,
> Yiwen.
>
