From: jiangyiwen
Subject: Re: [RFC] VSOCK: The performance problem of vhost_vsock.
Date: Mon, 15 Oct 2018 14:12:40 +0800
Message-ID: <5BC42FD8.2070104@huawei.com>
References: <5BC3F0D4.60409@huawei.com> <30d7c370-b206-cdac-dc85-53e9be1e1c63@redhat.com>
In-Reply-To: <30d7c370-b206-cdac-dc85-53e9be1e1c63@redhat.com>
To: Jason Wang

On 2018/10/15 10:33, Jason Wang wrote:
> On 2018/10/15 09:43, jiangyiwen wrote:
>> Hi Stefan & All:
>>
>> I have found that vhost-vsock has two performance problems, even
>> though it is not designed primarily for performance.
>>
>> First, I expected vhost-vsock to be faster than vhost-net because
>> there is no TCP/IP stack involved, but in my tests vhost-net is
>> 5~10 times faster than vhost-vsock. I am currently looking into
>> the reason.
>
> TCP/IP is not a must for vhost-net.
>
> How do you test and compare the performance?
>
> Thanks
>

I tested the performance with my own test tool; the flow is as follows
(a rough C sketch of the client side is attached at the end of this
mail):

    Server                                  Client
    socket()
    bind()
    listen()
                                            socket(AF_VSOCK) or socket(AF_INET)
    Accept()        <-------------->        connect()
                *======Start Record Time======*
                                            Call syscall sendfile()
    Recv()
                                            Send end
    Receive end
                                            Send(file_size)
    Recv(file_size)
                *======End Record Time======*

In this test, vhost-vsock reaches about 500 MB/s, while vhost-net
reaches about 2500 MB/s. By the way, vhost-net uses a single queue.

Thanks.

>> Second, vhost-vsock only supports two vqs (tx and rx), which means
>> that multiple sockets in the guest share the same vq to transmit
>> messages and receive responses. So if there are multiple applications
>> in the guest, we should add a "multiqueue" feature to virtio-vsock.
>>
>> Stefan, have you encountered these problems?
>>
>> Thanks,
>> Yiwen.
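
P.S. For reference, below is a rough sketch (in C) of the client side
of the test flow above. It is not my original tool, just an
illustration: the CID, port, and file path are placeholders, and the
AF_INET variant only differs in the socket()/sockaddr setup.

/*
 * Rough sketch of the test client: connect, start the clock, push a
 * file with sendfile(), send the file size, stop the clock.
 */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/time.h>
#include <linux/vm_sockets.h>

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/tmp/testfile";
	struct sockaddr_vm addr;
	struct timeval t0, t1;
	struct stat st;
	uint64_t size;
	off_t off = 0;
	double secs;
	int fd, sock;

	fd = open(path, O_RDONLY);
	if (fd < 0 || fstat(fd, &st) < 0) {
		perror("open/fstat");
		return 1;
	}
	size = st.st_size;

	sock = socket(AF_VSOCK, SOCK_STREAM, 0);
	if (sock < 0) {
		perror("socket");
		return 1;
	}

	memset(&addr, 0, sizeof(addr));
	addr.svm_family = AF_VSOCK;
	addr.svm_cid = VMADDR_CID_HOST;	/* guest -> host server (assumption) */
	addr.svm_port = 1234;		/* placeholder port */
	if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect");
		return 1;
	}

	gettimeofday(&t0, NULL);

	/* Push the whole file through the socket with sendfile(). */
	while (off < st.st_size) {
		if (sendfile(sock, fd, &off, st.st_size - off) < 0) {
			perror("sendfile");
			return 1;
		}
	}

	/* Tell the server how many bytes it should have received. */
	if (send(sock, &size, sizeof(size), 0) != sizeof(size)) {
		perror("send");
		return 1;
	}

	gettimeofday(&t1, NULL);
	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
	printf("%llu bytes in %.3f s, %.1f MB/s\n",
	       (unsigned long long)size, secs,
	       size / secs / (1024 * 1024));

	close(sock);
	close(fd);
	return 0;
}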