From: "Michael S. Tsirkin"
Subject: Re: [RFC] Discuss about an new idea "Vsock over Virtio-net"
Date: Mon, 3 Dec 2018 23:08:16 -0500
Message-ID: <20181203230459-mutt-send-email-mst@kernel.org>
In-Reply-To: <5C05E4B4.3040804@huawei.com>
References: <61d57505-7ff6-23c6-d26c-6a0062e08445@redhat.com>
 <20181129085049-mutt-send-email-mst@kernel.org>
 <7e78fc3d-0d5a-090f-476d-03ad490ff8a2@redhat.com>
 <20181130075134-mutt-send-email-mst@kernel.org>
 <55352308-9ceb-413e-44f6-e3dfd8f642cc@redhat.com>
 <27cd8ac6-e892-cfaa-cd39-74f39b452681@redhat.com>
 <20181130083540-mutt-send-email-mst@kernel.org>
 <5C049EC2.3080002@huawei.com>
 <20181203202441-mutt-send-email-mst@kernel.org>
 <5C05E4B4.3040804@huawei.com>
To: jiangyiwen
Cc: Jason Wang, stefanha@redhat.com, stefanha@gmail.com,
 netdev@vger.kernel.org, kvm@vger.kernel.org,
 virtualization@lists.linux-foundation.org
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Disposition: inline

On Tue, Dec 04, 2018 at 10:21:40AM +0800, jiangyiwen wrote:
> On 2018/12/4 9:31, Michael S. Tsirkin wrote:
> > On Mon, Dec 03, 2018 at 11:10:58AM +0800, jiangyiwen wrote:
> >> On 2018/11/30 21:40, Michael S. Tsirkin wrote:
> >>> On Fri, Nov 30, 2018 at 09:10:03PM +0800, Jason Wang wrote:
> >>>>
> >>>> On 2018/11/30 8:55 PM, Jason Wang wrote:
> >>>>>
> >>>>> On 2018/11/30 8:52 PM, Michael S. Tsirkin wrote:
> >>>>>>>> If you want to compare it with something, that would be TCP
> >>>>>>>> or QUIC. The fundamental difference between virtio-vsock and
> >>>>>>>> e.g. TCP is that TCP operates in a packet loss environment,
> >>>>>>>> so they use timers for reliability, and the receiver is
> >>>>>>>> always free to discard any unacked data.
> >>>>>>> Virtio-net knows nothing above L2, so these are totally
> >>>>>>> transparent to the device itself. I still don't get why not
> >>>>>>> use virtio-net instead.
> >>>>>>>
> >>>>>>> Thanks
> >>>>>> Is your question why virtio-vsock is used instead of TCP on
> >>>>>> top of IP on top of virtio-net?
> >>>>>>
> >>>>>
> >>>>> No, my question is why not do vsock through virtio-net.
> >>>>>
> >>>>> Thanks
> >>>>>
> >>>>
> >>>> Just to clarify, it's not about vsock over ethernet, and it's
> >>>> not about inventing new features or APIs. It's probably something
> >>>> like:
> >>>>
> >>>> - Let the virtio-net driver probe the vsock device and do vsock
> >>>>   specific things if needed, to share as much code as possible.
> >>>>
> >>>> - A new kind of sockfd (which is vsock based) for vhost-net so
> >>>>   that it can do vsock specific things (hopefully it can be
> >>>>   transparent).
> >>>>
> >>>> The change should be totally transparent to userspace
> >>>> applications.
> >>>>
> >>>> Thanks
> >>>
> >>> Which code is duplicated between virtio vsock and virtio net
> >>> right now?
> >>>
> >>
> >> Hi Michael,
> >>
> >> AFAIK, there is almost no duplicate code between virtio vsock and
> >> virtio net right now.
> >>
> >> But if virtio vsock wants to support the mergeable rx buffer and
> >> multiqueue features, it will have to duplicate some code from
> >> virtio net. Based on that, we both think vsock could use virtio
> >> net as a transport channel; this way, vsock can reuse some of
> >> virtio net's great features.
> >>
> >> Thanks,
> >> Yiwen.
> >
> > What I would do is just copy some code and show a performance
> > benefit. If that works out it will be clearer which code should
> > be shared.
> >
>
> Hi Michael,
>
> I already sent a series of patches ("VSOCK: support mergeable rx
> buffer in vhost-vsock") a month ago. The performance is as follows.
>
> I wrote a tool to test vhost-vsock performance, mainly sending big
> packets (64K), in both the guest->host and host->guest directions.
> The results:
>
> Before:
>                 Single socket    Multiple sockets (max bandwidth)
> Guest->Host     ~400MB/s         ~480MB/s
> Host->Guest     ~1450MB/s        ~1600MB/s
>
> After:
>                 Single socket    Multiple sockets (max bandwidth)
> Guest->Host     ~1700MB/s        ~2900MB/s
> Host->Guest     ~1700MB/s        ~2900MB/s
>
> From the test results, the performance is obviously improved, and
> guest memory is not wasted.

Oh, I didn't see that one. Please CC me in the future.

Looking at it, I agree that zero page allocation looks like an issue,
but besides that, I think we can merge something similar and look at
refactoring and future extensions later.

However, any interface change (e.g. a new feature) must be CC'd to one
of the virtio lists (they are subscriber-only).

> In addition, I have not implemented the multiqueue feature yet.
>
> Thanks,
> Yiwen.
>
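
For reference, below is a minimal sketch of the kind of guest-to-host
throughput tool described above: a plain AF_VSOCK stream client that
streams 64K writes to the host. The actual benchmark tool was not
posted in this thread, so the port number, total byte count, and fill
pattern here are purely illustrative assumptions.

/* Hypothetical guest-side sender: connect to the host over AF_VSOCK
 * and stream 64 KiB writes, roughly matching the test described above.
 * TEST_PORT and the 1 GiB total are illustrative, not from the thread.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/vm_sockets.h>

#define BUF_SIZE  (64 * 1024)   /* "big packet (64K)" as in the test */
#define TEST_PORT 1234          /* illustrative port */

int main(void)
{
        struct sockaddr_vm addr = {
                .svm_family = AF_VSOCK,
                .svm_cid    = VMADDR_CID_HOST, /* guest->host direction */
                .svm_port   = TEST_PORT,
        };
        char buf[BUF_SIZE];
        long long total = 0;
        int fd, i;

        fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
                perror("vsock connect");
                return 1;
        }

        memset(buf, 0xab, sizeof(buf));
        for (i = 0; i < 16384; i++) {          /* ~1 GiB total */
                ssize_t n = write(fd, buf, sizeof(buf));
                if (n < 0) {
                        perror("write");
                        break;
                }
                total += n;
        }
        printf("sent %lld bytes\n", total);
        close(fd);
        return 0;
}

Timing the run (e.g. with clock_gettime() around the loop, or simply
time(1)) and dividing the bytes sent by the elapsed time gives the MB/s
figures of the kind quoted above; a matching host-side listener would
bind an AF_VSOCK socket and read into a similar buffer.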