From: Joel Schopp
Date: Thu, 14 Aug 2014 10:58:26 -0500
To: Li Liu, Nikolay Nikolaev
Cc: VirtualOpenSystems Technical Team, kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org, qemu-devel
Subject: Re: [Qemu-devel] The status about vhost-net on kvm-arm?
Message-ID: <53ECDCA2.5010808@amd.com>
In-Reply-To: <53EC31FF.4060903@huawei.com>

>>>> we at Virtual Open Systems did some work and tested vhost-net on ARM
>>>> back in March. The setup was based on:
>>>>
>>>> - host kernel with our ioeventfd patches:
>>>>   http://www.spinics.net/lists/kvm-arm/msg08413.html
>>>>
>>>> - qemu with the aforementioned patches from Ying-Shiuan Pan:
>>>>   https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>>
>>>> The testbed was an ARM Chromebook with an Exynos 5250, using a 1Gbps
>>>> USB3 Ethernet adapter connected to a 1Gbps switch. I can't find the
>>>> actual numbers, but I remember that with multiple streams the gain was
>>>> clearly visible. Note that it used the minimum required ioeventfd
>>>> implementation and not irqfd.
>>>>
>>>> I guess it is feasible to think that it can all be put together and
>>>> rebased onto the recent irqfd work. One could achieve even better
>>>> performance (because of the irqfd).
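For context, what the ioeventfd patches provide is the hookup of KVM's
generic ioeventfd API on the ARM MMIO exit path: userspace registers an
eventfd that KVM signals whenever the guest writes a given value to a
given address, so a virtio "kick" never has to bounce out to userspace.
A minimal sketch of the registration (this is the upstream KVM API, not
the patch itself; the virtio-mmio wiring and error handling are
simplified):

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Ask KVM to signal 'efd' whenever the guest writes 'queue_idx'
     * (as a 32-bit value) to the queue-notify register at 'mmio_addr';
     * the returned fd is what gets handed to the vhost-net backend. */
    static int register_ioeventfd(int vm_fd, __u64 mmio_addr, __u32 queue_idx)
    {
        int efd = eventfd(0, EFD_CLOEXEC);
        struct kvm_ioeventfd args = {
            .datamatch = queue_idx,
            .addr      = mmio_addr,
            .len       = 4,
            .fd        = efd,
            .flags     = KVM_IOEVENTFD_FLAG_DATAMATCH,
        };

        if (efd < 0 || ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0)
            return -1;
        return efd;
    }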
>>> Managed to replicate the setup with the old versions we used in March.
>>>
>>> Single stream from another machine to the chromebook with the 1Gbps
>>> USB3 Ethernet adapter:
>>> iperf -c -P 1 -i 1 -p 5001 -f k -t 10
>>> to HOST: 858316 Kbits/sec
>>> to GUEST: 761563 Kbits/sec
>> to GUEST vhost=off: 508150 Kbits/sec
>>>
>>> 10 parallel streams:
>>> iperf -c -P 10 -i 1 -p 5001 -f k -t 10
>>> to HOST: 842420 Kbits/sec
>>> to GUEST: 625144 Kbits/sec
>> to GUEST vhost=off: 425276 Kbits/sec
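For anyone reproducing these numbers: the vhost=on and vhost=off cases
differ only in the tap netdev flag on the qemu command line. A sketch of
an invocation (machine, kernel, and device options here are illustrative,
not the exact setup used above; the receiving end is just "iperf -s -p
5001" on the target):

    qemu-system-arm -enable-kvm -M vexpress-a15 -m 512 \
        -kernel zImage -append "console=ttyAMA0" \
        -netdev tap,id=net0,ifname=tap0,script=no,vhost=on \
        -device virtio-net-device,netdev=net0

Flipping vhost=on to vhost=off is the only change between the two guest
rows in each run.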
> I have tested the same cases on a Hisilicon board (Cortex-A15 @ 1GHz)
> with an integrated 1Gbps Ethernet adapter.
>
> iperf -c -P 1 -i 1 -p 5001 -f M -t 10
> to HOST: 906 Mbits/sec
> to GUEST: 562 Mbits/sec
> to GUEST vhost=off: 340 Mbits/sec
>
> 10 parallel streams (this adds less than 10% more throughput):
> iperf -c -P 10 -i 1 -p 5001 -f M -t 10
> to HOST: 923 Mbits/sec
> to GUEST: 592 Mbits/sec
> to GUEST vhost=off: 364 Mbits/sec
>
> It's easy to see that vhost-net brings great performance improvements:
> over 60% more guest throughput than vhost=off in both cases.

That's pretty impressive for not even having irqfd. I guess we should
renew some effort to get these patches merged upstream.
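For completeness, irqfd is the mirror image on the interrupt path: vhost
signals an eventfd and KVM injects the corresponding guest interrupt
directly, again with no userspace round trip per packet. A sketch of the
generic API (not the ARM enablement itself; error handling simplified):

    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* When 'efd' is signalled (e.g. by vhost on packet completion),
     * KVM raises guest interrupt line 'gsi' without exiting to
     * userspace. */
    static int register_irqfd(int vm_fd, __u32 gsi)
    {
        int efd = eventfd(0, EFD_CLOEXEC);
        struct kvm_irqfd args = {
            .fd  = efd,
            .gsi = gsi,
        };

        if (efd < 0 || ioctl(vm_fd, KVM_IRQFD, &args) < 0)
            return -1;
        return efd;
    }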