From: Ying-Shiuan Pan
Date: Mon, 13 Jan 2014 11:47:11 +0800
Subject: Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
To: Barak Wasserstrom
Cc: Peter Maydell, QEMU Developers, Pawel Moll

Hi, Barak,

We've tried vhost-net in kvm-arm on an Arndale Exynos 5250 board (it requires some patches in qemu and kvm, of course). It works (without irqfd support); however, the performance does not improve much. The iperf throughput of virtio-net and vhost-net is 93.5 Mbps and 93.6 Mbps respectively. I think the results are because both virtio-net and vhost-net have almost reached the limit of the 100 Mbps Ethernet.
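For reference, a rough sketch of the kind of setup used for such a test is below (with the patches mentioned above applied; the machine type, interface names and image paths are only placeholders, not our exact configuration):

    # host: create a tap device and attach it to the bridge that carries the NIC
    ip tuntap add dev tap0 mode tap
    brctl addif br0 tap0
    ip link set tap0 up

    # guest: virtio-mmio transport (no PCI); vhost=on asks QEMU to hand the tap
    # to the in-kernel vhost-net backend instead of doing virtio in userspace
    qemu-system-arm -enable-kvm -M vexpress-a15 -m 512 -nographic \
        -kernel zImage -dtb vexpress-v2p-ca15-tc1.dtb \
        -append "console=ttyAMA0 root=/dev/vda" \
        -drive if=none,id=rootfs,file=rootfs.img \
        -device virtio-blk-device,drive=rootfs \
        -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
        -device virtio-net-device,netdev=net0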

The good news is that we even ported vhost-net to our kvm-a9 hypervisor (see: http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor), and the throughput of vhost-net on that platform (with 1 Gbps Ethernet) increased from 323 Mbps to 435 Mbps.
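(The figures above are iperf numbers; a typical measurement of this kind looks something like the following, where the server address and duration are illustrative rather than our exact test parameters:)

    # on the receiving machine on the wired network
    iperf -s

    # in the guest: stream TCP for 60 seconds and report the average bandwidth
    iperf -c 192.168.1.100 -t 60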

--
Ying-Shiuan Pan,
H Div., CCMA, ITRI, TW


----
Best Regards,
潘穎軒 Ying-Shiuan Pan

2014/1/13 Peter Maydell <peter.maydell@linaro.org>:
> On 12 January 2014 21:49, Barak Wasserstrom <wbarak@gmail.com> wrote:
> > Thanks - I got virtio-net-device running now, but performance is terrible.
> > When I look at the guest's ethernet interface features (ethtool -k eth0) I
> > see all offload features are disabled.
> > I'm using a virtual tap on the host (tap0 bridged to eth3).
> > On the tap I also see all offload features are disabled, while on br0 and
> > eth3 I see the expected offload features.
> > Can this explain the terrible performance I'm facing?
> > If so, how can this be changed?
> > If not, what else can cause such bad performance?
> > Do you know if vhost_net can be used on ARM Cortex A15 host/guest, even
> > though the guest doesn't support PCI & MSIX?
>
> I have no idea, I'm afraid. I don't have enough time available to
> investigate performance issues at the moment; if you find anything
> specific you can submit patches...
>
> thanks
> -- PMM
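For anyone following up on the offload question quoted above, the state Barak describes can be inspected on both sides with ethtool (interface names below are the ones from his description and are only illustrative):

    # host side: physical NIC, bridge, and the tap backing the guest NIC
    ethtool -k eth3
    ethtool -k br0
    ethtool -k tap0

    # guest side: the virtio-net interface
    ethtool -k eth0

    # features the driver advertises can be toggled with -K, for example:
    ethtool -K eth0 tso on gso on
    # but offloads the host tap/QEMU did not offer to the guest usually show up
    # as fixed, so enabling them generally has to start on the host side.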

