From: Ying-Shiuan Pan <yingshiuan.pan@gmail.com>
Date: Tue, 14 Jan 2014 11:37:39 +0800
Subject: Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
To: Barak Wasserstrom <wbarak@gmail.com>
Cc: Peter Maydell <peter.maydell@linaro.org>, QEMU Developers <qemu-devel@nongnu.org>, Pawel Moll

Hi, Barak,

Hope the following info can help you:

1.
HOST:
kernel: http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
branch: v3.10-arndale
config: arch/arm/configs/exynos5_arndale_defconfig
dtb: arch/arm/boot/dts/exynos5250-arndale.dtb
rootfs: Ubuntu 13.10

GUEST:
kernel: official mainline 3.12
config: arch/arm/configs/vexpress_defconfig with virtio devices enabled (see the build sketch below)
dtb: arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb
rootfs: Ubuntu 12.04
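
Neither build is spelled out in this mail; below is a minimal sketch of the
usual flow, assuming an arm-linux-gnueabihf- cross-compiler and a guessed
virtio option set (the mail only says "virtio-devices enabled"):

  # Host kernel (Arndale), from the Linaro kvm-arm tree:
  git clone http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
  cd linux-kvm-arm
  git checkout v3.10-arndale
  make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- exynos5_arndale_defconfig
  make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs

  # Guest kernel, in a mainline v3.12 tree: vexpress_defconfig plus virtio
  # devices (the exact option list is an assumption):
  make ARCH=arm vexpress_defconfig
  ./scripts/config --enable VIRTIO_MMIO --enable VIRTIO_BLK --enable VIRTIO_NET
  make ARCH=arm olddefconfig
  make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs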

2.
We are still developing it and will try to open-source it as soon as possible.
The main purpose of that patch is to introduce ioeventfd support into kvm-arm.

3. As mentioned in 1.

4. qemu-1.6.0

5. We ported part of the guest/host notifiers from virtio-pci to virtio-mmio.

6. /usr/bin/qemu-system-arm -enable-kvm -kernel /root/nfs/zImage -m 128 \
     --machine vexpress-a15 -cpu cortex-a15 \
     -drive file=/root/nfs/guest-1G-precise-vm1.img,id=virtio-blk,if=none,cache=none \
     -device virtio-blk-device,drive=virtio-blk \
     -append "earlyprintk=ttyAMA0 console=ttyAMA0 root=/dev/vda rw ip=192.168.101.101::192.168.101.1:vm1:eth0:off --no-log" \
     -dtb /root/nfs/vexpress-v2p-ca15-tc1.dtb --nographic \
     -chardev socket,id=mon,path=/root/vm1.monitor,server,nowait \
     -mon chardev=mon,id=monitor,mode=readline \
     -device virtio-net-device,netdev=net0,mac="52:54:00:12:34:01" \
     -netdev type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=off
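
The /root/nfs/net.sh ifup script itself is not shown anywhere in this thread;
a hypothetical minimal sketch of what such a script usually does (the bridge
name br0 is an assumption):

  #!/bin/sh
  # Hypothetical stand-in for /root/nfs/net.sh. QEMU runs this with the
  # tap interface name as $1 after creating the tap device.
  ip link set "$1" up       # bring the tap up
  brctl addif br0 "$1"      # attach it to the host bridge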

vhost-net can be turned on by changing the last parameter to vhost=on.
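
Only the -netdev option changes; the host additionally needs the vhost
backend available. A sketch, assuming vhost_net is built as a module:

  # On the host, load the vhost backend and check the device node exists:
  modprobe vhost_net
  ls -l /dev/vhost-net

  # Then rerun the command above with the -netdev option changed to:
  #   -netdev type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=on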

--
Ying-Shiuan Pan,
H Div., CCMA, ITRI, TW


----
Best Regards,
潘穎軒 Ying-Shiuan Pan

2014/1/13 Barak Wasserstrom <wbarak@gmail.com>:
> Ying-Shiuan Pan,
> Your experiments with the arndale Exynos-5250 board can help me greatly and
> I would really appreciate it if you shared the following information:
> 1. Which Linux kernel did you use for the host and for the guest?
> 2. Which Linux kernel patches did you use for KVM?
> 3. Which config files did you use for both the host and guest?
> 4. Which QEMU did you use?
> 5. Which QEMU patches did you use?
> 6. What is the exact command line you used for invoking the guest, with
> and without vhost-net?
>
> Many thanks in advance!
>
> Regards,
> Barak



> On Mon, Jan 13, 2014 at 5:47 AM, Ying-Shiuan Pan
> <yingshiuan.pan@gmail.com> wrote:
>> Hi, Barak,
>>
>> We've tried vhost-net in kvm-arm on an arndale Exynos-5250 board (it
>> requires some patches in qemu and kvm, of course). It works (without
>> irqfd support); however, the performance does not increase much. The
>> throughput (iperf) of virtio-net and vhost-net is 93.5Mbps and 93.6Mbps
>> respectively. I think the results are because both virtio-net and
>> vhost-net almost reached the limit of the 100Mbps Ethernet.
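
The exact iperf invocation is not given in the thread; throughput numbers
like these typically come from a plain TCP test along these lines (the guest
address is taken from the QEMU command line above, and the 30-second
duration is an assumption):

  # On the guest (server side):
  iperf -s

  # On the host (client side), pointed at the guest's address:
  iperf -c 192.168.101.101 -t 30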

>> The good news is that we even ported vhost-net to our kvm-a9 hypervisor
>> (refer to:
>> http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor),
>> and the throughput of vhost-net on that platform (with 1Gbps Ethernet)
>> increased from 323Mbps to 435Mbps.

>> --
>> Ying-Shiuan Pan,
>> H Div., CCMA, ITRI, TW
>>
>> ----
>> Best Regards,
>> 潘穎軒 Ying-Shiuan Pan


>>
>> 2014/1/13 Peter Maydell <peter.maydell@linaro.org>:
>>> On 12 January 2014 21:49, Barak Wasserstrom <wbarak@gmail.com> wrote:
>>> > Thanks - I got virtio-net-device running now, but performance is terrible.
>>> > When I look at the guest's ethernet interface features (ethtool -k eth0) I
>>> > see all offload features are disabled.
>>> > I'm using a virtual tap on the host (tap0 bridged to eth3).
>>> > On the tap I also see all offload features are disabled, while on br0 and
>>> > eth3 I see the expected offload features.
>>> > Can this explain the terrible performance I'm facing?
>>> > If so, how can this be changed?
>>> > If not, what else can cause such bad performance?
>>> > Do you know if vhost_net can be used on an ARM Cortex-A15 host/guest, even
>>> > though the guest doesn't support PCI & MSI-X?

>>> I have no idea, I'm afraid. I don't have enough time available to
>>> investigate performance issues at the moment; if you find anything
>>> specific you can submit patches...

>>> thanks
>>> -- PMM
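
On Barak's offload question above: inspecting and toggling offloads on the
tap looks roughly like the following sketch (tap0 is the device name from
the thread; nothing in this exchange establishes that these toggles fix the
throughput):

  # Inspect offload state on the tap device:
  ethtool -k tap0

  # Attempt to enable checksum/scatter-gather/TSO offloads; the driver may
  # refuse settings the backing path does not support:
  ethtool -K tap0 tx on sg on tso on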



