* [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Barak Wasserstrom @ 2014-01-09 12:25 UTC
To: qemu-devel; +Cc: peter.maydell, pawel.moll
Hi,
I would like to use virtio-net and vhost_net on an ARM Cortex-A15 machine with
qemu-system-arm & KVM.
I have a few questions:
1. Do I need to build qemu-system-arm myself, or can I just apt-get install it? When I
install it via apt-get I get: "KVM not supported for this target. "kvm"
accelerator does not exist. No accelerator found!".
2. Do I need to run qemu-system-arm directly, or through virsh? Does it
matter?
3. Must I use a machine that has a PCI controller? If so, which machine
supports one? I saw that 'virt' and 'vexpress' don't.
Thanks in advance,
Barak
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Peter Maydell @ 2014-01-12 21:15 UTC
To: Barak Wasserstrom; +Cc: QEMU Developers, Pawel Moll
On 9 January 2014 12:25, Barak Wasserstrom <wbarak@gmail.com> wrote:
> Hi,
> I would like to utilize virtio-net and vhost_net on an ARM Cortex A15
> machine using qemu-system-arm & KVM.
> I have few questions:
> 1. Do i need to build qemu-system-arm myself, or apt-get install it? When i
> apt-get install it i get "KVM not supported for this target. "kvm"
> accelerator does not exist. No accelerator found!".
This sounds like either:
(1) you're using too old a version of QEMU and need a newer one
(2) you configured QEMU without KVM support
Provided you have QEMU 1.6 or later it shouldn't matter whose
version you're using.
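(If you do end up building from source, a minimal sketch of a build with KVM
enabled - the exact flags are not given in this thread, so treat them as
illustrative:

  ./configure --target-list=arm-softmmu --enable-kvm
  make -j4

A binary built this way should accept -enable-kvm on a KVM-capable ARM host.)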
> 2. Do i need to execute qemu-system-arm directly or through virsh? Does it
> matter?
I know nothing about virsh but I don't expect it matters. It's
probably easier to get things working by running qemu-system-arm
directly first, before you try to work out how to get virsh to start
qemu with the correct arguments.
> 3. Must i use a machine that supports PCI controller or not? And if so,
> which machine supports it? I saw that 'virt' and 'vexpress' don't support
> it.
No. For KVM to work you need to use an A15 guest CPU; there
are no A15 boards in QEMU which have a PCI controller. So
instead you have to use the vexpress-a15 or virt machine's
virtio-mmio support. Note that generally the command line syntax
for this is different from that used by x86: you need to create
virtio-*-device devices, not virtio-* or virtio-*-pci devices, and you
can't rely on shorthands like if=virtio. So for instance for a block
device you need
-drive if=none,file=root,id=foo -device virtio-blk-device,drive=foo
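(By analogy, a tap-backed network device would look something like the
following - a sketch only, with net0 and the script path as placeholders:

  -netdev tap,id=net0,script=/path/to/net.sh,downscript=no \
  -device virtio-net-device,netdev=net0

The same pattern - a backend plus a virtio-*-device front end - applies to the
other virtio device types.)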
thanks
-- PMM
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Barak Wasserstrom @ 2014-01-12 21:49 UTC
To: Peter Maydell; +Cc: QEMU Developers, Pawel Moll
Peter,
Thanks - I have virtio-net-device running now, but performance is terrible.
When I look at the guest's Ethernet interface features (ethtool -k eth0), I
see that all offload features are disabled.
I'm using a virtual tap on the host (tap0 bridged to eth3). On the tap I also
see all offload features disabled, while br0 and eth3 show the expected
offload features.
Can this explain the terrible performance I'm seeing? If so, how can it be
changed? If not, what else could cause such bad performance?
Do you know whether vhost_net can be used on an ARM Cortex-A15 host/guest,
even though the guest doesn't support PCI and MSI-X?
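(For reference, the offload flags can be inspected and toggled from the host
side with ethtool; a minimal sketch, assuming the tap is named tap0 as above:

  ethtool -k tap0                      # list the current offload flags on the tap
  ethtool -K tap0 tx on sg on tso on   # try to enable TX checksumming, scatter-gather and TSO

Whether these can actually be switched on depends on what the virtio-net
device and the guest driver negotiate, so ethtool may report some of them as
fixed.)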
Regards,
Barak
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Peter Maydell @ 2014-01-12 22:00 UTC
To: Barak Wasserstrom; +Cc: QEMU Developers, Pawel Moll
On 12 January 2014 21:49, Barak Wasserstrom <wbarak@gmail.com> wrote:
> Thanks - I got virtio-net-device running now, but performance is terrible.
> When i look at the guest's ethernet interface features (ethtool -k eth0) i
> see all offload features are disabled.
> I'm using a virtual tap on the host (tap0 bridged to eth3).
> On the tap i also see all offload features are disabled, while on br0 and
> eth3 i see the expected offload features.
> Can this explain the terrible performance i'm facing?
> If so, how can this be changed?
> If not, what else can cause such bad performance?
> Do you know if vhost_net can be used on ARM Cortex A15 host/guest, even
> though the guest doesn't support PCI & MSIX?
I have no idea, I'm afraid. I don't have enough time available to
investigate performance issues at the moment; if you find anything
specific you can submit patches...
thanks
-- PMM
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Ying-Shiuan Pan @ 2014-01-13 3:47 UTC
To: Barak Wasserstrom; +Cc: Peter Maydell, QEMU Developers, Pawel Moll
Hi Barak,
We've tried vhost-net with kvm-arm on an Arndale Exynos-5250 board (it requires
some patches in QEMU and KVM, of course). It works (without irqfd support),
but the performance does not increase much: the iperf throughput of
virtio-net and vhost-net is 93.5 Mbps and 93.6 Mbps respectively. I think that
is because both virtio-net and vhost-net are already close to the limit of the
100 Mbps Ethernet.
The good news is that we also ported vhost-net into our kvm-a9 hypervisor
(see:
http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor),
and the vhost-net throughput on that platform (with 1 Gbps Ethernet)
increased from 323 Mbps to 435 Mbps.
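(For reference, a measurement like the one above is typically taken with a
plain TCP iperf run - a sketch, assuming classic iperf2 and a hypothetical
host address of 192.168.101.1; the exact invocation used is not stated in the
thread:

  iperf -s                        # on the host, acting as server
  iperf -c 192.168.101.1 -t 30    # in the guest, guest-to-host throughput over 30 seconds
)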
--
Ying-Shiuan Pan,
H Div., CCMA, ITRI, TW
----
Best Regards,
潘穎軒Ying-Shiuan Pan
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Barak Wasserstrom @ 2014-01-13 11:24 UTC
To: Ying-Shiuan Pan; +Cc: Peter Maydell, QEMU Developers, Pawel Moll
Ying-Shiuan Pan,
Your experiments with the Arndale Exynos-5250 board could help me greatly, and
I would really appreciate it if you could share the following information:
1. Which Linux kernel did you use for the host and for the guest?
2. Which Linux kernel patches did you use for KVM?
3. Which config files did you use for the host and the guest?
4. Which QEMU did you use?
5. Which QEMU patches did you use?
6. What is the exact command line you used to invoke the guest, with and
without vhost-net?
Many thanks in advance!
Regards,
Barak
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Ying-Shiuan Pan @ 2014-01-14 3:37 UTC
To: Barak Wasserstrom; +Cc: Peter Maydell, QEMU Developers, Pawel Moll
Hi Barak,
I hope the following information helps:
1.
HOST:
http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
branch: v3.10-arndale
config: arch/arm/configs/exynos5_arndale_defconfig
dtb: arch/arm/boot/dts/exynos5250-arndale.dtb
rootfs: Ubuntu 13.10
GUEST:
official kernel 3.12
config: arch/arm/configs/vexpress_defconfig with the virtio devices enabled
(a sketch of the relevant options follows item 6 below)
dtb: arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb
rootfs: Ubuntu 12.04
2.
We are still developing it and will try to open-source it as soon as possible.
The main purpose of that patch is to introduce ioeventfd support into kvm-arm.
3. As mentioned in 1.
4. qemu-1.6.0
5. We ported part of the guest/host notifier support of virtio-pci to virtio-mmio.
6.
/usr/bin/qemu-system-arm -enable-kvm -kernel /root/nfs/zImage -m 128 \
    --machine vexpress-a15 -cpu cortex-a15 \
    -drive file=/root/nfs/guest-1G-precise-vm1.img,id=virtio-blk,if=none,cache=none \
    -device virtio-blk-device,drive=virtio-blk \
    -append "earlyprintk=ttyAMA0 console=ttyAMA0 root=/dev/vda rw ip=192.168.101.101::192.168.101.1:vm1:eth0:off --no-log" \
    -dtb /root/nfs/vexpress-v2p-ca15-tc1.dtb --nographic \
    -chardev socket,id=mon,path=/root/vm1.monitor,server,nowait \
    -mon chardev=mon,id=monitor,mode=readline \
    -device virtio-net-device,netdev=net0,mac="52:54:00:12:34:01" \
    -netdev type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=off
vhost-net can be turned on by changing the last parameter to vhost=on.
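(A sketch of the virtio-related guest kernel options implied by
"vexpress_defconfig with the virtio devices enabled" in item 1 above - the
thread does not list the exact set, so treat this as illustrative:

  CONFIG_VIRTIO=y
  CONFIG_VIRTIO_MMIO=y
  CONFIG_VIRTIO_BLK=y
  CONFIG_VIRTIO_NET=y
  CONFIG_VIRTIO_CONSOLE=y
)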
--
Ying-Shiuan Pan,
H Div., CCMA, ITRI, TW
----
Best Regards,
潘穎軒Ying-Shiuan Pan
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Barak Wasserstrom @ 2014-01-14 11:11 UTC
To: Ying-Shiuan Pan; +Cc: Peter Maydell, QEMU Developers, Pawel Moll
Ying-Shiuan Pan,
Thanks again - please see a few questions below.
Regards,
Barak
On Tue, Jan 14, 2014 at 5:37 AM, Ying-Shiuan Pan <yingshiuan.pan@gmail.com> wrote:
> Hi, Barak,
>
> Hope the following info can help you
>
> 1.
> HOST:
> <http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git>
> http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
> branch: v3.10-arndale
> config: arch/arm/configs/exynos5_arndale_defconfig
> dtb: arch/arm/boot/dts/exynos5250-arndale.dtb
> rootfs: Ubuntu 13.10
>
> GUEST:
> Official 3.12
> config: arch/arm/configs/vexpress_defconfig with virtio-devices enabled
> dtb: arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb
> rootfs: Ubuntu 12.04
>
> 2.
> We are still developing it in progress and will try to open source asap.
> The main purpose of that patch is to introduce the ioeventfd into kvm-arm
>
[Barak] Do you have any estimate of when you might be able to release these
patches?
[Barak] Is this required for enabling vhost-net?
>
> 3. as mentioned in 1.
>
> 4. qemu-1.6.0
>
> 5. We ported part of guest/host notifiers of virtio-pci to virtio-mmio
>
[Barak] Any patches available for this?
[Barak] Is this required for enabling vhost-net?
>
> 6. /usr/bin/qemu-system-arm -enable-kvm -kernel /root/nfs/zImage -m 128
> --machine vexpress-a15 -cpu cortex-a15 -drive
> file=/root/nfs/guest-1G-precise-vm1.img,id=virtio-blk,if=none,cache=none
> -device virtio-blk-device,drive=virtio-blk -append "earlyprintk=ttyAMA0
> console=ttyAMA0 root=/dev/vda rw ip=192.168.101.101::192.168.101.1:vm1:eth0:off
> --no-log" -dtb /root/nfs/vexpress-v2p-ca15-tc1.dtb --nographic -chardev
> socket,id=mon,path=/root/vm1.monitor,server,nowait -mon
> chardev=mon,id=monitor,mode=readline -device
> virtio-net-device,netdev=net0,mac="52:54:00:12:34:01" -netdev
> type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=off
>
[Barak] Could you share "/root/nfs/net.sh" with me?
[Barak] In the guest I can see that eth0 has all offload features disabled
and they cannot be enabled. I suspect this is related to the tap configuration
on the host. Do you have any ideas?
>
> vhost-net could be truned on by changing the last parameter vhost=on.
>
[Barak] When enabling vhost I get errors from QEMU - do you know what the
reason might be?
[Barak] qemu-system-arm: binding does not support guest notifiers
[Barak] qemu-system-arm: unable to start vhost net: 38: falling back on
userspace virtio
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Ying-Shiuan Pan @ 2014-01-15 2:42 UTC
To: Barak Wasserstrom; +Cc: Peter Maydell, QEMU Developers, Pawel Moll
----
Best Regards,
潘穎軒Ying-Shiuan Pan
2014/1/14 Barak Wasserstrom <wbarak@gmail.com>
> Ying-Shiuan Pan,
> Thanks again - please see few questions below.
>
> Regards,
> Barak
>
>
> On Tue, Jan 14, 2014 at 5:37 AM, Ying-Shiuan Pan <yingshiuan.pan@gmail.com
> > wrote:
>
>> Hi, Barak,
>>
>> Hope the following info can help you
>>
>> 1.
>> HOST:
>> <http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git>
>> http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
>> branch: v3.10-arndale
>> config: arch/arm/configs/exynos5_arndale_defconfig
>> dtb: arch/arm/boot/dts/exynos5250-arndale.dtb
>> rootfs: Ubuntu 13.10
>>
>> GUEST:
>> Official 3.12
>> config: arch/arm/configs/vexpress_defconfig with virtio-devices enabled
>> dtb: arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb
>> rootfs: Ubuntu 12.04
>>
>> 2.
>> We are still developing it in progress and will try to open source asap.
>> The main purpose of that patch is to introduce the ioeventfd into kvm-arm
>>
> [Barak] Do you have any estimation about when you can release these
> patches?
>
Actually, no - I will discuss the release plan with my boss.
> [Barak] Is this required for enabling vhost-net?
>
Yes - vhost-net relies on ioeventfd to receive kick requests from the
front-end driver.
>
>>
>> 3. as mentioned in 1.
>>
>> 4. qemu-1.6.0
>>
>> 5. We ported part of guest/host notifiers of virtio-pci to virtio-mmio
>>
> [Barak] Any patches available for this?
>
I have not seen any, but somebody else might also be developing this.
> [Barak] Is this required for enabling vhost-net?
>
Yes. Without those notifiers, you will see the error messages you
mentioned below.
>
>
>>
>> 6. /usr/bin/qemu-system-arm -enable-kvm -kernel /root/nfs/zImage -m 128
>> --machine vexpress-a15 -cpu cortex-a15 -drive
>> file=/root/nfs/guest-1G-precise-vm1.img,id=virtio-blk,if=none,cache=none
>> -device virtio-blk-device,drive=virtio-blk -append "earlyprintk=ttyAMA0
>> console=ttyAMA0 root=/dev/vda rw ip=192.168.101.101::192.168.101.1:vm1:eth0:off
>> --no-log" -dtb /root/nfs/vexpress-v2p-ca15-tc1.dtb --nographic -chardev
>> socket,id=mon,path=/root/vm1.monitor,server,nowait -mon
>> chardev=mon,id=monitor,mode=readline -device
>> virtio-net-device,netdev=net0,mac="52:54:00:12:34:01" -netdev
>> type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=off
>>
> [Barak] Could you share "/root/nfs/net.sh" with me?
>
Sorry, I forgot that.
---------------
#!/bin/sh
ifconfig $1 0.0.0.0
brctl addif virbr0 $1
---------------
virbr0 is a bridge created manually; the setup steps for virbr0 are:
brctl addbr virbr0
brctl addif virbr0 eth0
ifconfig virbr0 [ETH0_IP]
ifconfig eth0 0.0.0.0
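(For reference, an equivalent bridge setup with the iproute2 tools - a sketch
only, assuming a /24 netmask; the thread itself uses brctl and ifconfig:

  ip link add virbr0 type bridge
  ip link set eth0 master virbr0
  ip addr add [ETH0_IP]/24 dev virbr0
  ip link set virbr0 up
)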
> [Barak] In the guest I can see that eth0 has all offload features disabled
> and cannot be enabled. I suspect this is related to the tap configuration
> in the host. Do you have any ideas?
>
>
>>
>> vhost-net could be truned on by changing the last parameter vhost=on.
>>
> [Barak] When enabling vhost i get errors in qemu, do you know what might
> be the reason?
> [Barak] qemu-system-arm: binding does not support guest notifiers
> [Barak] qemu-system-arm: unable to start vhost net: 38: falling back on
> userspace virtio
>
QEMU requires host/guest notifiers to set up vhost-net, but virtio-mmio does
not support them yet.
That's why you got those error messages.
* Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM
From: Barak Wasserstrom @ 2014-01-16 7:29 UTC
To: Ying-Shiuan Pan; +Cc: Peter Maydell, QEMU Developers, Pawel Moll
Ying-Shiuan Pan,
Thanks again - a few questions:
1. Could you address my question about the tap offload features? In the guest
I can see that eth0 has all offload features disabled and they cannot be
enabled. I suspect this is related to the tap configuration on the host.
2. I can see that virtio-net notifies KVM on each received packet, even
though the guest implements NAPI. This causes a lot of switches from user
space to the hypervisor. Isn't there support for RX packet coalescing in
QEMU's virtio-net?
3. What are your best TX and RX iperf results today on a Cortex-A15?
Regards,
Barak
Thread overview (10 messages):
2014-01-09 12:25 [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm & KVM Barak Wasserstrom
2014-01-12 21:15 ` Peter Maydell
2014-01-12 21:49 ` Barak Wasserstrom
2014-01-12 22:00 ` Peter Maydell
2014-01-13 3:47 ` Ying-Shiuan Pan
2014-01-13 11:24 ` Barak Wasserstrom
2014-01-14 3:37 ` Ying-Shiuan Pan
2014-01-14 11:11 ` Barak Wasserstrom
2014-01-15 2:42 ` Ying-Shiuan Pan
2014-01-16 7:29 ` Barak Wasserstrom