* [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
@ 2013-03-05  6:55 Alexey Kardashevskiy
  2013-03-05 12:56 ` Michael S. Tsirkin
  0 siblings, 1 reply; 9+ messages in thread
From: Alexey Kardashevskiy @ 2013-03-05  6:55 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

Hi!

The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend masking 
support" breaks virtio-net + vhost=on on the PPC64 platform.

The problem command line is:
1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
-device virtio-net-pci,netdev=tapnet,addr=0.0 \

Without the patch, eth0 in the guest works fine; with the patch it 
simply does not. The guest's eth0 also works with the following configs:

2) new -netdev interface with vhost=off:
-netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
-device virtio-net-pci,netdev=tapnet,addr=0.0

3) old -net interface with vhost=on:
-net tap,ifname=tap0,script=qemu-ifup.sh,vhost=on \
-net nic,model=virtio,addr=0:0:0

4) old -net interface with vhost=off:
-net tap,ifname=tap0,script=qemu-ifup.sh \
-net nic,model=virtio,addr=0:0:0

I ran http://junkcode.samba.org/ftp/unpacked/junkcode/socklib/ over
10Gb Ethernet and observed 1020MB/s for 1) (without the patch),
800MB/s for 2), and 70MB/s for 3) and 4).

The virtio features (cat /sys/bus/virtio/devices/virtio0/features)
for 1) and 2) are:
"1100011111111111111100000000110000000000000000000000000000000000"
and for 3) and 4) they are:
"0000011000000001111100000000110000000000000000000000000000000000"


I guess this is because the old -net interface creates an internal hub:
"info qtree" shows vlan=0 and netdev=hub0port1, while the new -netdev
interface does not seem to create any internal hub (vlan=<null>,
netdev=tapnet). By the way, why are the configs so different?

The network config is below. Both host and guest are running a 3.8 kernel.
The QEMU tree from qemu.org/master still has this problem.


What am I missing? Thanks.


The full command line is the one below, plus the network config from
the examples above:

sudo qemu-impreza/ppc64-softmmu/qemu-system-ppc64 -m 1024 \
-machine pseries,kernel_irqchip=on -trace events=trace_events \
-nographic -vga none -enable-kvm -kernel vml38_64k -initrd 1.cpio


This is the host config:

[aik@vpl2 ~]$ cat qemu-ifup.sh
#! /bin/sh

/sbin/ifconfig $1 0.0.0.0 promisc up
/usr/sbin/brctl addif brtest $1

[aik@vpl2 ~]$ brctl show
bridge name	bridge id		STP enabled	interfaces
brtest		8000.00145e992e88	no		eth0
[aik@vpl2 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet6 fe80::214:5eff:fe99:2e88  prefixlen 64  scopeid 0x20<link>
         ether 00:14:5e:99:2e:88  txqueuelen 1000  (Ethernet)
         RX packets 1781219  bytes 124692636 (118.9 MiB)
         RX errors 0  dropped 0  overruns 0  frame 0
         TX packets 13734906  bytes 20755102658 (19.3 GiB)
         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
         device interrupt 49  memory 0x3c0500800000-3c0500800fff

[aik@vpl2 ~]$ lspci -vs 1:1:0.0
0001:01:00.0 Ethernet controller: Chelsio Communications Inc T310 10GbE 
Single Port Adapter
	Subsystem: IBM Device 038c
	Flags: bus master, fast devsel, latency 0, IRQ 49
	Memory at 3c0500800000 (64-bit, non-prefetchable) [size=4K]
	Memory at 3c0500000000 (64-bit, non-prefetchable) [size=8M]
	Memory at 3c0500801000 (64-bit, non-prefetchable) [size=4K]
	[virtual] Expansion ROM at 3c0500c00000 [disabled] [size=512K]
	Capabilities: <access denied>
	Kernel driver in use: cxgb3

[aik@vpl2 ~]$ ls -l /sys/bus/pci/devices/0001\:01\:00.0/net/
total 0
drwxr-xr-x. 5 root root 0 Mar  5 12:59 eth0
[aik@vpl2 ~]$ uname -a
Linux vpl2.ozlabs.ibm.com 3.8.0-kvm-64k-aik+ #239 SMP Tue Mar 5 12:50:05 
EST 2013 ppc64 ppc64 ppc64 GNU/Linux


This is the running guest:

root@erif_root:~# lspci -v
00:00.0 Ethernet controller: Qumranet, Inc. Virtio network device
	Subsystem: Qumranet, Inc. Device 0001
	Flags: bus master, fast devsel, latency 0, IRQ 19
	I/O ports at 0020 [size=32]
	Memory at 100b0000000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at 100b0010000 [disabled] [size=64K]
	Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
	Kernel driver in use: virtio-pci

root@erif_root:~# ifconfig -a
eth0      Link encap:Ethernet  HWaddr 52:54:00:12:34:56
           inet addr:172.20.1.2  Bcast:172.20.255.255  Mask:255.255.0.0
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:36 errors:0 dropped:6 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:2268 (2.2 KiB)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           UP LOOPBACK RUNNING  MTU:65536  Metric:1
           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

root@erif_root:~# uname -a
Linux erif_root 3.8.0-aik-guest+ #262 SMP Mon Mar 4 15:58:55 EST 2013 ppc64 
GNU/Linux


-- 
Alexey


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-05  6:55 [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug Alexey Kardashevskiy
@ 2013-03-05 12:56 ` Michael S. Tsirkin
  2013-03-05 13:21   ` Alexey Kardashevskiy
  0 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2013-03-05 12:56 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

On Tue, Mar 05, 2013 at 05:55:19PM +1100, Alexey Kardashevskiy wrote:
> Hi!
> 
> The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> masking support" breaks virtio-net + vhost=on on PPC64 platform.
> 
> The problem command line is:
> 1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> -device virtio-net-pci,netdev=tapnet,addr=0.0 \

I think the issue is that irqfd is not supported on kvm ppc.

Could you please check this:

+        /* If guest supports masking, set up irqfd now.
+         * Otherwise, delay until unmasked in the frontend.
+         */
+        if (proxy->vdev->guest_notifier_mask) {
+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
+            if (ret < 0) {
+                kvm_virtio_pci_vq_vector_release(proxy, vector);
+                goto undo;
+            }
+        }


Could you please add a printf before "undo" and check whether the
error path above is triggered?
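
Something like this would do (just an illustration, the exact message
does not matter):

        if (proxy->vdev->guest_notifier_mask) {
            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
            if (ret < 0) {
                /* temporary debugging aid */
                fprintf(stderr, "%s: irqfd_use failed for vq %d, vector %d: %d\n",
                        __func__, queue_no, vector, ret);
                kvm_virtio_pci_vq_vector_release(proxy, vector);
                goto undo;
            }
        }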


> Without the patch, the eth0 in the guest works fine, with the patch
> it simply does not. The guest's eth0 also works with the following
> configs:
> 
> 2) new -netdev interface with vhost=off:
> -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
> -device virtio-net-pci,netdev=tapnet,addr=0.0
> 
> 3) old -net interface with vhost=on:
> -net tap,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> -net nic,model=virtio,addr=0:0:0
> 
> 4) old -net interface with vhost=off:
> -net tap,ifname=tap0,script=qemu-ifup.sh \
> -net nic,model=virtio,addr=0:0:0
> 
> I run http://junkcode.samba.org/ftp/unpacked/junkcode/socklib/ on
> 10Gb ethernet and observe 1020MB/s for 1) (without the patch),
> 800MB/s for 2), 70MB/s for 3) and 4).
> 
> The virtio features (cat /sys/bus/virtio/devices/virtio0/features)
> for 1) and 2) are:
> "1100011111111111111100000000110000000000000000000000000000000000"
> and for 3) and 4) they are:
> "0000011000000001111100000000110000000000000000000000000000000000"
> 
> 
> I guess this is because the old -net interface creates
> an internal hub as "info qtree" shows vlan=0 and netdev=hub0port1
> while the new -netdev interface does not seem to create any internal
> hub (vlan=<null>, netdev=tapnet). btw why are the configs so different?
> 
> The network config is below. Both host and guest are running 3.8 kernel.
> The qemu tree from qemu.org/master still has this problem.
> 
> 
> What am I missing? Thanks.
> 
> 
> The full command line is like below plus the network config from
> the examples above:
> 
> sudo qemu-impreza/ppc64-softmmu/qemu-system-ppc64 -m 1024 -machine
> pseries,kernel_irqchip=on -trace events=trace_events \
> -nographic -vga none -enable-kvm -kernel vml38_64k -initrd 1.cpio
> 
> 
> This is the host config:
> 
> [aik@vpl2 ~]$ cat qemu-ifup.sh
> #! /bin/sh
> 
> /sbin/ifconfig $1 0.0.0.0 promisc up
> /usr/sbin/brctl addif brtest $1
> 
> [aik@vpl2 ~]$ brctl show
> bridge name	bridge id		STP enabled	interfaces
> brtest		8000.00145e992e88	no		eth0
> [aik@vpl2 ~]$ ifconfig eth0
> eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>         inet6 fe80::214:5eff:fe99:2e88  prefixlen 64  scopeid 0x20<link>
>         ether 00:14:5e:99:2e:88  txqueuelen 1000  (Ethernet)
>         RX packets 1781219  bytes 124692636 (118.9 MiB)
>         RX errors 0  dropped 0  overruns 0  frame 0
>         TX packets 13734906  bytes 20755102658 (19.3 GiB)
>         TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
>         device interrupt 49  memory 0x3c0500800000-3c0500800fff
> 
> [aik@vpl2 ~]$ lspci -vs 1:1:0.0
> 0001:01:00.0 Ethernet controller: Chelsio Communications Inc T310
> 10GbE Single Port Adapter
> 	Subsystem: IBM Device 038c
> 	Flags: bus master, fast devsel, latency 0, IRQ 49
> 	Memory at 3c0500800000 (64-bit, non-prefetchable) [size=4K]
> 	Memory at 3c0500000000 (64-bit, non-prefetchable) [size=8M]
> 	Memory at 3c0500801000 (64-bit, non-prefetchable) [size=4K]
> 	[virtual] Expansion ROM at 3c0500c00000 [disabled] [size=512K]
> 	Capabilities: <access denied>
> 	Kernel driver in use: cxgb3
> 
> [aik@vpl2 ~]$ ls -l /sys/bus/pci/devices/0001\:01\:00.0/net/
> total 0
> drwxr-xr-x. 5 root root 0 Mar  5 12:59 eth0
> [aik@vpl2 ~]$ uname -a
> Linux vpl2.ozlabs.ibm.com 3.8.0-kvm-64k-aik+ #239 SMP Tue Mar 5
> 12:50:05 EST 2013 ppc64 ppc64 ppc64 GNU/Linux
> 
> 
> This is the running guest:
> 
> root@erif_root:~# lspci -v
> 00:00.0 Ethernet controller: Qumranet, Inc. Virtio network device
> 	Subsystem: Qumranet, Inc. Device 0001
> 	Flags: bus master, fast devsel, latency 0, IRQ 19
> 	I/O ports at 0020 [size=32]
> 	Memory at 100b0000000 (32-bit, non-prefetchable) [size=4K]
> 	Expansion ROM at 100b0010000 [disabled] [size=64K]
> 	Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
> 	Kernel driver in use: virtio-pci
> 
> root@erif_root:~# ifconfig -a
> eth0      Link encap:Ethernet  HWaddr 52:54:00:12:34:56
>           inet addr:172.20.1.2  Bcast:172.20.255.255  Mask:255.255.0.0
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:36 errors:0 dropped:6 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:2268 (2.2 KiB)  TX bytes:0 (0.0 B)
> 
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> 
> root@erif_root:~# uname -a
> Linux erif_root 3.8.0-aik-guest+ #262 SMP Mon Mar 4 15:58:55 EST
> 2013 ppc64 GNU/Linux
> 
> 
> -- 
> Alexey


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-05 12:56 ` Michael S. Tsirkin
@ 2013-03-05 13:21   ` Alexey Kardashevskiy
  2013-03-05 14:23     ` Michael S. Tsirkin
  0 siblings, 1 reply; 9+ messages in thread
From: Alexey Kardashevskiy @ 2013-03-05 13:21 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

On 05/03/13 23:56, Michael S. Tsirkin wrote:
>> The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
>> masking support" breaks virtio-net + vhost=on on PPC64 platform.
>>
>> The problem command line is:
>> 1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
>> -device virtio-net-pci,netdev=tapnet,addr=0.0 \
>
> I think the issue is that irqfd is not supported on kvm ppc.

How can I make sure this is the case? Some work has been done there
recently but midnight is quite late to figure this out :)


> Could you please check this:
>
> +        /* If guest supports masking, set up irqfd now.
> +         * Otherwise, delay until unmasked in the frontend.
> +         */
> +        if (proxy->vdev->guest_notifier_mask) {
> +            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> +            if (ret < 0) {
> +                kvm_virtio_pci_vq_vector_release(proxy, vector);
> +                goto undo;
> +            }
> +        }
>
>
> Could you please add a printf before "undo" and check whether the
> error path above is triggered?


Checked, it is not triggered.


-- 
Alexey


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-05 13:21   ` Alexey Kardashevskiy
@ 2013-03-05 14:23     ` Michael S. Tsirkin
  2013-03-05 22:57       ` Alexey Kardashevskiy
  0 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2013-03-05 14:23 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
> On 05/03/13 23:56, Michael S. Tsirkin wrote:
> >>The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> >>masking support" breaks virtio-net + vhost=on on PPC64 platform.
> >>
> >>The problem command line is:
> >>1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> >>-device virtio-net-pci,netdev=tapnet,addr=0.0 \
> >
> >I think the issue is that irqfd is not supported on kvm ppc.
> 
> How can I make sure this is the case? Some work has been done there
> recently but midnight is quite late to figure this out :)

Look in virtio_pci_set_guest_notifiers, what is the
value of with_irqfd?
  bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
        kvm_msi_via_irqfd_enabled();

Also check what each of the values in the expression above is.
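
For example, a throwaway print right after that declaration would show
all three values (illustration only):

  /* temporary debugging aid */
  fprintf(stderr, "%s: msix_enabled=%d kvm_msi_via_irqfd_enabled=%d with_irqfd=%d\n",
          __func__, msix_enabled(&proxy->pci_dev),
          kvm_msi_via_irqfd_enabled(), with_irqfd);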


> 
> >Could you please check this:
> >
> >+        /* If guest supports masking, set up irqfd now.
> >+         * Otherwise, delay until unmasked in the frontend.
> >+         */
> >+        if (proxy->vdev->guest_notifier_mask) {
> >+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> >+            if (ret < 0) {
> >+                kvm_virtio_pci_vq_vector_release(proxy, vector);
> >+                goto undo;
> >+            }
> >+        }
> >
> >
> >Could you please add a printf before "undo" and check whether the
> >error path above is triggered?
> 
> 
> Checked, it is not triggered.
> 
> 
> -- 
> Alexey

I think I get it.
Does the following help (probably not the right thing to do, but just
for testing):

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

---

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index ba56ab2..c2a0c5a 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -800,6 +800,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
         }
     }
 
+    if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
+        proxy->vdev->guest_notifier_mask(proxy->vdev, queue_no, !assign);
+    }
+
     /* Must set vector notifier after guest notifier has been assigned */
     if (with_irqfd && assign) {
         proxy->vector_irqfd =


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-05 14:23     ` Michael S. Tsirkin
@ 2013-03-05 22:57       ` Alexey Kardashevskiy
  2013-03-06 10:31         ` Michael S. Tsirkin
  0 siblings, 1 reply; 9+ messages in thread
From: Alexey Kardashevskiy @ 2013-03-05 22:57 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

On 06/03/13 01:23, Michael S. Tsirkin wrote:
> On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
>> On 05/03/13 23:56, Michael S. Tsirkin wrote:
>>>> The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
>>>> masking support" breaks virtio-net + vhost=on on PPC64 platform.
>>>>
>>>> The problem command line is:
>>>> 1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
>>>> -device virtio-net-pci,netdev=tapnet,addr=0.0 \
>>>
>>> I think the issue is that irqfd is not supported on kvm ppc.
>>
>> How can I make sure this is the case? Some work has been done there
>> recently but midnight is quite late to figure this out :)
>
> Look in virtio_pci_set_guest_notifiers, what is the
> value of with_irqfd?
>    bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
>          kvm_msi_via_irqfd_enabled();
>
> Also check what each of the values in the expression above is.

Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".

>>> Could you please check this:
>>>
>>> +        /* If guest supports masking, set up irqfd now.
>>> +         * Otherwise, delay until unmasked in the frontend.
>>> +         */
>>> +        if (proxy->vdev->guest_notifier_mask) {
>>> +            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
>>> +            if (ret < 0) {
>>> +                kvm_virtio_pci_vq_vector_release(proxy, vector);
>>> +                goto undo;
>>> +            }
>>> +        }
>>>
>>>
>>> Could you please add a printf before "undo" and check whether the
>>> error path above is triggered?
>>
>>
>> Checked, it is not triggered.
>>
>>
>> --
>> Alexey
>
> I think I get it.
> Does the following help (probably not the right thing to do, but just
> for testing):


It did not compile (no "queue_no") :) I changed it a bit and now vhost=on 
works fine:

diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index a869f53..df1e443 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
          if (r < 0) {
              goto assign_error;
          }
+
+        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
+            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
+        }
      }

      /* Must set vector notifier after guest notifier has been assigned */




> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>
> ---
>
> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> index ba56ab2..c2a0c5a 100644
> --- a/hw/virtio-pci.c
> +++ b/hw/virtio-pci.c
> @@ -800,6 +800,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>           }
>       }
>
> +    if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> +        proxy->vdev->guest_notifier_mask(proxy->vdev, queue_no, !assign);
> +    }
> +
>       /* Must set vector notifier after guest notifier has been assigned */
>       if (with_irqfd && assign) {
>           proxy->vector_irqfd =
>


-- 
Alexey


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-05 22:57       ` Alexey Kardashevskiy
@ 2013-03-06 10:31         ` Michael S. Tsirkin
  2013-03-08  4:48           ` Alexey Kardashevskiy
  0 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2013-03-06 10:31 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
> On 06/03/13 01:23, Michael S. Tsirkin wrote:
> >On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
> >>On 05/03/13 23:56, Michael S. Tsirkin wrote:
> >>>>The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> >>>>masking support" breaks virtio-net + vhost=on on PPC64 platform.
> >>>>
> >>>>The problem command line is:
> >>>>1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> >>>>-device virtio-net-pci,netdev=tapnet,addr=0.0 \
> >>>
> >>>I think the issue is that irqfd is not supported on kvm ppc.
> >>
> >>How can I make sure this is the case? Some work has been done there
> >>recently but midnight is quite late to figure this out :)
> >
> >Look in virtio_pci_set_guest_notifiers, what is the
> >value of with_irqfd?
> >   bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
> >         kvm_msi_via_irqfd_enabled();
> >
> >Also check what each of the values in the expression above is.
> 
> Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".
> 
> >>>Could you please check this:
> >>>
> >>>+        /* If guest supports masking, set up irqfd now.
> >>>+         * Otherwise, delay until unmasked in the frontend.
> >>>+         */
> >>>+        if (proxy->vdev->guest_notifier_mask) {
> >>>+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> >>>+            if (ret < 0) {
> >>>+                kvm_virtio_pci_vq_vector_release(proxy, vector);
> >>>+                goto undo;
> >>>+            }
> >>>+        }
> >>>
> >>>
> >>>Could you please add a printf before "undo" and check whether the
> >>>error path above is triggered?
> >>
> >>
> >>Checked, it is not triggered.
> >>
> >>
> >>--
> >>Alexey
> >
> >I think I get it.
> >Does the following help (probably not the right thing to do, but just
> >for testing):
> 
> 
> It did not compile (no "queue_no") :) I changed it a bit and now
> vhost=on works fine:
> 
> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> index a869f53..df1e443 100644
> --- a/hw/virtio-pci.c
> +++ b/hw/virtio-pci.c
> @@ -798,6 +798,10 @@ static int
> virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool
> assign)
>          if (r < 0) {
>              goto assign_error;
>          }
> +
> +        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> +            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
> +        }
>      }
> 
>      /* Must set vector notifier after guest notifier has been assigned */
> 
> 

I see, OK, the issue is that vhost now starts in a masked state
and no one unmasks it. While the patch will work, I think it does
not benefit from backend masking; the right thing to do is to add
mask notifiers, like what the irqfd path does.

Will look into this, thanks.

-- 
MST


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-06 10:31         ` Michael S. Tsirkin
@ 2013-03-08  4:48           ` Alexey Kardashevskiy
  2013-03-10  9:24             ` Michael S. Tsirkin
  0 siblings, 1 reply; 9+ messages in thread
From: Alexey Kardashevskiy @ 2013-03-08  4:48 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

Michael,

Thanks for the fix.

There was another question which was lost in the thread.

I am testing virtio-net in two ways:

Old -net interface:
-net tap,ifname=tap0,script=qemu-ifup.sh \
-net nic,model=virtio,addr=0:0:0

(qemu) info network
hub 0
  \ virtio-net-pci.0: index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
  \ tap.0: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown

New -netdev interface:
-netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
-device virtio-net-pci,netdev=tapnet,addr=0.0

(qemu) info network
virtio-net-pci.0: index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
  \ tapnet: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown


I get very different virtio0 device features and speed (70MB/s vs. 
700MB/s). I guess somehow the "hub 0" is responsible but there is no way to 
avoid it.

Is there any way to speed up the virtio-net using the old -net interface?



On 06/03/13 21:31, Michael S. Tsirkin wrote:
> On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
>> On 06/03/13 01:23, Michael S. Tsirkin wrote:
>>> On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
>>>> On 05/03/13 23:56, Michael S. Tsirkin wrote:
>>>>>> The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
>>>>>> masking support" breaks virtio-net + vhost=on on PPC64 platform.
>>>>>>
>>>>>> The problem command line is:
>>>>>> 1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
>>>>>> -device virtio-net-pci,netdev=tapnet,addr=0.0 \
>>>>>
>>>>> I think the issue is that irqfd is not supported on kvm ppc.
>>>>
>>>> How can I make sure this is the case? Some work has been done there
>>>> recently but midnight is quite late to figure this out :)
>>>
>>> Look in virtio_pci_set_guest_notifiers, what is the
>>> value of with_irqfd?
>>>    bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
>>>          kvm_msi_via_irqfd_enabled();
>>>
>>> Also check what each of the values in the expression above is.
>>
>> Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".
>>
>>>>> Could you please check this:
>>>>>
>>>>> +        /* If guest supports masking, set up irqfd now.
>>>>> +         * Otherwise, delay until unmasked in the frontend.
>>>>> +         */
>>>>> +        if (proxy->vdev->guest_notifier_mask) {
>>>>> +            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
>>>>> +            if (ret < 0) {
>>>>> +                kvm_virtio_pci_vq_vector_release(proxy, vector);
>>>>> +                goto undo;
>>>>> +            }
>>>>> +        }
>>>>>
>>>>>
>>>>> Could you please add a printf before "undo" and check whether the
>>>>> error path above is triggered?
>>>>
>>>>
>>>> Checked, it is not triggered.
>>>>
>>>>
>>>> --
>>>> Alexey
>>>
>>> I think I get it.
>>> Does the following help (probably not the right thing to do, but just
>>> for testing):
>>
>>
>> It did not compile (no "queue_no") :) I changed it a bit and now
>> vhost=on works fine:
>>
>> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
>> index a869f53..df1e443 100644
>> --- a/hw/virtio-pci.c
>> +++ b/hw/virtio-pci.c
>> @@ -798,6 +798,10 @@ static int
>> virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool
>> assign)
>>           if (r < 0) {
>>               goto assign_error;
>>           }
>> +
>> +        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
>> +            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
>> +        }
>>       }
>>
>>       /* Must set vector notifier after guest notifier has been assigned */
>>
>>
>
> I see, OK, the issue is that vhost now starts in a masked state
> and no one unmasks it. While the patch will work, I think it does
> not benefit from backend masking; the right thing to do is to add
> mask notifiers, like what the irqfd path does.
>
> Will look into this, thanks.
>


-- 
Alexey


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-08  4:48           ` Alexey Kardashevskiy
@ 2013-03-10  9:24             ` Michael S. Tsirkin
  2013-03-10 11:25               ` Alexey Kardashevskiy
  0 siblings, 1 reply; 9+ messages in thread
From: Michael S. Tsirkin @ 2013-03-10  9:24 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

On Fri, Mar 08, 2013 at 03:48:04PM +1100, Alexey Kardashevskiy wrote:
> Michael,
> 
> Thanks for the fix.
> 
> There was another question which was lost in the thread.
> 
> I am testing virtio-net in two ways:
> 
> Old -net interface:
> -net tap,ifname=tap0,script=qemu-ifup.sh \
> -net nic,model=virtio,addr=0:0:0
> 
> (qemu) info network
> hub 0
>  \ virtio-net-pci.0:
> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>  \ tap.0: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
> 
> New -netdev interface:
> -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
> -device virtio-net-pci,netdev=tapnet,addr=0.0
> 
> (qemu) info network
> virtio-net-pci.0:
> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>  \ tapnet: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
> 
> 
> I get very different virtio0 device features and speed (70MB/s vs.
> 700MB/s). I guess somehow the "hub 0" is responsible but there is no
> way to avoid it.
> 
> Is there any way to speed up the virtio-net using the old -net interface?

Not at the moment. Why do you want to use it?

> 
> On 06/03/13 21:31, Michael S. Tsirkin wrote:
> >On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
> >>On 06/03/13 01:23, Michael S. Tsirkin wrote:
> >>>On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
> >>>>On 05/03/13 23:56, Michael S. Tsirkin wrote:
> >>>>>>The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> >>>>>>masking support" breaks virtio-net + vhost=on on PPC64 platform.
> >>>>>>
> >>>>>>The problem command line is:
> >>>>>>1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> >>>>>>-device virtio-net-pci,netdev=tapnet,addr=0.0 \
> >>>>>
> >>>>>I think the issue is that irqfd is not supported on kvm ppc.
> >>>>
> >>>>How can I make sure this is the case? Some work has been done there
> >>>>recently but midnight is quite late to figure this out :)
> >>>
> >>>Look in virtio_pci_set_guest_notifiers, what is the
> >>>value of with_irqfd?
> >>>   bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
> >>>         kvm_msi_via_irqfd_enabled();
> >>>
> >>>Also check what each of the values in the expression above is.
> >>
> >>Yes, ppc does not have irqfd as kvm_msi_via_irqfd_enabled() returned "false".
> >>
> >>>>>Could you please check this:
> >>>>>
> >>>>>+        /* If guest supports masking, set up irqfd now.
> >>>>>+         * Otherwise, delay until unmasked in the frontend.
> >>>>>+         */
> >>>>>+        if (proxy->vdev->guest_notifier_mask) {
> >>>>>+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> >>>>>+            if (ret < 0) {
> >>>>>+                kvm_virtio_pci_vq_vector_release(proxy, vector);
> >>>>>+                goto undo;
> >>>>>+            }
> >>>>>+        }
> >>>>>
> >>>>>
> >>>>>Could you please add a printf before "undo" and check whether the
> >>>>>error path above is triggered?
> >>>>
> >>>>
> >>>>Checked, it is not triggered.
> >>>>
> >>>>
> >>>>--
> >>>>Alexey
> >>>
> >>>I think I get it.
> >>>Does the following help (probably not the right thing to do, but just
> >>>for testing):
> >>
> >>
> >>It did not compile (no "queue_no") :) I changed it a bit and now
> >>vhost=on works fine:
> >>
> >>diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> >>index a869f53..df1e443 100644
> >>--- a/hw/virtio-pci.c
> >>+++ b/hw/virtio-pci.c
> >>@@ -798,6 +798,10 @@ static int
> >>virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool
> >>assign)
> >>          if (r < 0) {
> >>              goto assign_error;
> >>          }
> >>+
> >>+        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> >>+            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
> >>+        }
> >>      }
> >>
> >>      /* Must set vector notifier after guest notifier has been assigned */
> >>
> >>
> >
> >I see, OK, the issue is that vhost now starts in a masked state
> >and no one unmasks it. While the patch will work, I think it does
> >not benefit from backend masking; the right thing to do is to add
> >mask notifiers, like what the irqfd path does.
> >
> >Will look into this, thanks.
> >
> 
> 
> -- 
> Alexey


* Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
  2013-03-10  9:24             ` Michael S. Tsirkin
@ 2013-03-10 11:25               ` Alexey Kardashevskiy
  0 siblings, 0 replies; 9+ messages in thread
From: Alexey Kardashevskiy @ 2013-03-10 11:25 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: rusty, qemu-devel@nongnu.org, David Gibson

On 10/03/13 20:24, Michael S. Tsirkin wrote:
> On Fri, Mar 08, 2013 at 03:48:04PM +1100, Alexey Kardashevskiy wrote:
>> Michael,
>>
>> Thanks for the fix.
>>
>> There was another question which was lost in the thread.
>>
>> I am testing virtio-net in two ways:
>>
>> Old -net interface:
>> -net tap,ifname=tap0,script=qemu-ifup.sh \
>> -net nic,model=virtio,addr=0:0:0
>>
>> (qemu) info network
>> hub 0
>>   \ virtio-net-pci.0:
>> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>>   \ tap.0: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
>>
>> New -netdev interface:
>> -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
>> -device virtio-net-pci,netdev=tapnet,addr=0.0
>>
>> (qemu) info network
>> virtio-net-pci.0:
>> index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>>   \ tapnet: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
>>
>>
>> I get very different virtio0 device features and speed (70MB/s vs.
>> 700MB/s). I guess somehow the "hub 0" is responsible but there is no
>> way to avoid it.
>>
>> Is there any way to speed up the virtio-net using the old -net interface?
>
> Not at the moment. Why do you want to use it?


It is not that I really want to use it; I was just trying to understand
(now it is more or less clear) exactly why not, since the documentation
describes -net and -netdev as synonyms while they are not.



-- 
Alexey

