From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([208.118.235.92]:43174) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UEcTw-0003KQ-Jz for qemu-devel@nongnu.org; Sun, 10 Mar 2013 05:23:55 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1UEcTt-0003YF-Fb for qemu-devel@nongnu.org; Sun, 10 Mar 2013 05:23:52 -0400
Received: from mx1.redhat.com ([209.132.183.28]:42832) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1UEcTt-0003Y9-7s for qemu-devel@nongnu.org; Sun, 10 Mar 2013 05:23:49 -0400
Date: Sun, 10 Mar 2013 11:24:02 +0200
From: "Michael S. Tsirkin"
Message-ID: <20130310092402.GB8163@redhat.com>
References: <513596D7.3060307@ozlabs.ru> <20130305125614.GA2256@redhat.com> <5135F16B.2010403@ozlabs.ru> <20130305142336.GE2256@redhat.com> <51367864.6080502@ozlabs.ru> <20130306103135.GD16921@redhat.com> <51396D84.3090404@ozlabs.ru>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <51396D84.3090404@ozlabs.ru>
Subject: Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
To: Alexey Kardashevskiy
Cc: rusty@rustcorp.com.au, "qemu-devel@nongnu.org", David Gibson

On Fri, Mar 08, 2013 at 03:48:04PM +1100, Alexey Kardashevskiy wrote:
> Michael,
>
> Thanks for the fix.
>
> There was another question which was lost in the thread.
>
> I am testing virtio-net in two ways:
>
> Old -net interface:
> -net tap,ifname=tap0,script=qemu-ifup.sh \
> -net nic,model=virtio,addr=0:0:0
>
> (qemu) info network
> hub 0
>  \ virtio-net-pci.0: index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>  \ tap.0: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
>
> New -netdev interface:
> -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
> -device virtio-net-pci,netdev=tapnet,addr=0.0
>
> (qemu) info network
> virtio-net-pci.0: index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
>  \ tapnet: index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
>
> I get very different virtio0 device features and speed (70MB/s vs.
> 700MB/s). I guess somehow the "hub 0" is responsible, but there is no
> way to avoid it.
>
> Is there any way to speed up virtio-net using the old -net interface?

Not at the moment. Why do you want to use it?

> On 06/03/13 21:31, Michael S. Tsirkin wrote:
> >On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
> >>On 06/03/13 01:23, Michael S. Tsirkin wrote:
> >>>On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
> >>>>On 05/03/13 23:56, Michael S. Tsirkin wrote:
> >>>>>>The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
> >>>>>>masking support" breaks virtio-net + vhost=on on the PPC64 platform.
> >>>>>>
> >>>>>>The problem command line is:
> >>>>>>1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
> >>>>>>-device virtio-net-pci,netdev=tapnet,addr=0.0 \
> >>>>>
> >>>>>I think the issue is that irqfd is not supported on KVM PPC.
> >>>>
> >>>>How can I make sure this is the case? Some work has been done there
> >>>>recently but midnight is quite late to figure this out :)
> >>>
> >>>Look in virtio_pci_set_guest_notifiers, what is the
> >>>value of with_irqfd?
> >>>    bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
> >>>        kvm_msi_via_irqfd_enabled();
> >>>
> >>>Also check what each of the values in the expression above is.
> >>
> >>Yes, ppc does not have irqfd, as kvm_msi_via_irqfd_enabled() returned "false".
> >>
> >>>>>Could you please check this:
> >>>>>
> >>>>>+        /* If guest supports masking, set up irqfd now.
> >>>>>+         * Otherwise, delay until unmasked in the frontend.
> >>>>>+         */
> >>>>>+        if (proxy->vdev->guest_notifier_mask) {
> >>>>>+            ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
> >>>>>+            if (ret < 0) {
> >>>>>+                kvm_virtio_pci_vq_vector_release(proxy, vector);
> >>>>>+                goto undo;
> >>>>>+            }
> >>>>>+        }
> >>>>>
> >>>>>Could you please add a printf before "undo" and check whether the
> >>>>>error path above is triggered?
> >>>>
> >>>>Checked, it is not triggered.
> >>>>
> >>>>--
> >>>>Alexey
> >>>
> >>>I think I get it.
> >>>Does the following help (probably not the right thing to do, but just
> >>>for testing):
> >>
> >>It did not compile (no "queue_no") :) I changed it a bit and now
> >>vhost=on works fine:
> >>
> >>diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> >>index a869f53..df1e443 100644
> >>--- a/hw/virtio-pci.c
> >>+++ b/hw/virtio-pci.c
> >>@@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
> >>         if (r < 0) {
> >>             goto assign_error;
> >>         }
> >>+
> >>+        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
> >>+            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
> >>+        }
> >>     }
> >>
> >>     /* Must set vector notifier after guest notifier has been assigned */
> >>
> >
> >I see, OK. The issue is that vhost now starts in a masked state
> >and no one unmasks it. While the patch will work, I think, it does
> >not benefit from backend masking; the right thing to do is to add
> >mask notifiers, like what the irqfd path does.
> >
> >Will look into this, thanks.

--
Alexey