From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: "Michael S. Tsirkin" <mst@redhat.com>
Cc: rusty@rustcorp.com.au,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
David Gibson <david@gibson.dropbear.id.au>
Subject: Re: [Qemu-devel] QEMU -netdev vhost=on + -device virtio-net-pci bug
Date: Fri, 08 Mar 2013 15:48:04 +1100 [thread overview]
Message-ID: <51396D84.3090404@ozlabs.ru> (raw)
In-Reply-To: <20130306103135.GD16921@redhat.com>
Michael,
Thanks for the fix.
There was another question that got lost in the thread.
I am testing virtio-net in two ways:
Old -net interface:
-net tap,ifname=tap0,script=qemu-ifup.sh \
-net nic,model=virtio,addr=0:0:0
(qemu) info network
hub 0
\ virtio-net-pci.0:
index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
\ tap.0:
index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
New -netdev interface:
-netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh \
-device virtio-net-pci,netdev=tapnet,addr=0.0
(qemu) info network
virtio-net-pci.0:
index=0,type=nic,model=virtio-net-pci,macaddr=52:54:00:12:34:56
\ tapnet:
index=0,type=tap,ifname=tap0,script=qemu-ifup.sh,downscript=/etc/qemu-ifdown
With these two configurations I get very different virtio device features and
very different throughput (70MB/s vs. 700MB/s). I suspect the "hub 0" is
responsible, but I see no way to avoid it with the old syntax.
Is there any way to speed up virtio-net when using the old -net interface?
On 06/03/13 21:31, Michael S. Tsirkin wrote:
> On Wed, Mar 06, 2013 at 09:57:40AM +1100, Alexey Kardashevskiy wrote:
>> On 06/03/13 01:23, Michael S. Tsirkin wrote:
>>> On Wed, Mar 06, 2013 at 12:21:47AM +1100, Alexey Kardashevskiy wrote:
>>>> On 05/03/13 23:56, Michael S. Tsirkin wrote:
>>>>>> The patch f56a12475ff1b8aa61210d08522c3c8aaf0e2648 "vhost: backend
>>>>>> masking support" breaks virtio-net + vhost=on on PPC64 platform.
>>>>>>
>>>>>> The problem command line is:
>>>>>> 1) -netdev tap,id=tapnet,ifname=tap0,script=qemu-ifup.sh,vhost=on \
>>>>>> -device virtio-net-pci,netdev=tapnet,addr=0.0 \
>>>>>
>>>>> I think the issue is that irqfd is not supported on kvm ppc.
>>>>
>>>> How can I make sure this is the case? Some work has been done there
>>>> recently but midnight is quite late to figure this out :)
>>>
>>> Look in virtio_pci_set_guest_notifiers, what is the
>>> value of with_irqfd?
>>> bool with_irqfd = msix_enabled(&proxy->pci_dev) &&
>>> kvm_msi_via_irqfd_enabled();
>>>
>>> Also check what each of the values in the expression above is.
>>
>> Yes, ppc does not have irqfd, as kvm_msi_via_irqfd_enabled() returned "false".
>>
>>>>> Could you please check this:
>>>>>
>>>>> +    /* If guest supports masking, set up irqfd now.
>>>>> +     * Otherwise, delay until unmasked in the frontend.
>>>>> +     */
>>>>> +    if (proxy->vdev->guest_notifier_mask) {
>>>>> +        ret = kvm_virtio_pci_irqfd_use(proxy, queue_no, vector);
>>>>> +        if (ret < 0) {
>>>>> +            kvm_virtio_pci_vq_vector_release(proxy, vector);
>>>>> +            goto undo;
>>>>> +        }
>>>>> +    }
>>>>>
>>>>>
>>>>> Could you please add a printf before "undo" and check whether the
>>>>> error path above is triggered?
>>>>
>>>>
>>>> Checked, it is not triggered.
>>>>
>>>>
>>>> --
>>>> Alexey
>>>
>>> I think I get it.
>>> Does the following help (probably not the right thing to do, but just
>>> for testing):
>>
>>
>> It did not compile (no "queue_no") :) I changed it a bit and now
>> vhost=on works fine:
>>
>> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
>> index a869f53..df1e443 100644
>> --- a/hw/virtio-pci.c
>> +++ b/hw/virtio-pci.c
>> @@ -798,6 +798,10 @@ static int virtio_pci_set_guest_notifiers(DeviceState *d, int nvqs, bool assign)
>>          if (r < 0) {
>>              goto assign_error;
>>          }
>> +
>> +        if (!with_irqfd && proxy->vdev->guest_notifier_mask) {
>> +            proxy->vdev->guest_notifier_mask(proxy->vdev, n, !assign);
>> +        }
>>      }
>>
>>      /* Must set vector notifier after guest notifier has been assigned */
>>
>>
>
> I see, OK: the issue is that vhost now starts in a masked state
> and no one unmasks it. While the patch will work, I think it does
> not benefit from backend masking; the right thing to do is to add
> mask notifiers, like what the irqfd path does.
>
> Will look into this, thanks.
>
--
Alexey