From: Jason Wang <jasowang@redhat.com>
To: Yuri Benditovich <yuri.benditovich@daynix.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Yan Vugenfirer <yan@daynix.com>,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
Date: Thu, 21 Feb 2019 14:49:10 +0800
Message-ID: <a5c4b4a1-ea82-f357-e076-ade51ade7ddf@redhat.com>
In-Reply-To: <CAOEp5Odi9pScvMnHcAOLz6tp3Yv6rzZSs-rsswBc_oE7VRMt+w@mail.gmail.com>
On 2019/2/21 2:00 PM, Yuri Benditovich wrote:
> On Tue, Feb 19, 2019 at 8:27 AM Jason Wang <jasowang@redhat.com> wrote:
>>
>>> On 2019/2/19 7:34 AM, Michael S. Tsirkin wrote:
>>> On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
>>>> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>>>>> On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
>>>>>> On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
>>>>>>> On 2019/2/13 10:51 PM, Yuri Benditovich wrote:
>>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1608226
>>>>>>>> On startup/link-up in a multiqueue configuration, virtio-net
>>>>>>>> tries to start all the queues, including those that the guest
>>>>>>>> will not enable via VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
>>>>>>>> If the guest driver does not allocate queues that it will not
>>>>>>>> use (for example, the Windows driver does not) and the number of
>>>>>>>> actually used queues is less than the maximal number supported by the device,
>>>>>>> Is this a requirement of e.g. NDIS? If not, could we simply allocate all
>>>>>>> the queues in this case? This is usually what the normal Linux driver does.
>>>>>>>
>>>>>>>
>>>>>>>> this causes vhost_net_start to fail and effectively disables vhost
>>>>>>>> for all the queues, reducing performance.
>>>>>>>> The current commit fixes this: initially only the first queue is
>>>>>>>> started; upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, all the queues
>>>>>>>> requested by the guest are started.
>>>>>>>>
>>>>>>>> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
>>>>>>>> ---
>>>>>>>> hw/net/virtio-net.c | 7 +++++--
>>>>>>>> 1 file changed, 5 insertions(+), 2 deletions(-)
>>>>>>>>
>>>>>>>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>>>>>>>> index 3f319ef723..d3b1ac6d3a 100644
>>>>>>>> --- a/hw/net/virtio-net.c
>>>>>>>> +++ b/hw/net/virtio-net.c
>>>>>>>> @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
>>>>>>>> {
>>>>>>>> VirtIODevice *vdev = VIRTIO_DEVICE(n);
>>>>>>>> NetClientState *nc = qemu_get_queue(n->nic);
>>>>>>>> - int queues = n->multiqueue ? n->max_queues : 1;
>>>>>>>> + int queues = n->multiqueue ? n->curr_queues : 1;
>>>>>>>>
>>>>>>>> if (!get_vhost_net(nc->peer)) {
>>>>>>>> return;
>>>>>>>> @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
>>>>>>>> return VIRTIO_NET_ERR;
>>>>>>>> }
>>>>>>>>
>>>>>>>> - n->curr_queues = queues;
>>>>>>>> /* stop the backend before changing the number of queues to avoid handling a
>>>>>>>> * disabled queue */
>>>>>>>> + virtio_net_set_status(vdev, 0);
>>>>>>> Any reason for doing this?
>>>>>> I think there are 2 reasons:
>>>>>> 1. The spec does not require guest SW to allocate unused queues.
>>>>>> 2. We spend the guest's physical memory just to make vhost happy when it
>>>>>> touches queues that it should not use.
>>>>>>
>>>>>> Thanks,
>>>>>> Yuri Benditovich
>>>>> The spec also says:
>>>>> queue_enable: The driver uses this to selectively prevent the device from
>>>>> executing requests from this virtqueue. 1 - enabled; 0 - disabled.
>>>>>
>>>>> While this is not a conformance clause, it strongly implies that
>>>>> queues which are not enabled are never accessed by the device.
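To make that clause concrete, here is a minimal device-side sketch of
honoring queue_enable before touching a ring; the struct and helper
names are illustrative, not actual QEMU APIs:

    #include <stdbool.h>

    struct vq_state {
        bool enabled;               /* mirrors the 1.0 queue_enable field */
    };

    /* hypothetical backend hook that actually starts one ring */
    extern void start_queue(struct vq_state *vq);

    /* Only touch virtqueues the driver enabled; a queue with
     * queue_enable == 0 must never be accessed by the device. */
    static void start_enabled_queues(struct vq_state *vqs, int max_queues)
    {
        for (int i = 0; i < max_queues; i++) {
            if (!vqs[i].enabled) {
                continue;
            }
            start_queue(&vqs[i]);
        }
    }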
>>>>>
>>>>> Yuri, I am guessing you are not enabling these unused queues, right?
>>>> Of course, we (Windows driver) do not.
>>>> The virtio-net code passes max_queues to vhost, and this causes
>>>> vhost to try accessing all the queues, fail on the unused ones, and
>>>> finally leave vhost disabled entirely.
>>> Jason, at least for 1.0, accessing disabled queues looks like a spec
>>> violation. What do you think?
>>
>> Yes, but there are some issues:
>>
>> - How do we detect a disabled queue for a 0.9x device? There seems to be no
>> way according to the spec, so the device must assume all queues are enabled.
> Can you please add a few words - what is a 0.9x device (probably this
> is more about the driver) and
> what is the problem with it?
It's not a net-specific issue. A 0.9x device is a legacy device as defined in
the spec. There is no method to disable and then re-enable a specific queue
for such devices. Michael said we can treat a queue address of 0 as disabled,
but there's still the question of how to enable the queue again. The spec is
unclear here, and it is too late to add things for legacy devices. For a 1.0
device we have queue_enable, but its implementation is incomplete: since it
cannot yet work with vhost correctly, we probably need to add things to make
it work.
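To make the contrast concrete, a minimal sketch of the two detection
paths; treating a legacy queue PFN of 0 as "disabled" is the convention
Michael suggested, not something the legacy spec states (all names are
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    /* 0.9x (legacy): no queue_enable register exists, so the only hint
     * is the page frame number the driver wrote for the queue. */
    static bool legacy_vq_assumed_disabled(uint32_t queue_pfn)
    {
        return queue_pfn == 0;  /* never programmed -> assume disabled */
    }

    /* 1.0 (modern): the common-config queue_enable field is authoritative. */
    static bool modern_vq_enabled(uint16_t queue_enable)
    {
        return queue_enable == 1;
    }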
>
>> - For 1.0, if we depend on queue_enable, we should implement the
>> callback for vhost, I think. Otherwise it's still buggy.
>>
>> So it looks tricky to enable and disable queues through setting the status.
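As a rough illustration of the missing callback, the kind of hook meant
here might look like the following; all names are hypothetical, not the
existing QEMU interface:

    #include <stdint.h>

    /* Hypothetical transport hook: when the driver writes 1 to
     * queue_enable for virtqueue n, the transport calls this so the
     * vhost backend can start just that ring. */
    typedef struct VirtIODeviceSketch VirtIODeviceSketch;

    typedef struct {
        void (*queue_enable)(VirtIODeviceSketch *vdev, uint32_t n);
    } VirtioTransportOpsSketch;

    /* hypothetical vhost-side helper that starts a single ring */
    extern void vhost_start_one_queue(VirtIODeviceSketch *vdev, uint32_t n);

    static void virtio_net_queue_enable_sketch(VirtIODeviceSketch *vdev,
                                               uint32_t n)
    {
        vhost_start_one_queue(vdev, n);
    }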
> If I succeed in modifying the patch in such a way that it acts only in
> the 'target' case,
> i.e. only if some of the queues are not initialized (at the time of
> driver_ok), will it be safer?
For a 1.0 device, we can fix queue_enable, but for a 0.9x device how do
you enable one specific queue in this case? (By setting the status?)
A fundamental question is: what prevents you from just initializing all the
queues during driver start? It looks to me that this saves a lot more effort
than allocating queues dynamically (a sketch of this approach follows below).
Thanks
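
For comparison, a sketch of the Linux-style approach mentioned above:
allocate every queue pair the device advertises at probe time, then only
ask the device to use some of them via the control command. The helper
names are illustrative pseudocode, not the actual kernel code:

    /* Allocate ring memory for all max_queue_pairs up front, then limit
     * actual usage with VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET. */
    struct vnet_dev_sketch;                              /* hypothetical */
    extern int alloc_queue_pair(struct vnet_dev_sketch *dev, int i);
    extern int send_mq_vq_pairs_set(struct vnet_dev_sketch *dev, int pairs);

    static int probe_all_queues_sketch(struct vnet_dev_sketch *dev,
                                       int max_queue_pairs,
                                       int curr_queue_pairs)
    {
        for (int i = 0; i < max_queue_pairs; i++) {
            if (alloc_queue_pair(dev, i) < 0) {
                return -1;      /* every pair gets memory at probe time */
            }
        }
        /* tell the device how many pairs will actually be used */
        return send_mq_vq_pairs_set(dev, curr_queue_pairs);
    }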
>
>> Thanks
>>
>>
>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>>
>>>>>>>> +
>>>>>>>> + n->curr_queues = queues;
>>>>>>>> +
>>>>>>>> virtio_net_set_status(vdev, vdev->status);
>>>>>>>> virtio_net_set_queues(n);
>>>>>>>>