From: "Michael S. Tsirkin" <mst@redhat.com>
To: Yuri Benditovich <yuri.benditovich@daynix.com>
Cc: Jason Wang <jasowang@redhat.com>,
qemu-devel@nongnu.org, Yan Vugenfirer <yan@daynix.com>
Subject: Re: [Qemu-devel] [PATCH] virtio-net: do not start queues that are not enabled by the guest
Date: Mon, 18 Feb 2019 18:34:21 -0500 [thread overview]
Message-ID: <20190218183353-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CAOEp5OdVUcXTXmEFYyU6iFxd--P+4o5wkAjOr76EUG6eEne+KA@mail.gmail.com>
On Mon, Feb 18, 2019 at 10:49:08PM +0200, Yuri Benditovich wrote:
> On Mon, Feb 18, 2019 at 6:39 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Mon, Feb 18, 2019 at 11:58:51AM +0200, Yuri Benditovich wrote:
> > > On Mon, Feb 18, 2019 at 5:49 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > >
> > > > On 2019/2/13 10:51 PM, Yuri Benditovich wrote:
> > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1608226
> > > > > On startup/link-up in a multiqueue configuration, virtio-net
> > > > > tries to start all the queues, including those that the guest
> > > > > will not enable via VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET.
> > > > > If the guest driver does not allocate the queues it will not
> > > > > use (the Windows driver, for example, does not) and the number of
> > > > > actually used queues is less than the maximum supported by the device,
> > > >
> > > >
> > > > Is this a requirement of e.g. NDIS? If not, could we simply allocate all
> > > > the queues in this case? This is usually what the normal Linux driver does.
> > > >
> > > >
> > > > > this causes vhost_net_start to fail and effectively disables vhost
> > > > > for all the queues, reducing performance.
> > > > > This commit fixes that: initially only the first queue is started;
> > > > > upon VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, all the queues
> > > > > requested by the guest are started.
> > > > >
> > > > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > > > ---
> > > > > hw/net/virtio-net.c | 7 +++++--
> > > > > 1 file changed, 5 insertions(+), 2 deletions(-)
> > > > >
> > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > index 3f319ef723..d3b1ac6d3a 100644
> > > > > --- a/hw/net/virtio-net.c
> > > > > +++ b/hw/net/virtio-net.c
> > > > > @@ -174,7 +174,7 @@ static void virtio_net_vhost_status(VirtIONet *n, uint8_t status)
> > > > > {
> > > > > VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > > > NetClientState *nc = qemu_get_queue(n->nic);
> > > > > - int queues = n->multiqueue ? n->max_queues : 1;
> > > > > + int queues = n->multiqueue ? n->curr_queues : 1;
> > > > >
> > > > > if (!get_vhost_net(nc->peer)) {
> > > > > return;
> > > > > @@ -1016,9 +1016,12 @@ static int virtio_net_handle_mq(VirtIONet *n, uint8_t cmd,
> > > > > return VIRTIO_NET_ERR;
> > > > > }
> > > > >
> > > > > - n->curr_queues = queues;
> > > > > /* stop the backend before changing the number of queues to avoid handling a
> > > > > * disabled queue */
> > > > > + virtio_net_set_status(vdev, 0);
> > > >
> > > >
> > > > Any reason for doing this?
> > >
> > > I think there are 2 reasons:
> > > 1. The spec does not require guest SW to allocate unused queues.
> > > 2. We waste the guest's physical memory just to make vhost happy when it
> > > touches queues that it should not use.
> > >
> > > Thanks,
> > > Yuri Benditovich
> >
> > The spec also says:
> >
> >     queue_enable    The driver uses this to selectively prevent the device
> >                     from executing requests from this virtqueue.
> >                     1 - enabled; 0 - disabled.
> >
> > While this is not a conformance clause, it strongly implies that
> > queues which are not enabled are never accessed by the device.
> >
> > Yuri, I am guessing you are not enabling these unused queues, right?
>
> Of course, we (the Windows driver) do not.
> The virtio-net code passes max_queues to vhost, and this causes
> vhost to try accessing all the queues, fail on the unused ones, and finally
> leave vhost disabled entirely.
Jason, at least for virtio 1.0, accessing disabled queues looks like a spec
violation. What do you think?
> >
> >
> >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > > +
> > > > > + n->curr_queues = queues;
> > > > > +
> > > > > virtio_net_set_status(vdev, vdev->status);
> > > > > virtio_net_set_queues(n);
> > > > >