From: "Michael S. Tsirkin" <mst@redhat.com>
To: Wei Wang <wei.w.wang@intel.com>
Cc: Jason Wang <jasowang@redhat.com>,
eblake@redhat.com, virtio-dev@lists.oasis-open.org,
qemu-devel@nongnu.org, stefanha@gmail.com, armbru@redhat.com,
jan.scheurich@ericsson.com, marcandre.lureau@gmail.com,
pbonzini@redhat.com
Subject: Re: [Qemu-devel] [virtio-dev] Re: [PATCH v3 1/2] virtio-net: enable configurable tx queue size
Date: Tue, 27 Jun 2017 00:21:47 +0300 [thread overview]
Message-ID: <20170627002016-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <5950E331.6050409@intel.com>
On Mon, Jun 26, 2017 at 06:34:25PM +0800, Wei Wang wrote:
> On 06/26/2017 04:05 PM, Jason Wang wrote:
> >
> >
> > On 2017年06月26日 12:55, Wei Wang wrote:
> > > On 06/26/2017 11:18 AM, Jason Wang wrote:
> > > >
> > > > On 2017年06月23日 10:32, Wei Wang wrote:
> > > > > This patch enables the virtio-net tx queue size to be configurable
> > > > > between 256 (the default queue size) and 1024 by the user when the
> > > > > vhost-user backend is used.
> > > > >
> > > > > Currently, the maximum tx queue size for other backends is 512 due
> > > > > to the following limitations:
> > > > > - QEMU backend: the QEMU backend implementation in some cases may
> > > > > send 1024+1 iovs to writev.
> > > > > - Vhost_net backend: the guest may send a vring_desc referencing
> > > > > memory that crosses a MemoryRegion boundary, thereby generating
> > > > > more than 1024 iovs after translation from guest-physical addresses
> > > > > in the backend.
> > > > >
> > > > > Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> > > > > ---
> > > > >  hw/net/virtio-net.c            | 45 +++++++++++++++++++++++++++++++++---------
> > > > >  include/hw/virtio/virtio-net.h |  1 +
> > > > >  2 files changed, 37 insertions(+), 9 deletions(-)
> > > > >
> > > > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > > index 91eddaf..d13ca60 100644
> > > > > --- a/hw/net/virtio-net.c
> > > > > +++ b/hw/net/virtio-net.c
> > > > > @@ -34,8 +34,11 @@
> > > > > /* previously fixed value */
> > > > > #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> > > > > +
> > > > > +    /*
> > > > > +     * Currently, backends other than vhost-user don't support
> > > > > +     * 1024 queue size.
> > > > > +     */
> > > > > +    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
> > > > > +        nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
> > > > > +        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
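To illustrate the second limitation from the commit log: a descriptor whose
guest-physical range straddles a MemoryRegion boundary is split into more than
one iovec during translation, so a full 1024-descriptor chain can overrun the
1024-iovec limit of writev. A minimal stand-alone sketch (not QEMU code;
region_bytes_left() is a made-up stand-in that pretends every MemoryRegion is
4 KiB):

/* Sketch (not QEMU code): why one vring descriptor can turn into more
 * than one iovec once guest-physical addresses are translated. */
#include <stdint.h>
#include <stdio.h>
#include <sys/uio.h>

#define FAKE_REGION_SIZE 4096u

/* How many contiguous bytes remain in the (pretend) region holding gpa. */
static uint64_t region_bytes_left(uint64_t gpa)
{
    return FAKE_REGION_SIZE - (gpa % FAKE_REGION_SIZE);
}

/* Translate one descriptor (gpa, len) into iovecs; returns the count used. */
static unsigned desc_to_iov(uint64_t gpa, uint32_t len,
                            struct iovec *iov, unsigned max)
{
    unsigned n = 0;

    while (len && n < max) {
        uint64_t chunk = region_bytes_left(gpa);
        if (chunk > len) {
            chunk = len;
        }
        iov[n].iov_base = (void *)(uintptr_t)gpa;  /* stand-in mapping */
        iov[n].iov_len = chunk;
        gpa += chunk;
        len -= chunk;
        n++;   /* one iovec per contiguous chunk; a crossing adds a chunk */
    }
    return n;
}

int main(void)
{
    struct iovec iov[4];
    /* 2000 bytes starting 1000 bytes before a region boundary -> 2 iovecs,
     * so 1024 such descriptors would need 2048 iovecs, past the writev cap. */
    unsigned n = desc_to_iov(FAKE_REGION_SIZE - 1000, 2000, iov, 4);
    printf("descriptor split into %u iovecs\n", n);
    return 0;
}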
> > > >
> > > > Do we really want to assume that all vhost-user backends support
> > > > a 1024 queue size?
> > > >
> > >
> > > Do you know of any vhost-user backend implementation that doesn't
> > > support a 1024 tx queue size?
> > >
> > >
> >
> > I don't, but the issue is that vhost-user is an open protocol; there
> > could even be closed-source implementations of it. So we can't audit
> > all kinds of backends.
> >
>
> This is the discussion left before. Here is my thought:
>
> So far, nobody is officially using a 1024 tx queue size, including any
> unknown vhost-user backend implementation. So applying this patch won't
> affect any already deployed products.
>
> If, someday, people whose products are based on such an unknown vhost-user
> implementation try a 1024 tx queue size with this patch and find that it
> doesn't work for them, they can simply set the value to 512 (which is still
> better than the 256 they were using) or apply the 2nd patch, entitled
> "VIRTIO_F_MAX_CHAIN_SIZE", to use the 1024 queue size.
>
> I don't see any big issue here. Do you?
>
> @Michael, what is your opinion?
Since the default does not change, I think it's fine. You would need both
a non-default value and a backend that does not support the 1024 size,
which seems unlikely to me.
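
To spell that out, the fallback only bites when both conditions hold. As a
rough sketch of that logic (a hypothetical helper, not the posted patch; it
would sit somewhere in the virtio_net_device_realize() path):

/* Sketch: resolve the effective tx queue size, assuming only vhost-user
 * peers can handle VIRTQUEUE_MAX_SIZE (1024). A fragment meant for the
 * realize path, not a standalone program. */
static uint16_t virtio_net_effective_tx_queue_size(VirtIONet *n,
                                                   NetClientState *peer)
{
    if (n->net_conf.tx_queue_size != VIRTQUEUE_MAX_SIZE) {
        return n->net_conf.tx_queue_size;   /* user kept a safe value */
    }
    if (peer && peer->info->type == NET_CLIENT_DRIVER_VHOST_USER) {
        return VIRTQUEUE_MAX_SIZE;          /* backend can take 1024 */
    }
    /* max size requested, but the peer is not vhost-user: fall back */
    return VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
}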
>
> > >
> > > > > +    }
> > > > > +
> > > > > +    for (i = 0; i < n->max_queues; i++) {
> > > > > +        virtio_net_add_queue(n, i);
> > > > > +    }
> > > >
> > > > Any reason to move virtio_net_add_queue() here?
> > > >
> > >
> > > Please check the whole init sequence. It was moved here (after
> > > qemu_new_nic()) to make sure nc->peer is not NULL.
> >
> > I don't get it; can't you get the peer just from nic_conf.peers?
> >
>
> Correct, nic_conf.peers also works. Thanks.
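
Right - with the peer taken from the configuration, the check can run before
qemu_new_nic(), so virtio_net_add_queue() wouldn't need to move. Something
along these lines, assuming the usual NICConf/NICPeers layout (untested
sketch, with 'n' and 'i' coming from the surrounding realize code):

/* Sketch: look up the peer via the configuration rather than nc->peer,
 * so this can run before qemu_new_nic(). */
NetClientState *peer = n->nic_conf.peers.ncs[0];

if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
    (!peer || peer->info->type != NET_CLIENT_DRIVER_VHOST_USER)) {
    n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
}

for (i = 0; i < n->max_queues; i++) {
    virtio_net_add_queue(n, i);   /* can keep its original position */
}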
>
> Best,
> Wei
>
>
>