Date: Thu, 22 Jun 2017 17:00:37 +0300
From: "Michael S. Tsirkin"
To: Wei Wang
Cc: jasowang@redhat.com, marcandre.lureau@gmail.com, virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org, stefanha@gmail.com, pbonzini@redhat.com, armbru@redhat.com, jan.scheurich@ericsson.com
Subject: Re: [Qemu-devel] [PATCH v2 1/2] virtio-net: enable configurable tx queue size
Message-ID: <20170622170016-mutt-send-email-mst@kernel.org>
In-Reply-To: <5944EA6B.2080606@intel.com>
References: <1497610119-45041-1-git-send-email-wei.w.wang@intel.com> <20170616172531-mutt-send-email-mst@kernel.org> <5944EA6B.2080606@intel.com>

On Sat, Jun 17, 2017 at 04:38:03PM +0800, Wei Wang wrote:
> On 06/16/2017 10:31 PM, Michael S. Tsirkin wrote:
> > On Fri, Jun 16, 2017 at 06:48:38PM +0800, Wei Wang wrote:
> > > This patch enables the virtio-net tx queue size to be configurable
> > > between 256 (the default queue size) and 1024 by the user when the
> > > vhost-user backend is used.
> > >
> > > Currently, the maximum tx queue size for other backends is 512 due
> > > to the following limitations:
> > > - QEMU backend: the QEMU backend implementation in some cases may
> > >   send 1024+1 iovs to writev.
> > > - Vhost_net backend: there are possibilities that the guest sends
> > >   a vring_desc of memory which crosses a MemoryRegion, thereby
> > >   generating more than 1024 iovs in total after translation from
> > >   guest-physical addresses in the backend.
> > >
> > > Signed-off-by: Wei Wang
> > > ---
> > >  hw/net/virtio-net.c            | 46 ++++++++++++++++++++++++++++++++++--------
> > >  include/hw/virtio/virtio-net.h |  1 +
> > >  2 files changed, 39 insertions(+), 8 deletions(-)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 7d091c9..e1a08fd 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -33,8 +33,11 @@
> > >  /* previously fixed value */
> > >  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
> > > +#define VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE 256
> > > +
> > >  /* for now, only allow larger queues; with virtio-1, guest can downsize */
> > >  #define VIRTIO_NET_RX_QUEUE_MIN_SIZE VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE
> > > +#define VIRTIO_NET_TX_QUEUE_MIN_SIZE VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE
> > >
> > >  /*
> > >   * Calculate the number of bytes up to and including the given 'field' of
> > > @@ -1491,18 +1494,33 @@ static void virtio_net_tx_bh(void *opaque)
> > >  static void virtio_net_add_queue(VirtIONet *n, int index)
> > >  {
> > >      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> > > +    NetClientState *nc = qemu_get_queue(n->nic);
> > >
> > >      n->vqs[index].rx_vq = virtio_add_queue(vdev, n->net_conf.rx_queue_size,
> > >                                             virtio_net_handle_rx);
> > > +
> > > +    /*
> > > +     * Currently, backends other than vhost-user don't support 1024 queue
> > > +     * size.
> > > +     */
> > > +    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
> > > +        nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
> > > +        fprintf(stderr, "warning: %s: queue size %d not supported\n",
> > > +                __func__, n->net_conf.tx_queue_size);
> > > +        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
> > > +    }
> > > +
> >
> > Also, I suspect we can get here with no peer, and the above will crash.
> > It seems ugly to do this on each virtio_net_add_queue.
> > How about moving this to realize?
>
> The code has been re-arranged to make sure nc->peer is ready before
> it's used, but I agree that it looks better to move the above to realize().
>
> Best,
> Wei

ping

The issues left are minor; let's make progress and merge this ASAP.
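
For concreteness, a minimal sketch of the rearrangement being discussed, assuming a helper called once from virtio_net_device_realize() after n->nic is set up; the helper name and exact placement are illustrative assumptions, not the code that was eventually merged:

/*
 * Illustrative sketch only (not the merged patch): clamp the tx queue
 * size once at realize time rather than in every virtio_net_add_queue()
 * call, and guard against a NULL peer so a device created without a
 * backend cannot crash on the nc->peer->info dereference.
 */
static void virtio_net_check_tx_queue_size(VirtIONet *n)
{
    NetClientState *nc = qemu_get_queue(n->nic);

    /* only the vhost-user backend is known to handle a 1024-entry tx queue */
    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
        (!nc->peer || nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER)) {
        fprintf(stderr, "warning: %s: queue size %d not supported\n",
                __func__, n->net_conf.tx_queue_size);
        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
    }
}

Doing the check once at realize time keeps the per-queue setup path free of backend checks, and the no-peer case falls back to the 256 default instead of dereferencing NULL.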