From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <595093A8.1080902@intel.com>
Date: Mon, 26 Jun 2017 12:55:04 +0800
From: Wei Wang
MIME-Version: 1.0
References: <1498185154-29015-1-git-send-email-wei.w.wang@intel.com> <9c3402b1-961d-b953-ca6c-188b03ed756b@redhat.com>
In-Reply-To: <9c3402b1-961d-b953-ca6c-188b03ed756b@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Subject: Re: [Qemu-devel] [PATCH v3 1/2] virtio-net: enable configurable tx queue size
To: Jason Wang, mst@redhat.com, eblake@redhat.com, virtio-dev@lists.oasis-open.org, qemu-devel@nongnu.org
Cc: stefanha@gmail.com, armbru@redhat.com, jan.scheurich@ericsson.com, marcandre.lureau@gmail.com, pbonzini@redhat.com

On 06/26/2017 11:18 AM, Jason Wang wrote:
>
> On 06/23/2017 10:32, Wei Wang wrote:
>> This patch enables the virtio-net tx queue size to be configurable
>> between 256 (the default queue size) and 1024 by the user when the
>> vhost-user backend is used.
>>
>> Currently, the maximum tx queue size for other backends is 512 due
>> to the following limitations:
>> - QEMU backend: the QEMU backend implementation may in some cases
>> send 1024+1 iovs to writev.
>> - Vhost_net backend: it is possible that the guest sends a
>> vring_desc whose memory crosses a MemoryRegion boundary, thereby
>> generating more than 1024 iovs after translation from guest-physical
>> addresses in the backend.
>>
>> Signed-off-by: Wei Wang
>> ---
>>  hw/net/virtio-net.c            | 45 +++++++++++++++++++++++++++++---------
>>  include/hw/virtio/virtio-net.h |  1 +
>>  2 files changed, 37 insertions(+), 9 deletions(-)
>>
>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
>> index 91eddaf..d13ca60 100644
>> --- a/hw/net/virtio-net.c
>> +++ b/hw/net/virtio-net.c
>> @@ -34,8 +34,11 @@
>>  /* previously fixed value */
>>  #define VIRTIO_NET_RX_QUEUE_DEFAULT_SIZE 256
>> +
>> +    /*
>> +     * Currently, backends other than vhost-user don't support 1024
>> +     * queue size.
>> +     */
>> +    if (n->net_conf.tx_queue_size == VIRTQUEUE_MAX_SIZE &&
>> +        nc->peer->info->type != NET_CLIENT_DRIVER_VHOST_USER) {
>> +        n->net_conf.tx_queue_size = VIRTIO_NET_TX_QUEUE_DEFAULT_SIZE;
>
> Do we really want to assume all vhost-user backends support a 1024
> queue size?
>

Do you know of any vhost-user backend implementation that doesn't
support a 1024 tx queue size?

>> +    }
>> +
>> +    for (i = 0; i < n->max_queues; i++) {
>> +        virtio_net_add_queue(n, i);
>> +    }
>
> Any reason to move virtio_net_add_queue() here?
>

Please check the whole init sequence. It was moved here (after
qemu_new_nic()) to make sure nc->peer is not NULL.

Best,
Wei