From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:33275)
	by lists.gnu.org with esmtp (Exim 4.71) id 1YflN9-0007mB-2o
	for qemu-devel@nongnu.org; Wed, 08 Apr 2015 04:30:07 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	id 1YflN5-0005Jx-PX for qemu-devel@nongnu.org;
	Wed, 08 Apr 2015 04:30:07 -0400
Date: Wed, 08 Apr 2015 16:29:22 +0800
From: Jason Wang
Message-Id: <1428481762.29276.3@smtp.corp.redhat.com>
In-Reply-To: <552422E7.4080204@suse.de>
References: <1427876112-12615-1-git-send-email-jasowang@redhat.com>
	<1427876112-12615-17-git-send-email-jasowang@redhat.com>
	<55240BAE.8000705@suse.de> <552422E7.4080204@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Subject: Re: [Qemu-devel] [PATCH V5 16/18] virtio-pci: increase the maximum
	number of virtqueues to 513
To: Alexander Graf
Cc: "Michael S. Tsirkin", qemu-devel, qemu-ppc@nongnu.org,
	cornelia.huck@de.ibm.com, Paolo Bonzini, Luigi Rizzo,
	Richard Henderson

On Wed, Apr 8, 2015 at 2:33 AM, Alexander Graf wrote:
> On 04/07/2015 08:06 PM, Luigi Rizzo wrote:
>>
>> On Tue, Apr 7, 2015 at 6:54 PM, Alexander Graf wrote:
>>> On 04/01/2015 10:15 AM, Jason Wang wrote:
>>>> This patch increases the maximum number of virtqueues for pci from
>>>> 64 to 513. This will allow booting a virtio-net-pci device with 256
>>>> queue pairs.
>>>> ...
>>>>   * configuration space */
>>>> #define VIRTIO_PCI_CONFIG_SIZE(dev) VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>>>> -#define VIRTIO_PCI_QUEUE_MAX 64
>>>> +#define VIRTIO_PCI_QUEUE_MAX 513
>>>
>>> 513 is an interesting number. Any particular reason for it? Maybe
>>> this was mentioned before and I just missed it ;)
>>
>> quite large, too. I thought multiple queue pairs were useful
>> to split the load for multicore machines, but targeting VMs with
>> up to 256 cores (and presumably an equal number in the host)
>> seems really forward-looking.
>
> They can also be useful in case your host tap queue is full, so going
> higher than the host core count may make sense for throughput.
>
> However, I am in doubt that there is a one-size-fits-all answer to
> this. Could we maybe make the queue size configurable via a qdev
> property?

We can do this on top, but I'm not sure I understand the question. Do
you mean a per-device limitation?

>
> Alex
>