From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:39273)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1YfYKk-0005os-Cr for qemu-devel@nongnu.org;
	Tue, 07 Apr 2015 14:34:50 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1YfYKg-000183-2X for qemu-devel@nongnu.org;
	Tue, 07 Apr 2015 14:34:46 -0400
Message-ID: <552422E7.4080204@suse.de>
Date: Tue, 07 Apr 2015 20:33:11 +0200
From: Alexander Graf
MIME-Version: 1.0
References: <1427876112-12615-1-git-send-email-jasowang@redhat.com>
	<1427876112-12615-17-git-send-email-jasowang@redhat.com>
	<55240BAE.8000705@suse.de>
In-Reply-To: 
Content-Type: multipart/alternative;
	boundary="------------060102030307010403030609"
Subject: Re: [Qemu-devel] [PATCH V5 16/18] virtio-pci: increase the maximum
	number of virtqueues to 513
List-Id: 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: Luigi Rizzo
Cc: "Michael S. Tsirkin" , Jason Wang , qemu-devel ,
	qemu-ppc@nongnu.org, cornelia.huck@de.ibm.com, Paolo Bonzini ,
	Richard Henderson

This is a multi-part message in MIME format.
--------------060102030307010403030609
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit

On 04/07/2015 08:06 PM, Luigi Rizzo wrote:
>
> On Tue, Apr 7, 2015 at 6:54 PM, Alexander Graf <agraf@suse.de> wrote:
>
>     On 04/01/2015 10:15 AM, Jason Wang wrote:
>
>         This patch increases the maximum number of virtqueues for pci
>         from 64 to 513. This will allow booting a virtio-net-pci
>         device with 256 queue pairs.
>         ...
>
>            * configuration space */
>          #define VIRTIO_PCI_CONFIG_SIZE(dev)     VIRTIO_PCI_CONFIG_OFF(msix_enabled(dev))
>         -#define VIRTIO_PCI_QUEUE_MAX 64
>         +#define VIRTIO_PCI_QUEUE_MAX 513
>
>     513 is an interesting number. Any particular reason for it? Maybe
>     this was mentioned before and I just missed it ;)
>
> quite large, too.
> I thought multiple queue pairs were useful
> to split the load for multicore machines, but targeting VMs with
> up to 256 cores (and presumably an equal number in the host)
> seems really forward looking.

They can also be useful when your host tap queue is full, so going
higher than the host core count may make sense for throughput.

However, I doubt there is a one-size-fits-all answer to this. Could we
maybe make the maximum number of queues configurable via a qdev property?


Alex
--------------060102030307010403030609--