From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: jasowang@redhat.com, Yajun Wu <yajunw@mellanox.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: Any reason VIRTQUEUE_MAX_SIZE is 1024? Can we increase this limit?
Date: Mon, 10 Aug 2020 10:45:46 -0400 [thread overview]
Message-ID: <20200810104453-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20200806123708.GC379937@stefanha-x1.localdomain>
On Thu, Aug 06, 2020 at 01:37:08PM +0100, Stefan Hajnoczi wrote:
> On Wed, Aug 05, 2020 at 08:13:29AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Aug 05, 2020 at 01:11:07PM +0100, Stefan Hajnoczi wrote:
> > > On Thu, Jul 30, 2020 at 07:46:09AM +0000, Yajun Wu wrote:
> > > > I'm running iperf tests on virtio-net through vhost-user (HW vDPA)
> > > > and found that the maximum acceptable tx_queue_size/rx_queue_size
> > > > is 1024. Increasing the queue size improves the RX rate in my case.
> > > >
> > > > Can we increase the limit(VIRTQUEUE_MAX_SIZE) to 8192 to possibly gain better performance?
> > >
> > > Hi,
> > > The VIRTIO 1.1 specification says the maximum number of descriptors is
> > > 32768 for both split and packed virtqueues.
> > >
> > > The vhost kernel code seems to support 32768.
> > >
> > > The 1024 limit is an implementation limit in QEMU. Increasing it would
> > > require QEMU code changes. For example, VIRTQUEUE_MAX_SIZE is used as
> > > the size of arrays.
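> > >
> > > For illustration, the pattern is roughly the following (a sketch of
> > > the kind of on-stack arrays involved, not the exact QEMU code):
> > >
> > >     /* Per-request arrays sized at compile time: a ring larger
> > >      * than VIRTQUEUE_MAX_SIZE would overflow them. */
> > >     hwaddr addr[VIRTQUEUE_MAX_SIZE];
> > >     struct iovec iov[VIRTQUEUE_MAX_SIZE];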
> > >
> > > I can't think of a fundamental reason why QEMU needs to limit itself to
> > > 1024 descriptors. Raising the limit would require fixing up the code and
> > > ensuring that live migration remains compatible with older versions of
> > > QEMU.
> > >
> > > Stefan
> >
> > There's actually a reason for the limit: in theory the vq size also
> > sets a limit on the number of scatter/gather entries, and neither
> > QEMU nor vhost can handle a packet split over more than 1024 chunks.
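> >
> > For example, a writev()-based backend on the host hits the kernel's
> > iovec limit. A minimal sketch of the failure mode (send_pkt() is a
> > made-up helper, not real QEMU or vhost code):
> >
> >     #include <errno.h>
> >     #include <limits.h>
> >     #include <sys/uio.h>
> >
> >     /* writev() fails with EINVAL once a single packet needs more
> >      * than IOV_MAX (1024 on Linux) scatter/gather entries. */
> >     static ssize_t send_pkt(int fd, struct iovec *iov, int cnt)
> >     {
> >         if (cnt > IOV_MAX) {
> >             errno = EINVAL;
> >             return -1;      /* the kernel would reject it anyway */
> >         }
> >         return writev(fd, iov, cnt);
> >     }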
> >
> > We could add a separate limit for s/g size, as block and scsi do,
> > but this would need spec, guest-side, and host-side work.
>
> Interesting, thanks for explaining! This could be made explicit by
> changing the QEMU code to:
>
> include/hw/virtio/virtio.h:#define VIRTQUEUE_MAX_SIZE IOV_MAX
>
> Looking more closely at the vhost kernel code I see that UIO_MAXIOV is
> used in some places but not in vhost_vring_set_num() (ioctl
> VHOST_SET_VRING_NUM). Is there a reason why UIO_MAXIOV isn't enforced
> when the application sets the queue size?
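>
> For reference, the call that gets through unchecked (a sketch of the
> userspace side using the real uapi structures; vhost_fd is assumed to
> be an already-open /dev/vhost-net descriptor):
>
>     #include <linux/vhost.h>
>     #include <sys/ioctl.h>
>
>     /* Nothing in the VHOST_SET_VRING_NUM path caps .num at
>      * UIO_MAXIOV, so e.g. 8192 is accepted here. */
>     struct vhost_vring_state state = { .index = 0, .num = 8192 };
>     ioctl(vhost_fd, VHOST_SET_VRING_NUM, &state);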
>
> Stefan
Backends such as vhost-user can handle more than IOV_MAX entries.
Devices such as scsi and block have an s/g limit that is separate
from the vq size.
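
For block, that separate limit is the seg_max field in the device
config space. A sketch using the uapi header (assumes a VIRTIO 1.0+
little-endian device):

    #include <endian.h>
    #include <stdint.h>
    #include <linux/virtio_blk.h>

    /* seg_max caps the number of s/g segments per request,
     * negotiated independently of the queue size. */
    static uint32_t max_segments(const struct virtio_blk_config *cfg)
    {
        return le32toh(cfg->seg_max);
    }
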
--
MST