From: "Michael S. Tsirkin" <mst@redhat.com>
To: Sergio Lopez <slp@redhat.com>
Cc: "Kevin Wolf" <kwolf@redhat.com>, "Fam Zheng" <fam@euphon.net>,
	"Daniel P. Berrangé" <berrange@redhat.com>,
	"Eduardo Habkost" <ehabkost@redhat.com>,
	qemu-block@nongnu.org, "Cornelia Huck" <cohuck@redhat.com>,
	qemu-devel@nongnu.org, "Max Reitz" <mreitz@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>
Subject: Re: [PATCH v2 2/4] virtio-scsi: default num_queues to -smp N
Date: Mon, 3 Feb 2020 07:53:20 -0500	[thread overview]
Message-ID: <20200203075246-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20200203113949.hnjuqzkrqqwst54e@dritchie>

On Mon, Feb 03, 2020 at 12:39:49PM +0100, Sergio Lopez wrote:
> On Mon, Feb 03, 2020 at 10:57:44AM +0000, Daniel P. Berrangé wrote:
> > On Mon, Feb 03, 2020 at 11:25:29AM +0100, Sergio Lopez wrote:
> > > On Thu, Jan 30, 2020 at 10:52:35AM +0000, Stefan Hajnoczi wrote:
> > > > On Thu, Jan 30, 2020 at 01:29:16AM +0100, Paolo Bonzini wrote:
> > > > > On 29/01/20 16:44, Stefan Hajnoczi wrote:
> > > > > > On Mon, Jan 27, 2020 at 02:10:31PM +0100, Cornelia Huck wrote:
> > > > > >> On Fri, 24 Jan 2020 10:01:57 +0000
> > > > > >> Stefan Hajnoczi <stefanha@redhat.com> wrote:
> > > > > >>> @@ -47,10 +48,15 @@ static void vhost_scsi_pci_realize(VirtIOPCIProxy *vpci_dev, Error **errp)
> > > > > >>>  {
> > > > > >>>      VHostSCSIPCI *dev = VHOST_SCSI_PCI(vpci_dev);
> > > > > >>>      DeviceState *vdev = DEVICE(&dev->vdev);
> > > > > >>> -    VirtIOSCSICommon *vs = VIRTIO_SCSI_COMMON(vdev);
> > > > > >>> +    VirtIOSCSIConf *conf = &dev->vdev.parent_obj.parent_obj.conf;
> > > > > >>> +
> > > > > >>> +    /* 1:1 vq to vcpu mapping is ideal because it avoids IPIs */
> > > > > >>> +    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
> > > > > >>> +        conf->num_queues = current_machine->smp.cpus;
> > > > > >> This now maps the request vqs 1:1 to the vcpus. What about the fixed
> > > > > >> vqs? If they don't really matter, amend the comment to explain that?
> > > > > > The fixed vqs don't matter.  They are typically not involved in the data
> > > > > > path, only the control path where performance doesn't matter.
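
For context, the "fixed" virtqueues here are virtio-scsi's control and
event queues; patch 1/4 of this series introduces a constant for them.
A minimal sketch of the resulting queue accounting, assuming the
constant name from patch 1/4; the snippet is illustrative, not code
taken from the series:

    /* The control and event vqs are always present; only the request
     * vqs scale with the vCPU count. */
    #define VIRTIO_SCSI_VQ_NUM_FIXED 2

    unsigned total_vqs = VIRTIO_SCSI_VQ_NUM_FIXED + conf->num_queues;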
> > > > > 
> > > > > Should we put a limit on the number of vCPUs?  For anything above ~128
> > > > > the guest is probably not going to be disk or network bound.
> > > > 
> > > > Michael Tsirkin pointed out there's a hard limit of VIRTIO_QUEUE_MAX
> > > > (1024).  We need to at least stay under that limit.
> > > > 
> > > > Should the guest have >128 virtqueues?  Each virtqueue requires guest
> > > > RAM and 2 host eventfds.  Eventually these resource requirements will
> > > > become a scalability problem, but how do we choose a hard limit and what
> > > > happens to guest performance above that limit?
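
A minimal sketch of such a cap in the realize path quoted above; the
placement and the subtraction of the fixed queues are assumptions for
illustration, not the series as posted (MIN is QEMU's macro from
osdep.h):

    /* Default to one request vq per vCPU, but never let the total
     * (fixed + request) virtqueue count exceed VIRTIO_QUEUE_MAX. */
    if (conf->num_queues == VIRTIO_SCSI_AUTO_NUM_QUEUES) {
        conf->num_queues = MIN(current_machine->smp.cpus,
                               VIRTIO_QUEUE_MAX - VIRTIO_SCSI_VQ_NUM_FIXED);
    }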
> > > 
> > > From the UX perspective, I think it's safer to use a rather low upper
> > > limit for the automatic configuration.
> > > 
> > > Users of large VMs (>=32 vCPUs) aiming for optimal performance
> > > already face the need to manually tune (or rely on software to do
> > > that for them) other aspects of the VM, like vNUMA, IOThreads and
> > > CPU pinning, so I don't think we should focus on this group.
> > 
> > Whether they're tuning manually, or relying on software to tune for
> > them, we (QEMU maintainers) still need to provide credible guidance
> > on what to do with tuning for large CPU counts. Without clear info
> > from QEMU, it just descends into hearsay and guesswork, both of
> > which leave QEMU looking bad.
> 
> I agree. Good documentation, ideally with some benchmarks, and safe
> defaults sound like a good approach to me.
> 
> > So I think we need to, at the very least, make a clear statement here
> > about what tuning approach should be applied when the vCPU count gets
> > high, and probably even apply that as a default out-of-the-box approach.
> 
> In general, I would agree, but in this particular case the
> optimization has an impact on something outside QEMU's control (the
> host's resources), so we lack the information needed to make a proper
> guess.
> 
> My main concern here is users upgrading QEMU and hitting some kind of
> crash or performance issue without having touched their VM config. And
> let's not forget that Stefan said in the cover letter that this
> amounts to a 1-4% improvement on 4k operations on an SSD, and I guess
> that's with iodepth=1. I suspect that with a larger block size and/or
> a higher iodepth the improvement will be barely noticeable, which
> means it'll only have a positive impact on users running DB/OLTP or
> similar workloads on dedicated, directly attached, low-latency storage.
> 
> But don't get me wrong, this is a *good* optimization. It's just that
> I think we should play it safe here.
> 
> Sergio.

Yeah, I think a bit more benchmarking than with just 4 vCPUs, so that
at least we can see the trend, can't hurt.




Thread overview: 24+ messages
2020-01-24 10:01 [PATCH v2 0/4] virtio-pci: enable blk and scsi multi-queue by default Stefan Hajnoczi
2020-01-24 10:01 ` [PATCH v2 1/4] virtio-scsi: introduce a constant for fixed virtqueues Stefan Hajnoczi
2020-01-27 12:59   ` Cornelia Huck
2020-01-24 10:01 ` [PATCH v2 2/4] virtio-scsi: default num_queues to -smp N Stefan Hajnoczi
2020-01-27 13:10   ` Cornelia Huck
2020-01-29 15:44     ` Stefan Hajnoczi
2020-01-30  0:29       ` Paolo Bonzini
2020-01-30 10:52         ` Stefan Hajnoczi
2020-01-30 11:03           ` Cornelia Huck
2020-02-03 10:25           ` Sergio Lopez
2020-02-03 10:35             ` Michael S. Tsirkin
2020-02-03 10:51             ` Cornelia Huck
2020-02-03 10:57             ` Daniel P. Berrangé
2020-02-03 11:39               ` Sergio Lopez
2020-02-03 12:53                 ` Michael S. Tsirkin [this message]
2020-02-11 16:20                 ` Stefan Hajnoczi
2020-02-11 16:31                   ` Michael S. Tsirkin
2020-02-12 11:18                     ` Stefan Hajnoczi
2020-02-21 10:55                       ` Stefan Hajnoczi
2020-01-24 10:01 ` [PATCH v2 3/4] virtio-blk: " Stefan Hajnoczi
2020-01-27 13:14   ` Cornelia Huck
2020-01-24 10:01 ` [PATCH v2 4/4] vhost-user-blk: " Stefan Hajnoczi
2020-01-27 13:17   ` Cornelia Huck
2020-01-27  9:59 ` [PATCH v2 0/4] virtio-pci: enable blk and scsi multi-queue by default Stefano Garzarella
