qemu-devel.nongnu.org archive mirror
From: Stefan Hajnoczi <stefanha@gmail.com>
To: Karl Rister <krister@redhat.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
	qemu-devel@nongnu.org, Andrew Theurer <atheurer@redhat.com>,
	borntraeger@de.ibm.com, Fam Zheng <famz@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v2 0/4] aio: experimental virtio-blk polling mode
Date: Fri, 18 Nov 2016 10:54:36 +0000	[thread overview]
Message-ID: <20161118105435.GC28853@stefanha-x1.localdomain>
In-Reply-To: <1206accf-f537-bc34-2e59-c13e220f870c@redhat.com>


On Thu, Nov 17, 2016 at 08:15:11AM -0600, Karl Rister wrote:
> I think these results look a bit more in line with expectations on the
> quick sniff test:
> 
> QEMU_AIO_POLL_MAX_NS      IOPs
>                unset    26,299
>                    1    25,929
>                    2    25,753
>                    4    27,214
>                    8    27,053
>                   16    26,861
>                   32    24,752
>                   64    25,058
>                  128    24,732
>                  256    25,560
>                  512    24,614
>                1,024    25,186
>                2,048    25,829
>                4,096    25,671
>                8,192    27,896
>               16,384    38,086
>               32,768    35,493
>               65,536    38,496
>              131,072    38,296
> 
> I did a spot check of CPU utilization when the polling started having
> benefits.
> 
> Without polling (QEMU_AIO_POLL_MAX_NS=unset) the iothread's CPU usage
> looked like this:
> 
> user time:   25.94%
> system time: 22.11%
> 
> With polling and QEMU_AIO_POLL_MAX_NS=16384 the iothread's CPU usage
> looked like this:
> 
> user time:   78.92%
> system time: 20.80%

Excellent.  There are two optimizations remaining that could be
useful:

Christian suggested disabling virtqueue notifications while polling.
This will reduce vmexits and avoid useless ioeventfd activity after
we've already polled.
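A minimal sketch of that idea (all names here are hypothetical, not the
actual QEMU virtqueue API): suppress guest->host notifications for the
duration of the polling window so the guest skips the vmexit for kicks
that polling would catch anyway, then re-enable them before falling back
to ioeventfd.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of a virtqueue: the guest only issues a kick
 * (vmexit + ioeventfd write) when notify_enabled is set. */
struct vq {
    bool notify_enabled;
    int pending;            /* requests queued by the guest */
};

/* Consume all currently queued requests; returns how many were found. */
static int poll_vq(struct vq *q)
{
    int done = q->pending;
    q->pending = 0;
    return done;
}

static int run_poll_window(struct vq *q, int iterations)
{
    int progress = 0;

    q->notify_enabled = false;      /* no kicks needed while we poll */
    for (int i = 0; i < iterations; i++) {
        progress += poll_vq(q);
    }
    q->notify_enabled = true;       /* fall back to ioeventfd wakeups */
    return progress;
}
```

The key property is that work queued during the window is still picked
up by the poll loop itself, so no notification is lost.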

Paolo suggested skipping the ppoll(2) or epoll_wait(2) call if polling
made progress.
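That suggestion can be sketched as follows (hypothetical names, not the
real event loop code): only fall through to the blocking syscall when
the polling pass made no progress, since a productive pass means more
work is likely imminent and blocking would only add latency.

```c
#include <assert.h>
#include <stdbool.h>

/* Simulated poll pass: consumes one queued item if any is pending. */
static bool try_poll(int *pending)
{
    if (*pending > 0) {
        (*pending)--;
        return true;    /* progress was made */
    }
    return false;
}

/* One event loop iteration: returns 1 if polling made progress and the
 * blocking wait was skipped, 0 if we fell through to ppoll(2). */
static int event_loop_iteration(int *pending, int *ppoll_calls)
{
    if (try_poll(pending)) {
        return 1;               /* skip the syscall entirely */
    }
    (*ppoll_calls)++;           /* would call ppoll(2)/epoll_wait(2) here */
    return 0;
}
```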

These will be in v3.

Your results prove that the virtqueue kick is slow.  (I think the Linux
AIO completion isn't the bottleneck but we also poll for that.)
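For reference, polling Linux AIO completions works because the kernel
publishes head/tail indices in a ring shared with user space, so
completions can be detected without an io_getevents(2) syscall.  A
sketch (the struct mirrors only the leading fields of the kernel's
aio_ring ABI; treat the layout as an assumption, not a guarantee):

```c
#include <assert.h>
#include <stdbool.h>

/* Leading fields of the shared completion ring header. */
struct aio_ring {
    unsigned id;
    unsigned nr;     /* number of io_event slots in the ring */
    unsigned head;   /* consumed by user space */
    unsigned tail;   /* produced by the kernel */
};

static bool aio_ring_has_completions(const struct aio_ring *ring)
{
    /* head == tail means the ring is empty; anything else is progress */
    return ring->head != ring->tail;
}
```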

I'm still hesitant about adding polling to QEMU because tuning
QEMU_AIO_POLL_MAX_NS= is difficult.  Benchmarks will achieve higher
numbers but actual users will benefit less.

Is it time to drill down on why the virtqueue kick + ioeventfd mechanism
is so slow?  Polling achieved a >40% IOPS improvement, and I wonder
where that time is lost with ioeventfd.

Stefan



Thread overview: 12+ messages
2016-11-16 17:46 [Qemu-devel] [PATCH v2 0/4] aio: experimental virtio-blk polling mode Stefan Hajnoczi
2016-11-16 17:46 ` [Qemu-devel] [PATCH v2 1/4] aio: add AioPollFn and io_poll() interface Stefan Hajnoczi
2016-11-17  6:14   ` Fam Zheng
2016-11-16 17:47 ` [Qemu-devel] [PATCH v2 2/4] aio: add polling mode to AioContext Stefan Hajnoczi
2016-11-16 18:14   ` Paolo Bonzini
2016-11-17 11:51     ` Stefan Hajnoczi
2016-11-16 18:30   ` Paolo Bonzini
2016-11-16 17:47 ` [Qemu-devel] [PATCH v2 3/4] virtio: poll virtqueues for new buffers Stefan Hajnoczi
2016-11-16 17:47 ` [Qemu-devel] [PATCH v2 4/4] linux-aio: poll ring for completions Stefan Hajnoczi
2016-11-16 18:30 ` [Qemu-devel] [PATCH v2 0/4] aio: experimental virtio-blk polling mode no-reply
2016-11-17 14:15 ` Karl Rister
2016-11-18 10:54   ` Stefan Hajnoczi [this message]
