From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fam Zheng <fam@euphon.net>,
Peter Maydell <peter.maydell@linaro.org>,
qemu-block@nongnu.org, Max Reitz <mreitz@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Kevin Wolf <kwolf@redhat.com>
Subject: [PULL 3/9] aio-posix: completely stop polling when disabled
Date: Wed, 11 Mar 2020 12:40:39 +0000
Message-ID: <20200311124045.277969-4-stefanha@redhat.com>
In-Reply-To: <20200311124045.277969-1-stefanha@redhat.com>

One iteration of polling is always performed even when polling is
disabled. This is done because:

1. Userspace polling is cheaper than making a syscall. We might get
   lucky.
2. We must poll once more after polling has stopped in case an event
   occurred while stopping polling.

However, there are downsides:

1. Polling becomes a bottleneck when the number of event sources is very
   high. It's more efficient to monitor fds in that case.
2. A high-frequency polling event source can starve non-polling event
   sources because ppoll(2)/epoll(7) is never invoked.

This patch removes the forced polling iteration so that poll_ns=0 really
means no polling.

IOPS increases from 10k to 60k when the guest has 100
virtio-blk-pci,num-queues=32 devices and 1 virtio-blk-pci,num-queues=1
device because the large number of event sources being polled slows down
the event loop.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Link: https://lore.kernel.org/r/20200305170806.1313245-2-stefanha@redhat.com
Message-Id: <20200305170806.1313245-2-stefanha@redhat.com>
---
util/aio-posix.c | 22 +++++++++++++++-------
1 file changed, 15 insertions(+), 7 deletions(-)
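
As an aside for readers who are new to the polling code, here is a
standalone sketch (not QEMU code; the helper names in it are invented for
illustration) of the general pattern the commit message describes:
busy-wait in userspace for a bounded window before falling back to
ppoll(2), treat poll_ns=0 as "no polling at all", and clamp the blocking
timeout to zero whenever the userspace poll already made progress:

/* Sketch only: userspace polling before falling back to ppoll(2). */
#define _GNU_SOURCE
#include <poll.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <time.h>
#include <unistd.h>

static atomic_bool work_ready;      /* what an io_poll()-style callback checks */

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * INT64_C(1000000000) + ts.tv_nsec;
}

/* Busy-wait for up to poll_ns nanoseconds; poll_ns=0 means no polling at all. */
static bool try_poll(int64_t poll_ns)
{
    int64_t deadline = now_ns() + poll_ns;

    while (poll_ns > 0 && now_ns() < deadline) {
        if (atomic_exchange(&work_ready, false)) {
            return true;            /* progress without entering the kernel */
        }
    }
    return false;
}

int main(void)
{
    int efd = eventfd(0, EFD_NONBLOCK);
    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    struct timespec zero = { 0, 0 };
    int64_t poll_ns = 32 * 1000;    /* 32us polling window */

    /* Pretend another thread flagged work just before we entered the loop. */
    atomic_store(&work_ready, true);

    bool progress = try_poll(poll_ns);

    /* If userspace polling made progress we must not block in ppoll(2);
     * a zero timeout only picks up any additional fd activity. */
    int nready = ppoll(&pfd, 1, progress ? &zero : NULL, NULL);

    printf("progress=%d nready=%d\n", progress, nready);
    close(efd);
    return 0;
}

The point of the last step is exactly what the try_poll_mode() hunk below
enforces: a successful poll may never be followed by an unbounded wait.
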
diff --git a/util/aio-posix.c b/util/aio-posix.c
index b339aab12c..65964a2597 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -361,12 +361,13 @@ void aio_set_event_notifier_poll(AioContext *ctx,
                     (IOHandler *)io_poll_end);
 }
 
-static void poll_set_started(AioContext *ctx, bool started)
+static bool poll_set_started(AioContext *ctx, bool started)
 {
     AioHandler *node;
+    bool progress = false;
 
     if (started == ctx->poll_started) {
-        return;
+        return false;
     }
 
     ctx->poll_started = started;
@@ -388,8 +389,15 @@ static void poll_set_started(AioContext *ctx, bool started)
         if (fn) {
             fn(node->opaque);
         }
+
+        /* Poll one last time in case ->io_poll_end() raced with the event */
+        if (!started) {
+            progress = node->io_poll(node->opaque) || progress;
+        }
     }
     qemu_lockcnt_dec(&ctx->list_lock);
+
+    return progress;
 }
 
 
@@ -670,12 +678,12 @@ static bool try_poll_mode(AioContext *ctx, int64_t *timeout)
         }
     }
 
-    poll_set_started(ctx, false);
+    if (poll_set_started(ctx, false)) {
+        *timeout = 0;
+        return true;
+    }
 
-    /* Even if we don't run busy polling, try polling once in case it can make
-     * progress and the caller will be able to avoid ppoll(2)/epoll_wait(2).
-     */
-    return run_poll_handlers_once(ctx, timeout);
+    return false;
 }
 
 bool aio_poll(AioContext *ctx, bool blocking)
--
2.24.1
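
A note on the in-diff comment "Poll one last time in case ->io_poll_end()
raced with the event": while userspace polling is active, handlers
typically suppress their fd notification (that is what ->io_poll_begin()
and ->io_poll_end() are for), so an event that arrives just as polling is
being switched off may exist only in memory, with nothing ever written to
the fd. The standalone sketch below (not QEMU code; the producer and
handler here are invented stand-ins) reproduces that window in miniature
and shows why the final ->io_poll() call this patch folds into
poll_set_started(), together with the *timeout = 0 in try_poll_mode(), is
needed before the event loop is allowed to block:

/* Sketch only: why a final poll is needed once notification is re-enabled. */
#define _GNU_SOURCE
#include <poll.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/eventfd.h>
#include <time.h>
#include <unistd.h>

static bool poll_started = true;   /* userspace polling currently enabled */
static bool pending;               /* event only visible to the poll callback */
static int efd;                    /* event visible to ppoll(2) */

/* Producer: while polling is enabled, the eventfd write ("notification")
 * is suppressed and only the in-memory flag is set. */
static void produce_event(void)
{
    pending = true;
    if (!poll_started) {
        uint64_t one = 1;
        (void)write(efd, &one, sizeof(one));
    }
}

/* Stand-in for a handler's io_poll() callback. */
static bool io_poll(void)
{
    if (pending) {
        pending = false;
        return true;
    }
    return false;
}

int main(void)
{
    efd = eventfd(0, EFD_NONBLOCK);

    /* An event arrives while notification is still suppressed... */
    produce_event();

    /* ...and only then does the event loop decide to stop polling. */
    poll_started = false;          /* think io_poll_end(): notification back on */
    bool progress = io_poll();     /* the final poll folded into stop-polling */

    /* Without that final poll, progress would be false, the timeout NULL
     * (block forever), and ppoll(2) would sleep on an eventfd that was never
     * written for the already-pending event. */
    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    struct timespec zero = { 0, 0 };
    int nready = ppoll(&pfd, 1, progress ? &zero : NULL, NULL);

    printf("progress=%d nready=%d\n", progress, nready);
    close(efd);
    return 0;
}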