From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Fam Zheng <fam@euphon.net>,
Peter Maydell <peter.maydell@linaro.org>,
qemu-block@nongnu.org, Max Reitz <mreitz@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Kevin Wolf <kwolf@redhat.com>
Subject: [PULL 4/9] aio-posix: move RCU_READ_LOCK_GUARD() into run_poll_handlers()
Date: Wed, 11 Mar 2020 12:40:40 +0000 [thread overview]
Message-ID: <20200311124045.277969-5-stefanha@redhat.com> (raw)
In-Reply-To: <20200311124045.277969-1-stefanha@redhat.com>

Now that run_poll_handlers_once() is only called by run_poll_handlers(),
we can improve the CPU time profile by moving the expensive
RCU_READ_LOCK_GUARD() out of the polling loop body so that it is taken
once per polling run instead of on every iteration.

This reduces run_poll_handlers()'s share of CPU time from 40% to 10% in
perf's sampling profiler output.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Link: https://lore.kernel.org/r/20200305170806.1313245-3-stefanha@redhat.com
Message-Id: <20200305170806.1313245-3-stefanha@redhat.com>
---
util/aio-posix.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 65964a2597..11a4971955 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -583,16 +583,6 @@ static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout)
     bool progress = false;
     AioHandler *node;
 
-    /*
-     * Optimization: ->io_poll() handlers often contain RCU read critical
-     * sections and we therefore see many rcu_read_lock() -> rcu_read_unlock()
-     * -> rcu_read_lock() -> ... sequences with expensive memory
-     * synchronization primitives. Make the entire polling loop an RCU
-     * critical section because nested rcu_read_lock()/rcu_read_unlock() calls
-     * are cheap.
-     */
-    RCU_READ_LOCK_GUARD();
-
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
         if (!QLIST_IS_INSERTED(node, node_deleted) && node->io_poll &&
             aio_node_check(ctx, node->is_external) &&
@@ -636,6 +626,16 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
 
     trace_run_poll_handlers_begin(ctx, max_ns, *timeout);
 
+    /*
+     * Optimization: ->io_poll() handlers often contain RCU read critical
+     * sections and we therefore see many rcu_read_lock() -> rcu_read_unlock()
+     * -> rcu_read_lock() -> ... sequences with expensive memory
+     * synchronization primitives. Make the entire polling loop an RCU
+     * critical section because nested rcu_read_lock()/rcu_read_unlock() calls
+     * are cheap.
+     */
+    RCU_READ_LOCK_GUARD();
+
     start_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     do {
         progress = run_poll_handlers_once(ctx, timeout);
--
2.24.1
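
For readers unfamiliar with why hoisting the guard helps, the following is a
minimal, self-contained sketch (illustration only, not QEMU code). The stub
lock/unlock functions are hypothetical stand-ins that mimic the cost profile
of rcu_read_lock()/rcu_read_unlock(): the outermost lock/unlock pair pays for
memory synchronization, while nested calls only adjust a per-thread nesting
counter, which is why taking the lock once around the whole polling loop is
cheaper than taking it on every iteration.

/* rcu_hoist_demo.c -- illustration only, not QEMU code */
#include <stdbool.h>
#include <stdio.h>

static unsigned rcu_depth;      /* stand-in for the per-thread nesting count */
static unsigned expensive_ops;  /* counts outermost lock/unlock transitions */

static void rcu_read_lock_stub(void)
{
    if (rcu_depth++ == 0) {
        expensive_ops++;        /* outermost lock: memory synchronization */
    }
}

static void rcu_read_unlock_stub(void)
{
    if (--rcu_depth == 0) {
        expensive_ops++;        /* outermost unlock: memory synchronization */
    }
}

static bool poll_handler_once(void)
{
    /* ->io_poll() handlers typically take their own (nested) read lock. */
    rcu_read_lock_stub();
    rcu_read_unlock_stub();
    return false;               /* no progress in this toy example */
}

int main(void)
{
    enum { ITERATIONS = 1000 };

    /* Before: every iteration pays for an outermost lock/unlock pair. */
    expensive_ops = 0;
    for (int i = 0; i < ITERATIONS; i++) {
        poll_handler_once();
    }
    printf("per-iteration locking: %u expensive transitions\n", expensive_ops);

    /* After: hold the read lock across the loop; inner locks just nest. */
    expensive_ops = 0;
    rcu_read_lock_stub();
    for (int i = 0; i < ITERATIONS; i++) {
        poll_handler_once();
    }
    rcu_read_unlock_stub();
    printf("hoisted locking:       %u expensive transitions\n", expensive_ops);

    return 0;
}

Run as written, the first loop reports 2000 expensive transitions versus 2 for
the hoisted variant; RCU_READ_LOCK_GUARD() achieves the same effect in
run_poll_handlers() by keeping the thread inside one RCU read critical section
for the duration of the polling loop.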