From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: stefanha@redhat.com, famz@redhat.com
Subject: [Qemu-devel] [PATCH 15/17] aio-posix: partially inline aio_dispatch into aio_poll
Date: Fri, 20 Jan 2017 17:43:20 +0100
Message-ID: <20170120164322.21851-16-pbonzini@redhat.com>
In-Reply-To: <20170120164322.21851-1-pbonzini@redhat.com>
This patch prepares for the removal of unnecessary lockcnt inc/dec pairs.
Extract the dispatching loop for file descriptor handlers into a new
function aio_dispatch_handlers, and then inline aio_dispatch into
aio_poll.

aio_dispatch can now become void.

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/block/aio.h |  6 +-----
 util/aio-posix.c    | 44 ++++++++++++++------------------------------
 util/aio-win32.c    | 13 ++++---------
 util/async.c        |  2 +-
 4 files changed, 20 insertions(+), 45 deletions(-)
diff --git a/include/block/aio.h b/include/block/aio.h
index 614cbc6..677b6ff 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -310,12 +310,8 @@ bool aio_pending(AioContext *ctx);
 /* Dispatch any pending callbacks from the GSource attached to the AioContext.
  *
  * This is used internally in the implementation of the GSource.
- *
- * @dispatch_fds: true to process fds, false to skip them
- *                (can be used as an optimization by callers that know there
- *                are no fds ready)
  */
-bool aio_dispatch(AioContext *ctx, bool dispatch_fds);
+void aio_dispatch(AioContext *ctx);
 
 /* Progress in completing AIO work to occur.  This can issue new pending
  * aio as a result of executing I/O completion or bh callbacks.
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 6beebcd..51e92b8 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -386,12 +386,6 @@ static bool aio_dispatch_handlers(AioContext *ctx)
     AioHandler *node, *tmp;
     bool progress = false;
 
-    /*
-     * We have to walk very carefully in case aio_set_fd_handler is
-     * called while we're walking.
-     */
-    qemu_lockcnt_inc(&ctx->list_lock);
-
     QLIST_FOREACH_SAFE_RCU(node, &ctx->aio_handlers, node, tmp) {
         int revents;
 
@@ -426,33 +420,18 @@ static bool aio_dispatch_handlers(AioContext *ctx)
         }
     }
 
-    qemu_lockcnt_dec(&ctx->list_lock);
     return progress;
 }
 
-/*
- * Note that dispatch_fds == false has the side-effect of post-poning the
- * freeing of deleted handlers.
- */
-bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
+void aio_dispatch(AioContext *ctx)
 {
-    bool progress;
-
-    /*
-     * If there are callbacks left that have been queued, we need to call them.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    progress = aio_bh_poll(ctx);
+    aio_bh_poll(ctx);
 
-    if (dispatch_fds) {
-        progress |= aio_dispatch_handlers(ctx);
-    }
-
-    /* Run our timers */
-    progress |= timerlistgroup_run_timers(&ctx->tlg);
+    qemu_lockcnt_inc(&ctx->list_lock);
+    aio_dispatch_handlers(ctx);
+    qemu_lockcnt_dec(&ctx->list_lock);
 
-    return progress;
+    timerlistgroup_run_timers(&ctx->tlg);
 }
 
 /* These thread-local variables are used only in a small part of aio_poll
@@ -701,11 +680,16 @@ bool aio_poll(AioContext *ctx, bool blocking)
     npfd = 0;
     qemu_lockcnt_dec(&ctx->list_lock);
 
-    /* Run dispatch even if there were no readable fds to run timers */
-    if (aio_dispatch(ctx, ret > 0)) {
-        progress = true;
+    progress |= aio_bh_poll(ctx);
+
+    if (ret > 0) {
+        qemu_lockcnt_inc(&ctx->list_lock);
+        progress |= aio_dispatch_handlers(ctx);
+        qemu_lockcnt_dec(&ctx->list_lock);
     }
 
+    progress |= timerlistgroup_run_timers(&ctx->tlg);
+
     return progress;
 }
 
diff --git a/util/aio-win32.c b/util/aio-win32.c
index 20b63ce..442a179 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -309,16 +309,11 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
     return progress;
 }
 
-bool aio_dispatch(AioContext *ctx, bool dispatch_fds)
+void aio_dispatch(AioContext *ctx)
 {
-    bool progress;
-
-    progress = aio_bh_poll(ctx);
-    if (dispatch_fds) {
-        progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
-    }
-    progress |= timerlistgroup_run_timers(&ctx->tlg);
-    return progress;
+    aio_bh_poll(ctx);
+    aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
+    timerlistgroup_run_timers(&ctx->tlg);
 }
 
 bool aio_poll(AioContext *ctx, bool blocking)
diff --git a/util/async.c b/util/async.c
index 99b9d7e..cc40735 100644
--- a/util/async.c
+++ b/util/async.c
@@ -258,7 +258,7 @@ aio_ctx_dispatch(GSource *source,
     AioContext *ctx = (AioContext *) source;
 
     assert(callback == NULL);
-    aio_dispatch(ctx, true);
+    aio_dispatch(ctx);
     return true;
 }
--
2.9.3
Thread overview: 35+ messages
2017-01-20 16:43 [Qemu-devel] [PATCH 00/17] aio_context_acquire/release pushdown, part 2 Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 01/17] aio: introduce aio_co_schedule and aio_co_wake Paolo Bonzini
2017-01-30 15:18 ` Stefan Hajnoczi
2017-01-30 20:15 ` Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 02/17] block-backend: allow blk_prw from coroutine context Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 03/17] test-thread-pool: use generic AioContext infrastructure Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 04/17] block: move AioContext and QEMUTimer to libqemuutil Paolo Bonzini
2017-01-24 9:34 ` Daniel P. Berrange
2017-01-30 15:24 ` Stefan Hajnoczi
2017-01-20 16:43 ` [Qemu-devel] [PATCH 05/17] io: add methods to set I/O handlers on AioContext Paolo Bonzini
2017-01-24 9:35 ` Daniel P. Berrange
2017-01-30 15:32 ` Stefan Hajnoczi
2017-01-20 16:43 ` [Qemu-devel] [PATCH 06/17] io: make qio_channel_yield aware of AioContexts Paolo Bonzini
2017-01-24 9:36 ` Daniel P. Berrange
2017-01-24 9:38 ` Paolo Bonzini
2017-01-24 9:40 ` Daniel P. Berrange
2017-01-30 15:37 ` Stefan Hajnoczi
2017-01-20 16:43 ` [Qemu-devel] [PATCH 07/17] nbd: convert to use qio_channel_yield Paolo Bonzini
2017-01-30 15:50 ` Stefan Hajnoczi
2017-01-30 21:18 ` Paolo Bonzini
2017-01-31 13:50 ` Stefan Hajnoczi
2017-01-31 14:29 ` Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 08/17] coroutine-lock: reschedule coroutine on the AioContext it was running on Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 09/17] qed: introduce qed_aio_start_io and qed_aio_next_io_cb Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 10/17] aio: push aio_context_acquire/release down to dispatching Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 11/17] block: explicitly acquire aiocontext in timers that need it Paolo Bonzini
2017-01-30 16:01 ` Stefan Hajnoczi
2017-01-20 16:43 ` [Qemu-devel] [PATCH 12/17] block: explicitly acquire aiocontext in callbacks " Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 13/17] block: explicitly acquire aiocontext in bottom halves " Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 14/17] block: explicitly acquire aiocontext in aio callbacks " Paolo Bonzini
2017-01-20 16:43 ` Paolo Bonzini [this message]
2017-01-20 16:43 ` [Qemu-devel] [PATCH 16/17] async: remove unnecessary inc/dec pairs Paolo Bonzini
2017-01-20 16:43 ` [Qemu-devel] [PATCH 17/17] block: document fields protected by AioContext lock Paolo Bonzini
2017-01-24 9:38 ` [Qemu-devel] [PATCH 00/17] aio_context_acquire/release pushdown, part 2 Daniel P. Berrange
2017-01-29 20:48 ` Paolo Bonzini