From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: [Qemu-devel] [PULL 16/35] AioContext: export and use aio_dispatch
Date: Fri, 29 Aug 2014 17:29:44 +0100 [thread overview]
Message-ID: <1409329803-20744-17-git-send-email-stefanha@redhat.com> (raw)
In-Reply-To: <1409329803-20744-1-git-send-email-stefanha@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
So far, aio_poll's scheme was dispatch/poll/dispatch, where
the first dispatch phase was used only in the GSource case in
order to avoid a blocking poll. Earlier patches changed it to
dispatch/prepare/poll/dispatch, where prepare is aio_compute_timeout.
By making aio_dispatch public, we can remove the first dispatch
phase altogether, so that both aio_poll and the GSource use the same
prepare/poll/dispatch scheme.
This patch breaks the invariant that aio_poll(..., true) will not block
the first time it returns false. This invariant used to be fundamental
to qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}", but
no code in QEMU relies on it anymore. The return value of aio_poll()
is now comparable to that of g_main_context_iteration.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
aio-posix.c | 55 +++++++++++++----------------------------------------
aio-win32.c | 31 ++++--------------------------
async.c | 2 +-
include/block/aio.h | 6 ++++++
4 files changed, 24 insertions(+), 70 deletions(-)
diff --git a/aio-posix.c b/aio-posix.c
index 798a3ff..0936b4f 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -119,12 +119,21 @@ bool aio_pending(AioContext *ctx)
return false;
}
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
{
AioHandler *node;
bool progress = false;
/*
+ * If there are callbacks left that have been queued, we need to call them.
+ * Do not call select in this case, because it is possible that the caller
+ * does not need a complete flush (as is the case for aio_poll loops).
+ */
+ if (aio_bh_poll(ctx)) {
+ progress = true;
+ }
+
+ /*
* We have to walk very carefully in case aio_set_fd_handler is
* called while we're walking.
*/
@@ -184,22 +193,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
/* aio_notify can avoid the expensive event_notifier_set if
* everything (file descriptors, bottom halves, timers) will
- * be re-evaluated before the next blocking poll(). This happens
- * in two cases:
- *
- * 1) when aio_poll is called with blocking == false
- *
- * 2) when we are called after poll(). If we are called before
- * poll(), bottom halves will not be re-evaluated and we need
- * aio_notify() if blocking == true.
- *
- * The first aio_dispatch() only does something when AioContext is
- * running as a GSource, and in that case aio_poll is used only
- * with blocking == false, so this optimization is already quite
- * effective. However, the code is ugly and should be restructured
- * to have a single aio_dispatch() call. To do this, we need to
- * reorganize aio_poll into a prepare/poll/dispatch model like
- * glib's.
+ * be re-evaluated before the next blocking poll(). This is
+ * already true when aio_poll is called with blocking == false;
+ * if blocking == true, it is only true after poll() returns.
*
* If we're in a nested event loop, ctx->dispatching might be true.
* In that case we can restore it just before returning, but we
@@ -207,26 +203,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
*/
aio_set_dispatching(ctx, !blocking);
- /*
- * If there are callbacks left that have been queued, we need to call them.
- * Do not call select in this case, because it is possible that the caller
- * does not need a complete flush (as is the case for aio_poll loops).
- */
- if (aio_bh_poll(ctx)) {
- blocking = false;
- progress = true;
- }
-
- /* Re-evaluate condition (1) above. */
- aio_set_dispatching(ctx, !blocking);
- if (aio_dispatch(ctx)) {
- progress = true;
- }
-
- if (progress && !blocking) {
- goto out;
- }
-
ctx->walking_handlers++;
g_array_set_size(ctx->pollfds, 0);
@@ -264,15 +240,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
/* Run dispatch even if there were no readable fds to run timers */
aio_set_dispatching(ctx, true);
- if (aio_bh_poll(ctx)) {
- progress = true;
- }
-
if (aio_dispatch(ctx)) {
progress = true;
}
-out:
aio_set_dispatching(ctx, was_dispatching);
return progress;
}
diff --git a/aio-win32.c b/aio-win32.c
index 2ac38a8..1ec434a 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -130,11 +130,12 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
return progress;
}
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
{
bool progress;
- progress = aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
+ progress = aio_bh_poll(ctx);
+ progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
@@ -149,23 +150,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
progress = false;
- /*
- * If there are callbacks left that have been queued, we need to call then.
- * Do not call select in this case, because it is possible that the caller
- * does not need a complete flush (as is the case for aio_poll loops).
- */
- if (aio_bh_poll(ctx)) {
- blocking = false;
- progress = true;
- }
-
- /* Dispatch any pending callbacks from the GSource. */
- progress |= aio_dispatch(ctx);
-
- if (progress && !blocking) {
- return true;
- }
-
ctx->walking_handlers++;
/* fill fd sets */
@@ -205,14 +189,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
events[ret - WAIT_OBJECT_0] = events[--count];
}
- if (blocking) {
- /* Run the timers a second time. We do this because otherwise aio_wait
- * will not note progress - and will stop a drain early - if we have
- * a timer that was not ready to run entering g_poll but is ready
- * after g_poll. This will only do anything if a timer has expired.
- */
- progress |= timerlistgroup_run_timers(&ctx->tlg);
- }
+ progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
diff --git a/async.c b/async.c
index 09e09c6..293a52a 100644
--- a/async.c
+++ b/async.c
@@ -213,7 +213,7 @@ aio_ctx_dispatch(GSource *source,
AioContext *ctx = (AioContext *) source;
assert(callback == NULL);
- aio_poll(ctx, false);
+ aio_dispatch(ctx);
return true;
}
diff --git a/include/block/aio.h b/include/block/aio.h
index 05b531c..7ba3e96 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -211,6 +211,12 @@ void qemu_bh_delete(QEMUBH *bh);
*/
bool aio_pending(AioContext *ctx);
+/* Dispatch any pending callbacks from the GSource attached to the AioContext.
+ *
+ * This is used internally in the implementation of the GSource.
+ */
+bool aio_dispatch(AioContext *ctx);
+
/* Progress in completing AIO work to occur. This can issue new pending
* aio as a result of executing I/O completion or bh callbacks.
*
--
1.9.3