From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH 05/10] AioContext: export and use aio_dispatch
Date: Wed,  9 Jul 2014 11:53:05 +0200
Message-ID: <1404899590-24973-6-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1404899590-24973-1-git-send-email-pbonzini@redhat.com>

So far, aio_poll's scheme was dispatch/poll/dispatch, where the
first dispatch phase was used only in the GSource case, in order
to avoid a blocking poll.  Earlier patches changed the scheme to
dispatch/prepare/poll/dispatch, where the prepare phase is
aio_compute_timeout().
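
In outline, the resulting scheme is (a minimal sketch; the real
aio_poll() in aio-posix.c also manages ctx->dispatching and
ctx->walking_handlers, and aio_poll_sketch is a made-up name):

    /* Minimal sketch of prepare/poll/dispatch; locking, error paths,
     * ctx->dispatching and ctx->walking_handlers are omitted.
     */
    static bool aio_poll_sketch(AioContext *ctx, bool blocking)
    {
        int64_t timeout;

        /* prepare: how long may poll() sleep before a bh or timer fires? */
        timeout = blocking ? aio_compute_timeout(ctx) : 0;

        /* poll: wait for file descriptors to become ready */
        qemu_poll_ns((GPollFD *)ctx->pollfds->data, ctx->pollfds->len,
                     timeout);

        /* dispatch: run bottom halves, fd handlers and timers */
        return aio_dispatch(ctx);
    }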

By making aio_dispatch public, we can remove the first dispatch
phase altogether, so that both aio_poll and the GSource use the same
prepare/poll/dispatch scheme.
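
For reference, glib already drives the AioContext GSource through the
same model via the GSourceFuncs table in async.c (glib itself performs
the poll step between check and dispatch; the comments are annotations
added here, not part of the source):

    static GSourceFuncs aio_source_funcs = {
        aio_ctx_prepare,    /* prepare:  timeout from aio_compute_timeout() */
        aio_ctx_check,      /* check:    pending bhs, ready fds, timers?    */
        aio_ctx_dispatch,   /* dispatch: now just calls aio_dispatch()      */
        aio_ctx_finalize
    };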

This patch breaks the invariant that aio_poll(..., true) will not block
the first time it returns false.  That invariant used to be fundamental
for qemu_aio_flush's implementation as "while (qemu_aio_wait()) {}",
but no code in QEMU relies on it anymore.  The return value of
aio_poll() is now comparable to that of g_main_context_iteration().
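
For illustration, a caller that drains pending work must now test its
own completion condition, exactly as g_main_context_iteration() callers
do ("done" is a hypothetical flag set by a completion callback):

    /* Hypothetical drain loop.  The retired idiom
     *
     *     while (qemu_aio_wait()) { }
     *
     * treated the first false return of aio_poll(ctx, true) as "no work
     * left"; the return value now only reports whether this iteration
     * dispatched something, so the caller supplies the exit condition.
     */
    while (!done) {
        aio_poll(ctx, true);
    }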

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 aio-posix.c         | 55 +++++++++++++----------------------------------------
 aio-win32.c         | 31 ++++--------------------------
 async.c             |  2 +-
 include/block/aio.h |  6 ++++++
 4 files changed, 24 insertions(+), 70 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 798a3ff..0936b4f 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -119,12 +119,21 @@ bool aio_pending(AioContext *ctx)
     return false;
 }
 
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
 {
     AioHandler *node;
     bool progress = false;
 
     /*
+     * If there are callbacks left that have been queued, we need to call them.
+     * Do not call select in this case, because it is possible that the caller
+     * does not need a complete flush (as is the case for aio_poll loops).
+     */
+    if (aio_bh_poll(ctx)) {
+        progress = true;
+    }
+
+    /*
      * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
@@ -184,22 +193,9 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* aio_notify can avoid the expensive event_notifier_set if
      * everything (file descriptors, bottom halves, timers) will
-     * be re-evaluated before the next blocking poll().  This happens
-     * in two cases:
-     *
-     * 1) when aio_poll is called with blocking == false
-     *
-     * 2) when we are called after poll().  If we are called before
-     *    poll(), bottom halves will not be re-evaluated and we need
-     *    aio_notify() if blocking == true.
-     *
-     * The first aio_dispatch() only does something when AioContext is
-     * running as a GSource, and in that case aio_poll is used only
-     * with blocking == false, so this optimization is already quite
-     * effective.  However, the code is ugly and should be restructured
-     * to have a single aio_dispatch() call.  To do this, we need to
-     * reorganize aio_poll into a prepare/poll/dispatch model like
-     * glib's.
+     * be re-evaluated before the next blocking poll().  This is
+     * already true when aio_poll is called with blocking == false;
+     * if blocking == true, it is only true after poll() returns.
      *
      * If we're in a nested event loop, ctx->dispatching might be true.
      * In that case we can restore it just before returning, but we
@@ -207,26 +203,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
      */
     aio_set_dispatching(ctx, !blocking);
 
-    /*
-     * If there are callbacks left that have been queued, we need to call them.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
-
-    /* Re-evaluate condition (1) above.  */
-    aio_set_dispatching(ctx, !blocking);
-    if (aio_dispatch(ctx)) {
-        progress = true;
-    }
-
-    if (progress && !blocking) {
-        goto out;
-    }
-
     ctx->walking_handlers++;
 
     g_array_set_size(ctx->pollfds, 0);
@@ -264,15 +240,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     /* Run dispatch even if there were no readable fds to run timers */
     aio_set_dispatching(ctx, true);
-    if (aio_bh_poll(ctx)) {
-        progress = true;
-    }
-
     if (aio_dispatch(ctx)) {
         progress = true;
     }
 
-out:
     aio_set_dispatching(ctx, was_dispatching);
     return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 2ac38a8..1ec434a 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -130,11 +130,12 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
     return progress;
 }
 
-static bool aio_dispatch(AioContext *ctx)
+bool aio_dispatch(AioContext *ctx)
 {
     bool progress;
 
-    progress = aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
+    progress = aio_bh_poll(ctx);
+    progress |= aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
     progress |= timerlistgroup_run_timers(&ctx->tlg);
     return progress;
 }
@@ -149,23 +150,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     progress = false;
 
-    /*
-     * If there are callbacks left that have been queued, we need to call them.
-     * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for aio_poll loops).
-     */
-    if (aio_bh_poll(ctx)) {
-        blocking = false;
-        progress = true;
-    }
-
-    /* Dispatch any pending callbacks from the GSource.  */
-    progress |= aio_dispatch(ctx);
-
-    if (progress && !blocking) {
-        return true;
-    }
-
     ctx->walking_handlers++;
 
     /* fill fd sets */
@@ -205,14 +189,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
-    if (blocking) {
-        /* Run the timers a second time. We do this because otherwise aio_wait
-         * will not note progress - and will stop a drain early - if we have
-         * a timer that was not ready to run entering g_poll but is ready
-         * after g_poll. This will only do anything if a timer has expired.
-         */
-        progress |= timerlistgroup_run_timers(&ctx->tlg);
-    }
+    progress |= timerlistgroup_run_timers(&ctx->tlg);
 
     return progress;
 }
diff --git a/async.c b/async.c
index ac40eab..a5126ff 100644
--- a/async.c
+++ b/async.c
@@ -213,7 +213,7 @@ aio_ctx_dispatch(GSource     *source,
     AioContext *ctx = (AioContext *) source;
 
     assert(callback == NULL);
-    aio_poll(ctx, false);
+    aio_dispatch(ctx);
     return true;
 }
 
diff --git a/include/block/aio.h b/include/block/aio.h
index 7eeb961..45408f7 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -211,6 +211,12 @@ void qemu_bh_delete(QEMUBH *bh);
  */
 bool aio_pending(AioContext *ctx);
 
+/* Dispatch any pending callbacks from the GSource attached to the AioContext.
+ *
+ * This is used internally in the implementation of the GSource.
+ */
+bool aio_dispatch(AioContext *ctx);
+
 /* Progress in completing AIO work to occur.  This can issue new pending
  * aio as a result of executing I/O completion or bh callbacks.
  *
-- 
1.9.3
