* [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations
@ 2014-07-07 13:18 Paolo Bonzini
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 1/4] block: prefer aio_poll to qemu_aio_wait Paolo Bonzini
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Paolo Bonzini @ 2014-07-07 13:18 UTC
  To: qemu-devel; +Cc: kwolf, stefanha

These patches do some cleanup and optimization in AioContext land.

The first two drop the AIO functions that operate on the main
AioContext.  These are no longer needed now that each BlockDriverState
explicitly operates on its own AioContext.  They are independent of the
rest of the series, and can be skipped if the maintainers prefer to do
so or have comments.

Patch 3 is a testsuite change for the aio_notify optimization, and
patch 4 is the aio_notify patch, now with a better comment
about the smp_mb optimization.

Paolo Bonzini (4):
  block: prefer aio_poll to qemu_aio_wait
  block: drop aio functions that operate on the main AioContext
  test-aio: fix GSource-based timer test
  AioContext: speed up aio_notify

 aio-posix.c               |  38 +++++++++++++++--
 aio-win32.c               |   6 +--
 async.c                   |  19 ++++++++-
 block.c                   |   2 +-
 blockjob.c                |   2 +-
 docs/aio_notify.promela   | 104 ++++++++++++++++++++++++++++++++++++++++++++++
 include/block/aio.h       |  26 +++++-------
 include/block/blockjob.h  |   4 +-
 include/block/coroutine.h |   2 +-
 main-loop.c               |  21 ----------
 qemu-io-cmds.c            |   4 +-
 tests/test-aio.c          |  13 +++---
 tests/test-thread-pool.c  |   4 +-
 13 files changed, 186 insertions(+), 59 deletions(-)
 create mode 100644 docs/aio_notify.promela

-- 
1.8.3.1


* [Qemu-devel] [PATCH 1/4] block: prefer aio_poll to qemu_aio_wait
  2014-07-07 13:18 [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations Paolo Bonzini
@ 2014-07-07 13:18 ` Paolo Bonzini
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 2/4] block: drop aio functions that operate on the main AioContext Paolo Bonzini
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Paolo Bonzini @ 2014-07-07 13:18 UTC
  To: qemu-devel; +Cc: kwolf, stefanha

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
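For context, qemu_aio_wait() is today just a blocking aio_poll() on the
main AioContext (its definition in main-loop.c, which the next patch
removes, is quoted below), so the conversion is mechanical; the only
judgment call is which AioContext each caller should poll:

    bool qemu_aio_wait(void)
    {
        return aio_poll(qemu_aio_context, true);
    }

block.c still polls the main context via qemu_get_aio_context(), while
blockjob.c and qemu-io-cmds.c now poll the BlockDriverState's own
context via bdrv_get_aio_context(bs).
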
 block.c        | 2 +-
 blockjob.c     | 2 +-
 qemu-io-cmds.c | 4 ++--
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/block.c b/block.c
index f80e2b2..591a913 100644
--- a/block.c
+++ b/block.c
@@ -471,7 +471,7 @@ int bdrv_create(BlockDriver *drv, const char* filename,
         co = qemu_coroutine_create(bdrv_create_co_entry);
         qemu_coroutine_enter(co, &cco);
         while (cco.ret == NOT_DONE) {
-            qemu_aio_wait();
+            aio_poll(qemu_get_aio_context(), true);
         }
     }
 
diff --git a/blockjob.c b/blockjob.c
index 67a64ea..ca0b4e2 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -187,7 +187,7 @@ int block_job_cancel_sync(BlockJob *job)
     job->opaque = &data;
     block_job_cancel(job);
     while (data.ret == -EINPROGRESS) {
-        qemu_aio_wait();
+        aio_poll(bdrv_get_aio_context(bs), true);
     }
     return (data.cancelled && data.ret == 0) ? -ECANCELED : data.ret;
 }
diff --git a/qemu-io-cmds.c b/qemu-io-cmds.c
index 60c1ceb..c503fc6 100644
--- a/qemu-io-cmds.c
+++ b/qemu-io-cmds.c
@@ -483,7 +483,7 @@ static int do_co_write_zeroes(BlockDriverState *bs, int64_t offset, int count,
     co = qemu_coroutine_create(co_write_zeroes_entry);
     qemu_coroutine_enter(co, &data);
     while (!data.done) {
-        qemu_aio_wait();
+        aio_poll(bdrv_get_aio_context(bs), true);
     }
     if (data.ret < 0) {
         return data.ret;
@@ -2027,7 +2027,7 @@ static const cmdinfo_t resume_cmd = {
 static int wait_break_f(BlockDriverState *bs, int argc, char **argv)
 {
     while (!bdrv_debug_is_suspended(bs, argv[1])) {
-        qemu_aio_wait();
+        aio_poll(bdrv_get_aio_context(bs), true);
     }
 
     return 0;
-- 
1.8.3.1


* [Qemu-devel] [PATCH 2/4] block: drop aio functions that operate on the main AioContext
  2014-07-07 13:18 [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations Paolo Bonzini
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 1/4] block: prefer aio_poll to qemu_aio_wait Paolo Bonzini
@ 2014-07-07 13:18 ` Paolo Bonzini
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 3/4] test-aio: fix GSource-based timer test Paolo Bonzini
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Paolo Bonzini @ 2014-07-07 13:18 UTC
  To: qemu-devel; +Cc: kwolf, stefanha

The main AioContext should be accessed explicitly via
qemu_get_aio_context().  Most of the time, using the main AioContext is
not the right thing to do anyway, so remove the convenience wrappers
that operate on it implicitly.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
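For any code still using the removed wrappers, they expand to the
following equivalents (taken from their definitions in main-loop.c; the
fd handler variant remains POSIX-only):

    /* before */
    qemu_aio_wait();
    qemu_aio_set_fd_handler(fd, io_read, io_write, opaque);
    qemu_aio_set_event_notifier(notifier, io_read);

    /* after */
    aio_poll(qemu_get_aio_context(), true);
    aio_set_fd_handler(qemu_get_aio_context(), fd, io_read, io_write, opaque);
    aio_set_event_notifier(qemu_get_aio_context(), notifier, io_read);
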
 aio-posix.c               |  4 ++--
 aio-win32.c               |  6 +++---
 include/block/aio.h       | 17 ++---------------
 include/block/blockjob.h  |  4 ++--
 include/block/coroutine.h |  2 +-
 main-loop.c               | 21 ---------------------
 tests/test-thread-pool.c  |  4 ++--
 7 files changed, 12 insertions(+), 46 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index f921d4f..44c4df3 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -125,7 +125,7 @@ static bool aio_dispatch(AioContext *ctx)
     bool progress = false;
 
     /*
-     * We have to walk very carefully in case qemu_aio_set_fd_handler is
+     * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
     node = QLIST_FIRST(&ctx->aio_handlers);
@@ -183,7 +183,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /*
      * If there are callbacks left that have been queued, we need to call them.
      * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for qemu_aio_wait loops).
+     * does not need a complete flush (as is the case for aio_poll loops).
      */
     if (aio_bh_poll(ctx)) {
         blocking = false;
diff --git a/aio-win32.c b/aio-win32.c
index 23f4e5b..c12f61e 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -102,7 +102,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /*
      * If there are callbacks left that have been queued, we need to call them.
      * Do not call select in this case, because it is possible that the caller
-     * does not need a complete flush (as is the case for qemu_aio_wait loops).
+     * does not need a complete flush (as is the case for aio_poll loops).
      */
     if (aio_bh_poll(ctx)) {
         blocking = false;
@@ -115,7 +115,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
     /*
      * Then dispatch any pending callbacks from the GSource.
      *
-     * We have to walk very carefully in case qemu_aio_set_fd_handler is
+     * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
     node = QLIST_FIRST(&ctx->aio_handlers);
@@ -177,7 +177,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         blocking = false;
 
         /* we have to walk very carefully in case
-         * qemu_aio_set_fd_handler is called while we're walking */
+         * aio_set_fd_handler is called while we're walking */
         node = QLIST_FIRST(&ctx->aio_handlers);
         while (node) {
             AioHandler *tmp;
diff --git a/include/block/aio.h b/include/block/aio.h
index a92511b..d81250c 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -220,7 +220,7 @@ bool aio_poll(AioContext *ctx, bool blocking);
 #ifdef CONFIG_POSIX
 /* Register a file descriptor and associated callbacks.  Behaves very similarly
  * to qemu_set_fd_handler2.  Unlike qemu_set_fd_handler2, these callbacks will
- * be invoked when using qemu_aio_wait().
+ * be invoked when using aio_poll().
  *
  * Code that invokes AIO completion functions should rely on this function
  * instead of qemu_set_fd_handler[2].
@@ -234,7 +234,7 @@ void aio_set_fd_handler(AioContext *ctx,
 
 /* Register an event notifier and associated callbacks.  Behaves very similarly
  * to event_notifier_set_handler.  Unlike event_notifier_set_handler, these callbacks
- * will be invoked when using qemu_aio_wait().
+ * will be invoked when using aio_poll().
  *
  * Code that invokes AIO completion functions should rely on this function
  * instead of event_notifier_set_handler.
@@ -251,19 +251,6 @@ GSource *aio_get_g_source(AioContext *ctx);
 /* Return the ThreadPool bound to this AioContext */
 struct ThreadPool *aio_get_thread_pool(AioContext *ctx);
 
-/* Functions to operate on the main QEMU AioContext.  */
-
-bool qemu_aio_wait(void);
-void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read);
-
-#ifdef CONFIG_POSIX
-void qemu_aio_set_fd_handler(int fd,
-                             IOHandler *io_read,
-                             IOHandler *io_write,
-                             void *opaque);
-#endif
-
 /**
  * aio_timer_new:
  * @ctx: the aio context
diff --git a/include/block/blockjob.h b/include/block/blockjob.h
index f3cf63f..60aa835 100644
--- a/include/block/blockjob.h
+++ b/include/block/blockjob.h
@@ -74,7 +74,7 @@ struct BlockJob {
      * Set to true if the job should cancel itself.  The flag must
      * always be tested just before toggling the busy flag from false
      * to true.  After a job has been cancelled, it should only yield
-     * if #qemu_aio_wait will ("sooner or later") reenter the coroutine.
+     * if #aio_poll will ("sooner or later") reenter the coroutine.
      */
     bool cancelled;
 
@@ -87,7 +87,7 @@ struct BlockJob {
     /**
      * Set to false by the job while it is in a quiescent state, where
      * no I/O is pending and the job has yielded on any condition
-     * that is not detected by #qemu_aio_wait, such as a timer.
+     * that is not detected by #aio_poll, such as a timer.
      */
     bool busy;
 
diff --git a/include/block/coroutine.h b/include/block/coroutine.h
index a1797ae..b408f96 100644
--- a/include/block/coroutine.h
+++ b/include/block/coroutine.h
@@ -212,7 +212,7 @@ void coroutine_fn co_sleep_ns(QEMUClockType type, int64_t ns);
  * Yield the coroutine for a given duration
  *
  * Behaves similarly to co_sleep_ns(), but the sleeping coroutine will be
- * resumed when using qemu_aio_wait().
+ * resumed when using aio_poll().
  */
 void coroutine_fn co_aio_sleep_ns(AioContext *ctx, QEMUClockType type,
                                   int64_t ns);
diff --git a/main-loop.c b/main-loop.c
index 8a85493..3cc79f8 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -498,24 +498,3 @@ QEMUBH *qemu_bh_new(QEMUBHFunc *cb, void *opaque)
 {
     return aio_bh_new(qemu_aio_context, cb, opaque);
 }
-
-bool qemu_aio_wait(void)
-{
-    return aio_poll(qemu_aio_context, true);
-}
-
-#ifdef CONFIG_POSIX
-void qemu_aio_set_fd_handler(int fd,
-                             IOHandler *io_read,
-                             IOHandler *io_write,
-                             void *opaque)
-{
-    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, opaque);
-}
-#endif
-
-void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read)
-{
-    aio_set_event_notifier(qemu_aio_context, notifier, io_read);
-}
diff --git a/tests/test-thread-pool.c b/tests/test-thread-pool.c
index aa156bc..f40b7fc 100644
--- a/tests/test-thread-pool.c
+++ b/tests/test-thread-pool.c
@@ -83,7 +83,7 @@ static void co_test_cb(void *opaque)
     data->ret = 0;
     active--;
 
-    /* The test continues in test_submit_co, after qemu_aio_wait_all... */
+    /* The test continues in test_submit_co, after aio_poll... */
 }
 
 static void test_submit_co(void)
@@ -98,7 +98,7 @@ static void test_submit_co(void)
     g_assert_cmpint(active, ==, 1);
     g_assert_cmpint(data.ret, ==, -EINPROGRESS);
 
-    /* qemu_aio_wait_all will execute the rest of the coroutine.  */
+    /* aio_poll will execute the rest of the coroutine.  */
 
     while (data.ret == -EINPROGRESS) {
         aio_poll(ctx, true);
-- 
1.8.3.1


* [Qemu-devel] [PATCH 3/4] test-aio: fix GSource-based timer test
  2014-07-07 13:18 [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations Paolo Bonzini
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 1/4] block: prefer aio_poll to qemu_aio_wait Paolo Bonzini
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 2/4] block: drop aio functions that operate on the main AioContext Paolo Bonzini
@ 2014-07-07 13:18 ` Paolo Bonzini
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 4/4] AioContext: speed up aio_notify Paolo Bonzini
  2014-07-08 15:11 ` [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations Kevin Wolf
  4 siblings, 0 replies; 6+ messages in thread
From: Paolo Bonzini @ 2014-07-07 13:18 UTC
  To: qemu-devel; +Cc: kwolf, stefanha

The current test depends too much on the implementation details of the
AioContext GSource.  Just iterate the main loop until the callback has
been invoked the right number of times.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 tests/test-aio.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/tests/test-aio.c b/tests/test-aio.c
index e5f8b55..264dab9 100644
--- a/tests/test-aio.c
+++ b/tests/test-aio.c
@@ -806,17 +806,16 @@ static void test_source_timer_schedule(void)
     g_usleep(1 * G_USEC_PER_SEC);
     g_assert_cmpint(data.n, ==, 0);
 
-    g_assert(g_main_context_iteration(NULL, false));
+    g_assert(g_main_context_iteration(NULL, true));
     g_assert_cmpint(data.n, ==, 1);
+    expiry += data.ns;
 
-    /* The comment above was not kidding when it said this wakes up itself */
-    do {
-        g_assert(g_main_context_iteration(NULL, true));
-    } while (qemu_clock_get_ns(data.clock_type) <= expiry);
-    g_usleep(1 * G_USEC_PER_SEC);
-    g_main_context_iteration(NULL, false);
+    while (data.n < 2) {
+        g_main_context_iteration(NULL, true);
+    }
 
     g_assert_cmpint(data.n, ==, 2);
+    g_assert(qemu_clock_get_ns(data.clock_type) > expiry);
 
     aio_set_fd_handler(ctx, pipefd[0], NULL, NULL, NULL);
     close(pipefd[0]);
-- 
1.8.3.1


* [Qemu-devel] [PATCH 4/4] AioContext: speed up aio_notify
  2014-07-07 13:18 [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations Paolo Bonzini
                   ` (2 preceding siblings ...)
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 3/4] test-aio: fix GSource-based timer test Paolo Bonzini
@ 2014-07-07 13:18 ` Paolo Bonzini
  2014-07-08 15:11 ` [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations Kevin Wolf
  4 siblings, 0 replies; 6+ messages in thread
From: Paolo Bonzini @ 2014-07-07 13:18 UTC
  To: qemu-devel; +Cc: kwolf, stefanha

In many cases, the call to event_notifier_set in aio_notify is unnecessary.
In particular, if we are executing aio_dispatch, or if aio_poll is not
blocking, we know that we will soon get to the next loop iteration (if
necessary); the thread that hosts the AioContext's event loop does not
need any nudging.

The patch includes a Promela formal model that shows that this really
works and does not need any further complications such as generation
counts.  It does need a memory barrier, though.

Generation counts are unnecessary because any change to ctx->dispatching
after the memory barrier is okay for aio_notify: if it changes from zero
to one, skipping event_notifier_set is the right thing to do; if it
changes from one to zero, the event_notifier_set is unnecessary but
harmless.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
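To make the pairing of memory barriers concrete, here is a sketch of the
two sides of the protocol (simplified; the real code is in async.c and
aio-posix.c below):

    /* Any thread scheduling work, e.g. qemu_bh_schedule + aio_notify: */
    bh->scheduled = 1;
    smp_mb();                               /* write bh->scheduled before    */
    if (!ctx->dispatching) {                /* ... reading ctx->dispatching  */
        event_notifier_set(&ctx->notifier); /* nudge a possibly blocked poll */
    }

    /* The event loop thread, before it may block in poll(): */
    ctx->dispatching = false;
    smp_mb();                               /* write ctx->dispatching before
                                             * re-reading bh->scheduled      */
    /* ... then re-evaluate bottom halves, file descriptors and timers */

Each side writes its own flag before reading the other's, so at least
one of them observes the other's write: either the event loop sees the
scheduled work before blocking, or aio_notify sees dispatching == false
and sets the event notifier.
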
 aio-posix.c             |  34 +++++++++++++++-
 async.c                 |  19 ++++++++-
 docs/aio_notify.promela | 104 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/block/aio.h     |   9 +++++
 4 files changed, 164 insertions(+), 2 deletions(-)
 create mode 100644 docs/aio_notify.promela

diff --git a/aio-posix.c b/aio-posix.c
index 44c4df3..2eada2e 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -175,11 +175,38 @@ static bool aio_dispatch(AioContext *ctx)
 bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
+    bool was_dispatching;
     int ret;
     bool progress;
 
+    was_dispatching = ctx->dispatching;
     progress = false;
 
+    /* aio_notify can avoid the expensive event_notifier_set if
+     * everything (file descriptors, bottom halves, timers) will
+     * be re-evaluated before the next blocking poll().  This happens
+     * in two cases:
+     *
+     * 1) when aio_poll is called with blocking == false
+     *
+     * 2) when we are called after poll().  If we are called before
+     *    poll(), bottom halves will not be re-evaluated and we need
+     *    aio_notify() if blocking == true.
+     *
+     * The first aio_dispatch() only does something when AioContext is
+     * running as a GSource, and in that case aio_poll is used only
+     * with blocking == false, so this optimization is already quite
+     * effective.  However, the code is ugly and should be restructured
+     * to have a single aio_dispatch() call.  To do this, we need to
+     * reorganize aio_poll into a prepare/poll/dispatch model like
+     * glib's.
+     *
+     * If we're in a nested event loop, ctx->dispatching might be true.
+     * In that case we can restore it just before returning, but we
+     * have to clear it now.
+     */
+    aio_set_dispatching(ctx, !blocking);
+
     /*
      * If there are callbacks left that have been queued, we need to call them.
      * Do not call select in this case, because it is possible that the caller
@@ -190,12 +217,14 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress = true;
     }
 
+    /* Re-evaluate condition (1) above.  */
+    aio_set_dispatching(ctx, !blocking);
     if (aio_dispatch(ctx)) {
         progress = true;
     }
 
     if (progress && !blocking) {
-        return true;
+        goto out;
     }
 
     ctx->walking_handlers++;
@@ -234,9 +263,12 @@ bool aio_poll(AioContext *ctx, bool blocking)
     }
 
     /* Run dispatch even if there were no readable fds to run timers */
+    aio_set_dispatching(ctx, true);
     if (aio_dispatch(ctx)) {
         progress = true;
     }
 
+out:
+    aio_set_dispatching(ctx, was_dispatching);
     return progress;
 }
diff --git a/async.c b/async.c
index 5b6fe6b..34af0b2 100644
--- a/async.c
+++ b/async.c
@@ -26,6 +26,7 @@
 #include "block/aio.h"
 #include "block/thread-pool.h"
 #include "qemu/main-loop.h"
+#include "qemu/atomic.h"
 
 /***********************************************************/
 /* bottom halves (can be seen as timers which expire ASAP) */
@@ -247,9 +248,25 @@ ThreadPool *aio_get_thread_pool(AioContext *ctx)
     return ctx->thread_pool;
 }
 
+void aio_set_dispatching(AioContext *ctx, bool dispatching)
+{
+    ctx->dispatching = dispatching;
+    if (!dispatching) {
+        /* Write ctx->dispatching before reading e.g. bh->scheduled.
+         * Optimization: this is only needed when we're entering the "unsafe"
+         * phase where other threads must call event_notifier_set.
+         */
+        smp_mb();
+    }
+}
+
 void aio_notify(AioContext *ctx)
 {
-    event_notifier_set(&ctx->notifier);
+    /* Write e.g. bh->scheduled before reading ctx->dispatching.  */
+    smp_mb();
+    if (!ctx->dispatching) {
+        event_notifier_set(&ctx->notifier);
+    }
 }
 
 static void aio_timerlist_notify(void *opaque)
diff --git a/docs/aio_notify.promela b/docs/aio_notify.promela
new file mode 100644
index 0000000..ad3f6f0
--- /dev/null
+++ b/docs/aio_notify.promela
@@ -0,0 +1,104 @@
+/*
+ * This model describes the interaction between aio_set_dispatching()
+ * and aio_notify().
+ *
+ * Author: Paolo Bonzini <pbonzini@redhat.com>
+ *
+ * This file is in the public domain.  If you really want a license,
+ * the WTFPL will do.
+ *
+ * To simulate it:
+ *     spin -p docs/aio_notify.promela
+ *
+ * To verify it:
+ *     spin -a docs/aio_notify.promela
+ *     gcc -O2 pan.c
+ *     ./a.out -a
+ */
+
+#define MAX   4
+#define LAST  (1 << (MAX - 1))
+#define FINAL ((LAST << 1) - 1)
+
+bool dispatching;
+bool event;
+
+int req, done;
+
+active proctype waiter()
+{
+     int fetch, blocking;
+
+     do
+        :: done != FINAL -> {
+            // Computing "blocking" is separate from execution of the
+            // "bottom half"
+            blocking = (req == 0);
+
+            // This is our "bottom half"
+            atomic { fetch = req; req = 0; }
+            done = done | fetch;
+
+            // Wait for a nudge from the other side
+            do
+                :: event == 1 -> { event = 0; break; }
+                :: !blocking  -> break;
+            od;
+
+            dispatching = 1;
+
+            // If you are simulating this model, you may want to add
+            // something like this here:
+            //
+            //      int foo; foo++; foo++; foo++;
+            //
+            // This only wastes some time and makes it more likely
+            // that the notifier process hits the "fast path".
+
+            dispatching = 0;
+        }
+        :: else -> break;
+    od
+}
+
+active proctype notifier()
+{
+    int next = 1;
+    int sets = 0;
+
+    do
+        :: next <= LAST -> {
+            // generate a request
+            req = req | next;
+            next = next << 1;
+
+            // aio_notify
+            if
+                :: dispatching == 0 -> sets++; event = 1;
+                :: else             -> skip;
+            fi;
+
+            // Test both synchronous and asynchronous delivery
+            if
+                :: 1 -> do
+                            :: req == 0 -> break;
+                        od;
+                :: 1 -> skip;
+            fi;
+        }
+        :: else -> break;
+    od;
+    printf("Skipped %d event_notifier_set\n", MAX - sets);
+}
+
+#define p (done == FINAL)
+
+never  {
+    do
+        :: 1                      // after an arbitrarily long prefix
+        :: p -> break             // p becomes true
+    od;
+    do
+        :: !p -> accept: break    // it then must remain true forever after
+    od
+}
diff --git a/include/block/aio.h b/include/block/aio.h
index d81250c..433e7ff 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -60,8 +60,14 @@ struct AioContext {
      */
     int walking_handlers;
 
+    /* Used to avoid unnecessary event_notifier_set calls in aio_notify.
+     * Writes protected by lock or BQL, reads are lockless.
+     */
+    bool dispatching;
+
     /* lock to protect between bh's adders and deleter */
     QemuMutex bh_lock;
+
     /* Anchor of the list of Bottom Halves belonging to the context */
     struct QEMUBH *first_bh;
 
@@ -83,6 +89,9 @@ struct AioContext {
     QEMUTimerListGroup tlg;
 };
 
+/* Used internally to synchronize aio_poll against qemu_bh_schedule.  */
+void aio_set_dispatching(AioContext *ctx, bool dispatching);
+
 /**
  * aio_context_new: Allocate a new AioContext.
  *
-- 
1.8.3.1


* Re: [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations
  2014-07-07 13:18 [Qemu-devel] [PATCH for 2.1 0/4] AioContext cleanups and optimizations Paolo Bonzini
                   ` (3 preceding siblings ...)
  2014-07-07 13:18 ` [Qemu-devel] [PATCH 4/4] AioContext: speed up aio_notify Paolo Bonzini
@ 2014-07-08 15:11 ` Kevin Wolf
  4 siblings, 0 replies; 6+ messages in thread
From: Kevin Wolf @ 2014-07-08 15:11 UTC
  To: Paolo Bonzini; +Cc: qemu-devel, stefanha

On 07.07.2014 at 15:18, Paolo Bonzini wrote:
> These patches do some cleanup and optimization in AioContext land.
> 
> The first two drop the AIO functions that operate on the main
> AioContext.  These are no longer needed now that each BlockDriverState
> explicitly operates on its own AioContext.  They are independent of the
> rest of the series, and can be skipped if the maintainers prefer to do
> so or have comments.
> 
> Patch 3 is a testsuite change for the aio_notify optimization, and
> patch 4 is the aio_notify patch, now with a better comment
> about the smp_mb optimization.

Thanks, applied to the block branch.

Kevin

