qemu-devel.nongnu.org archive mirror
* [Qemu-devel] [RFC 00/13] aio: drop io_flush()
@ 2013-04-11 15:44 Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 01/13] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
                   ` (13 more replies)
  0 siblings, 14 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

Here's my entry to the "let's get rid of io_flush()" effort.  It's based on
Paolo's insight about bdrv_drain_all() that the block layer already has a
tracked_requests list.  io_flush() is redundant since the block layer already
knows if requests are pending.

The point of this effort is to simplify our event loop(s).  If we can drop
custom features like io_flush(), it becomes easier to unify event loops and
reuse glib or other options.

This is also important to me for dataplane, since bdrv_drain_all() is one of
the synchronization points between threads.  QEMU monitor commands invoke
bdrv_drain_all() while the block device is accessed from a dataplane thread.

Background on io_flush() semantics:

The io_flush() handler must return 1 if this aio fd handler is active.  That
is, requests are pending and we'd like to make progress by monitoring the fd.

If io_flush() returns 0, the aio event loop skips monitoring this fd.  This is
critical for block drivers like iscsi where we have an idle TCP socket which we
want to block on *only* when there are pending requests.
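
For illustration, a minimal .io_flush() handler of the kind this series
removes might look roughly like this (a sketch only; the driver state
struct and its pending_requests field are made-up names, not code from
the tree):

    typedef struct ExampleDriverState {
        int pending_requests;   /* made-up field for illustration */
    } ExampleDriverState;

    static int example_io_flush(void *opaque)
    {
        ExampleDriverState *s = opaque;

        /* 1 = requests pending, keep monitoring the fd;
         * 0 = idle, the event loop may skip the fd entirely. */
        return s->pending_requests > 0;
    }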

The series works as follows:

1. Make bdrv_drain_all() use tracked_requests to determine when to stop
   waiting.  From this point onwards io_flush() is redundant.

2. Drop io_flush() handlers and block driver internal state (e.g. in_flight
   counters).  Just pass NULL for the io_flush argument.

3. Drop io_flush argument from aio_set_fd_handler() and related functions.

I split Step 2 from Step 3 so that experts in sheepdog, rbd, curl, etc. can
review those patches in isolation.  Otherwise we'd have a monster patch that
touches all files and would be hard to review.
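
To make the split concrete, a typical driver call site changes roughly as
follows (a sketch; io_read_cb, io_write_cb and opaque are placeholders,
the real call sites are in the per-driver patches below):

    /* Step 2: the io_flush parameter is still there, but drivers pass NULL */
    qemu_aio_set_fd_handler(fd, io_read_cb, io_write_cb, NULL, opaque);

    /* Step 3: the io_flush parameter is dropped from the prototype */
    qemu_aio_set_fd_handler(fd, io_read_cb, io_write_cb, opaque);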

Stefan Hajnoczi (13):
  block: stop relying on io_flush() in bdrv_drain_all()
  dataplane/virtio-blk: check exit conditions before aio_poll()
  aio: stop using .io_flush()
  block/curl: drop curl_aio_flush()
  block/gluster: drop qemu_gluster_aio_flush_cb()
  block/iscsi: drop iscsi_process_flush()
  block/linux-aio: drop qemu_laio_flush_cb()
  block/nbd: drop nbd_have_request()
  block/rbd: drop qemu_rbd_aio_flush_cb()
  block/sheepdog: drop have_co_req() and aio_flush_request()
  dataplane/virtio-blk: drop flush_true() and flush_io()
  thread-pool: drop thread_pool_active()
  aio: drop io_flush argument

 aio-posix.c                     | 31 ++++---------------------------
 aio-win32.c                     | 26 +++-----------------------
 async.c                         |  4 ++--
 block.c                         | 28 ++++++++++++++++++----------
 block/curl.c                    | 25 ++++---------------------
 block/gluster.c                 | 21 +++------------------
 block/iscsi.c                   | 10 +---------
 block/linux-aio.c               | 18 ++----------------
 block/nbd.c                     | 18 ++++--------------
 block/rbd.c                     | 16 ++--------------
 block/sheepdog.c                | 33 ++++++++-------------------------
 hw/block/dataplane/virtio-blk.c | 25 ++++++-------------------
 include/block/aio.h             |  8 ++------
 main-loop.c                     |  9 +++------
 thread-pool.c                   | 11 ++---------
 15 files changed, 64 insertions(+), 219 deletions(-)

-- 
1.8.1.4

* [Qemu-devel] [RFC 01/13] block: stop relying on io_flush() in bdrv_drain_all()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 02/13] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

If a block driver has no file descriptors to monitor but there are still
active requests, it can return 1 from .io_flush().  This is used to spin
during synchronous I/O.

Stop relying on .io_flush() and instead check
QLIST_EMPTY(&bs->tracked_requests) to decide whether there are active
requests.

This is the first step in removing .io_flush() so that event loops no
longer need to have the concept of synchronous I/O.  Eventually we may
be able to kill synchronous I/O completely by running everything in a
coroutine, but that is future work.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block.c | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/block.c b/block.c
index 602d8a4..93d676b 100644
--- a/block.c
+++ b/block.c
@@ -1360,6 +1360,22 @@ void bdrv_close_all(void)
     }
 }
 
+/* Check if any requests are in-flight (including throttled requests) */
+static bool bdrv_requests_pending(void)
+{
+    BlockDriverState *bs;
+
+    QTAILQ_FOREACH(bs, &bdrv_states, list) {
+        if (!QLIST_EMPTY(&bs->tracked_requests)) {
+            return true;
+        }
+        if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
+            return true;
+        }
+    }
+    return false;
+}
+
 /*
  * Wait for pending requests to complete across all BlockDriverStates
  *
@@ -1375,26 +1391,18 @@ void bdrv_close_all(void)
 void bdrv_drain_all(void)
 {
     BlockDriverState *bs;
-    bool busy;
-
-    do {
-        busy = qemu_aio_wait();
 
+    while (bdrv_requests_pending()) {
         /* FIXME: We do not have timer support here, so this is effectively
          * a busy wait.
          */
         QTAILQ_FOREACH(bs, &bdrv_states, list) {
             if (!qemu_co_queue_empty(&bs->throttled_reqs)) {
                 qemu_co_queue_restart_all(&bs->throttled_reqs);
-                busy = true;
             }
         }
-    } while (busy);
 
-    /* If requests are still pending there is a bug somewhere */
-    QTAILQ_FOREACH(bs, &bdrv_states, list) {
-        assert(QLIST_EMPTY(&bs->tracked_requests));
-        assert(qemu_co_queue_empty(&bs->throttled_reqs));
+        qemu_aio_wait();
     }
 }
 
-- 
1.8.1.4

* [Qemu-devel] [RFC 02/13] dataplane/virtio-blk: check exit conditions before aio_poll()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 01/13] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 03/13] aio: stop using .io_flush() Stefan Hajnoczi
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

Check exit conditions before entering blocking aio_poll().  This is
mainly for consistency since it's unlikely that we are stopping in the
first event loop iteration.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 5baef23..982896c 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -378,9 +378,9 @@ static void *data_plane_thread(void *opaque)
 {
     VirtIOBlockDataPlane *s = opaque;
 
-    do {
+    while (!s->stopping || s->num_reqs > 0) {
         aio_poll(s->ctx, true);
-    } while (!s->stopping || s->num_reqs > 0);
+    }
     return NULL;
 }
 
-- 
1.8.1.4

* [Qemu-devel] [RFC 03/13] aio: stop using .io_flush()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 01/13] block: stop relying on io_flush() in bdrv_drain_all() Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 02/13] dataplane/virtio-blk: check exit conditions before aio_poll() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 04/13] block/curl: drop curl_aio_flush() Stefan Hajnoczi
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

Now that bdrv_drain_all() checks that requests are pending before
calling qemu_aio_wait(), it is no longer necessary to call .io_flush()
handlers.

Behavior of aio_poll() changes as follows:

.io_flush() is no longer invoked and file descriptors are *always*
monitored.  Previously returning 0 from .io_flush() would skip this file
descriptor.

Due to these changes it is essential to check that requests are pending
before calling qemu_aio_wait().  Failure to do so means we block, for
example, waiting for an idle iSCSI socket to become readable when there
are no requests.  Currently all qemu_aio_wait()/aio_poll() callers check
before calling.
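
In other words, callers follow the pattern introduced in the previous
patch, roughly:

    /* Only enter the blocking wait while work is actually pending */
    while (bdrv_requests_pending()) {
        qemu_aio_wait();
    }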

The next patches will remove .io_flush() handler code until we can
finally drop the io_flush arguments to aio_set_fd_handler() and friends.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c | 24 ++----------------------
 aio-win32.c | 23 ++---------------------
 2 files changed, 4 insertions(+), 43 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index b68eccd..569e603 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -23,7 +23,6 @@ struct AioHandler
     GPollFD pfd;
     IOHandler *io_read;
     IOHandler *io_write;
-    AioFlushHandler *io_flush;
     int deleted;
     int pollfds_idx;
     void *opaque;
@@ -84,7 +83,6 @@ void aio_set_fd_handler(AioContext *ctx,
         /* Update handler with latest information */
         node->io_read = io_read;
         node->io_write = io_write;
-        node->io_flush = io_flush;
         node->opaque = opaque;
         node->pollfds_idx = -1;
 
@@ -173,7 +171,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     int ret;
-    bool busy, progress;
+    bool progress;
 
     progress = false;
 
@@ -200,20 +198,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     g_array_set_size(ctx->pollfds, 0);
 
     /* fill pollfds */
-    busy = false;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
         node->pollfds_idx = -1;
-
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->opaque) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->pfd.events) {
             GPollFD pfd = {
                 .fd = node->pfd.fd,
@@ -226,11 +212,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
-        return progress;
-    }
-
     /* wait until next event */
     ret = g_poll((GPollFD *)ctx->pollfds->data,
                  ctx->pollfds->len,
@@ -250,6 +231,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         }
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
diff --git a/aio-win32.c b/aio-win32.c
index 38723bf..0980d08 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -23,7 +23,6 @@
 struct AioHandler {
     EventNotifier *e;
     EventNotifierHandler *io_notify;
-    AioFlushEventNotifierHandler *io_flush;
     GPollFD pfd;
     int deleted;
     QLIST_ENTRY(AioHandler) node;
@@ -73,7 +72,6 @@ void aio_set_event_notifier(AioContext *ctx,
         }
         /* Update handler with latest information */
         node->io_notify = io_notify;
-        node->io_flush = io_flush;
     }
 
     aio_notify(ctx);
@@ -96,7 +94,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
 {
     AioHandler *node;
     HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
-    bool busy, progress;
+    bool progress;
     int count;
 
     progress = false;
@@ -147,19 +145,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
     ctx->walking_handlers++;
 
     /* fill fd sets */
-    busy = false;
     count = 0;
     QLIST_FOREACH(node, &ctx->aio_handlers, node) {
-        /* If there aren't pending AIO operations, don't invoke callbacks.
-         * Otherwise, if there are no AIO requests, qemu_aio_wait() would
-         * wait indefinitely.
-         */
-        if (!node->deleted && node->io_flush) {
-            if (node->io_flush(node->e) == 0) {
-                continue;
-            }
-            busy = true;
-        }
         if (!node->deleted && node->io_notify) {
             events[count++] = event_notifier_get_handle(node->e);
         }
@@ -167,11 +154,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 
     ctx->walking_handlers--;
 
-    /* No AIO operations?  Get us out of here */
-    if (!busy) {
-        return progress;
-    }
-
     /* wait until next event */
     while (count > 0) {
         int timeout = blocking ? INFINITE : 0;
@@ -214,6 +196,5 @@ bool aio_poll(AioContext *ctx, bool blocking)
         events[ret - WAIT_OBJECT_0] = events[--count];
     }
 
-    assert(progress || busy);
-    return true;
+    return progress;
 }
-- 
1.8.1.4

* [Qemu-devel] [RFC 04/13] block/curl: drop curl_aio_flush()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (2 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 03/13] aio: stop using .io_flush() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 05/13] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop curl_aio_flush().  The acb[]
array that the function checks is still used in other parts of
block/curl.c, so it cannot be removed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/curl.c | 22 +++-------------------
 1 file changed, 3 insertions(+), 19 deletions(-)

diff --git a/block/curl.c b/block/curl.c
index 186e3b0..41101c4 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -85,7 +85,6 @@ typedef struct BDRVCURLState {
 
 static void curl_clean_state(CURLState *s);
 static void curl_multi_do(void *arg);
-static int curl_aio_flush(void *opaque);
 
 static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
                         void *s, void *sp)
@@ -93,14 +92,14 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
     switch (action) {
         case CURL_POLL_IN:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, s);
             break;
         case CURL_POLL_OUT:
-            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
+            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, s);
             break;
         case CURL_POLL_INOUT:
             qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
-                                    curl_aio_flush, s);
+                                    NULL, s);
             break;
         case CURL_POLL_REMOVE:
             qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
@@ -434,21 +433,6 @@ out_noclean:
     return -EINVAL;
 }
 
-static int curl_aio_flush(void *opaque)
-{
-    BDRVCURLState *s = opaque;
-    int i, j;
-
-    for (i=0; i < CURL_NUM_STATES; i++) {
-        for(j=0; j < CURL_NUM_ACB; j++) {
-            if (s->states[i].acb[j]) {
-                return 1;
-            }
-        }
-    }
-    return 0;
-}
-
 static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
 {
     // Do we have to implement canceling? Seems to work without...
-- 
1.8.1.4

* [Qemu-devel] [RFC 05/13] block/gluster: drop qemu_gluster_aio_flush_cb()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (3 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 04/13] block/curl: drop curl_aio_flush() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 06/13] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

Since .io_flush() is no longer called we do not need
qemu_gluster_aio_flush_cb() anymore.  It turns out that qemu_aio_count
is unused now and can be dropped.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/gluster.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/block/gluster.c b/block/gluster.c
index 9ccd4d4..ba80f41 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -32,7 +32,6 @@ typedef struct BDRVGlusterState {
     struct glfs *glfs;
     int fds[2];
     struct glfs_fd *fd;
-    int qemu_aio_count;
     int event_reader_pos;
     GlusterAIOCB *event_acb;
 } BDRVGlusterState;
@@ -247,7 +246,6 @@ static void qemu_gluster_complete_aio(GlusterAIOCB *acb, BDRVGlusterState *s)
         ret = -EIO; /* Partial read/write - fail it */
     }
 
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     cb(opaque, ret);
     if (finished) {
@@ -275,13 +273,6 @@ static void qemu_gluster_aio_event_reader(void *opaque)
     } while (ret < 0 && errno == EINTR);
 }
 
-static int qemu_gluster_aio_flush_cb(void *opaque)
-{
-    BDRVGlusterState *s = opaque;
-
-    return (s->qemu_aio_count > 0);
-}
-
 static int qemu_gluster_open(BlockDriverState *bs, const char *filename,
     QDict *options, int bdrv_flags)
 {
@@ -319,7 +310,7 @@ static int qemu_gluster_open(BlockDriverState *bs, const char *filename,
     }
     fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
-        qemu_gluster_aio_event_reader, NULL, qemu_gluster_aio_flush_cb, s);
+        qemu_gluster_aio_event_reader, NULL, NULL, s);
 
 out:
     qemu_gluster_gconf_free(gconf);
@@ -415,7 +406,6 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         qemu_mutex_lock_iothread(); /* We are in gluster thread context */
         acb->common.cb(acb->common.opaque, -EIO);
         qemu_aio_release(acb);
-        s->qemu_aio_count--;
         close(s->fds[GLUSTER_FD_READ]);
         close(s->fds[GLUSTER_FD_WRITE]);
         qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
@@ -437,7 +427,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
 
     offset = sector_num * BDRV_SECTOR_SIZE;
     size = nb_sectors * BDRV_SECTOR_SIZE;
-    s->qemu_aio_count++;
 
     acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
     acb->size = size;
@@ -458,7 +447,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
     return &acb->common;
 
 out:
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
@@ -488,7 +476,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
     acb->size = 0;
     acb->ret = 0;
     acb->finished = NULL;
-    s->qemu_aio_count++;
 
     ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
     if (ret < 0) {
@@ -497,7 +484,6 @@ static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
     return &acb->common;
 
 out:
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
-- 
1.8.1.4

* [Qemu-devel] [RFC 06/13] block/iscsi: drop iscsi_process_flush()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (4 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 05/13] block/gluster: drop qemu_gluster_aio_flush_cb() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 07/13] block/linux-aio: drop qemu_laio_flush_cb() Stefan Hajnoczi
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop iscsi_process_flush().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/iscsi.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index 92d6eae..2f1a325 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -147,13 +147,6 @@ static const AIOCBInfo iscsi_aiocb_info = {
 static void iscsi_process_read(void *arg);
 static void iscsi_process_write(void *arg);
 
-static int iscsi_process_flush(void *arg)
-{
-    IscsiLun *iscsilun = arg;
-
-    return iscsi_queue_length(iscsilun->iscsi) > 0;
-}
-
 static void
 iscsi_set_events(IscsiLun *iscsilun)
 {
@@ -167,7 +160,7 @@ iscsi_set_events(IscsiLun *iscsilun)
         qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
                       iscsi_process_read,
                       (ev & POLLOUT) ? iscsi_process_write : NULL,
-                      iscsi_process_flush,
+                      NULL,
                       iscsilun);
 
     }
-- 
1.8.1.4

* [Qemu-devel] [RFC 07/13] block/linux-aio: drop qemu_laio_flush_cb()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (5 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 06/13] block/iscsi: drop iscsi_process_flush() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 08/13] block/nbd: drop nbd_have_request() Stefan Hajnoczi
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop qemu_laio_flush_cb().  It
turns out that the count field is now unused, so drop that too.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/linux-aio.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/block/linux-aio.c b/block/linux-aio.c
index ee0f8d1..d9128f3 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -39,7 +39,6 @@ struct qemu_laiocb {
 struct qemu_laio_state {
     io_context_t ctx;
     EventNotifier e;
-    int count;
 };
 
 static inline ssize_t io_event_ret(struct io_event *ev)
@@ -55,8 +54,6 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
 {
     int ret;
 
-    s->count--;
-
     ret = laiocb->ret;
     if (ret != -ECANCELED) {
         if (ret == laiocb->nbytes) {
@@ -101,13 +98,6 @@ static void qemu_laio_completion_cb(EventNotifier *e)
     }
 }
 
-static int qemu_laio_flush_cb(EventNotifier *e)
-{
-    struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
-
-    return (s->count > 0) ? 1 : 0;
-}
-
 static void laio_cancel(BlockDriverAIOCB *blockacb)
 {
     struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
@@ -177,14 +167,11 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
         goto out_free_aiocb;
     }
     io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
-    s->count++;
 
     if (io_submit(s->ctx, 1, &iocbs) < 0)
-        goto out_dec_count;
+        goto out_free_aiocb;
     return &laiocb->common;
 
-out_dec_count:
-    s->count--;
 out_free_aiocb:
     qemu_aio_release(laiocb);
     return NULL;
@@ -204,7 +191,7 @@ void *laio_init(void)
     }
 
     qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
-                                qemu_laio_flush_cb);
+                                NULL);
 
     return s;
 
-- 
1.8.1.4

* [Qemu-devel] [RFC 08/13] block/nbd: drop nbd_have_request()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (6 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 07/13] block/linux-aio: drop qemu_laio_flush_cb() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 09/13] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop nbd_have_request().  We cannot
drop in_flight since it is still used by other block/nbd.c code.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/nbd.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index eff683c..7830aa4 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -270,13 +270,6 @@ static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
     request->handle = INDEX_TO_HANDLE(s, i);
 }
 
-static int nbd_have_request(void *opaque)
-{
-    BDRVNBDState *s = opaque;
-
-    return s->in_flight > 0;
-}
-
 static void nbd_reply_ready(void *opaque)
 {
     BDRVNBDState *s = opaque;
@@ -333,7 +326,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
     qemu_co_mutex_lock(&s->send_mutex);
     s->send_coroutine = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
-                            nbd_have_request, s);
+                            NULL, s);
     rc = nbd_send_request(s->sock, request);
     if (rc >= 0 && qiov) {
         ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
@@ -343,7 +336,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
         }
     }
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
-                            nbd_have_request, s);
+                            NULL, s);
     s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@@ -417,7 +410,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
      * kick the reply mechanism.  */
     qemu_set_nonblock(sock);
     qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
-                            nbd_have_request, s);
+                            NULL, s);
 
     s->sock = sock;
     s->size = size;
-- 
1.8.1.4

* [Qemu-devel] [RFC 09/13] block/rbd: drop qemu_rbd_aio_flush_cb()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (7 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 08/13] block/nbd: drop nbd_have_request() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 10/13] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop qemu_rbd_aio_flush_cb().
qemu_aio_count is unused now so drop it too.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/rbd.c | 14 +-------------
 1 file changed, 1 insertion(+), 13 deletions(-)

diff --git a/block/rbd.c b/block/rbd.c
index 1a8ea6d..604008e 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -99,7 +99,6 @@ typedef struct BDRVRBDState {
     rados_ioctx_t io_ctx;
     rbd_image_t image;
     char name[RBD_MAX_IMAGE_NAME_SIZE];
-    int qemu_aio_count;
     char *snap;
     int event_reader_pos;
     RADOSCB *event_rcb;
@@ -428,19 +427,11 @@ static void qemu_rbd_aio_event_reader(void *opaque)
             if (s->event_reader_pos == sizeof(s->event_rcb)) {
                 s->event_reader_pos = 0;
                 qemu_rbd_complete_aio(s->event_rcb);
-                s->qemu_aio_count--;
             }
         }
     } while (ret < 0 && errno == EINTR);
 }
 
-static int qemu_rbd_aio_flush_cb(void *opaque)
-{
-    BDRVRBDState *s = opaque;
-
-    return (s->qemu_aio_count > 0);
-}
-
 static int qemu_rbd_open(BlockDriverState *bs, const char *filename,
                          QDict *options, int flags)
 {
@@ -526,7 +517,7 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename,
     fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
     fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
-                            NULL, qemu_rbd_aio_flush_cb, s);
+                            NULL, NULL, s);
 
 
     return 0;
@@ -700,8 +691,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
     off = sector_num * BDRV_SECTOR_SIZE;
     size = nb_sectors * BDRV_SECTOR_SIZE;
 
-    s->qemu_aio_count++; /* All the RADOSCB */
-
     rcb = g_malloc(sizeof(RADOSCB));
     rcb->done = 0;
     rcb->acb = acb;
@@ -735,7 +724,6 @@ static BlockDriverAIOCB *rbd_start_aio(BlockDriverState *bs,
 
 failed:
     g_free(rcb);
-    s->qemu_aio_count--;
     qemu_aio_release(acb);
     return NULL;
 }
-- 
1.8.1.4

* [Qemu-devel] [RFC 10/13] block/sheepdog: drop have_co_req() and aio_flush_request()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (8 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 09/13] block/rbd: drop qemu_rbd_aio_flush_cb() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 11/13] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop have_co_req() and
aio_flush_request().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/sheepdog.c | 25 +++++--------------------
 1 file changed, 5 insertions(+), 20 deletions(-)

diff --git a/block/sheepdog.c b/block/sheepdog.c
index 987018e..44973b3 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -503,13 +503,6 @@ static void restart_co_req(void *opaque)
     qemu_coroutine_enter(co, NULL);
 }
 
-static int have_co_req(void *opaque)
-{
-    /* this handler is set only when there is a pending request, so
-     * always returns 1. */
-    return 1;
-}
-
 typedef struct SheepdogReqCo {
     int sockfd;
     SheepdogReq *hdr;
@@ -532,14 +525,14 @@ static coroutine_fn void do_co_req(void *opaque)
     unsigned int *rlen = srco->rlen;
 
     co = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, have_co_req, co);
+    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
 
     ret = send_co_req(sockfd, hdr, data, wlen);
     if (ret < 0) {
         goto out;
     }
 
-    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, have_co_req, co);
+    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
 
     ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
     if (ret < sizeof(*hdr)) {
@@ -764,14 +757,6 @@ static void co_write_request(void *opaque)
     qemu_coroutine_enter(s->co_send, NULL);
 }
 
-static int aio_flush_request(void *opaque)
-{
-    BDRVSheepdogState *s = opaque;
-
-    return !QLIST_EMPTY(&s->inflight_aio_head) ||
-        !QLIST_EMPTY(&s->pending_aio_head);
-}
-
 /*
  * Return a socket discriptor to read/write objects.
  *
@@ -787,7 +772,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
         return fd;
     }
 
-    qemu_aio_set_fd_handler(fd, co_read_response, NULL, aio_flush_request, s);
+    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
     return fd;
 }
 
@@ -1034,7 +1019,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
     qemu_co_mutex_lock(&s->lock);
     s->co_send = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
-                            aio_flush_request, s);
+                            NULL, s);
     socket_set_cork(s->fd, 1);
 
     /* send a header */
@@ -1056,7 +1041,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     socket_set_cork(s->fd, 0);
     qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
-                            aio_flush_request, s);
+                            NULL, s);
     qemu_co_mutex_unlock(&s->lock);
 
     return 0;
-- 
1.8.1.4

* [Qemu-devel] [RFC 11/13] dataplane/virtio-blk: drop flush_true() and flush_io()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (9 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 10/13] block/sheepdog: drop have_co_req() and aio_flush_request() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 12/13] thread-pool: drop thread_pool_active() Stefan Hajnoczi
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop flush_true() and flush_io().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 hw/block/dataplane/virtio-blk.c | 17 ++---------------
 1 file changed, 2 insertions(+), 15 deletions(-)

diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 982896c..56b80b2 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -263,11 +263,6 @@ static int process_request(IOQueue *ioq, struct iovec iov[],
     }
 }
 
-static int flush_true(EventNotifier *e)
-{
-    return true;
-}
-
 static void handle_notify(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -347,14 +342,6 @@ static void handle_notify(EventNotifier *e)
     }
 }
 
-static int flush_io(EventNotifier *e)
-{
-    VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
-                                           io_notifier);
-
-    return s->num_reqs > 0;
-}
-
 static void handle_io(EventNotifier *e)
 {
     VirtIOBlockDataPlane *s = container_of(e, VirtIOBlockDataPlane,
@@ -485,7 +472,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         exit(1);
     }
     s->host_notifier = *virtio_queue_get_host_notifier(vq);
-    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, flush_true);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
 
     /* Set up ioqueue */
     ioq_init(&s->ioqueue, s->fd, REQ_MAX);
@@ -493,7 +480,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
     }
     s->io_notifier = *ioq_get_notifier(&s->ioqueue);
-    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, flush_io);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
 
     s->started = true;
     trace_virtio_blk_data_plane_start(s);
-- 
1.8.1.4

* [Qemu-devel] [RFC 12/13] thread-pool: drop thread_pool_active()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (10 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 11/13] dataplane/virtio-blk: drop flush_true() and flush_io() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-11 15:44 ` [Qemu-devel] [RFC 13/13] aio: drop io_flush argument Stefan Hajnoczi
  2013-04-12  8:02 ` [Qemu-devel] [RFC 00/13] aio: drop io_flush() Kevin Wolf
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

.io_flush() is no longer called so drop thread_pool_active().  The block
layer is the only thread-pool.c user and it already tracks in-flight
requests itself, so the callback is not needed.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 thread-pool.c | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/thread-pool.c b/thread-pool.c
index 0ebd4c2..096f007 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -197,12 +197,6 @@ restart:
     }
 }
 
-static int thread_pool_active(EventNotifier *notifier)
-{
-    ThreadPool *pool = container_of(notifier, ThreadPool, notifier);
-    return !QLIST_EMPTY(&pool->head);
-}
-
 static void thread_pool_cancel(BlockDriverAIOCB *acb)
 {
     ThreadPoolElement *elem = (ThreadPoolElement *)acb;
@@ -310,7 +304,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     QTAILQ_INIT(&pool->request_list);
 
     aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
-                           thread_pool_active);
+                           NULL);
 }
 
 ThreadPool *thread_pool_new(AioContext *ctx)
-- 
1.8.1.4

* [Qemu-devel] [RFC 13/13] aio: drop io_flush argument
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (11 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 12/13] thread-pool: drop thread_pool_active() Stefan Hajnoczi
@ 2013-04-11 15:44 ` Stefan Hajnoczi
  2013-04-12  8:02 ` [Qemu-devel] [RFC 00/13] aio: drop io_flush() Kevin Wolf
  13 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-11 15:44 UTC (permalink / raw)
  To: qemu-devel; +Cc: Kevin Wolf, Paolo Bonzini, Anthony Liguori, pingfank

The .io_flush() handler no longer exists and has no users.  Drop the
io_flush argument to aio_set_fd_handler() and related functions.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 aio-posix.c                     |  7 ++-----
 aio-win32.c                     |  3 +--
 async.c                         |  4 ++--
 block/curl.c                    |  9 ++++-----
 block/gluster.c                 |  7 +++----
 block/iscsi.c                   |  3 +--
 block/linux-aio.c               |  3 +--
 block/nbd.c                     | 11 ++++-------
 block/rbd.c                     |  4 ++--
 block/sheepdog.c                | 18 ++++++++----------
 hw/block/dataplane/virtio-blk.c |  8 ++++----
 include/block/aio.h             |  8 ++------
 main-loop.c                     |  9 +++------
 thread-pool.c                   |  5 ++---
 14 files changed, 39 insertions(+), 60 deletions(-)

diff --git a/aio-posix.c b/aio-posix.c
index 569e603..bb12301 100644
--- a/aio-posix.c
+++ b/aio-posix.c
@@ -46,7 +46,6 @@ void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         IOHandler *io_read,
                         IOHandler *io_write,
-                        AioFlushHandler *io_flush,
                         void *opaque)
 {
     AioHandler *node;
@@ -95,12 +94,10 @@ void aio_set_fd_handler(AioContext *ctx,
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            EventNotifierHandler *io_read,
-                            AioFlushEventNotifierHandler *io_flush)
+                            EventNotifierHandler *io_read)
 {
     aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
-                       (IOHandler *)io_read, NULL,
-                       (AioFlushHandler *)io_flush, notifier);
+                       (IOHandler *)io_read, NULL, notifier);
 }
 
 bool aio_pending(AioContext *ctx)
diff --git a/aio-win32.c b/aio-win32.c
index 0980d08..923e598 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -30,8 +30,7 @@ struct AioHandler {
 
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *e,
-                            EventNotifierHandler *io_notify,
-                            AioFlushEventNotifierHandler *io_flush)
+                            EventNotifierHandler *io_notify)
 {
     AioHandler *node;
 
diff --git a/async.c b/async.c
index 90fe906..fe2c8bf 100644
--- a/async.c
+++ b/async.c
@@ -174,7 +174,7 @@ aio_ctx_finalize(GSource     *source)
     AioContext *ctx = (AioContext *) source;
 
     thread_pool_free(ctx->thread_pool);
-    aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
+    aio_set_event_notifier(ctx, &ctx->notifier, NULL);
     event_notifier_cleanup(&ctx->notifier);
     g_array_free(ctx->pollfds, TRUE);
 }
@@ -214,7 +214,7 @@ AioContext *aio_context_new(void)
     event_notifier_init(&ctx->notifier, false);
     aio_set_event_notifier(ctx, &ctx->notifier, 
                            (EventNotifierHandler *)
-                           event_notifier_test_and_clear, NULL);
+                           event_notifier_test_and_clear);
 
     return ctx;
 }
diff --git a/block/curl.c b/block/curl.c
index 41101c4..843ec58 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -92,17 +92,16 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
     DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
     switch (action) {
         case CURL_POLL_IN:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, NULL, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, s);
             break;
         case CURL_POLL_OUT:
-            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, NULL, s);
+            qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, s);
             break;
         case CURL_POLL_INOUT:
-            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
-                                    NULL, s);
+            qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do, s);
             break;
         case CURL_POLL_REMOVE:
-            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
+            qemu_aio_set_fd_handler(fd, NULL, NULL, NULL);
             break;
     }
 
diff --git a/block/gluster.c b/block/gluster.c
index ba80f41..9704579 100644
--- a/block/gluster.c
+++ b/block/gluster.c
@@ -310,7 +310,7 @@ static int qemu_gluster_open(BlockDriverState *bs, const char *filename,
     }
     fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
-        qemu_gluster_aio_event_reader, NULL, NULL, s);
+        qemu_gluster_aio_event_reader, NULL, s);
 
 out:
     qemu_gluster_gconf_free(gconf);
@@ -408,8 +408,7 @@ static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
         qemu_aio_release(acb);
         close(s->fds[GLUSTER_FD_READ]);
         close(s->fds[GLUSTER_FD_WRITE]);
-        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
-            NULL);
+        qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
         bs->drv = NULL; /* Make the disk inaccessible */
         qemu_mutex_unlock_iothread();
     }
@@ -521,7 +520,7 @@ static void qemu_gluster_close(BlockDriverState *bs)
 
     close(s->fds[GLUSTER_FD_READ]);
     close(s->fds[GLUSTER_FD_WRITE]);
-    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL);
 
     if (s->fd) {
         glfs_close(s->fd);
diff --git a/block/iscsi.c b/block/iscsi.c
index 2f1a325..f97ea84 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -160,7 +160,6 @@ iscsi_set_events(IscsiLun *iscsilun)
         qemu_aio_set_fd_handler(iscsi_get_fd(iscsi),
                       iscsi_process_read,
                       (ev & POLLOUT) ? iscsi_process_write : NULL,
-                      NULL,
                       iscsilun);
 
     }
@@ -1147,7 +1146,7 @@ static void iscsi_close(BlockDriverState *bs)
         qemu_del_timer(iscsilun->nop_timer);
         qemu_free_timer(iscsilun->nop_timer);
     }
-    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(iscsi_get_fd(iscsi), NULL, NULL, NULL);
     iscsi_destroy_context(iscsi);
     memset(iscsilun, 0, sizeof(IscsiLun));
 }
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d9128f3..53434e2 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -190,8 +190,7 @@ void *laio_init(void)
         goto out_close_efd;
     }
 
-    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
-                                NULL);
+    qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb);
 
     return s;
 
diff --git a/block/nbd.c b/block/nbd.c
index 7830aa4..38fad64 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -325,8 +325,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
 
     qemu_co_mutex_lock(&s->send_mutex);
     s->send_coroutine = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
     rc = nbd_send_request(s->sock, request);
     if (rc >= 0 && qiov) {
         ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
@@ -335,8 +334,7 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
             return -EIO;
         }
     }
-    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
     s->send_coroutine = NULL;
     qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
@@ -409,8 +407,7 @@ static int nbd_establish_connection(BlockDriverState *bs)
     /* Now that we're connected, set the socket to be non-blocking and
      * kick the reply mechanism.  */
     qemu_set_nonblock(sock);
-    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, s);
 
     s->sock = sock;
     s->size = size;
@@ -430,7 +427,7 @@ static void nbd_teardown_connection(BlockDriverState *bs)
     request.len = 0;
     nbd_send_request(s->sock, &request);
 
-    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL);
     closesocket(s->sock);
 }
 
diff --git a/block/rbd.c b/block/rbd.c
index 604008e..6814b75 100644
--- a/block/rbd.c
+++ b/block/rbd.c
@@ -517,7 +517,7 @@ static int qemu_rbd_open(BlockDriverState *bs, const char *filename,
     fcntl(s->fds[0], F_SETFL, O_NONBLOCK);
     fcntl(s->fds[1], F_SETFL, O_NONBLOCK);
     qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], qemu_rbd_aio_event_reader,
-                            NULL, NULL, s);
+                            NULL, s);
 
 
     return 0;
@@ -538,7 +538,7 @@ static void qemu_rbd_close(BlockDriverState *bs)
 
     close(s->fds[0]);
     close(s->fds[1]);
-    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fds[RBD_FD_READ], NULL, NULL, NULL);
 
     rbd_close(s->image);
     rados_ioctx_destroy(s->io_ctx);
diff --git a/block/sheepdog.c b/block/sheepdog.c
index 44973b3..bdcc88c 100644
--- a/block/sheepdog.c
+++ b/block/sheepdog.c
@@ -525,14 +525,14 @@ static coroutine_fn void do_co_req(void *opaque)
     unsigned int *rlen = srco->rlen;
 
     co = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, NULL, co);
+    qemu_aio_set_fd_handler(sockfd, NULL, restart_co_req, co);
 
     ret = send_co_req(sockfd, hdr, data, wlen);
     if (ret < 0) {
         goto out;
     }
 
-    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, NULL, co);
+    qemu_aio_set_fd_handler(sockfd, restart_co_req, NULL, co);
 
     ret = qemu_co_recv(sockfd, hdr, sizeof(*hdr));
     if (ret < sizeof(*hdr)) {
@@ -557,7 +557,7 @@ static coroutine_fn void do_co_req(void *opaque)
 out:
     /* there is at most one request for this sockfd, so it is safe to
      * set each handler to NULL. */
-    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(sockfd, NULL, NULL, NULL);
 
     srco->ret = ret;
     srco->finished = true;
@@ -772,7 +772,7 @@ static int get_sheep_fd(BDRVSheepdogState *s)
         return fd;
     }
 
-    qemu_aio_set_fd_handler(fd, co_read_response, NULL, NULL, s);
+    qemu_aio_set_fd_handler(fd, co_read_response, NULL, s);
     return fd;
 }
 
@@ -1018,8 +1018,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
 
     qemu_co_mutex_lock(&s->lock);
     s->co_send = qemu_coroutine_self();
-    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->fd, co_read_response, co_write_request, s);
     socket_set_cork(s->fd, 1);
 
     /* send a header */
@@ -1040,8 +1039,7 @@ static int coroutine_fn add_aio_request(BDRVSheepdogState *s, AIOReq *aio_req,
     }
 
     socket_set_cork(s->fd, 0);
-    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL,
-                            NULL, s);
+    qemu_aio_set_fd_handler(s->fd, co_read_response, NULL, s);
     qemu_co_mutex_unlock(&s->lock);
 
     return 0;
@@ -1187,7 +1185,7 @@ static int sd_open(BlockDriverState *bs, const char *filename,
     g_free(buf);
     return 0;
 out:
-    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
     if (s->fd >= 0) {
         closesocket(s->fd);
     }
@@ -1414,7 +1412,7 @@ static void sd_close(BlockDriverState *bs)
         error_report("%s, %s", sd_strerror(rsp->result), s->name);
     }
 
-    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL, NULL);
+    qemu_aio_set_fd_handler(s->fd, NULL, NULL, NULL);
     closesocket(s->fd);
     g_free(s->host_spec);
 }
diff --git a/hw/block/dataplane/virtio-blk.c b/hw/block/dataplane/virtio-blk.c
index 56b80b2..38b616b 100644
--- a/hw/block/dataplane/virtio-blk.c
+++ b/hw/block/dataplane/virtio-blk.c
@@ -472,7 +472,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         exit(1);
     }
     s->host_notifier = *virtio_queue_get_host_notifier(vq);
-    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify, NULL);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, handle_notify);
 
     /* Set up ioqueue */
     ioq_init(&s->ioqueue, s->fd, REQ_MAX);
@@ -480,7 +480,7 @@ void virtio_blk_data_plane_start(VirtIOBlockDataPlane *s)
         ioq_put_iocb(&s->ioqueue, &s->requests[i].iocb);
     }
     s->io_notifier = *ioq_get_notifier(&s->ioqueue);
-    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io, NULL);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, handle_io);
 
     s->started = true;
     trace_virtio_blk_data_plane_start(s);
@@ -510,10 +510,10 @@ void virtio_blk_data_plane_stop(VirtIOBlockDataPlane *s)
         qemu_thread_join(&s->thread);
     }
 
-    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL, NULL);
+    aio_set_event_notifier(s->ctx, &s->io_notifier, NULL);
     ioq_cleanup(&s->ioqueue);
 
-    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL, NULL);
+    aio_set_event_notifier(s->ctx, &s->host_notifier, NULL);
     s->vdev->binding->set_host_notifier(s->vdev->binding_opaque, 0, false);
 
     aio_context_unref(s->ctx);
diff --git a/include/block/aio.h b/include/block/aio.h
index 1836793..f7280b1 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -205,7 +205,6 @@ void aio_set_fd_handler(AioContext *ctx,
                         int fd,
                         IOHandler *io_read,
                         IOHandler *io_write,
-                        AioFlushHandler *io_flush,
                         void *opaque);
 #endif
 
@@ -218,8 +217,7 @@ void aio_set_fd_handler(AioContext *ctx,
  */
 void aio_set_event_notifier(AioContext *ctx,
                             EventNotifier *notifier,
-                            EventNotifierHandler *io_read,
-                            AioFlushEventNotifierHandler *io_flush);
+                            EventNotifierHandler *io_read);
 
 /* Return a GSource that lets the main loop poll the file descriptors attached
  * to this AioContext.
@@ -233,14 +231,12 @@ struct ThreadPool *aio_get_thread_pool(AioContext *ctx);
 
 bool qemu_aio_wait(void);
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read,
-                                 AioFlushEventNotifierHandler *io_flush);
+                                 EventNotifierHandler *io_read);
 
 #ifdef CONFIG_POSIX
 void qemu_aio_set_fd_handler(int fd,
                              IOHandler *io_read,
                              IOHandler *io_write,
-                             AioFlushHandler *io_flush,
                              void *opaque);
 #endif
 
diff --git a/main-loop.c b/main-loop.c
index f46aece..af72f2b 100644
--- a/main-loop.c
+++ b/main-loop.c
@@ -492,17 +492,14 @@ bool qemu_aio_wait(void)
 void qemu_aio_set_fd_handler(int fd,
                              IOHandler *io_read,
                              IOHandler *io_write,
-                             AioFlushHandler *io_flush,
                              void *opaque)
 {
-    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, io_flush,
-                       opaque);
+    aio_set_fd_handler(qemu_aio_context, fd, io_read, io_write, opaque);
 }
 #endif
 
 void qemu_aio_set_event_notifier(EventNotifier *notifier,
-                                 EventNotifierHandler *io_read,
-                                 AioFlushEventNotifierHandler *io_flush)
+                                 EventNotifierHandler *io_read)
 {
-    aio_set_event_notifier(qemu_aio_context, notifier, io_read, io_flush);
+    aio_set_event_notifier(qemu_aio_context, notifier, io_read);
 }
diff --git a/thread-pool.c b/thread-pool.c
index 096f007..5025567 100644
--- a/thread-pool.c
+++ b/thread-pool.c
@@ -303,8 +303,7 @@ static void thread_pool_init_one(ThreadPool *pool, AioContext *ctx)
     QLIST_INIT(&pool->head);
     QTAILQ_INIT(&pool->request_list);
 
-    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready,
-                           NULL);
+    aio_set_event_notifier(ctx, &pool->notifier, event_notifier_ready);
 }
 
 ThreadPool *thread_pool_new(AioContext *ctx)
@@ -338,7 +337,7 @@ void thread_pool_free(ThreadPool *pool)
 
     qemu_mutex_unlock(&pool->lock);
 
-    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL, NULL);
+    aio_set_event_notifier(pool->ctx, &pool->notifier, NULL);
     qemu_sem_destroy(&pool->sem);
     qemu_cond_destroy(&pool->check_cancel);
     qemu_cond_destroy(&pool->worker_stopped);
-- 
1.8.1.4

* Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
  2013-04-11 15:44 [Qemu-devel] [RFC 00/13] aio: drop io_flush() Stefan Hajnoczi
                   ` (12 preceding siblings ...)
  2013-04-11 15:44 ` [Qemu-devel] [RFC 13/13] aio: drop io_flush argument Stefan Hajnoczi
@ 2013-04-12  8:02 ` Kevin Wolf
  2013-04-12  9:49   ` Stefan Hajnoczi
  13 siblings, 1 reply; 21+ messages in thread
From: Kevin Wolf @ 2013-04-12  8:02 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Paolo Bonzini, Anthony Liguori, pingfank, qemu-devel

Am 11.04.2013 um 17:44 hat Stefan Hajnoczi geschrieben:
> Here's my entry to the "let's get rid of io_flush()" effort.  It's based on
> Paolo's insight about bdrv_drain_all() that the block layer already has a
> tracked_requests list.  io_flush() is redundant since the block layer already
> knows if requests are pending.

Except when there are requests that don't come from the guest, but are
issued internally. In this case, block.c doesn't know about them, but
only the block driver does, so we need a .bdrv_drain callback to tell
the block layer about these.

The one specific case that comes to mind is the QED timer for resetting
the dirty bit. I think you need to have the .bdrv_drain callback before
you can start ignoring .io_flush.
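
To make that concrete, roughly what I have in mind -- a hypothetical
sketch, not an actual patch, and the QED helper name is made up:

/* block_int.h: new optional callback so a driver can quiesce
 * internally generated activity (timers, background requests) */
void (*bdrv_drain)(BlockDriverState *bs);

/* block/qed.c: complete any pending need-check work on drain
 * (qed_cancel_need_check_timer() is a placeholder helper here) */
static void bdrv_qed_drain(BlockDriverState *bs)
{
    BDRVQEDState *s = bs->opaque;

    qed_cancel_need_check_timer(s);
}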

Kevin

* Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
  2013-04-12  8:02 ` [Qemu-devel] [RFC 00/13] aio: drop io_flush() Kevin Wolf
@ 2013-04-12  9:49   ` Stefan Hajnoczi
  2013-04-12 10:04     ` Kevin Wolf
  0 siblings, 1 reply; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-12  9:49 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: Paolo Bonzini, Anthony Liguori, pingfank, qemu-devel

On Fri, Apr 12, 2013 at 10:02:02AM +0200, Kevin Wolf wrote:
> Am 11.04.2013 um 17:44 hat Stefan Hajnoczi geschrieben:
> > Here's my entry to the "let's get rid of io_flush()" effort.  It's based on
> > Paolo's insight about bdrv_drain_all() that the block layer already has a
> > tracked_requests list.  io_flush() is redundant since the block layer already
> > knows if requests are pending.
> 
> Except when there are requests that don't come from the guest, but are
> issued internally. In this case, block.c doesn't know about them, but
> only the block driver does, so we need a .bdrv_drain callback to tell
> the block layer about these.
> 
> The one specific case that comes to mind is the QED timer for resetting
> the dirty bit. I think you need to have the .bdrv_drain callback before
> you can start ignoring .io_flush.

I think .bdrv_drain() might come in handy in the future if we need it.

Regarding the QED timer, it does not hook into bdrv_drain_all() today.
The semantics are unchanged with this patch series applied.

The timer runs while the guest is running (vm_clock).  It doesn't fire
during synchronous I/O since qemu_aio_wait() does not dispatch timers.
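
For reference, the arming looks roughly like this (quoting from memory,
so treat the exact calls as approximate):

/* block/qed.c: the timer uses vm_clock, so it only advances while the
 * guest runs and is never dispatched from qemu_aio_wait() */
static void qed_start_need_check_timer(BDRVQEDState *s)
{
    qemu_mod_timer(s->need_check_timer,
                   qemu_get_clock_ns(vm_clock) +
                   get_ticks_per_sec() * QED_NEED_CHECK_TIMEOUT);
}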

When the image file is closed, the need check bit is also cleared.
This way, the file is marked clean even if the need check timer was
pending.

The upshot of the current QED code is that we might run a consistency
check on the image file during live migration, since the image is not
closed on the source (so the need check bit is not guaranteed to be
cleared).

Anyway, dropping io_flush() doesn't break the timer.

Stefan

* Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
  2013-04-12  9:49   ` Stefan Hajnoczi
@ 2013-04-12 10:04     ` Kevin Wolf
  2013-04-12 10:27       ` Paolo Bonzini
  0 siblings, 1 reply; 21+ messages in thread
From: Kevin Wolf @ 2013-04-12 10:04 UTC (permalink / raw)
  To: Stefan Hajnoczi; +Cc: Paolo Bonzini, Anthony Liguori, pingfank, qemu-devel

Am 12.04.2013 um 11:49 hat Stefan Hajnoczi geschrieben:
> On Fri, Apr 12, 2013 at 10:02:02AM +0200, Kevin Wolf wrote:
> > Am 11.04.2013 um 17:44 hat Stefan Hajnoczi geschrieben:
> > > Here's my entry to the "let's get rid of io_flush()" effort.  It's based on
> > > Paolo's insight about bdrv_drain_all() that the block layer already has a
> > > tracked_requests list.  io_flush() is redundant since the block layer already
> > > knows if requests are pending.
> > 
> > Except when there are requests that don't come from the guest, but are
> > issued internally. In this case, block.c doesn't know about them, but
> > only the block driver does, so we need a .bdrv_drain callback to tell
> > the block layer about these.
> > 
> > The one specific case that comes to mind is the QED timer for resetting
> > the dirty bit. I think you need to have the .bdrv_drain callback before
> > you can start ignoring .io_flush.
> 
> I think .bdrv_drain() might come in handy in the future if we need it.
> 
> Regarding the QED timer, it does not hook into bdrv_drain_all() today.
> The semantics are unchanged with this patch series applied.
> 
> The timer runs while the guest is running (vm_clock).  It doesn't fire
> during synchronous I/O since qemu_aio_wait() does not dispatch timers.

Okay, I think I need to be more precise. The scenario I have in mind is
the following one:

1. The timer triggers
2. QED sends bdrv_aio_flush() and returns
2b. The flush callback is called and we progress to qed_write_header
    (optional, but maybe having a write instead of flush makes the
    problem more visible) and return again
3. Someone calls bdrv_drain_all()

Before removing .io_flush:

4. bdrv_drain_all() just waits until the header update is complete,
   because the raw-posix driver returns io_flush = 1 as long as the
   update is in flight; only then does bdrv_drain_all() return.

After removing .io_flush:

4. bdrv_drain_all() sees that the QED block device doesn't have any
   requests in flight. The raw-posix driver isn't even queried because
   it's an anonymous BDS. bdrv_drain_all() returns, but the request is
   still in flight. Oops.
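
In code terms, the new loop boils down to something like this (my
sketch of the idea in patch 01, not the literal diff):

void bdrv_drain_all(void)
{
    BlockDriverState *bs;
    bool busy;

    do {
        busy = false;
        /* only *named* BDSes are on bdrv_states, so a request that is
         * pending on an anonymous bs->file never makes 'busy' true */
        QTAILQ_FOREACH(bs, &bdrv_states, list) {
            busy |= !QLIST_EMPTY(&bs->tracked_requests);
        }
        busy |= aio_poll(qemu_get_aio_context(), busy);
    } while (busy);
}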

This specific case could be fixed as well by querying all BDSes instead
of only the named ones, but in general a block driver's background
operations that must be stopped don't necessarily go through a
lower-level BDS at all.

Or actually, if you want to avoid .bdrv_drain for now, the patch that I
started when I thought a bit about this, had a default .bdrv_drain
implementation that just forwarded the request to bs->file if it wasn't
implemented by a block driver. For the QED case, this would work.

Kevin

* Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
  2013-04-12 10:04     ` Kevin Wolf
@ 2013-04-12 10:27       ` Paolo Bonzini
  2013-04-12 12:06         ` Stefan Hajnoczi
  0 siblings, 1 reply; 21+ messages in thread
From: Paolo Bonzini @ 2013-04-12 10:27 UTC (permalink / raw)
  To: Kevin Wolf; +Cc: Anthony Liguori, pingfank, qemu-devel, Stefan Hajnoczi

Il 12/04/2013 12:04, Kevin Wolf ha scritto:
> Or actually, if you want to avoid .bdrv_drain for now, the patch that I
> started when I thought a bit about this, had a default .bdrv_drain
> implementation that just forwarded the request to bs->file if it wasn't
> implemented by a block driver. For the QED case, this would work.

It's similar to bdrv_co_flush.  Drain bs first, then bs->backing_hd
(this is not needed in bdrv_co_flush), then bs->file, then the driver
can do it on other files.
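
Something along these lines, as a sketch (the function name is made up,
and the per-BDS wait is elided):

static void bdrv_drain_recurse(BlockDriverState *bs)
{
    /* 1. wait for bs's own tracked requests (aio_poll loop elided) */
    /* 2. then the backing file */
    if (bs->backing_hd) {
        bdrv_drain_recurse(bs->backing_hd);
    }
    /* 3. then the protocol layer */
    if (bs->file) {
        bdrv_drain_recurse(bs->file);
    }
    /* 4. finally let the driver drain any other files it holds */
    if (bs->drv && bs->drv->bdrv_drain) {
        bs->drv->bdrv_drain(bs);
    }
}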

Paolo

* Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
  2013-04-12 10:27       ` Paolo Bonzini
@ 2013-04-12 12:06         ` Stefan Hajnoczi
  2013-04-12 12:22           ` Kevin Wolf
  0 siblings, 1 reply; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-12 12:06 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Kevin Wolf, Anthony Liguori, Liu Ping Fan, qemu-devel,
	Stefan Hajnoczi

On Fri, Apr 12, 2013 at 12:27 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Il 12/04/2013 12:04, Kevin Wolf ha scritto:
>> Or actually, if you want to avoid .bdrv_drain for now, the patch that I
>> started when I thought a bit about this, had a default .bdrv_drain
>> implementation that just forwarded the request to bs->file if it wasn't
>> implemented by a block driver. For the QED case, this would work.
>
> It's similar to bdrv_co_flush.  Drain bs first, then bs->backing_hd
> (this is not needed in bdrv_co_flush), then bs->file, then the driver
> can do it on other files.

Thanks for explaining, Kevin.  I didn't check whether bdrv_states
included all BlockDriverStates.

The simplest solution is to put all BlockDriverStates on a global list.

A .bdrv_drain() interface is more flexible but I don't see a need for it yet.

Stefan

* Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
  2013-04-12 12:06         ` Stefan Hajnoczi
@ 2013-04-12 12:22           ` Kevin Wolf
  2013-04-12 13:41             ` Stefan Hajnoczi
  0 siblings, 1 reply; 21+ messages in thread
From: Kevin Wolf @ 2013-04-12 12:22 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Paolo Bonzini, Anthony Liguori, Liu Ping Fan, qemu-devel,
	Stefan Hajnoczi

Am 12.04.2013 um 14:06 hat Stefan Hajnoczi geschrieben:
> On Fri, Apr 12, 2013 at 12:27 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
> > Il 12/04/2013 12:04, Kevin Wolf ha scritto:
> >> Or actually, if you want to avoid .bdrv_drain for now, the patch that I
> >> started when I thought a bit about this, had a default .bdrv_drain
> >> implementation that just forwarded the request to bs->file if it wasn't
> >> implemented by a block driver. For the QED case, this would work.
> >
> > It's similar to bdrv_co_flush.  Drain bs first, then bs->backing_hd
> > (this is not needed in bdrv_co_flush), then bs->file, then the driver
> > can do it on other files.
> 
> Thanks for explaining, Kevin.  I didn't check whether bdrv_states
> included all BlockDriverStates.
> 
> The simplest solution is to put all BlockDriverStates on a global list.
> 
> A .bdrv_drain() interface is more flexible but I don't see a need for it yet.

We don't need the .bdrv_drain() callback yet, but please implement it
without a global list and with a function that forwards requests to
bs->file and bs->backing_hd like Paolo suggested. This will be the
obvious place to add the .bdrv_drain() callback later.

The reason why I prefer this approach is that it allows draining a
single BlockDriverState. After your patches we're really close to
enabling this and converting some bdrv_drain_all() to more specific
bdrv_drain(bs) calls.

Kevin

* Re: [Qemu-devel] [RFC 00/13] aio: drop io_flush()
  2013-04-12 12:22           ` Kevin Wolf
@ 2013-04-12 13:41             ` Stefan Hajnoczi
  0 siblings, 0 replies; 21+ messages in thread
From: Stefan Hajnoczi @ 2013-04-12 13:41 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Paolo Bonzini, Anthony Liguori, Liu Ping Fan, qemu-devel,
	Stefan Hajnoczi

On Fri, Apr 12, 2013 at 2:22 PM, Kevin Wolf <kwolf@redhat.com> wrote:
> Am 12.04.2013 um 14:06 hat Stefan Hajnoczi geschrieben:
>> On Fri, Apr 12, 2013 at 12:27 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> > Il 12/04/2013 12:04, Kevin Wolf ha scritto:
>> >> Or actually, if you want to avoid .bdrv_drain for now, the patch that I
>> >> started when I thought a bit about this, had a default .bdrv_drain
>> >> implementation that just forwarded the request to bs->file if it wasn't
>> >> implemented by a block driver. For the QED case, this would work.
>> >
>> > It's similar to bdrv_co_flush.  Drain bs first, then bs->backing_hd
>> > (this is not needed in bdrv_co_flush), then bs->file, then the driver
>> > can do it on other files.
>>
>> Thanks for explaining Kevin.  I didn't check whether bdrv_states
>> included all BlockDriverStates.
>>
>> The simplest solution is to put all BlockDriverStates on a global list.
>>
>> A .bdrv_drain() interface is more flexible but I don't see a need for it yet.
>
> We don't need the .bdrv_drain() callback yet, but please implement it
> without a global list and with a function that forwards requests to
> bs->file and bs->backing_hd like Paolo suggested. This will be the
> obvious place to add the .bdrv_drain() callback later.
>
> The reason why I prefer this approach is that it allows draining a
> single BlockDriverState. After your patches we're really close to
> enabling this and converting some bdrv_drain_all() to more specific
> bdrv_drain(bs) calls.

Fair enough, that approach does make it possible to drain single
devices in the future.

Stefan
