* [PATCH v4 0/3] AioContext removal: LinuxAioState and ThreadPool
@ 2022-10-31 12:59 Emanuele Giuseppe Esposito
  2022-10-31 12:59 ` [PATCH v4 1/3] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-10-31 12:59 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefan Hajnoczi, Stefano Garzarella, Fam Zheng,
	qemu-devel, Emanuele Giuseppe Esposito

This series removes some AioContext locking in LinuxAioState and
ThreadPool. It is not related to anything specific, so I decided to
send it as a separate series.

These patches are taken from Paolo's old draft series.

---
v4:
* add missing aio_context removal, and fix typo

v3:
* remove qemu_coroutine_enter_if_inactive

v2:
* assertion in thread_pool
* remove useless BlockDriverState * param in patch 1 and 2
* io_uring cleaned too

Emanuele Giuseppe Esposito (2):
  io_uring: use LuringState from the running thread
  thread-pool: use ThreadPool from the running thread

Paolo Bonzini (1):
  linux-aio: use LinuxAioState from the running thread

 block/file-posix.c      | 43 ++++++++++++++++-------------------------
 block/file-win32.c      |  2 +-
 block/io_uring.c        | 22 +++++++++++++--------
 block/linux-aio.c       | 29 +++++++++++++++------------
 block/qcow2-threads.c   |  2 +-
 include/block/aio.h     |  8 --------
 include/block/raw-aio.h | 18 ++++++++---------
 util/thread-pool.c      |  9 ++++-----
 8 files changed, 62 insertions(+), 71 deletions(-)

-- 
2.31.1




* [PATCH v4 1/3] linux-aio: use LinuxAioState from the running thread
  2022-10-31 12:59 [PATCH v4 0/3] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
@ 2022-10-31 12:59 ` Emanuele Giuseppe Esposito
  2022-10-31 19:41   ` Stefan Hajnoczi
  2022-10-31 12:59 ` [PATCH v4 2/3] io_uring: use LuringState " Emanuele Giuseppe Esposito
  2022-10-31 12:59 ` [PATCH v4 3/3] thread-pool: use ThreadPool " Emanuele Giuseppe Esposito
  2 siblings, 1 reply; 7+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-10-31 12:59 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefan Hajnoczi, Stefano Garzarella, Fam Zheng,
	qemu-devel, Paolo Bonzini, Emanuele Giuseppe Esposito

From: Paolo Bonzini <pbonzini@redhat.com>

Remove the usage of aio_context_acquire() by always submitting
asynchronous AIO to the current thread's LinuxAioState.

To prevent mistakes on the caller side, stop passing a LinuxAioState
to laio_io_{plug,unplug}() and laio_co_submit().
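
For illustration, the caller-side change in raw_co_prw() (a sketch of
the hunk below; error handling elided):

    /* Before: the caller looked up and passed the LinuxAioState. */
    LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
    ret = laio_co_submit(bs, aio, s->fd, offset, qiov, type,
                         s->aio_max_batch);

    /* After: the state is resolved internally from the current
     * thread's AioContext, so a caller can no longer submit to a
     * LinuxAioState that belongs to another thread. */
    ret = laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);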

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/file-posix.c      | 10 +++-------
 block/linux-aio.c       | 29 +++++++++++++++++------------
 include/block/aio.h     |  4 ----
 include/block/raw-aio.h | 10 ++++------
 4 files changed, 24 insertions(+), 29 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 23acffb9a4..23fe98eb3e 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2099,10 +2099,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
 #endif
 #ifdef CONFIG_LINUX_AIO
     } else if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
         assert(qiov->size == bytes);
-        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
-                              s->aio_max_batch);
+        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);
 #endif
     }
 
@@ -2142,8 +2140,7 @@ static void raw_aio_plug(BlockDriverState *bs)
     BDRVRawState __attribute__((unused)) *s = bs->opaque;
 #ifdef CONFIG_LINUX_AIO
     if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
-        laio_io_plug(bs, aio);
+        laio_io_plug();
     }
 #endif
 #ifdef CONFIG_LINUX_IO_URING
@@ -2159,8 +2156,7 @@ static void raw_aio_unplug(BlockDriverState *bs)
     BDRVRawState __attribute__((unused)) *s = bs->opaque;
 #ifdef CONFIG_LINUX_AIO
     if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
-        laio_io_unplug(bs, aio, s->aio_max_batch);
+        laio_io_unplug(s->aio_max_batch);
     }
 #endif
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d2cfb7f523..d4983aa8fc 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -16,6 +16,9 @@
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
 
+/* Only used for assertions.  */
+#include "qemu/coroutine_int.h"
+
 #include <libaio.h>
 
 /*
@@ -56,10 +59,8 @@ struct LinuxAioState {
     io_context_t ctx;
     EventNotifier e;
 
-    /* io queue for submit at batch.  Protected by AioContext lock. */
+    /* All data is only used in one I/O thread.  */
     LaioQueue io_q;
-
-    /* I/O completion processing.  Only runs in I/O thread.  */
     QEMUBH *completion_bh;
     int event_idx;
     int event_max;
@@ -102,6 +103,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
      * later.  Coroutines cannot be entered recursively so avoid doing
      * that!
      */
+    assert(laiocb->co->ctx == laiocb->ctx->aio_context);
     if (!qemu_coroutine_entered(laiocb->co)) {
         aio_co_wake(laiocb->co);
     }
@@ -232,13 +234,11 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 
 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
 {
-    aio_context_acquire(s->aio_context);
     qemu_laio_process_completions(s);
 
     if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
-    aio_context_release(s->aio_context);
 }
 
 static void qemu_laio_completion_bh(void *opaque)
@@ -354,14 +354,19 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
 
-void laio_io_plug(BlockDriverState *bs, LinuxAioState *s)
+void laio_io_plug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LinuxAioState *s = aio_get_linux_aio(ctx);
+
     s->io_q.plugged++;
 }
 
-void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
-                    uint64_t dev_max_batch)
+void laio_io_unplug(uint64_t dev_max_batch)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LinuxAioState *s = aio_get_linux_aio(ctx);
+
     assert(s->io_q.plugged);
     s->io_q.plugged--;
 
@@ -411,15 +416,15 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
     return 0;
 }
 
-int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type,
-                                uint64_t dev_max_batch)
+int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
+                                int type, uint64_t dev_max_batch)
 {
     int ret;
+    AioContext *ctx = qemu_get_current_aio_context();
     struct qemu_laiocb laiocb = {
         .co         = qemu_coroutine_self(),
         .nbytes     = qiov->size,
-        .ctx        = s,
+        .ctx        = aio_get_linux_aio(ctx),
         .ret        = -EINPROGRESS,
         .is_read    = (type == QEMU_AIO_READ),
         .qiov       = qiov,
diff --git a/include/block/aio.h b/include/block/aio.h
index d128558f1d..8bb5eea4a9 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -200,10 +200,6 @@ struct AioContext {
     struct ThreadPool *thread_pool;
 
 #ifdef CONFIG_LINUX_AIO
-    /*
-     * State for native Linux AIO.  Uses aio_context_acquire/release for
-     * locking.
-     */
     struct LinuxAioState *linux_aio;
 #endif
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index 21fc10c4c9..f0f14f14f8 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -50,14 +50,12 @@
 typedef struct LinuxAioState LinuxAioState;
 LinuxAioState *laio_init(Error **errp);
 void laio_cleanup(LinuxAioState *s);
-int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type,
-                                uint64_t dev_max_batch);
+int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
+                                int type, uint64_t dev_max_batch);
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
 void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
-void laio_io_plug(BlockDriverState *bs, LinuxAioState *s);
-void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
-                    uint64_t dev_max_batch);
+void laio_io_plug(void);
+void laio_io_unplug(uint64_t dev_max_batch);
 #endif
 /* io_uring.c - Linux io_uring implementation */
 #ifdef CONFIG_LINUX_IO_URING
-- 
2.31.1




* [PATCH v4 2/3] io_uring: use LuringState from the running thread
  2022-10-31 12:59 [PATCH v4 0/3] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
  2022-10-31 12:59 ` [PATCH v4 1/3] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
@ 2022-10-31 12:59 ` Emanuele Giuseppe Esposito
  2022-10-31 19:42   ` Stefan Hajnoczi
  2022-10-31 12:59 ` [PATCH v4 3/3] thread-pool: use ThreadPool " Emanuele Giuseppe Esposito
  2 siblings, 1 reply; 7+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-10-31 12:59 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefan Hajnoczi, Stefano Garzarella, Fam Zheng,
	qemu-devel, Emanuele Giuseppe Esposito

Remove the usage of aio_context_acquire() by always submitting
asynchronous AIO to the current thread's LuringState.

To prevent mistakes on the caller side, stop passing a LuringState
to luring_io_{plug,unplug}() and luring_co_submit().
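
The caller-side change mirrors the linux-aio patch (a sketch of the
raw_co_prw() hunk below):

    /* Before: the caller looked up and passed the LuringState. */
    LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
    ret = luring_co_submit(bs, aio, s->fd, offset, qiov, type);

    /* After: luring_co_submit() takes the LuringState from the
     * current thread's AioContext. */
    ret = luring_co_submit(bs, s->fd, offset, qiov, type);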

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/file-posix.c      | 12 ++++--------
 block/io_uring.c        | 22 ++++++++++++++--------
 include/block/aio.h     |  4 ----
 include/block/raw-aio.h |  8 ++++----
 4 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 23fe98eb3e..3800dbd222 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2093,9 +2093,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
         type |= QEMU_AIO_MISALIGNED;
 #ifdef CONFIG_LINUX_IO_URING
     } else if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
         assert(qiov->size == bytes);
-        return luring_co_submit(bs, aio, s->fd, offset, qiov, type);
+        return luring_co_submit(bs, s->fd, offset, qiov, type);
 #endif
 #ifdef CONFIG_LINUX_AIO
     } else if (s->use_linux_aio) {
@@ -2145,8 +2144,7 @@ static void raw_aio_plug(BlockDriverState *bs)
 #endif
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        luring_io_plug(bs, aio);
+        luring_io_plug();
     }
 #endif
 }
@@ -2161,8 +2159,7 @@ static void raw_aio_unplug(BlockDriverState *bs)
 #endif
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        luring_io_unplug(bs, aio);
+        luring_io_unplug();
     }
 #endif
 }
@@ -2186,8 +2183,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
 
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        return luring_co_submit(bs, aio, s->fd, 0, NULL, QEMU_AIO_FLUSH);
+        return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH);
     }
 #endif
     return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
diff --git a/block/io_uring.c b/block/io_uring.c
index a1760152e0..df1f076cb9 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -19,6 +19,8 @@
 #include "qapi/error.h"
 #include "trace.h"
 
+/* Only used for assertions.  */
+#include "qemu/coroutine_int.h"
 
 /* io_uring ring size */
 #define MAX_ENTRIES 128
@@ -52,10 +54,9 @@ typedef struct LuringState {
 
     struct io_uring ring;
 
-    /* io queue for submit at batch.  Protected by AioContext lock. */
+    /* All data is only used in one I/O thread.  */
     LuringQueue io_q;
 
-    /* I/O completion processing.  Only runs in I/O thread.  */
     QEMUBH *completion_bh;
 } LuringState;
 
@@ -211,6 +212,7 @@ end:
          * eventually runs later. Coroutines cannot be entered recursively
          * so avoid doing that!
          */
+        assert(luringcb->co->ctx == luringcb->aio_context);
         if (!qemu_coroutine_entered(luringcb->co)) {
             aio_co_wake(luringcb->co);
         }
@@ -264,13 +266,11 @@ static int ioq_submit(LuringState *s)
 
 static void luring_process_completions_and_submit(LuringState *s)
 {
-    aio_context_acquire(s->aio_context);
     luring_process_completions(s);
 
     if (!s->io_q.plugged && s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
-    aio_context_release(s->aio_context);
 }
 
 static void qemu_luring_completion_bh(void *opaque)
@@ -308,14 +308,18 @@ static void ioq_init(LuringQueue *io_q)
     io_q->blocked = false;
 }
 
-void luring_io_plug(BlockDriverState *bs, LuringState *s)
+void luring_io_plug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
     trace_luring_io_plug(s);
     s->io_q.plugged++;
 }
 
-void luring_io_unplug(BlockDriverState *bs, LuringState *s)
+void luring_io_unplug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
     assert(s->io_q.plugged);
     trace_luring_io_unplug(s, s->io_q.blocked, s->io_q.plugged,
                            s->io_q.in_queue, s->io_q.in_flight);
@@ -375,10 +379,12 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
     return 0;
 }
 
-int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
-                                  uint64_t offset, QEMUIOVector *qiov, int type)
+int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
+                                  QEMUIOVector *qiov, int type)
 {
     int ret;
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
     LuringAIOCB luringcb = {
         .co         = qemu_coroutine_self(),
         .ret        = -EINPROGRESS,
diff --git a/include/block/aio.h b/include/block/aio.h
index 8bb5eea4a9..15375ff63a 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -203,10 +203,6 @@ struct AioContext {
     struct LinuxAioState *linux_aio;
 #endif
 #ifdef CONFIG_LINUX_IO_URING
-    /*
-     * State for Linux io_uring.  Uses aio_context_acquire/release for
-     * locking.
-     */
     struct LuringState *linux_io_uring;
 
     /* State for file descriptor monitoring using Linux io_uring */
diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index f0f14f14f8..4d6b0ee125 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -62,12 +62,12 @@ void laio_io_unplug(uint64_t dev_max_batch);
 typedef struct LuringState LuringState;
 LuringState *luring_init(Error **errp);
 void luring_cleanup(LuringState *s);
-int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type);
+int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
+                                  QEMUIOVector *qiov, int type);
 void luring_detach_aio_context(LuringState *s, AioContext *old_context);
 void luring_attach_aio_context(LuringState *s, AioContext *new_context);
-void luring_io_plug(BlockDriverState *bs, LuringState *s);
-void luring_io_unplug(BlockDriverState *bs, LuringState *s);
+void luring_io_plug(void);
+void luring_io_unplug(void);
 #endif
 
 #ifdef _WIN32
-- 
2.31.1




* [PATCH v4 3/3] thread-pool: use ThreadPool from the running thread
  2022-10-31 12:59 [PATCH v4 0/3] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
  2022-10-31 12:59 ` [PATCH v4 1/3] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
  2022-10-31 12:59 ` [PATCH v4 2/3] io_uring: use LuringState " Emanuele Giuseppe Esposito
@ 2022-10-31 12:59 ` Emanuele Giuseppe Esposito
  2022-10-31 19:48   ` Stefan Hajnoczi
  2 siblings, 1 reply; 7+ messages in thread
From: Emanuele Giuseppe Esposito @ 2022-10-31 12:59 UTC (permalink / raw)
  To: qemu-block
  Cc: Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefan Hajnoczi, Stefano Garzarella, Fam Zheng,
	qemu-devel, Emanuele Giuseppe Esposito, Paolo Bonzini

Use qemu_get_current_aio_context() where possible, since we always
submit work to the current thread anyway.

We also want to be sure that the thread submitting the work is the
same one that processes the pool, to avoid adding synchronization to
the pool list.
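
Callers now obtain the pool from their own thread, e.g. (a sketch of
the pattern used throughout this patch; func/arg stand for any
ThreadPoolFunc and its argument):

    /* Submit from the current thread; thread_pool_submit_aio() now
     * asserts that this is also the thread that runs the pool. */
    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
    ret = thread_pool_submit_co(pool, func, arg);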

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 block/file-posix.c    | 21 ++++++++++-----------
 block/file-win32.c    |  2 +-
 block/qcow2-threads.c |  2 +-
 util/thread-pool.c    |  9 ++++-----
 4 files changed, 16 insertions(+), 18 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 3800dbd222..28f12b08c8 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2044,11 +2044,10 @@ out:
     return result;
 }
 
-static int coroutine_fn raw_thread_pool_submit(BlockDriverState *bs,
-                                               ThreadPoolFunc func, void *arg)
+static int coroutine_fn raw_thread_pool_submit(ThreadPoolFunc func, void *arg)
 {
-    /* @bs can be NULL, bdrv_get_aio_context() returns the main context then */
+    /* The pool is always the one of the current thread's AioContext */
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_co(pool, func, arg);
 }
 
@@ -2116,7 +2115,7 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
     };
 
     assert(qiov->size == bytes);
-    return raw_thread_pool_submit(bs, handle_aiocb_rw, &acb);
+    return raw_thread_pool_submit(handle_aiocb_rw, &acb);
 }
 
 static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
@@ -2186,7 +2185,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
         return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH);
     }
 #endif
-    return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
+    return raw_thread_pool_submit(handle_aiocb_flush, &acb);
 }
 
 static void raw_aio_attach_aio_context(BlockDriverState *bs,
@@ -2248,7 +2247,7 @@ raw_regular_truncate(BlockDriverState *bs, int fd, int64_t offset,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_truncate, &acb);
+    return raw_thread_pool_submit(handle_aiocb_truncate, &acb);
 }
 
 static int coroutine_fn raw_co_truncate(BlockDriverState *bs, int64_t offset,
@@ -2998,7 +2997,7 @@ raw_do_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes,
         acb.aio_type |= QEMU_AIO_BLKDEV;
     }
 
-    ret = raw_thread_pool_submit(bs, handle_aiocb_discard, &acb);
+    ret = raw_thread_pool_submit(handle_aiocb_discard, &acb);
     raw_account_discard(s, bytes, ret);
     return ret;
 }
@@ -3073,7 +3072,7 @@ raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
         handler = handle_aiocb_write_zeroes;
     }
 
-    return raw_thread_pool_submit(bs, handler, &acb);
+    return raw_thread_pool_submit(handler, &acb);
 }
 
 static int coroutine_fn raw_co_pwrite_zeroes(
@@ -3284,7 +3283,7 @@ static int coroutine_fn raw_co_copy_range_to(BlockDriverState *bs,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_copy_range, &acb);
+    return raw_thread_pool_submit(handle_aiocb_copy_range, &acb);
 }
 
 BlockDriver bdrv_file = {
@@ -3614,7 +3613,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         struct sg_io_hdr *io_hdr = buf;
         if (io_hdr->cmdp[0] == PERSISTENT_RESERVE_OUT ||
             io_hdr->cmdp[0] == PERSISTENT_RESERVE_IN) {
-            return pr_manager_execute(s->pr_mgr, bdrv_get_aio_context(bs),
+            return pr_manager_execute(s->pr_mgr, qemu_get_current_aio_context(),
                                       s->fd, io_hdr);
         }
     }
@@ -3630,7 +3629,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_ioctl, &acb);
+    return raw_thread_pool_submit(handle_aiocb_ioctl, &acb);
 }
 #endif /* linux */
 
diff --git a/block/file-win32.c b/block/file-win32.c
index ec9d64d0e4..3d7f59a592 100644
--- a/block/file-win32.c
+++ b/block/file-win32.c
@@ -167,7 +167,7 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     acb->aio_offset = offset;
 
     trace_file_paio_submit(acb, opaque, offset, count, type);
-    pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);
 }
 
diff --git a/block/qcow2-threads.c b/block/qcow2-threads.c
index 1914baf456..9e370acbb3 100644
--- a/block/qcow2-threads.c
+++ b/block/qcow2-threads.c
@@ -42,7 +42,7 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 
     qemu_co_mutex_lock(&s->lock);
     while (s->nb_threads >= QCOW2_MAX_THREADS) {
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 31113b5860..a70abb8a59 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -48,7 +48,7 @@ struct ThreadPoolElement {
     /* Access to this list is protected by lock.  */
     QTAILQ_ENTRY(ThreadPoolElement) reqs;
 
-    /* Access to this list is protected by the global mutex.  */
+    /* This list is only written by the thread pool's mother thread.  */
     QLIST_ENTRY(ThreadPoolElement) all;
 };
 
@@ -175,7 +175,6 @@ static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
-    aio_context_acquire(pool->ctx);
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -195,9 +194,7 @@ restart:
              */
             qemu_bh_schedule(pool->completion_bh);
 
-            aio_context_release(pool->ctx);
             elem->common.cb(elem->common.opaque, elem->ret);
-            aio_context_acquire(pool->ctx);
 
             /* We can safely cancel the completion_bh here regardless of someone
              * else having scheduled it meanwhile because we reenter the
@@ -211,7 +208,6 @@ restart:
             qemu_aio_unref(elem);
         }
     }
-    aio_context_release(pool->ctx);
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
@@ -251,6 +247,9 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
 {
     ThreadPoolElement *req;
 
+    /* Assert that the submitting thread is the one that runs the pool */
+    assert(pool->ctx == qemu_get_current_aio_context());
+
     req = qemu_aio_get(&thread_pool_aiocb_info, NULL, cb, opaque);
     req->func = func;
     req->arg = arg;
-- 
2.31.1




* Re: [PATCH v4 1/3] linux-aio: use LinuxAioState from the running thread
  2022-10-31 12:59 ` [PATCH v4 1/3] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
@ 2022-10-31 19:41   ` Stefan Hajnoczi
  0 siblings, 0 replies; 7+ messages in thread
From: Stefan Hajnoczi @ 2022-10-31 19:41 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: qemu-block, Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Fam Zheng, qemu-devel,
	Paolo Bonzini

On Mon, Oct 31, 2022 at 08:59:34AM -0400, Emanuele Giuseppe Esposito wrote:
> @@ -56,10 +59,8 @@ struct LinuxAioState {
>      io_context_t ctx;
>      EventNotifier e;
>  
> -    /* io queue for submit at batch.  Protected by AioContext lock. */
> +    /* All data is only used in one I/O thread.  */
>      LaioQueue io_q;

/* No locking required, only accessed from AioContext home thread */

This is more general because it includes the main loop, which is not an
IOThread.

(Please write "IOThread" for consistency. Most of the documentation
and comments use "IOThread".)

> -
> -    /* I/O completion processing.  Only runs in I/O thread.  */
>      QEMUBH *completion_bh;
>      int event_idx;
>      int event_max;
> @@ -102,6 +103,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
>       * later.  Coroutines cannot be entered recursively so avoid doing
>       * that!
>       */
> +    assert(laiocb->co->ctx == laiocb->ctx->aio_context);
>      if (!qemu_coroutine_entered(laiocb->co)) {
>          aio_co_wake(laiocb->co);
>      }
> @@ -232,13 +234,11 @@ static void qemu_laio_process_completions(LinuxAioState *s)
>  
>  static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
>  {
> -    aio_context_acquire(s->aio_context);
>      qemu_laio_process_completions(s);
>  
>      if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
>          ioq_submit(s);
>      }
> -    aio_context_release(s->aio_context);
>  }
>  
>  static void qemu_laio_completion_bh(void *opaque)
> @@ -354,14 +354,19 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
>      return max_batch;
>  }
>  
> -void laio_io_plug(BlockDriverState *bs, LinuxAioState *s)
> +void laio_io_plug(void)
>  {
> +    AioContext *ctx = qemu_get_current_aio_context();
> +    LinuxAioState *s = aio_get_linux_aio(ctx);
> +
>      s->io_q.plugged++;

I see the following code path:

blk_io_plug -> bdrv_io_plug -> raw_aio_plug -> laio_io_plug

blk_io_plug() can be called from any thread but laio_io_plug()
implicitly operates on the current thread's AioContext's LinuxAioState.

This changes the semantics of blk_io_plug() from a global BDS operation
to a thread-local one. The new blk_io_plug() semantics need to be
documented, because it is not obvious that multiple threads can
blk_io_plug/unplug() independently without affecting each other. It also
means the caller must be careful to pair plug/unplug in the same thread.
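
A hypothetical caller illustrates the new contract (submit_batch() is
made up for this example; blk_io_plug()/blk_io_unplug() are the real
entry points):

static void coroutine_fn submit_batch(BlockBackend *blk)
{
    blk_io_plug(blk);   /* bumps this thread's LinuxAioState io_q.plugged */

    /* ... issue blk_co_preadv()/blk_co_pwritev() requests here ... */

    blk_io_unplug(blk); /* must run in the same thread to flush the batch */
}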

>  }
>  
> -void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
> -                    uint64_t dev_max_batch)
> +void laio_io_unplug(uint64_t dev_max_batch)
>  {
> +    AioContext *ctx = qemu_get_current_aio_context();
> +    LinuxAioState *s = aio_get_linux_aio(ctx);
> +
>      assert(s->io_q.plugged);
>      s->io_q.plugged--;
>  
> @@ -411,15 +416,15 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
>      return 0;
>  }
>  
> -int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
> -                                uint64_t offset, QEMUIOVector *qiov, int type,
> -                                uint64_t dev_max_batch)
> +int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
> +                                int type, uint64_t dev_max_batch)

This function needs documentation. It submits I/O requests in the
calling thread's current AioContext. Previously that was explicit via
the function arguments, but now it is no longer obvious.
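
Suggested wording (not part of the patch):

/**
 * laio_co_submit: submit I/O asynchronously via Linux AIO
 * @fd: file descriptor to submit to
 * @offset: offset in bytes within the file
 * @qiov: I/O vector holding the data
 * @type: QEMU_AIO_READ or QEMU_AIO_WRITE
 * @dev_max_batch: device-imposed limit on the submission batch, 0 for none
 *
 * The request is submitted to the LinuxAioState of the calling
 * thread's AioContext (qemu_get_current_aio_context()), which is also
 * where the completion will be processed.
 */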



* Re: [PATCH v4 2/3] io_uring: use LuringState from the running thread
  2022-10-31 12:59 ` [PATCH v4 2/3] io_uring: use LuringState " Emanuele Giuseppe Esposito
@ 2022-10-31 19:42   ` Stefan Hajnoczi
  0 siblings, 0 replies; 7+ messages in thread
From: Stefan Hajnoczi @ 2022-10-31 19:42 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: qemu-block, Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Fam Zheng, qemu-devel

On Mon, Oct 31, 2022 at 08:59:35AM -0400, Emanuele Giuseppe Esposito wrote:
> Remove usage of aio_context_acquire by always submitting asynchronous
> AIO to the current thread's LuringState.
> 
> In order to prevent mistakes from the caller side, avoid passing LuringState
> in luring_io_{plug/unplug} and luring_co_submit.
> 
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> ---
>  block/file-posix.c      | 12 ++++--------
>  block/io_uring.c        | 22 ++++++++++++++--------
>  include/block/aio.h     |  4 ----
>  include/block/raw-aio.h |  8 ++++----
>  4 files changed, 22 insertions(+), 24 deletions(-)

The same comments from the previous patch also apply here.



* Re: [PATCH v4 3/3] thread-pool: use ThreadPool from the running thread
  2022-10-31 12:59 ` [PATCH v4 3/3] thread-pool: use ThreadPool " Emanuele Giuseppe Esposito
@ 2022-10-31 19:48   ` Stefan Hajnoczi
  0 siblings, 0 replies; 7+ messages in thread
From: Stefan Hajnoczi @ 2022-10-31 19:48 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: qemu-block, Kevin Wolf, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Fam Zheng, qemu-devel,
	Paolo Bonzini

On Mon, Oct 31, 2022 at 08:59:36AM -0400, Emanuele Giuseppe Esposito wrote:
> @@ -251,6 +247,9 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,

Documentation must be added to explain that thread_pool_submit_aio(),
thread_pool_submit_co(), and thread_pool_submit() must be called with
the ThreadPool of the calling thread's current AioContext.
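
For example (suggested wording, not part of the patch):

/**
 * thread_pool_submit_aio:
 *
 * @pool must be the ThreadPool of the calling thread's AioContext,
 * i.e. aio_get_thread_pool(qemu_get_current_aio_context()); anything
 * else now trips the assertion below. The same restriction applies to
 * thread_pool_submit_co() and thread_pool_submit().
 */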

>  {
>      ThreadPoolElement *req;
>  
> +    /* Assert that the thread submitting work is the same running the pool */
> +    assert(pool->ctx == qemu_get_current_aio_context());

Did you decide not to remove the ThreadPool *pool argument from this
function because it requires too many changes? All callers must pass
aio_get_thread_pool(qemu_get_current_aio_context()), so the argument
seems unnecessary?
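
If it were dropped, the lookup would simply move inside the function
(hypothetical follow-up, not part of this series):

BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
                                   BlockCompletionFunc *cb, void *opaque)
{
    /* The pool is always the current thread's, so look it up here */
    ThreadPool *pool =
        aio_get_thread_pool(qemu_get_current_aio_context());
    /* ... rest of the function unchanged ... */
}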


