qemu-devel.nongnu.org archive mirror
* [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool
@ 2023-02-03 13:17 Emanuele Giuseppe Esposito
  2023-02-03 13:17 ` [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Emanuele Giuseppe Esposito @ 2023-02-03 13:17 UTC (permalink / raw)
  To: qemu-block
  Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefan Hajnoczi,
	Stefano Garzarella, Greg Kurz, Christian Schoenebeck,
	Daniel Henrique Barboza, Cédric Le Goater, David Gibson,
	Michael S. Tsirkin, Fam Zheng, Paolo Bonzini, qemu-devel,
	qemu-ppc, Emanuele Giuseppe Esposito

Just remove some AioContext locking in LinuxAioState and ThreadPool.
This is not related to anything specific, so I decided to send it as
a separate series.

These patches are taken from Paolo's old draft series.

---
v5:
* apply Stefan comments, add patch 4 to remove ThreadPool * param
  from thread_pool_submit*
* document that functions run in current IOThread

v4:
* add missing aio_context removal, and fix typo

v3:
* remove qemu_coroutine_enter_if_inactive

v2:
* assertion in thread_pool
* remove useless BlockDriverState * param in patch 1 and 2
* io_uring cleaned too


Emanuele Giuseppe Esposito (4):
  linux-aio: use LinuxAioState from the running thread
  io_uring: use LuringState from the running thread
  thread-pool: use ThreadPool from the running thread
  thread-pool: avoid passing the pool parameter every time

 include/block/aio.h               |  8 ------
 include/block/raw-aio.h           | 33 ++++++++++++++++-------
 include/block/thread-pool.h       | 15 ++++++-----
 include/sysemu/block-backend-io.h |  6 +++++
 backends/tpm/tpm_backend.c        |  4 +--
 block/file-posix.c                | 45 ++++++++++++-------------------
 block/file-win32.c                |  4 +--
 block/io_uring.c                  | 23 ++++++++++------
 block/linux-aio.c                 | 29 +++++++++++---------
 block/qcow2-threads.c             |  3 +--
 hw/9pfs/coth.c                    |  3 +--
 hw/ppc/spapr_nvdimm.c             |  6 ++---
 hw/virtio/virtio-pmem.c           |  3 +--
 scsi/pr-manager.c                 |  3 +--
 scsi/qemu-pr-helper.c             |  3 +--
 tests/unit/test-thread-pool.c     | 12 ++++-----
 util/thread-pool.c                | 25 +++++++++--------
 17 files changed, 113 insertions(+), 112 deletions(-)

-- 
2.39.1




* [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-02-03 13:17 [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
@ 2023-02-03 13:17 ` Emanuele Giuseppe Esposito
  2023-03-01 16:16   ` Stefan Hajnoczi
  2023-02-03 13:17 ` [PATCH v5 2/4] io_uring: use LuringState " Emanuele Giuseppe Esposito
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Emanuele Giuseppe Esposito @ 2023-02-03 13:17 UTC (permalink / raw)
  To: qemu-block
  Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefan Hajnoczi,
	Stefano Garzarella, Greg Kurz, Christian Schoenebeck,
	Daniel Henrique Barboza, Cédric Le Goater, David Gibson,
	Michael S. Tsirkin, Fam Zheng, Paolo Bonzini, qemu-devel,
	qemu-ppc, Emanuele Giuseppe Esposito

Remove the usage of aio_context_acquire() by always submitting asynchronous
AIO to the current thread's LinuxAioState.

In order to prevent mistakes on the caller side, avoid passing LinuxAioState
to laio_io_{plug/unplug} and laio_co_submit, and document the functions
to make it clear that they work in the current thread's AioContext.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/block/aio.h               |  4 ----
 include/block/raw-aio.h           | 18 ++++++++++++------
 include/sysemu/block-backend-io.h |  6 ++++++
 block/file-posix.c                | 10 +++-------
 block/linux-aio.c                 | 29 +++++++++++++++++------------
 5 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index 8fba6a3584..b6b396cfcb 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -208,10 +208,6 @@ struct AioContext {
     struct ThreadPool *thread_pool;
 
 #ifdef CONFIG_LINUX_AIO
-    /*
-     * State for native Linux AIO.  Uses aio_context_acquire/release for
-     * locking.
-     */
     struct LinuxAioState *linux_aio;
 #endif
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index f8cda9df91..db614472e6 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -49,14 +49,20 @@
 typedef struct LinuxAioState LinuxAioState;
 LinuxAioState *laio_init(Error **errp);
 void laio_cleanup(LinuxAioState *s);
-int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type,
-                                uint64_t dev_max_batch);
+
+/* laio_co_submit: submit I/O requests in the thread's current AioContext. */
+int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
+                                int type, uint64_t dev_max_batch);
+
 void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
 void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
-void laio_io_plug(BlockDriverState *bs, LinuxAioState *s);
-void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
-                    uint64_t dev_max_batch);
+
+/*
+ * laio_io_plug/unplug work in the thread's current AioContext, therefore the
+ * caller must ensure that they are paired in the same IOThread.
+ */
+void laio_io_plug(void);
+void laio_io_unplug(uint64_t dev_max_batch);
 #endif
 /* io_uring.c - Linux io_uring implementation */
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
index 031a27ba10..d41698ccc5 100644
--- a/include/sysemu/block-backend-io.h
+++ b/include/sysemu/block-backend-io.h
@@ -74,8 +74,14 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
 int blk_get_max_iov(BlockBackend *blk);
 int blk_get_max_hw_iov(BlockBackend *blk);
 
+/*
+ * blk_io_plug/unplug are thread-local operations. This means that multiple
+ * IOThreads can simultaneously call plug/unplug, but the caller must ensure
+ * that each unplug() is called in the same IOThread of the matching plug().
+ */
 void blk_io_plug(BlockBackend *blk);
 void blk_io_unplug(BlockBackend *blk);
+
 AioContext *blk_get_aio_context(BlockBackend *blk);
 BlockAcctStats *blk_get_stats(BlockBackend *blk);
 void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
diff --git a/block/file-posix.c b/block/file-posix.c
index fa227d9d14..fa99d1c25a 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2095,10 +2095,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
 #endif
 #ifdef CONFIG_LINUX_AIO
     } else if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
         assert(qiov->size == bytes);
-        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
-                              s->aio_max_batch);
+        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);
 #endif
     }
 
@@ -2137,8 +2135,7 @@ static void raw_aio_plug(BlockDriverState *bs)
     BDRVRawState __attribute__((unused)) *s = bs->opaque;
 #ifdef CONFIG_LINUX_AIO
     if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
-        laio_io_plug(bs, aio);
+        laio_io_plug();
     }
 #endif
 #ifdef CONFIG_LINUX_IO_URING
@@ -2154,8 +2151,7 @@ static void raw_aio_unplug(BlockDriverState *bs)
     BDRVRawState __attribute__((unused)) *s = bs->opaque;
 #ifdef CONFIG_LINUX_AIO
     if (s->use_linux_aio) {
-        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
-        laio_io_unplug(bs, aio, s->aio_max_batch);
+        laio_io_unplug(s->aio_max_batch);
     }
 #endif
 #ifdef CONFIG_LINUX_IO_URING
diff --git a/block/linux-aio.c b/block/linux-aio.c
index d2cfb7f523..fc50cdd1bf 100644
--- a/block/linux-aio.c
+++ b/block/linux-aio.c
@@ -16,6 +16,9 @@
 #include "qemu/coroutine.h"
 #include "qapi/error.h"
 
+/* Only used for assertions.  */
+#include "qemu/coroutine_int.h"
+
 #include <libaio.h>
 
 /*
@@ -56,10 +59,8 @@ struct LinuxAioState {
     io_context_t ctx;
     EventNotifier e;
 
-    /* io queue for submit at batch.  Protected by AioContext lock. */
+    /* No locking required, only accessed from AioContext home thread */
     LaioQueue io_q;
-
-    /* I/O completion processing.  Only runs in I/O thread.  */
     QEMUBH *completion_bh;
     int event_idx;
     int event_max;
@@ -102,6 +103,7 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
      * later.  Coroutines cannot be entered recursively so avoid doing
      * that!
      */
+    assert(laiocb->co->ctx == laiocb->ctx->aio_context);
     if (!qemu_coroutine_entered(laiocb->co)) {
         aio_co_wake(laiocb->co);
     }
@@ -232,13 +234,11 @@ static void qemu_laio_process_completions(LinuxAioState *s)
 
 static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
 {
-    aio_context_acquire(s->aio_context);
     qemu_laio_process_completions(s);
 
     if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
         ioq_submit(s);
     }
-    aio_context_release(s->aio_context);
 }
 
 static void qemu_laio_completion_bh(void *opaque)
@@ -354,14 +354,19 @@ static uint64_t laio_max_batch(LinuxAioState *s, uint64_t dev_max_batch)
     return max_batch;
 }
 
-void laio_io_plug(BlockDriverState *bs, LinuxAioState *s)
+void laio_io_plug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LinuxAioState *s = aio_get_linux_aio(ctx);
+
     s->io_q.plugged++;
 }
 
-void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
-                    uint64_t dev_max_batch)
+void laio_io_unplug(uint64_t dev_max_batch)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LinuxAioState *s = aio_get_linux_aio(ctx);
+
     assert(s->io_q.plugged);
     s->io_q.plugged--;
 
@@ -411,15 +416,15 @@ static int laio_do_submit(int fd, struct qemu_laiocb *laiocb, off_t offset,
     return 0;
 }
 
-int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type,
-                                uint64_t dev_max_batch)
+int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
+                                int type, uint64_t dev_max_batch)
 {
     int ret;
+    AioContext *ctx = qemu_get_current_aio_context();
     struct qemu_laiocb laiocb = {
         .co         = qemu_coroutine_self(),
         .nbytes     = qiov->size,
-        .ctx        = s,
+        .ctx        = aio_get_linux_aio(ctx),
         .ret        = -EINPROGRESS,
         .is_read    = (type == QEMU_AIO_READ),
         .qiov       = qiov,
-- 
2.39.1




* [PATCH v5 2/4] io_uring: use LuringState from the running thread
  2023-02-03 13:17 [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
  2023-02-03 13:17 ` [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
@ 2023-02-03 13:17 ` Emanuele Giuseppe Esposito
  2023-02-03 13:17 ` [PATCH v5 3/4] thread-pool: use ThreadPool " Emanuele Giuseppe Esposito
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Emanuele Giuseppe Esposito @ 2023-02-03 13:17 UTC (permalink / raw)
  To: qemu-block
  Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefan Hajnoczi,
	Stefano Garzarella, Greg Kurz, Christian Schoenebeck,
	Daniel Henrique Barboza, Cédric Le Goater, David Gibson,
	Michael S. Tsirkin, Fam Zheng, Paolo Bonzini, qemu-devel,
	qemu-ppc, Emanuele Giuseppe Esposito

Remove the usage of aio_context_acquire() by always submitting asynchronous
AIO to the current thread's LuringState.

In order to prevent mistakes on the caller side, avoid passing LuringState
to luring_io_{plug/unplug} and luring_co_submit, and document the functions
to make it clear that they work in the current thread's AioContext.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/block/aio.h     |  4 ----
 include/block/raw-aio.h | 15 +++++++++++----
 block/file-posix.c      | 12 ++++--------
 block/io_uring.c        | 23 +++++++++++++++--------
 4 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index b6b396cfcb..3b7634bef4 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -211,10 +211,6 @@ struct AioContext {
     struct LinuxAioState *linux_aio;
 #endif
 #ifdef CONFIG_LINUX_IO_URING
-    /*
-     * State for Linux io_uring.  Uses aio_context_acquire/release for
-     * locking.
-     */
     struct LuringState *linux_io_uring;
 
     /* State for file descriptor monitoring using Linux io_uring */
diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index db614472e6..e46a29c3f0 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -69,12 +69,19 @@ void laio_io_unplug(uint64_t dev_max_batch);
 typedef struct LuringState LuringState;
 LuringState *luring_init(Error **errp);
 void luring_cleanup(LuringState *s);
-int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
-                                uint64_t offset, QEMUIOVector *qiov, int type);
+
+/* luring_co_submit: submit I/O requests in the thread's current AioContext. */
+int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
+                                  QEMUIOVector *qiov, int type);
 void luring_detach_aio_context(LuringState *s, AioContext *old_context);
 void luring_attach_aio_context(LuringState *s, AioContext *new_context);
-void luring_io_plug(BlockDriverState *bs, LuringState *s);
-void luring_io_unplug(BlockDriverState *bs, LuringState *s);
+
+/*
+ * luring_io_plug/unplug work in the thread's current AioContext, therefore the
+ * caller must ensure that they are paired in the same IOThread.
+ */
+void luring_io_plug(void);
+void luring_io_unplug(void);
 #endif
 
 #ifdef _WIN32
diff --git a/block/file-posix.c b/block/file-posix.c
index fa99d1c25a..b8ee58201c 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2089,9 +2089,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
         type |= QEMU_AIO_MISALIGNED;
 #ifdef CONFIG_LINUX_IO_URING
     } else if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
         assert(qiov->size == bytes);
-        return luring_co_submit(bs, aio, s->fd, offset, qiov, type);
+        return luring_co_submit(bs, s->fd, offset, qiov, type);
 #endif
 #ifdef CONFIG_LINUX_AIO
     } else if (s->use_linux_aio) {
@@ -2140,8 +2139,7 @@ static void raw_aio_plug(BlockDriverState *bs)
 #endif
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        luring_io_plug(bs, aio);
+        luring_io_plug();
     }
 #endif
 }
@@ -2156,8 +2154,7 @@ static void raw_aio_unplug(BlockDriverState *bs)
 #endif
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        luring_io_unplug(bs, aio);
+        luring_io_unplug();
     }
 #endif
 }
@@ -2181,8 +2178,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
 
 #ifdef CONFIG_LINUX_IO_URING
     if (s->use_linux_io_uring) {
-        LuringState *aio = aio_get_linux_io_uring(bdrv_get_aio_context(bs));
-        return luring_co_submit(bs, aio, s->fd, 0, NULL, QEMU_AIO_FLUSH);
+        return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH);
     }
 #endif
     return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
diff --git a/block/io_uring.c b/block/io_uring.c
index 973e15d876..220fb72ae6 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -18,6 +18,9 @@
 #include "qapi/error.h"
 #include "trace.h"
 
+/* Only used for assertions.  */
+#include "qemu/coroutine_int.h"
+
 /* io_uring ring size */
 #define MAX_ENTRIES 128
 
@@ -50,10 +53,9 @@ typedef struct LuringState {
 
     struct io_uring ring;
 
-    /* io queue for submit at batch.  Protected by AioContext lock. */
+    /* No locking required, only accessed from AioContext home thread */
     LuringQueue io_q;
 
-    /* I/O completion processing.  Only runs in I/O thread.  */
     QEMUBH *completion_bh;
 } LuringState;
 
@@ -209,6 +211,7 @@ end:
          * eventually runs later. Coroutines cannot be entered recursively
          * so avoid doing that!
          */
+        assert(luringcb->co->ctx == luringcb->aio_context);
         if (!qemu_coroutine_entered(luringcb->co)) {
             aio_co_wake(luringcb->co);
         }
@@ -262,13 +265,11 @@ static int ioq_submit(LuringState *s)
 
 static void luring_process_completions_and_submit(LuringState *s)
 {
-    aio_context_acquire(s->aio_context);
     luring_process_completions(s);
 
     if (!s->io_q.plugged && s->io_q.in_queue > 0) {
         ioq_submit(s);
     }
-    aio_context_release(s->aio_context);
 }
 
 static void qemu_luring_completion_bh(void *opaque)
@@ -306,14 +307,18 @@ static void ioq_init(LuringQueue *io_q)
     io_q->blocked = false;
 }
 
-void luring_io_plug(BlockDriverState *bs, LuringState *s)
+void luring_io_plug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
     trace_luring_io_plug(s);
     s->io_q.plugged++;
 }
 
-void luring_io_unplug(BlockDriverState *bs, LuringState *s)
+void luring_io_unplug(void)
 {
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
     assert(s->io_q.plugged);
     trace_luring_io_unplug(s, s->io_q.blocked, s->io_q.plugged,
                            s->io_q.in_queue, s->io_q.in_flight);
@@ -373,10 +378,12 @@ static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
     return 0;
 }
 
-int coroutine_fn luring_co_submit(BlockDriverState *bs, LuringState *s, int fd,
-                                  uint64_t offset, QEMUIOVector *qiov, int type)
+int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
+                                  QEMUIOVector *qiov, int type)
 {
     int ret;
+    AioContext *ctx = qemu_get_current_aio_context();
+    LuringState *s = aio_get_linux_io_uring(ctx);
     LuringAIOCB luringcb = {
         .co         = qemu_coroutine_self(),
         .ret        = -EINPROGRESS,
-- 
2.39.1




* [PATCH v5 3/4] thread-pool: use ThreadPool from the running thread
  2023-02-03 13:17 [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
  2023-02-03 13:17 ` [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
  2023-02-03 13:17 ` [PATCH v5 2/4] io_uring: use LuringState " Emanuele Giuseppe Esposito
@ 2023-02-03 13:17 ` Emanuele Giuseppe Esposito
  2023-02-03 13:17 ` [PATCH v5 4/4] thread-pool: avoid passing the pool parameter every time Emanuele Giuseppe Esposito
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Emanuele Giuseppe Esposito @ 2023-02-03 13:17 UTC (permalink / raw)
  To: qemu-block
  Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefan Hajnoczi,
	Stefano Garzarella, Greg Kurz, Christian Schoenebeck,
	Daniel Henrique Barboza, Cédric Le Goater, David Gibson,
	Michael S. Tsirkin, Fam Zheng, Paolo Bonzini, qemu-devel,
	qemu-ppc, Emanuele Giuseppe Esposito

Use qemu_get_current_aio_context() where possible, since we always
submit work to the current thread anyway.

We also want to be sure that the thread submitting the work is
the same as the one processing the pool, to avoid adding
synchronization to the pool list.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/block/thread-pool.h |  5 +++++
 block/file-posix.c          | 21 ++++++++++-----------
 block/file-win32.c          |  2 +-
 block/qcow2-threads.c       |  2 +-
 util/thread-pool.c          |  9 ++++-----
 5 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/include/block/thread-pool.h b/include/block/thread-pool.h
index 95ff2b0bdb..c408bde74c 100644
--- a/include/block/thread-pool.h
+++ b/include/block/thread-pool.h
@@ -29,12 +29,17 @@ typedef struct ThreadPool ThreadPool;
 ThreadPool *thread_pool_new(struct AioContext *ctx);
 void thread_pool_free(ThreadPool *pool);
 
+/*
+ * thread_pool_submit* API: submit I/O requests in the thread's
+ * current AioContext.
+ */
 BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
         ThreadPoolFunc *func, void *arg,
         BlockCompletionFunc *cb, void *opaque);
 int coroutine_fn thread_pool_submit_co(ThreadPool *pool,
         ThreadPoolFunc *func, void *arg);
 void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg);
+
 void thread_pool_update_params(ThreadPool *pool, struct AioContext *ctx);
 
 #endif
diff --git a/block/file-posix.c b/block/file-posix.c
index b8ee58201c..f7d88fa857 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2040,11 +2040,10 @@ out:
     return result;
 }
 
-static int coroutine_fn raw_thread_pool_submit(BlockDriverState *bs,
-                                               ThreadPoolFunc func, void *arg)
+static int coroutine_fn raw_thread_pool_submit(ThreadPoolFunc func, void *arg)
 {
     /* @bs can be NULL, bdrv_get_aio_context() returns the main context then */
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_co(pool, func, arg);
 }
 
@@ -2112,7 +2111,7 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
     };
 
     assert(qiov->size == bytes);
-    return raw_thread_pool_submit(bs, handle_aiocb_rw, &acb);
+    return raw_thread_pool_submit(handle_aiocb_rw, &acb);
 }
 
 static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
@@ -2181,7 +2180,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
         return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH);
     }
 #endif
-    return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
+    return raw_thread_pool_submit(handle_aiocb_flush, &acb);
 }
 
 static void raw_aio_attach_aio_context(BlockDriverState *bs,
@@ -2243,7 +2242,7 @@ raw_regular_truncate(BlockDriverState *bs, int fd, int64_t offset,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_truncate, &acb);
+    return raw_thread_pool_submit(handle_aiocb_truncate, &acb);
 }
 
 static int coroutine_fn raw_co_truncate(BlockDriverState *bs, int64_t offset,
@@ -2993,7 +2992,7 @@ raw_do_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes,
         acb.aio_type |= QEMU_AIO_BLKDEV;
     }
 
-    ret = raw_thread_pool_submit(bs, handle_aiocb_discard, &acb);
+    ret = raw_thread_pool_submit(handle_aiocb_discard, &acb);
     raw_account_discard(s, bytes, ret);
     return ret;
 }
@@ -3068,7 +3067,7 @@ raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
         handler = handle_aiocb_write_zeroes;
     }
 
-    return raw_thread_pool_submit(bs, handler, &acb);
+    return raw_thread_pool_submit(handler, &acb);
 }
 
 static int coroutine_fn raw_co_pwrite_zeroes(
@@ -3279,7 +3278,7 @@ static int coroutine_fn raw_co_copy_range_to(BlockDriverState *bs,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_copy_range, &acb);
+    return raw_thread_pool_submit(handle_aiocb_copy_range, &acb);
 }
 
 BlockDriver bdrv_file = {
@@ -3609,7 +3608,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         struct sg_io_hdr *io_hdr = buf;
         if (io_hdr->cmdp[0] == PERSISTENT_RESERVE_OUT ||
             io_hdr->cmdp[0] == PERSISTENT_RESERVE_IN) {
-            return pr_manager_execute(s->pr_mgr, bdrv_get_aio_context(bs),
+            return pr_manager_execute(s->pr_mgr, qemu_get_current_aio_context(),
                                       s->fd, io_hdr);
         }
     }
@@ -3625,7 +3624,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_ioctl, &acb);
+    return raw_thread_pool_submit(handle_aiocb_ioctl, &acb);
 }
 #endif /* linux */
 
diff --git a/block/file-win32.c b/block/file-win32.c
index 12be9c3d0f..1af6d3c810 100644
--- a/block/file-win32.c
+++ b/block/file-win32.c
@@ -168,7 +168,7 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     acb->aio_offset = offset;
 
     trace_file_paio_submit(acb, opaque, offset, count, type);
-    pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);
 }
 
diff --git a/block/qcow2-threads.c b/block/qcow2-threads.c
index 953bbe6df8..6d2e6b7bf4 100644
--- a/block/qcow2-threads.c
+++ b/block/qcow2-threads.c
@@ -43,7 +43,7 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 
     qemu_co_mutex_lock(&s->lock);
     while (s->nb_threads >= QCOW2_MAX_THREADS) {
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 31113b5860..a70abb8a59 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -48,7 +48,7 @@ struct ThreadPoolElement {
     /* Access to this list is protected by lock.  */
     QTAILQ_ENTRY(ThreadPoolElement) reqs;
 
-    /* Access to this list is protected by the global mutex.  */
+    /* This list is only written by the thread pool's mother thread.  */
     QLIST_ENTRY(ThreadPoolElement) all;
 };
 
@@ -175,7 +175,6 @@ static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
-    aio_context_acquire(pool->ctx);
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -195,9 +194,7 @@ restart:
              */
             qemu_bh_schedule(pool->completion_bh);
 
-            aio_context_release(pool->ctx);
             elem->common.cb(elem->common.opaque, elem->ret);
-            aio_context_acquire(pool->ctx);
 
             /* We can safely cancel the completion_bh here regardless of someone
              * else having scheduled it meanwhile because we reenter the
@@ -211,7 +208,6 @@ restart:
             qemu_aio_unref(elem);
         }
     }
-    aio_context_release(pool->ctx);
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
@@ -251,6 +247,9 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
 {
     ThreadPoolElement *req;
 
+    /* Assert that the thread submitting work is the same running the pool */
+    assert(pool->ctx == qemu_get_current_aio_context());
+
     req = qemu_aio_get(&thread_pool_aiocb_info, NULL, cb, opaque);
     req->func = func;
     req->arg = arg;
-- 
2.39.1




* [PATCH v5 4/4] thread-pool: avoid passing the pool parameter every time
  2023-02-03 13:17 [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
                   ` (2 preceding siblings ...)
  2023-02-03 13:17 ` [PATCH v5 3/4] thread-pool: use ThreadPool " Emanuele Giuseppe Esposito
@ 2023-02-03 13:17 ` Emanuele Giuseppe Esposito
  2023-03-02 19:58 ` [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Stefan Hajnoczi
  2023-03-14 20:34 ` Kevin Wolf
  5 siblings, 0 replies; 14+ messages in thread
From: Emanuele Giuseppe Esposito @ 2023-02-03 13:17 UTC (permalink / raw)
  To: qemu-block
  Cc: Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefan Hajnoczi,
	Stefano Garzarella, Greg Kurz, Christian Schoenebeck,
	Daniel Henrique Barboza, Cédric Le Goater, David Gibson,
	Michael S. Tsirkin, Fam Zheng, Paolo Bonzini, qemu-devel,
	qemu-ppc, Emanuele Giuseppe Esposito

thread_pool_submit_aio() is always called on a pool taken from
qemu_get_current_aio_context(), and that is the only intended
use: each pool runs only in the same thread that is submitting
work to it; it cannot run anywhere else.

Therefore, simplify the thread_pool_submit* API and remove the
ThreadPool function parameter.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
---
 include/block/thread-pool.h   | 10 ++++------
 backends/tpm/tpm_backend.c    |  4 +---
 block/file-posix.c            |  4 +---
 block/file-win32.c            |  4 +---
 block/qcow2-threads.c         |  3 +--
 hw/9pfs/coth.c                |  3 +--
 hw/ppc/spapr_nvdimm.c         |  6 ++----
 hw/virtio/virtio-pmem.c       |  3 +--
 scsi/pr-manager.c             |  3 +--
 scsi/qemu-pr-helper.c         |  3 +--
 tests/unit/test-thread-pool.c | 12 +++++-------
 util/thread-pool.c            | 16 ++++++++--------
 12 files changed, 27 insertions(+), 44 deletions(-)

diff --git a/include/block/thread-pool.h b/include/block/thread-pool.h
index c408bde74c..948ff5f30c 100644
--- a/include/block/thread-pool.h
+++ b/include/block/thread-pool.h
@@ -33,12 +33,10 @@ void thread_pool_free(ThreadPool *pool);
  * thread_pool_submit* API: submit I/O requests in the thread's
  * current AioContext.
  */
-BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
-        ThreadPoolFunc *func, void *arg,
-        BlockCompletionFunc *cb, void *opaque);
-int coroutine_fn thread_pool_submit_co(ThreadPool *pool,
-        ThreadPoolFunc *func, void *arg);
-void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg);
+BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
+                                   BlockCompletionFunc *cb, void *opaque);
+int coroutine_fn thread_pool_submit_co(ThreadPoolFunc *func, void *arg);
+void thread_pool_submit(ThreadPoolFunc *func, void *arg);
 
 void thread_pool_update_params(ThreadPool *pool, struct AioContext *ctx);
 
diff --git a/backends/tpm/tpm_backend.c b/backends/tpm/tpm_backend.c
index 375587e743..485a20b9e0 100644
--- a/backends/tpm/tpm_backend.c
+++ b/backends/tpm/tpm_backend.c
@@ -100,8 +100,6 @@ bool tpm_backend_had_startup_error(TPMBackend *s)
 
 void tpm_backend_deliver_request(TPMBackend *s, TPMBackendCmd *cmd)
 {
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
-
     if (s->cmd != NULL) {
         error_report("There is a TPM request pending");
         return;
@@ -109,7 +107,7 @@ void tpm_backend_deliver_request(TPMBackend *s, TPMBackendCmd *cmd)
 
     s->cmd = cmd;
     object_ref(OBJECT(s));
-    thread_pool_submit_aio(pool, tpm_backend_worker_thread, s,
+    thread_pool_submit_aio(tpm_backend_worker_thread, s,
                            tpm_backend_request_completed, s);
 }
 
diff --git a/block/file-posix.c b/block/file-posix.c
index f7d88fa857..e4c433d071 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2042,9 +2042,7 @@ out:
 
 static int coroutine_fn raw_thread_pool_submit(ThreadPoolFunc func, void *arg)
 {
-    /* @bs can be NULL, bdrv_get_aio_context() returns the main context then */
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
-    return thread_pool_submit_co(pool, func, arg);
+    return thread_pool_submit_co(func, arg);
 }
 
 /*
diff --git a/block/file-win32.c b/block/file-win32.c
index 1af6d3c810..c4c9c985c8 100644
--- a/block/file-win32.c
+++ b/block/file-win32.c
@@ -153,7 +153,6 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
         BlockCompletionFunc *cb, void *opaque, int type)
 {
     RawWin32AIOData *acb = g_new(RawWin32AIOData, 1);
-    ThreadPool *pool;
 
     acb->bs = bs;
     acb->hfile = hfile;
@@ -168,8 +167,7 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     acb->aio_offset = offset;
 
     trace_file_paio_submit(acb, opaque, offset, count, type);
-    pool = aio_get_thread_pool(qemu_get_current_aio_context());
-    return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);
+    return thread_pool_submit_aio(aio_worker, acb, cb, opaque);
 }
 
 int qemu_ftruncate64(int fd, int64_t length)
diff --git a/block/qcow2-threads.c b/block/qcow2-threads.c
index 6d2e6b7bf4..d6071a1eae 100644
--- a/block/qcow2-threads.c
+++ b/block/qcow2-threads.c
@@ -43,7 +43,6 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 
     qemu_co_mutex_lock(&s->lock);
     while (s->nb_threads >= QCOW2_MAX_THREADS) {
@@ -52,7 +51,7 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
     s->nb_threads++;
     qemu_co_mutex_unlock(&s->lock);
 
-    ret = thread_pool_submit_co(pool, func, arg);
+    ret = thread_pool_submit_co(func, arg);
 
     qemu_co_mutex_lock(&s->lock);
     s->nb_threads--;
diff --git a/hw/9pfs/coth.c b/hw/9pfs/coth.c
index 2802d41cce..598f46add9 100644
--- a/hw/9pfs/coth.c
+++ b/hw/9pfs/coth.c
@@ -41,6 +41,5 @@ static int coroutine_enter_func(void *arg)
 void co_run_in_worker_bh(void *opaque)
 {
     Coroutine *co = opaque;
-    thread_pool_submit_aio(aio_get_thread_pool(qemu_get_aio_context()),
-                           coroutine_enter_func, co, coroutine_enter_cb, co);
+    thread_pool_submit_aio(coroutine_enter_func, co, coroutine_enter_cb, co);
 }
diff --git a/hw/ppc/spapr_nvdimm.c b/hw/ppc/spapr_nvdimm.c
index 04a64cada3..a8688243a6 100644
--- a/hw/ppc/spapr_nvdimm.c
+++ b/hw/ppc/spapr_nvdimm.c
@@ -496,7 +496,6 @@ static int spapr_nvdimm_flush_post_load(void *opaque, int version_id)
 {
     SpaprNVDIMMDevice *s_nvdimm = (SpaprNVDIMMDevice *)opaque;
     SpaprNVDIMMDeviceFlushState *state;
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
     HostMemoryBackend *backend = MEMORY_BACKEND(PC_DIMM(s_nvdimm)->hostmem);
     bool is_pmem = object_property_get_bool(OBJECT(backend), "pmem", NULL);
     bool pmem_override = object_property_get_bool(OBJECT(s_nvdimm),
@@ -517,7 +516,7 @@ static int spapr_nvdimm_flush_post_load(void *opaque, int version_id)
     }
 
     QLIST_FOREACH(state, &s_nvdimm->pending_nvdimm_flush_states, node) {
-        thread_pool_submit_aio(pool, flush_worker_cb, state,
+        thread_pool_submit_aio(flush_worker_cb, state,
                                spapr_nvdimm_flush_completion_cb, state);
     }
 
@@ -664,7 +663,6 @@ static target_ulong h_scm_flush(PowerPCCPU *cpu, SpaprMachineState *spapr,
     PCDIMMDevice *dimm;
     HostMemoryBackend *backend = NULL;
     SpaprNVDIMMDeviceFlushState *state;
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
     int fd;
 
     if (!drc || !drc->dev ||
@@ -699,7 +697,7 @@ static target_ulong h_scm_flush(PowerPCCPU *cpu, SpaprMachineState *spapr,
 
         state->drcidx = drc_index;
 
-        thread_pool_submit_aio(pool, flush_worker_cb, state,
+        thread_pool_submit_aio(flush_worker_cb, state,
                                spapr_nvdimm_flush_completion_cb, state);
 
         continue_token = state->continue_token;
diff --git a/hw/virtio/virtio-pmem.c b/hw/virtio/virtio-pmem.c
index dff402f08f..c3512c2dae 100644
--- a/hw/virtio/virtio-pmem.c
+++ b/hw/virtio/virtio-pmem.c
@@ -70,7 +70,6 @@ static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
     VirtIODeviceRequest *req_data;
     VirtIOPMEM *pmem = VIRTIO_PMEM(vdev);
     HostMemoryBackend *backend = MEMORY_BACKEND(pmem->memdev);
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
 
     trace_virtio_pmem_flush_request();
     req_data = virtqueue_pop(vq, sizeof(VirtIODeviceRequest));
@@ -88,7 +87,7 @@ static void virtio_pmem_flush(VirtIODevice *vdev, VirtQueue *vq)
     req_data->fd   = memory_region_get_fd(&backend->mr);
     req_data->pmem = pmem;
     req_data->vdev = vdev;
-    thread_pool_submit_aio(pool, worker_cb, req_data, done_cb, req_data);
+    thread_pool_submit_aio(worker_cb, req_data, done_cb, req_data);
 }
 
 static void virtio_pmem_get_config(VirtIODevice *vdev, uint8_t *config)
diff --git a/scsi/pr-manager.c b/scsi/pr-manager.c
index 2098d7e759..fb5fc29730 100644
--- a/scsi/pr-manager.c
+++ b/scsi/pr-manager.c
@@ -51,7 +51,6 @@ static int pr_manager_worker(void *opaque)
 int coroutine_fn pr_manager_execute(PRManager *pr_mgr, AioContext *ctx, int fd,
                                     struct sg_io_hdr *hdr)
 {
-    ThreadPool *pool = aio_get_thread_pool(ctx);
     PRManagerData data = {
         .pr_mgr = pr_mgr,
         .fd     = fd,
@@ -62,7 +61,7 @@ int coroutine_fn pr_manager_execute(PRManager *pr_mgr, AioContext *ctx, int fd,
 
     /* The matching object_unref is in pr_manager_worker.  */
     object_ref(OBJECT(pr_mgr));
-    return thread_pool_submit_co(pool, pr_manager_worker, &data);
+    return thread_pool_submit_co(pr_manager_worker, &data);
 }
 
 bool pr_manager_is_connected(PRManager *pr_mgr)
diff --git a/scsi/qemu-pr-helper.c b/scsi/qemu-pr-helper.c
index 196b78c00d..55888cd9ac 100644
--- a/scsi/qemu-pr-helper.c
+++ b/scsi/qemu-pr-helper.c
@@ -180,7 +180,6 @@ static int do_sgio_worker(void *opaque)
 static int do_sgio(int fd, const uint8_t *cdb, uint8_t *sense,
                     uint8_t *buf, int *sz, int dir)
 {
-    ThreadPool *pool = aio_get_thread_pool(qemu_get_aio_context());
     int r;
 
     PRHelperSGIOData data = {
@@ -192,7 +191,7 @@ static int do_sgio(int fd, const uint8_t *cdb, uint8_t *sense,
         .dir = dir,
     };
 
-    r = thread_pool_submit_co(pool, do_sgio_worker, &data);
+    r = thread_pool_submit_co(do_sgio_worker, &data);
     *sz = data.sz;
     return r;
 }
diff --git a/tests/unit/test-thread-pool.c b/tests/unit/test-thread-pool.c
index 6020e65d69..448fbf7e5f 100644
--- a/tests/unit/test-thread-pool.c
+++ b/tests/unit/test-thread-pool.c
@@ -8,7 +8,6 @@
 #include "qemu/main-loop.h"
 
 static AioContext *ctx;
-static ThreadPool *pool;
 static int active;
 
 typedef struct {
@@ -47,7 +46,7 @@ static void done_cb(void *opaque, int ret)
 static void test_submit(void)
 {
     WorkerTestData data = { .n = 0 };
-    thread_pool_submit(pool, worker_cb, &data);
+    thread_pool_submit(worker_cb, &data);
     while (data.n == 0) {
         aio_poll(ctx, true);
     }
@@ -57,7 +56,7 @@ static void test_submit(void)
 static void test_submit_aio(void)
 {
     WorkerTestData data = { .n = 0, .ret = -EINPROGRESS };
-    data.aiocb = thread_pool_submit_aio(pool, worker_cb, &data,
+    data.aiocb = thread_pool_submit_aio(worker_cb, &data,
                                         done_cb, &data);
 
     /* The callbacks are not called until after the first wait.  */
@@ -78,7 +77,7 @@ static void co_test_cb(void *opaque)
     active = 1;
     data->n = 0;
     data->ret = -EINPROGRESS;
-    thread_pool_submit_co(pool, worker_cb, data);
+    thread_pool_submit_co(worker_cb, data);
 
     /* The test continues in test_submit_co, after qemu_coroutine_enter... */
 
@@ -122,7 +121,7 @@ static void test_submit_many(void)
     for (i = 0; i < 100; i++) {
         data[i].n = 0;
         data[i].ret = -EINPROGRESS;
-        thread_pool_submit_aio(pool, worker_cb, &data[i], done_cb, &data[i]);
+        thread_pool_submit_aio(worker_cb, &data[i], done_cb, &data[i]);
     }
 
     active = 100;
@@ -150,7 +149,7 @@ static void do_test_cancel(bool sync)
     for (i = 0; i < 100; i++) {
         data[i].n = 0;
         data[i].ret = -EINPROGRESS;
-        data[i].aiocb = thread_pool_submit_aio(pool, long_cb, &data[i],
+        data[i].aiocb = thread_pool_submit_aio(long_cb, &data[i],
                                                done_cb, &data[i]);
     }
 
@@ -235,7 +234,6 @@ int main(int argc, char **argv)
 {
     qemu_init_main_loop(&error_abort);
     ctx = qemu_get_current_aio_context();
-    pool = aio_get_thread_pool(ctx);
 
     g_test_init(&argc, &argv, NULL);
     g_test_add_func("/thread-pool/submit", test_submit);
diff --git a/util/thread-pool.c b/util/thread-pool.c
index a70abb8a59..0d97888df0 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -241,11 +241,12 @@ static const AIOCBInfo thread_pool_aiocb_info = {
     .get_aio_context    = thread_pool_get_aio_context,
 };
 
-BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
-        ThreadPoolFunc *func, void *arg,
-        BlockCompletionFunc *cb, void *opaque)
+BlockAIOCB *thread_pool_submit_aio(ThreadPoolFunc *func, void *arg,
+                                   BlockCompletionFunc *cb, void *opaque)
 {
     ThreadPoolElement *req;
+    AioContext *ctx = qemu_get_current_aio_context();
+    ThreadPool *pool = aio_get_thread_pool(ctx);
 
     /* Assert that the thread submitting work is the same running the pool */
     assert(pool->ctx == qemu_get_current_aio_context());
@@ -283,19 +284,18 @@ static void thread_pool_co_cb(void *opaque, int ret)
     aio_co_wake(co->co);
 }
 
-int coroutine_fn thread_pool_submit_co(ThreadPool *pool, ThreadPoolFunc *func,
-                                       void *arg)
+int coroutine_fn thread_pool_submit_co(ThreadPoolFunc *func, void *arg)
 {
     ThreadPoolCo tpc = { .co = qemu_coroutine_self(), .ret = -EINPROGRESS };
     assert(qemu_in_coroutine());
-    thread_pool_submit_aio(pool, func, arg, thread_pool_co_cb, &tpc);
+    thread_pool_submit_aio(func, arg, thread_pool_co_cb, &tpc);
     qemu_coroutine_yield();
     return tpc.ret;
 }
 
-void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg)
+void thread_pool_submit(ThreadPoolFunc *func, void *arg)
 {
-    thread_pool_submit_aio(pool, func, arg, NULL, NULL);
+    thread_pool_submit_aio(func, arg, NULL, NULL);
 }
 
 void thread_pool_update_params(ThreadPool *pool, AioContext *ctx)
-- 
2.39.1



^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-02-03 13:17 ` [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
@ 2023-03-01 16:16   ` Stefan Hajnoczi
  2023-03-07  8:48     ` Kevin Wolf
  0 siblings, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2023-03-01 16:16 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito, Kevin Wolf, Paolo Bonzini
  Cc: qemu-block, Stefan Berger, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefano Garzarella, Greg Kurz,
	Christian Schoenebeck, Daniel Henrique Barboza,
	Cédric Le Goater, David Gibson, Michael S. Tsirkin,
	Fam Zheng, qemu-devel, qemu-ppc

[-- Attachment #1: Type: text/plain, Size: 5442 bytes --]

On Fri, Feb 03, 2023 at 08:17:28AM -0500, Emanuele Giuseppe Esposito wrote:
> Remove usage of aio_context_acquire by always submitting asynchronous
> AIO to the current thread's LinuxAioState.
> 
> In order to prevent mistakes from the caller side, avoid passing LinuxAioState
> in laio_io_{plug/unplug} and laio_co_submit, and document the functions
> to make clear that they work in the current thread's AioContext.
> 
> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> ---
>  include/block/aio.h               |  4 ----
>  include/block/raw-aio.h           | 18 ++++++++++++------
>  include/sysemu/block-backend-io.h |  6 ++++++
>  block/file-posix.c                | 10 +++-------
>  block/linux-aio.c                 | 29 +++++++++++++++++------------
>  5 files changed, 38 insertions(+), 29 deletions(-)
> 
> diff --git a/include/block/aio.h b/include/block/aio.h
> index 8fba6a3584..b6b396cfcb 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -208,10 +208,6 @@ struct AioContext {
>      struct ThreadPool *thread_pool;
>  
>  #ifdef CONFIG_LINUX_AIO
> -    /*
> -     * State for native Linux AIO.  Uses aio_context_acquire/release for
> -     * locking.
> -     */
>      struct LinuxAioState *linux_aio;
>  #endif
>  #ifdef CONFIG_LINUX_IO_URING
> diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
> index f8cda9df91..db614472e6 100644
> --- a/include/block/raw-aio.h
> +++ b/include/block/raw-aio.h
> @@ -49,14 +49,20 @@
>  typedef struct LinuxAioState LinuxAioState;
>  LinuxAioState *laio_init(Error **errp);
>  void laio_cleanup(LinuxAioState *s);
> -int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
> -                                uint64_t offset, QEMUIOVector *qiov, int type,
> -                                uint64_t dev_max_batch);
> +
> +/* laio_co_submit: submit I/O requests in the thread's current AioContext. */
> +int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
> +                                int type, uint64_t dev_max_batch);
> +
>  void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
>  void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
> -void laio_io_plug(BlockDriverState *bs, LinuxAioState *s);
> -void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
> -                    uint64_t dev_max_batch);
> +
> +/*
> + * laio_io_plug/unplug work in the thread's current AioContext, therefore the
> + * caller must ensure that they are paired in the same IOThread.
> + */
> +void laio_io_plug(void);
> +void laio_io_unplug(uint64_t dev_max_batch);
>  #endif
>  /* io_uring.c - Linux io_uring implementation */
>  #ifdef CONFIG_LINUX_IO_URING
> diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
> index 031a27ba10..d41698ccc5 100644
> --- a/include/sysemu/block-backend-io.h
> +++ b/include/sysemu/block-backend-io.h
> @@ -74,8 +74,14 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
>  int blk_get_max_iov(BlockBackend *blk);
>  int blk_get_max_hw_iov(BlockBackend *blk);
>  
> +/*
> + * blk_io_plug/unplug are thread-local operations. This means that multiple
> + * IOThreads can simultaneously call plug/unplug, but the caller must ensure
> + * that each unplug() is called in the same IOThread of the matching plug().
> + */
>  void blk_io_plug(BlockBackend *blk);
>  void blk_io_unplug(BlockBackend *blk);
> +
>  AioContext *blk_get_aio_context(BlockBackend *blk);
>  BlockAcctStats *blk_get_stats(BlockBackend *blk);
>  void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
> diff --git a/block/file-posix.c b/block/file-posix.c
> index fa227d9d14..fa99d1c25a 100644
> --- a/block/file-posix.c
> +++ b/block/file-posix.c
> @@ -2095,10 +2095,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
>  #endif
>  #ifdef CONFIG_LINUX_AIO
>      } else if (s->use_linux_aio) {
> -        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
>          assert(qiov->size == bytes);
> -        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
> -                              s->aio_max_batch);
> +        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);

I'm having second thoughts here. This is correct in an IOThread today,
but the main loop thread case concerns me:

This patch changes behavior when the main loop or vCPU thread submits
I/O. Before, the IOThread's LinuxAioState would be used. Now the main
loop's LinuxAioState will be used instead and aio callbacks will be
invoked in the main loop thread instead of the IOThread.

This change will be fine when QEMU block layer support is complete, but
does it already work today?

When blk_preadv() is called from a non-coroutine in the main loop thread
then the coroutine is spawned in the IOThread today. So we avoid the
issue.

But when blk_preadv() is called from a coroutine in the main loop thread
we'll have multi-queue activity (I/O being processed in both the main
loop thread and IOThread).

I like this patch series and think it's the right thing to do, but I'm
not sure if it's safe to do this yet. We first need to be sure all aio
callbacks are thread-safe (many are already, but there are probably still
some that are not).

Stefan

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool
  2023-02-03 13:17 [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
                   ` (3 preceding siblings ...)
  2023-02-03 13:17 ` [PATCH v5 4/4] thread-pool: avoid passing the pool parameter every time Emanuele Giuseppe Esposito
@ 2023-03-02 19:58 ` Stefan Hajnoczi
  2023-03-14 20:34 ` Kevin Wolf
  5 siblings, 0 replies; 14+ messages in thread
From: Stefan Hajnoczi @ 2023-03-02 19:58 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: qemu-block, Stefan Berger, Kevin Wolf, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefano Garzarella, Greg Kurz,
	Christian Schoenebeck, Daniel Henrique Barboza,
	Cédric Le Goater, David Gibson, Michael S. Tsirkin,
	Fam Zheng, Paolo Bonzini, qemu-devel, qemu-ppc


On Fri, Feb 03, 2023 at 08:17:27AM -0500, Emanuele Giuseppe Esposito wrote:
> Just remove some AioContext lock in LinuxAioState and ThreadPool.
> Not related to anything specific, so I decided to send it as
> a separate patch.
> 
> These patches are taken from Paolo's old draft series.

Despite the concerns that I mentioned, an x86 guest booted up and ran
fio benchmarks fine in various configurations
(aio=threads/native/io_uring, iothread on/off).

Stefan



* Re: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-03-01 16:16   ` Stefan Hajnoczi
@ 2023-03-07  8:48     ` Kevin Wolf
  2023-03-07 10:58       ` Paolo Bonzini
  2023-03-07 14:18       ` Stefan Hajnoczi
  0 siblings, 2 replies; 14+ messages in thread
From: Kevin Wolf @ 2023-03-07  8:48 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Emanuele Giuseppe Esposito, Paolo Bonzini, qemu-block,
	Stefan Berger, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Greg Kurz,
	Christian Schoenebeck, Daniel Henrique Barboza,
	Cédric Le Goater, David Gibson, Michael S. Tsirkin,
	Fam Zheng, qemu-devel, qemu-ppc


Am 01.03.2023 um 17:16 hat Stefan Hajnoczi geschrieben:
> On Fri, Feb 03, 2023 at 08:17:28AM -0500, Emanuele Giuseppe Esposito wrote:
> > Remove usage of aio_context_acquire by always submitting asynchronous
> > AIO to the current thread's LinuxAioState.
> > 
> > In order to prevent mistakes from the caller side, avoid passing LinuxAioState
> > in laio_io_{plug/unplug} and laio_co_submit, and document the functions
> > to make clear that they work in the current thread's AioContext.
> > 
> > Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> > ---
> >  include/block/aio.h               |  4 ----
> >  include/block/raw-aio.h           | 18 ++++++++++++------
> >  include/sysemu/block-backend-io.h |  6 ++++++
> >  block/file-posix.c                | 10 +++-------
> >  block/linux-aio.c                 | 29 +++++++++++++++++------------
> >  5 files changed, 38 insertions(+), 29 deletions(-)
> > 
> > diff --git a/include/block/aio.h b/include/block/aio.h
> > index 8fba6a3584..b6b396cfcb 100644
> > --- a/include/block/aio.h
> > +++ b/include/block/aio.h
> > @@ -208,10 +208,6 @@ struct AioContext {
> >      struct ThreadPool *thread_pool;
> >  
> >  #ifdef CONFIG_LINUX_AIO
> > -    /*
> > -     * State for native Linux AIO.  Uses aio_context_acquire/release for
> > -     * locking.
> > -     */
> >      struct LinuxAioState *linux_aio;
> >  #endif
> >  #ifdef CONFIG_LINUX_IO_URING
> > diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
> > index f8cda9df91..db614472e6 100644
> > --- a/include/block/raw-aio.h
> > +++ b/include/block/raw-aio.h
> > @@ -49,14 +49,20 @@
> >  typedef struct LinuxAioState LinuxAioState;
> >  LinuxAioState *laio_init(Error **errp);
> >  void laio_cleanup(LinuxAioState *s);
> > -int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
> > -                                uint64_t offset, QEMUIOVector *qiov, int type,
> > -                                uint64_t dev_max_batch);
> > +
> > +/* laio_co_submit: submit I/O requests in the thread's current AioContext. */
> > +int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
> > +                                int type, uint64_t dev_max_batch);
> > +
> >  void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
> >  void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
> > -void laio_io_plug(BlockDriverState *bs, LinuxAioState *s);
> > -void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
> > -                    uint64_t dev_max_batch);
> > +
> > +/*
> > + * laio_io_plug/unplug work in the thread's current AioContext, therefore the
> > + * caller must ensure that they are paired in the same IOThread.
> > + */
> > +void laio_io_plug(void);
> > +void laio_io_unplug(uint64_t dev_max_batch);
> >  #endif
> >  /* io_uring.c - Linux io_uring implementation */
> >  #ifdef CONFIG_LINUX_IO_URING
> > diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
> > index 031a27ba10..d41698ccc5 100644
> > --- a/include/sysemu/block-backend-io.h
> > +++ b/include/sysemu/block-backend-io.h
> > @@ -74,8 +74,14 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
> >  int blk_get_max_iov(BlockBackend *blk);
> >  int blk_get_max_hw_iov(BlockBackend *blk);
> >  
> > +/*
> > + * blk_io_plug/unplug are thread-local operations. This means that multiple
> > + * IOThreads can simultaneously call plug/unplug, but the caller must ensure
> > + * that each unplug() is called in the same IOThread of the matching plug().
> > + */
> >  void blk_io_plug(BlockBackend *blk);
> >  void blk_io_unplug(BlockBackend *blk);
> > +
> >  AioContext *blk_get_aio_context(BlockBackend *blk);
> >  BlockAcctStats *blk_get_stats(BlockBackend *blk);
> >  void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
> > diff --git a/block/file-posix.c b/block/file-posix.c
> > index fa227d9d14..fa99d1c25a 100644
> > --- a/block/file-posix.c
> > +++ b/block/file-posix.c
> > @@ -2095,10 +2095,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
> >  #endif
> >  #ifdef CONFIG_LINUX_AIO
> >      } else if (s->use_linux_aio) {
> > -        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
> >          assert(qiov->size == bytes);
> > -        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
> > -                              s->aio_max_batch);
> > +        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);
> 
> I'm having second thoughts here. This is correct in an IOThread today,
> but the main loop thread case concerns me:
> 
> This patch changes behavior when the main loop or vCPU thread submits
> I/O. Before, the IOThread's LinuxAioState would be used. Now the main
> loop's LinuxAioState will be used instead and aio callbacks will be
> invoked in the main loop thread instead of the IOThread.

You mean we have a device that has a separate iothread, but a request is
submitted from the main thread? This isn't even allowed today; if a node
is in an iothread, all I/O must be submitted from that iothread. Do you
know any code that does submit I/O from the main thread instead?

> This change will be fine when QEMU block layer support is complete, but
> does it already work today?
> 
> When blk_preadv() is called from a non-coroutine in the main loop thread
> then the coroutine is spawned in the IOThread today. So we avoid the
> issue.
> 
> But when blk_preadv() is called from a coroutine in the main loop thread
> we'll have multi-queue activity (I/O being processed in both the main
> loop thread and IOThread).

That's a bug then. But calling blk_*() from coroutine context should be
quite rare anyway in the current code. I can think of .run in the block
jobs and possibly some exports.

Actually, we may have a bug in the export code. blk_exp_add() enables
support for changing iothreads only depending on whether the user
requested it, but doesn't check if the export driver actually supports
it. Most do, but FUSE just ignores AioContext changes (it does use the
initial iothread of the node, though, not always the main thread).

> I like this patch series and think it's the right thing to do, but I'm
> not sure if it's safe to do this yet. We first need to be sure all aio
> callbacks are thread-safe (many are already, but there are probably
> still some that are not).

I would argue that if we do have buggy code like this, the new code is
probably better than the old one because getting callbacks scheduled in
a different thread is the more surprising behaviour. It's probably done
by code that doesn't expect to ever run in iothreads, so staying in the
main loop certainly feels safer.

Kevin



* Re: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-03-07  8:48     ` Kevin Wolf
@ 2023-03-07 10:58       ` Paolo Bonzini
  2023-03-07 12:17         ` Kevin Wolf
  2023-03-07 14:18       ` Stefan Hajnoczi
  1 sibling, 1 reply; 14+ messages in thread
From: Paolo Bonzini @ 2023-03-07 10:58 UTC (permalink / raw)
  To: Kevin Wolf, Stefan Hajnoczi
  Cc: Emanuele Giuseppe Esposito, qemu-block, Stefan Berger,
	Hanna Reitz, Stefan Weil, Aarushi Mehta, Julia Suvorova,
	Stefano Garzarella, Greg Kurz, Christian Schoenebeck,
	Daniel Henrique Barboza, Cédric Le Goater, David Gibson,
	Michael S. Tsirkin, Fam Zheng, qemu-devel, qemu-ppc

On 3/7/23 09:48, Kevin Wolf wrote:
> You mean we have a device that has a separate iothread, but a request is
> submitted from the main thread? This isn't even allowed today; if a node
> is in an iothread, all I/O must be submitted from that iothread. Do you
> know any code that does submit I/O from the main thread instead?

I think it is allowed, you just have to take the AioContext lock around
the bdrv_* calls? For example, it could happen when you do block device migration.

Paolo




* Re: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-03-07 10:58       ` Paolo Bonzini
@ 2023-03-07 12:17         ` Kevin Wolf
  0 siblings, 0 replies; 14+ messages in thread
From: Kevin Wolf @ 2023-03-07 12:17 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: Stefan Hajnoczi, Emanuele Giuseppe Esposito, qemu-block,
	Stefan Berger, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Greg Kurz,
	Christian Schoenebeck, Daniel Henrique Barboza,
	Cédric Le Goater, David Gibson, Michael S. Tsirkin,
	Fam Zheng, qemu-devel, qemu-ppc

Am 07.03.2023 um 11:58 hat Paolo Bonzini geschrieben:
> On 3/7/23 09:48, Kevin Wolf wrote:
> > You mean we have a device that has a separate iothread, but a request is
> > submitted from the main thread? This isn't even allowed today; if a node
> > is in an iothread, all I/O must be submitted from that iothread. Do you
> > know any code that does submit I/O from the main thread instead?
> 
> I think it is allowed, you just have to take the AioContext lock around the
> bdrv_*?

Didn't we say at some point that we don't want to do this either? Though
maybe it's not strictly forbidden then.

> For example it could happen when you do block device migration.

As in migration/block.c? As far as I can tell, all of the requests made
there are actually processed in the iothread. (blk_aio_*() calls the
callback in the iothread even when it was called from the main thread
itself, which feels like a trap, but it shouldn't be affected by this
change lower in the stack.)

The potentially critical code would be coroutine_fns that call
blk_co_*() or bdrv_co_*() directly while running in a different thread.
Everything else schedules a new coroutine in the AioContext of the block
node.

Kevin




* Re: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-03-07  8:48     ` Kevin Wolf
  2023-03-07 10:58       ` Paolo Bonzini
@ 2023-03-07 14:18       ` Stefan Hajnoczi
  2023-03-08 11:42         ` Kevin Wolf
  1 sibling, 1 reply; 14+ messages in thread
From: Stefan Hajnoczi @ 2023-03-07 14:18 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Emanuele Giuseppe Esposito, Paolo Bonzini, qemu-block,
	Stefan Berger, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Greg Kurz,
	Christian Schoenebeck, Daniel Henrique Barboza,
	Cédric Le Goater, David Gibson, Michael S. Tsirkin,
	Fam Zheng, qemu-devel, qemu-ppc


On Tue, Mar 07, 2023 at 09:48:51AM +0100, Kevin Wolf wrote:
> Am 01.03.2023 um 17:16 hat Stefan Hajnoczi geschrieben:
> > On Fri, Feb 03, 2023 at 08:17:28AM -0500, Emanuele Giuseppe Esposito wrote:
> > > Remove usage of aio_context_acquire by always submitting asynchronous
> > > AIO to the current thread's LinuxAioState.
> > > 
> > > In order to prevent mistakes from the caller side, avoid passing LinuxAioState
> > > in laio_io_{plug/unplug} and laio_co_submit, and document the functions
> > > to make clear that they work in the current thread's AioContext.
> > > 
> > > Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> > > ---
> > >  include/block/aio.h               |  4 ----
> > >  include/block/raw-aio.h           | 18 ++++++++++++------
> > >  include/sysemu/block-backend-io.h |  6 ++++++
> > >  block/file-posix.c                | 10 +++-------
> > >  block/linux-aio.c                 | 29 +++++++++++++++++------------
> > >  5 files changed, 38 insertions(+), 29 deletions(-)
> > > 
> > > diff --git a/include/block/aio.h b/include/block/aio.h
> > > index 8fba6a3584..b6b396cfcb 100644
> > > --- a/include/block/aio.h
> > > +++ b/include/block/aio.h
> > > @@ -208,10 +208,6 @@ struct AioContext {
> > >      struct ThreadPool *thread_pool;
> > >  
> > >  #ifdef CONFIG_LINUX_AIO
> > > -    /*
> > > -     * State for native Linux AIO.  Uses aio_context_acquire/release for
> > > -     * locking.
> > > -     */
> > >      struct LinuxAioState *linux_aio;
> > >  #endif
> > >  #ifdef CONFIG_LINUX_IO_URING
> > > diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
> > > index f8cda9df91..db614472e6 100644
> > > --- a/include/block/raw-aio.h
> > > +++ b/include/block/raw-aio.h
> > > @@ -49,14 +49,20 @@
> > >  typedef struct LinuxAioState LinuxAioState;
> > >  LinuxAioState *laio_init(Error **errp);
> > >  void laio_cleanup(LinuxAioState *s);
> > > -int coroutine_fn laio_co_submit(BlockDriverState *bs, LinuxAioState *s, int fd,
> > > -                                uint64_t offset, QEMUIOVector *qiov, int type,
> > > -                                uint64_t dev_max_batch);
> > > +
> > > +/* laio_co_submit: submit I/O requests in the thread's current AioContext. */
> > > +int coroutine_fn laio_co_submit(int fd, uint64_t offset, QEMUIOVector *qiov,
> > > +                                int type, uint64_t dev_max_batch);
> > > +
> > >  void laio_detach_aio_context(LinuxAioState *s, AioContext *old_context);
> > >  void laio_attach_aio_context(LinuxAioState *s, AioContext *new_context);
> > > -void laio_io_plug(BlockDriverState *bs, LinuxAioState *s);
> > > -void laio_io_unplug(BlockDriverState *bs, LinuxAioState *s,
> > > -                    uint64_t dev_max_batch);
> > > +
> > > +/*
> > > + * laio_io_plug/unplug work in the thread's current AioContext, therefore the
> > > + * caller must ensure that they are paired in the same IOThread.
> > > + */
> > > +void laio_io_plug(void);
> > > +void laio_io_unplug(uint64_t dev_max_batch);
> > >  #endif
> > >  /* io_uring.c - Linux io_uring implementation */
> > >  #ifdef CONFIG_LINUX_IO_URING
> > > diff --git a/include/sysemu/block-backend-io.h b/include/sysemu/block-backend-io.h
> > > index 031a27ba10..d41698ccc5 100644
> > > --- a/include/sysemu/block-backend-io.h
> > > +++ b/include/sysemu/block-backend-io.h
> > > @@ -74,8 +74,14 @@ void blk_iostatus_set_err(BlockBackend *blk, int error);
> > >  int blk_get_max_iov(BlockBackend *blk);
> > >  int blk_get_max_hw_iov(BlockBackend *blk);
> > >  
> > > +/*
> > > + * blk_io_plug/unplug are thread-local operations. This means that multiple
> > > + * IOThreads can simultaneously call plug/unplug, but the caller must ensure
> > > + * that each unplug() is called in the same IOThread of the matching plug().
> > > + */
> > >  void blk_io_plug(BlockBackend *blk);
> > >  void blk_io_unplug(BlockBackend *blk);
> > > +
> > >  AioContext *blk_get_aio_context(BlockBackend *blk);
> > >  BlockAcctStats *blk_get_stats(BlockBackend *blk);
> > >  void *blk_aio_get(const AIOCBInfo *aiocb_info, BlockBackend *blk,
> > > diff --git a/block/file-posix.c b/block/file-posix.c
> > > index fa227d9d14..fa99d1c25a 100644
> > > --- a/block/file-posix.c
> > > +++ b/block/file-posix.c
> > > @@ -2095,10 +2095,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
> > >  #endif
> > >  #ifdef CONFIG_LINUX_AIO
> > >      } else if (s->use_linux_aio) {
> > > -        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
> > >          assert(qiov->size == bytes);
> > > -        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
> > > -                              s->aio_max_batch);
> > > +        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);
> > 
> > I'm having second thoughts here. This is correct in an IOThread today,
> > but the main loop thread case concerns me:
> > 
> > This patch changes behavior when the main loop or vCPU thread submits
> > I/O. Before, the IOThread's LinuxAioState would be used. Now the main
> > loop's LinuxAioState will be used instead and aio callbacks will be
> > invoked in the main loop thread instead of the IOThread.
> 
> You mean we have a device that has a separate iothread, but a request is
> submitted from the main thread? This isn't even allowed today; if a node
> is in an iothread, all I/O must be submitted from that iothread. Do you
> know any code that does submit I/O from the main thread instead?

I think you're right. My mental model was outdated. Both the coroutine
and non-coroutine code paths schedule coroutines in the AioContext.

However, I think this patch series is still risky because it could
reveal latent bugs. Let's merge it in the next development cycle (soft
freeze is today!) to avoid destabilizing 8.0.

Stefan


* Re: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-03-07 14:18       ` Stefan Hajnoczi
@ 2023-03-08 11:42         ` Kevin Wolf
  2023-03-08 17:24           ` Stefan Hajnoczi
  0 siblings, 1 reply; 14+ messages in thread
From: Kevin Wolf @ 2023-03-08 11:42 UTC (permalink / raw)
  To: Stefan Hajnoczi
  Cc: Emanuele Giuseppe Esposito, Paolo Bonzini, qemu-block,
	Stefan Berger, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Greg Kurz,
	Christian Schoenebeck, Daniel Henrique Barboza,
	Cédric Le Goater, David Gibson, Michael S. Tsirkin,
	Fam Zheng, qemu-devel, qemu-ppc

Am 07.03.2023 um 15:18 hat Stefan Hajnoczi geschrieben:
> On Tue, Mar 07, 2023 at 09:48:51AM +0100, Kevin Wolf wrote:
> > Am 01.03.2023 um 17:16 hat Stefan Hajnoczi geschrieben:
> > > On Fri, Feb 03, 2023 at 08:17:28AM -0500, Emanuele Giuseppe Esposito wrote:
> > > > Remove usage of aio_context_acquire by always submitting asynchronous
> > > > AIO to the current thread's LinuxAioState.
> > > > 
> > > > [...]
> > > > 
> > > > diff --git a/block/file-posix.c b/block/file-posix.c
> > > > index fa227d9d14..fa99d1c25a 100644
> > > > --- a/block/file-posix.c
> > > > +++ b/block/file-posix.c
> > > > @@ -2095,10 +2095,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
> > > >  #endif
> > > >  #ifdef CONFIG_LINUX_AIO
> > > >      } else if (s->use_linux_aio) {
> > > > -        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
> > > >          assert(qiov->size == bytes);
> > > > -        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
> > > > -                              s->aio_max_batch);
> > > > +        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);
> > > 
> > > I'm having second thoughts here. This is correct in an IOThread today,
> > > but the main loop thread case concerns me:
> > > 
> > > This patch changes behavior when the main loop or vCPU thread submits
> > > I/O. Before, the IOThread's LinuxAioState would be used. Now the main
> > > loop's LinuxAioState will be used instead and aio callbacks will be
> > > invoked in the main loop thread instead of the IOThread.
> > 
> > You mean we have a device that has a separate iothread, but a request is
> > submitted from the main thread? This isn't even allowed today; if a node
> > is in an iothread, all I/O must be submitted from that iothread. Do you
> > know any code that does submit I/O from the main thread instead?
> 
> I think you're right. My mental model was outdated. Both the coroutine
> and non-coroutine code paths schedule coroutines in the AioContext.
> 
> However, I think this patch series is still risky because it could
> reveal latent bugs. Let's merge it in the next development cycle (soft
> freeze is today!) to avoid destabilizing 8.0.

Makes sense, I've already started a block-next anyway.

So is this an R-b or A-b or nothing for now?

Kevin


* Re: [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread
  2023-03-08 11:42         ` Kevin Wolf
@ 2023-03-08 17:24           ` Stefan Hajnoczi
  0 siblings, 0 replies; 14+ messages in thread
From: Stefan Hajnoczi @ 2023-03-08 17:24 UTC (permalink / raw)
  To: Kevin Wolf
  Cc: Emanuele Giuseppe Esposito, Paolo Bonzini, qemu-block,
	Stefan Berger, Hanna Reitz, Stefan Weil, Aarushi Mehta,
	Julia Suvorova, Stefano Garzarella, Greg Kurz,
	Christian Schoenebeck, Daniel Henrique Barboza,
	Cédric Le Goater, David Gibson, Michael S. Tsirkin,
	Fam Zheng, qemu-devel, qemu-ppc

On Wed, Mar 08, 2023 at 12:42:11PM +0100, Kevin Wolf wrote:
> Am 07.03.2023 um 15:18 hat Stefan Hajnoczi geschrieben:
> > On Tue, Mar 07, 2023 at 09:48:51AM +0100, Kevin Wolf wrote:
> > > Am 01.03.2023 um 17:16 hat Stefan Hajnoczi geschrieben:
> > > > On Fri, Feb 03, 2023 at 08:17:28AM -0500, Emanuele Giuseppe Esposito wrote:
> > > > > Remove usage of aio_context_acquire by always submitting asynchronous
> > > > > AIO to the current thread's LinuxAioState.
> > > > > 
> > > > > [...]
> > > > > 
> > > > > diff --git a/block/file-posix.c b/block/file-posix.c
> > > > > index fa227d9d14..fa99d1c25a 100644
> > > > > --- a/block/file-posix.c
> > > > > +++ b/block/file-posix.c
> > > > > @@ -2095,10 +2095,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
> > > > >  #endif
> > > > >  #ifdef CONFIG_LINUX_AIO
> > > > >      } else if (s->use_linux_aio) {
> > > > > -        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
> > > > >          assert(qiov->size == bytes);
> > > > > -        return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
> > > > > -                              s->aio_max_batch);
> > > > > +        return laio_co_submit(s->fd, offset, qiov, type, s->aio_max_batch);
> > > > 
> > > > I'm having second thoughts here. This is correct in an IOThread today,
> > > > but the main loop thread case concerns me:
> > > > 
> > > > This patch changes behavior when the main loop or vCPU thread submits
> > > > I/O. Before, the IOThread's LinuxAioState would be used. Now the main
> > > > loop's LinuxAioState will be used instead and aio callbacks will be
> > > > invoked in the main loop thread instead of the IOThread.
> > > 
> > > You mean we have a device that has a separate iothread, but a request is
> > > submitted from the main thread? This isn't even allowed today; if a node
> > > is in an iothread, all I/O must be submitted from that iothread. Do you
> > > know any code that does submit I/O from the main thread instead?
> > 
> > I think you're right. My mental model was outdated. Both the coroutine
> > and non-coroutine code paths schedule coroutines in the AioContext.
> > 
> > However, I think this patch series is still risky because it could
> > reveal latent bugs. Let's merge it in the next development cycle (soft
> > freeze is today!) to avoid destabilizing 8.0.
> 
> Makes sense, I've already started a block-next anyway.
> 
> So is this an R-b or A-b or nothing for now?

I'm happy with it and I've read the code:

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>


* Re: [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool
  2023-02-03 13:17 [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
                   ` (4 preceding siblings ...)
  2023-03-02 19:58 ` [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Stefan Hajnoczi
@ 2023-03-14 20:34 ` Kevin Wolf
  5 siblings, 0 replies; 14+ messages in thread
From: Kevin Wolf @ 2023-03-14 20:34 UTC (permalink / raw)
  To: Emanuele Giuseppe Esposito
  Cc: qemu-block, Stefan Berger, Hanna Reitz, Stefan Weil,
	Aarushi Mehta, Julia Suvorova, Stefan Hajnoczi,
	Stefano Garzarella, Greg Kurz, Christian Schoenebeck,
	Daniel Henrique Barboza, Cédric Le Goater, David Gibson,
	Michael S. Tsirkin, Fam Zheng, Paolo Bonzini, qemu-devel,
	qemu-ppc

Am 03.02.2023 um 14:17 hat Emanuele Giuseppe Esposito geschrieben:
> Just remove some AioContext lock in LinuxAioState and ThreadPool.
> Not related to anything specific, so I decided to send it as
> a separate patch.
> 
> These patches are taken from Paolo's old draft series.

Thanks, applied to the block-next branch.

Kevin




Thread overview: 14+ messages
2023-02-03 13:17 [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Emanuele Giuseppe Esposito
2023-02-03 13:17 ` [PATCH v5 1/4] linux-aio: use LinuxAioState from the running thread Emanuele Giuseppe Esposito
2023-03-01 16:16   ` Stefan Hajnoczi
2023-03-07  8:48     ` Kevin Wolf
2023-03-07 10:58       ` Paolo Bonzini
2023-03-07 12:17         ` Kevin Wolf
2023-03-07 14:18       ` Stefan Hajnoczi
2023-03-08 11:42         ` Kevin Wolf
2023-03-08 17:24           ` Stefan Hajnoczi
2023-02-03 13:17 ` [PATCH v5 2/4] io_uring: use LuringState " Emanuele Giuseppe Esposito
2023-02-03 13:17 ` [PATCH v5 3/4] thread-pool: use ThreadPool " Emanuele Giuseppe Esposito
2023-02-03 13:17 ` [PATCH v5 4/4] thread-pool: avoid passing the pool parameter every time Emanuele Giuseppe Esposito
2023-03-02 19:58 ` [PATCH v5 0/4] AioContext removal: LinuxAioState and ThreadPool Stefan Hajnoczi
2023-03-14 20:34 ` Kevin Wolf
