From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefan Hajnoczi <stefanha@redhat.com>,
Kevin Wolf <kwolf@redhat.com>,
eblake@redhat.com, Hanna Czenczek <hreitz@redhat.com>,
qemu-block@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
Fam Zheng <fam@euphon.net>,
hibriansong@gmail.com
Subject: [PATCH v6 12/15] aio-posix: add fdmon_ops->dispatch()
Date: Mon, 3 Nov 2025 21:29:30 -0500
Message-ID: <20251104022933.618123-13-stefanha@redhat.com>
In-Reply-To: <20251104022933.618123-1-stefanha@redhat.com>

The ppoll and epoll file descriptor monitoring implementations rely on
the event loop's generic file descriptor, timer, and BH dispatch code to
invoke user callbacks.

The io_uring file descriptor monitoring implementation will need
io_uring-specific dispatch logic to run the CQE handlers of custom SQEs.

Introduce a new FDMonOps ->dispatch() callback that allows file
descriptor monitoring implementations to invoke user callbacks
themselves. The next patch will use this new callback.
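
To illustrate the hook, here is a minimal sketch of a backend that
implements ->dispatch() by draining a queue of pending completions. All
names (ExampleCompletion, example_completions, fdmon_example_*) are
hypothetical and the global queue is a simplification; this is not the
fdmon-io_uring code added in the next patch:

    /* Hypothetical completion record queued by the backend; real code
     * would hang this state off the AioContext rather than a global.
     */
    typedef struct ExampleCompletion {
        void (*cb)(void *opaque);
        void *opaque;
        QSIMPLEQ_ENTRY(ExampleCompletion) next;
    } ExampleCompletion;

    static QSIMPLEQ_HEAD(, ExampleCompletion) example_completions =
        QSIMPLEQ_HEAD_INITIALIZER(example_completions);

    /* Called at a point in the event loop where it is safe to invoke
     * user-defined callbacks.
     */
    static bool fdmon_example_dispatch(AioContext *ctx)
    {
        bool progress = false;
        ExampleCompletion *c;

        while ((c = QSIMPLEQ_FIRST(&example_completions))) {
            QSIMPLEQ_REMOVE_HEAD(&example_completions, next);
            c->cb(c->opaque); /* user callback */
            g_free(c);
            progress = true; /* same semantics as aio_poll()'s return value */
        }
        return progress;
    }

    static const FDMonOps fdmon_example_ops = {
        /* ... update, wait, need_wait, etc. ... */
        .dispatch = fdmon_example_dispatch, /* optional; may be NULL */
    };
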
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
v5:
- Add patch to introduce FDMonOps->dispatch() callback [Kevin]
---
 include/block/aio.h | 19 +++++++++++++++++++
 util/aio-posix.c    |  9 +++++++++
 2 files changed, 28 insertions(+)
diff --git a/include/block/aio.h b/include/block/aio.h
index 9562733fa7..b266daa58f 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -107,6 +107,25 @@ typedef struct {
      */
     bool (*need_wait)(AioContext *ctx);
 
+    /*
+     * dispatch:
+     * @ctx: the AioContext
+     *
+     * Dispatch any work that is specific to this file descriptor monitoring
+     * implementation. Usually the event loop's generic file descriptor
+     * monitoring, BH, and timer dispatching code is sufficient, but file
+     * descriptor monitoring implementations offering additional functionality
+     * may need to implement this function for custom behavior. Called at a
+     * point in the event loop when it is safe to invoke user-defined
+     * callbacks.
+     *
+     * This function is optional and may be NULL.
+     *
+     * Returns: true if progress was made (see aio_poll()'s return value),
+     * false otherwise.
+     */
+    bool (*dispatch)(AioContext *ctx);
+
     /*
      * gsource_prepare:
      * @ctx: the AioContext
diff --git a/util/aio-posix.c b/util/aio-posix.c
index c0285a26a3..6ff36b6e51 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -385,10 +385,15 @@ void aio_dispatch(AioContext *ctx)
 {
     AioHandlerList ready_list = QLIST_HEAD_INITIALIZER(ready_list);
 
     qemu_lockcnt_inc(&ctx->list_lock);
+
     aio_bh_poll(ctx);
     ctx->fdmon_ops->gsource_dispatch(ctx, &ready_list);
 
+    if (ctx->fdmon_ops->dispatch) {
+        ctx->fdmon_ops->dispatch(ctx);
+    }
+
     /* block_ns is 0 because polling is disabled in the glib event loop */
     aio_dispatch_ready_handlers(ctx, &ready_list, 0);
 
@@ -707,6 +712,10 @@ bool aio_poll(AioContext *ctx, bool blocking)
         block_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start;
     }
 
+    if (ctx->fdmon_ops->dispatch) {
+        progress |= ctx->fdmon_ops->dispatch(ctx);
+    }
+
     progress |= aio_bh_poll(ctx);
     progress |= aio_dispatch_ready_handlers(ctx, &ready_list, block_ns);
 
--
2.51.1