From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Aarushi Mehta <mehta.aaru20@gmail.com>,
Fam Zheng <fam@euphon.net>,
Stefano Garzarella <sgarzare@redhat.com>,
Hanna Czenczek <hreitz@redhat.com>,
eblake@redhat.com, qemu-block@nongnu.org, hibriansong@gmail.com,
Stefan Weil <sw@weilnetz.de>, Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH v4 09/12] aio-posix: add aio_add_sqe() API for user-defined io_uring requests
Date: Fri, 10 Oct 2025 17:23:20 +0200
Message-ID: <aOkk0NL7IMq3gFVl@redhat.com>
In-Reply-To: <20250910175703.374499-10-stefanha@redhat.com>
On 10.09.2025 at 19:57, Stefan Hajnoczi wrote:
> Introduce the aio_add_sqe() API for submitting io_uring requests in the
> current AioContext. This allows other components in QEMU, like the block
> layer, to take advantage of io_uring features without creating their own
> io_uring context.
>
> This API supports nested event loops just like file descriptor
> monitoring and BHs do. This comes at a complexity cost: a BH is required
> to dispatch CQE callbacks and they are placed on a list so that a nested
> event loop can invoke its parent's pending CQE callbacks. If you're
> wondering why CqeHandler exists instead of just a callback function
> pointer, this is why.
This is a mechanism we know from other places in the code, like Linux
AIO or indeed the old io_uring block driver, where a BH is the only
thing that makes sure the main loop will call back into the code later.
Do we really need it here, though? This _is_ literally the main loop
implementation; we don't have to make the main loop call us.
.need_wait() checks io_uring_cq_ready(), so as long as there are
unprocessed completions, we know that .wait() will be called in nested
event loops. For this to work, we just can't take more than one
completion out of the queue to process later, but have to process them
one by one as we get them from the ring. But that's what we already do.
Am I missing something?
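Concretely, what I have in mind is something like this rough, untested
sketch, reusing the names from this patch (and assuming the caller marks
the cqe as seen before cb() can enter a nested event loop, so the same
cqe isn't processed twice):

    static bool process_cqe(AioContext *ctx,
                            AioHandlerList *ready_list,
                            struct io_uring_cqe *cqe)
    {
        CqeHandler *cqe_handler = io_uring_cqe_get_data(cqe);

        /* poll_timeout and poll_remove have a zero user_data field */
        if (!cqe_handler) {
            return false;
        }

        /* Internal requests (POLL_ADD/REMOVE) keep their special path */
        if (cqe_handler->cb == fdmon_special_cqe_handler) {
            AioHandler *node = container_of(cqe_handler, AioHandler,
                                            cqe_handler);
            return process_cqe_aio_handler(ctx, ready_list, node, cqe);
        }

        cqe_handler->cqe = *cqe;
        cqe_handler->cb(cqe_handler); /* no ready list, no BH */
        return false;
    }

Any remaining completions stay in the cq ring, so a nested event loop
still sees them through io_uring_cq_ready().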
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> ---
> v2:
> - Fix pre_sqe -> prep_sqe typo [Eric]
> - Add #endif terminator comment [Eric]
> ---
> include/block/aio.h | 84 ++++++++++++++++++++++-
> util/aio-posix.h | 1 +
> util/aio-posix.c | 9 +++
> util/fdmon-io_uring.c | 152 ++++++++++++++++++++++++++++++++----------
> 4 files changed, 208 insertions(+), 38 deletions(-)
>
> diff --git a/include/block/aio.h b/include/block/aio.h
> index d919d7c8f4..56ea0d47b7 100644
> --- a/include/block/aio.h
> +++ b/include/block/aio.h
> @@ -61,6 +61,27 @@ typedef struct LuringState LuringState;
> /* Is polling disabled? */
> bool aio_poll_disabled(AioContext *ctx);
>
> +#ifdef CONFIG_LINUX_IO_URING
> +/*
> + * Each io_uring request must have a unique CqeHandler that processes the cqe.
> + * The lifetime of a CqeHandler must be at least from aio_add_sqe() until
> + * ->cb() invocation.
> + */
> +typedef struct CqeHandler CqeHandler;
> +struct CqeHandler {
> + /* Called by the AioContext when the request has completed */
> + void (*cb)(CqeHandler *handler);
> +
> + /* Used internally, do not access this */
> + QSIMPLEQ_ENTRY(CqeHandler) next;
> +
> + /* This field is filled in before ->cb() is called */
> + struct io_uring_cqe cqe;
> +};
> +
> +typedef QSIMPLEQ_HEAD(, CqeHandler) CqeHandlerSimpleQ;
> +#endif /* CONFIG_LINUX_IO_URING */
> +
> /* Callbacks for file descriptor monitoring implementations */
> typedef struct {
> /*
> @@ -138,6 +159,27 @@ typedef struct {
> * Called with list_lock incremented.
> */
> void (*gsource_dispatch)(AioContext *ctx, AioHandlerList *ready_list);
> +
> +#ifdef CONFIG_LINUX_IO_URING
> + /**
> + * aio_add_sqe: Add an io_uring sqe for submission.
s/aio_add_sqe/add_sqe/
> + * @prep_sqe: invoked with an sqe that should be prepared for submission
> + * @opaque: user-defined argument to @prep_sqe()
> + * @cqe_handler: the unique cqe handler associated with this request
> + *
> + * The caller's @prep_sqe() function is invoked to fill in the details of
> + * the sqe. Do not call io_uring_sqe_set_data() on this sqe.
> + *
> + * The kernel may see the sqe as soon as @prep_sqe() returns or it may take
> + * until the next event loop iteration.
> + *
> + * This function is called from the current AioContext and is not
> + * thread-safe.
> + */
> + void (*add_sqe)(AioContext *ctx,
> + void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
> + void *opaque, CqeHandler *cqe_handler);
> +#endif /* CONFIG_LINUX_IO_URING */
> } FDMonOps;
>
> /*
> @@ -255,7 +297,11 @@ struct AioContext {
> struct io_uring fdmon_io_uring;
> AioHandlerSList submit_list;
> gpointer io_uring_fd_tag;
> -#endif
> +
> + /* Pending callback state for cqe handlers */
> + CqeHandlerSimpleQ cqe_handler_ready_list;
> + QEMUBH *cqe_handler_bh;
> +#endif /* CONFIG_LINUX_IO_URING */
>
> /* TimerLists for calling timers - one per clock type. Has its own
> * locking.
> @@ -761,4 +807,40 @@ void aio_context_set_aio_params(AioContext *ctx, int64_t max_batch);
> */
> void aio_context_set_thread_pool_params(AioContext *ctx, int64_t min,
> int64_t max, Error **errp);
> +
> +#ifdef CONFIG_LINUX_IO_URING
> +/**
> + * aio_has_io_uring: Return whether io_uring is available.
> + *
> + * io_uring is either available in all AioContexts or in none, so this only
> + * needs to be called once from within any thread's AioContext.
> + */
> +static inline bool aio_has_io_uring(void)
> +{
> + AioContext *ctx = qemu_get_current_aio_context();
> + return ctx->fdmon_ops->add_sqe;
> +}
> +
> +/**
> + * aio_add_sqe: Add an io_uring sqe for submission.
> + * @prep_sqe: invoked with an sqe that should be prepared for submission
> + * @opaque: user-defined argument to @prep_sqe()
> + * @cqe_handler: the unique cqe handler associated with this request
> + *
> + * The caller's @prep_sqe() function is invoked to fill in the details of the
> + * sqe. Do not call io_uring_sqe_set_data() on this sqe.
> + *
> + * The sqe is submitted by the current AioContext. The kernel may see the sqe
> + * as soon as @prep_sqe() returns or it may take until the next event loop
> + * iteration.
> + *
> + * When the AioContext is destroyed, pending sqes are ignored and their
> + * CqeHandlers are not invoked.
> + *
> + * This function must be called only when aio_has_io_uring() returns true.
> + */
> +void aio_add_sqe(void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
> + void *opaque, CqeHandler *cqe_handler);
> +#endif /* CONFIG_LINUX_IO_URING */
> +
> #endif
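The contract looks convenient from coroutine context. A caller-side
sketch, continuing the hypothetical MyRequest example from above:

    static void my_request_prep_sqe(struct io_uring_sqe *sqe, void *opaque)
    {
        MyRequest *req = opaque;

        /* Fill in the sqe; no io_uring_sqe_set_data(), as documented */
        io_uring_prep_read(sqe, req->fd, req->buf, req->len, req->offset);
    }

    static int coroutine_fn my_request_co(MyRequest *req)
    {
        assert(aio_has_io_uring());

        req->co = qemu_coroutine_self();
        req->cqe_handler.cb = my_request_cb;
        aio_add_sqe(my_request_prep_sqe, req, &req->cqe_handler);
        qemu_coroutine_yield(); /* woken by my_request_cb() */
        return req->ret;
    }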
> diff --git a/util/aio-posix.h b/util/aio-posix.h
> index 6f9d97d866..57ef801a5f 100644
> --- a/util/aio-posix.h
> +++ b/util/aio-posix.h
> @@ -36,6 +36,7 @@ struct AioHandler {
> #ifdef CONFIG_LINUX_IO_URING
> QSLIST_ENTRY(AioHandler) node_submitted;
> unsigned flags; /* see fdmon-io_uring.c */
> + CqeHandler cqe_handler;
Can we either rename this or add a comment to clarify that this is for
fdmon-internal requests like POLL_ADD/REMOVE?
> #endif
> int64_t poll_idle_timeout; /* when to stop userspace polling */
> bool poll_ready; /* has polling detected an event? */
> diff --git a/util/aio-posix.c b/util/aio-posix.c
> index 800b4debbf..df945312b3 100644
> --- a/util/aio-posix.c
> +++ b/util/aio-posix.c
> +/* Returns true if a handler became ready */
> +static bool process_cqe(AioContext *ctx,
> + AioHandlerList *ready_list,
> + struct io_uring_cqe *cqe)
> +{
> + CqeHandler *cqe_handler = io_uring_cqe_get_data(cqe);
> +
> + /* poll_timeout and poll_remove have a zero user_data field */
> + if (!cqe_handler) {
> + return false;
> + }
> +
> + /*
> + * Special handling for AioHandler cqes. They need ready_list and have a
> + * return value.
> + */
> + if (cqe_handler->cb == fdmon_special_cqe_handler) {
> + AioHandler *node = container_of(cqe_handler, AioHandler, cqe_handler);
> + return process_cqe_aio_handler(ctx, ready_list, node, cqe);
> + }
Is the reason we special-case internal requests (instead of just using
the normal mechanism where the callback is actually called) that we
don't want the overhead of going through a BH for every event?
But if so, why are we okay with the same overhead for all other
requests?
> +
> + cqe_handler->cqe = *cqe;
> + QSIMPLEQ_INSERT_TAIL(&ctx->cqe_handler_ready_list, cqe_handler, next);
> + qemu_bh_schedule(ctx->cqe_handler_bh);
> + return false;
> +}
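(So every non-internal completion takes a detour through
cqe_handler_bh, which presumably just drains the list -- guessing, since
it isn't quoted in this hunk:

    static void cqe_handler_bh(void *opaque)
    {
        AioContext *ctx = opaque;
        CqeHandlerSimpleQ *ready_list = &ctx->cqe_handler_ready_list;

        while (!QSIMPLEQ_EMPTY(ready_list)) {
            CqeHandler *cqe_handler = QSIMPLEQ_FIRST(ready_list);

            QSIMPLEQ_REMOVE_HEAD(ready_list, next);
            cqe_handler->cb(cqe_handler);
        }
    }

That's an extra list insertion, BH schedule and event loop iteration for
each request compared to calling cb() directly.)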
> +
> static int process_cq_ring(AioContext *ctx, AioHandlerList *ready_list)
> {
> struct io_uring *ring = &ctx->fdmon_io_uring;
> @@ -368,6 +438,7 @@ static const FDMonOps fdmon_io_uring_ops = {
> .gsource_prepare = fdmon_io_uring_gsource_prepare,
> .gsource_check = fdmon_io_uring_gsource_check,
> .gsource_dispatch = fdmon_io_uring_gsource_dispatch,
> + .add_sqe = fdmon_io_uring_add_sqe,
> };
>
> void fdmon_io_uring_setup(AioContext *ctx, Error **errp)
> @@ -383,6 +454,8 @@ void fdmon_io_uring_setup(AioContext *ctx, Error **errp)
> }
>
> QSLIST_INIT(&ctx->submit_list);
> + QSIMPLEQ_INIT(&ctx->cqe_handler_ready_list);
> + ctx->cqe_handler_bh = aio_bh_new(ctx, cqe_handler_bh, ctx);
> ctx->fdmon_ops = &fdmon_io_uring_ops;
> ctx->io_uring_fd_tag = g_source_add_unix_fd(&ctx->source,
> ctx->fdmon_io_uring.ring_fd, G_IO_IN);
> @@ -390,33 +463,38 @@ void fdmon_io_uring_setup(AioContext *ctx, Error **errp)
>
> void fdmon_io_uring_destroy(AioContext *ctx)
> {
> - if (ctx->fdmon_ops == &fdmon_io_uring_ops) {
> - AioHandler *node;
> + AioHandler *node;
>
> - io_uring_queue_exit(&ctx->fdmon_io_uring);
> + if (ctx->fdmon_ops != &fdmon_io_uring_ops) {
> + return;
> + }
>
> - /* Move handlers due to be removed onto the deleted list */
> - while ((node = QSLIST_FIRST_RCU(&ctx->submit_list))) {
> - unsigned flags = qatomic_fetch_and(&node->flags,
> - ~(FDMON_IO_URING_PENDING |
> - FDMON_IO_URING_ADD |
> - FDMON_IO_URING_REMOVE |
> - FDMON_IO_URING_DELETE_AIO_HANDLER));
> + io_uring_queue_exit(&ctx->fdmon_io_uring);
>
> - if ((flags & FDMON_IO_URING_REMOVE) ||
> - (flags & FDMON_IO_URING_DELETE_AIO_HANDLER)) {
> - QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers,
> - node, node_deleted);
> - }
> + /* Move handlers due to be removed onto the deleted list */
> + while ((node = QSLIST_FIRST_RCU(&ctx->submit_list))) {
> + unsigned flags = qatomic_fetch_and(&node->flags,
> + ~(FDMON_IO_URING_PENDING |
> + FDMON_IO_URING_ADD |
> + FDMON_IO_URING_REMOVE |
> + FDMON_IO_URING_DELETE_AIO_HANDLER));
>
> - QSLIST_REMOVE_HEAD_RCU(&ctx->submit_list, node_submitted);
> + if ((flags & FDMON_IO_URING_REMOVE) ||
> + (flags & FDMON_IO_URING_DELETE_AIO_HANDLER)) {
> + QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers,
> + node, node_deleted);
> }
>
> - g_source_remove_unix_fd(&ctx->source, ctx->io_uring_fd_tag);
> - ctx->io_uring_fd_tag = NULL;
> -
> - qemu_lockcnt_lock(&ctx->list_lock);
> - fdmon_poll_downgrade(ctx);
> - qemu_lockcnt_unlock(&ctx->list_lock);
> + QSLIST_REMOVE_HEAD_RCU(&ctx->submit_list, node_submitted);
> }
> +
> + g_source_remove_unix_fd(&ctx->source, ctx->io_uring_fd_tag);
> + ctx->io_uring_fd_tag = NULL;
> +
> + assert(QSIMPLEQ_EMPTY(&ctx->cqe_handler_ready_list));
> + qemu_bh_delete(ctx->cqe_handler_bh);
These are the only two lines that are actually added in the last hunk
apart from switching from a big if to an early return. I wonder if the
improved readability of the diff would be worth a separate patch just
for the switch to the early return. (But git diff -w is helpful enough
once you realise what's going on.)
> + qemu_lockcnt_lock(&ctx->list_lock);
> + fdmon_poll_downgrade(ctx);
> + qemu_lockcnt_unlock(&ctx->list_lock);
> }
Kevin