From: Kevin Wolf <kwolf@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-block@nongnu.org, pbonzini@redhat.com, afaria@redhat.com,
hreitz@redhat.com, qemu-devel@nongnu.org
Subject: Re: [PATCH 5/5] aio-posix: Separate AioPolledEvent per AioHandler
Date: Mon, 10 Mar 2025 12:11:44 +0100
Message-ID: <Z87I8AVI8X-ARWrM@redhat.com>
In-Reply-To: <20250310105501.GC359802@fedora>
On 10.03.2025 at 11:55, Stefan Hajnoczi wrote:
> On Fri, Mar 07, 2025 at 11:16:34PM +0100, Kevin Wolf wrote:
> > Adaptive polling has a big problem: It doesn't consider that an event
> > loop can wait for many different events that may have very different
> > typical latencies.
> >
> > For example, think of a guest that tends to send a new I/O request soon
> > after the previous I/O request completes, while the storage on the host
> > is rather slow. In this case, getting the new request from the guest
> > quickly means that polling succeeds and stays enabled, but the next
> > event to wait for is the completion of the I/O request on the slow
> > backend, which disables polling again before the next guest request.
> > In such a scenario, polling could help for every other event, but it is
> > only ever enabled when it can't succeed.
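> >
> > To make this concrete, here is a minimal sketch of the shared-window
> > adjustment (simplified, with an invented helper name; the real
> > grow/shrink logic lives in adjust_polling_time() in util/aio-posix.c):
> >
> >     /* Sketch: one AioContext-wide window serves all events */
> >     static void sketch_adjust(AioContext *ctx, AioPolledEvent *poll,
> >                               int64_t block_ns)
> >     {
> >         if (block_ns <= poll->ns) {
> >             /* Event arrived while polling: window is large enough */
> >         } else if (block_ns > ctx->poll_max_ns) {
> >             /* Far too slow to poll for (the disk completion): shrink,
> >              * typically all the way back to 0. With a shared window
> >              * this also disables polling for the guest's fast
> >              * submissions. */
> >             poll->ns = ctx->poll_shrink ? poll->ns / ctx->poll_shrink : 0;
> >         } else if (poll->ns < ctx->poll_max_ns) {
> >             /* Missed, but within reach (the guest kick): grow again */
> >             int64_t grow = ctx->poll_grow ? ctx->poll_grow : 2;
> >             poll->ns = poll->ns ? poll->ns * grow : 4000 /* 4 us */;
> >             poll->ns = MIN(poll->ns, ctx->poll_max_ns);
> >         }
> >     }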
> >
> > In order to fix this, keep a separate AioPolledEvent for each
> > AioHandler. We will then know that the backend file descriptor always
> > has a high latency and isn't worth polling for, but we also know that
> > the guest is always fast and we should poll for it. This solves at least
> > half of the problem: we can now keep polling in the cases where it makes
> > sense and get the improved performance from it.
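> >
> > For illustration (numbers invented), the per-handler state can now
> > settle at something like
> >
> >     virtqueue kick fd:   poll.ns = 16384  /* guest submits quickly */
> >     disk completion fd:  poll.ns = 0      /* never worth polling */
> >
> > with the event loop polling for the largest window across all polling
> > handlers, so the fast path keeps its window even while we wait for the
> > slow disk.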
> >
> > Since the event loop doesn't know which event will be next, we still do
> > some unnecessary polling while we're waiting for the slow disk. I made
> > some attempts to be more clever than just randomly growing and shrinking
> > the polling time, and even to let callers state explicitly when they
> > expect a new event, but so far this hasn't improved performance and has
> > sometimes even caused regressions. For now, let's just fix the part that
> > is easy enough to fix; we can revisit the rest later.
> >
> > Signed-off-by: Kevin Wolf <kwolf@redhat.com>
> > ---
> >  include/block/aio.h |  1 -
> >  util/aio-posix.h    |  1 +
> >  util/aio-posix.c    | 24 +++++++++++++++++++++---
> >  util/async.c        |  2 --
> >  4 files changed, 22 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/block/aio.h b/include/block/aio.h
> > index 49f46e01cb..0ef7ce48e3 100644
> > --- a/include/block/aio.h
> > +++ b/include/block/aio.h
> > @@ -233,7 +233,6 @@ struct AioContext {
> >      int poll_disable_cnt;
> >  
> >      /* Polling mode parameters */
> > -    AioPolledEvent poll;
> >      int64_t poll_max_ns; /* maximum polling time in nanoseconds */
> >      int64_t poll_grow; /* polling time growth factor */
> >      int64_t poll_shrink; /* polling time shrink factor */
> > diff --git a/util/aio-posix.h b/util/aio-posix.h
> > index 4264c518be..82a0201ea4 100644
> > --- a/util/aio-posix.h
> > +++ b/util/aio-posix.h
> > @@ -38,6 +38,7 @@ struct AioHandler {
> >  #endif
> >      int64_t poll_idle_timeout; /* when to stop userspace polling */
> >      bool poll_ready; /* has polling detected an event? */
> > +    AioPolledEvent poll;
> >  };
> >  
> >  /* Add a handler to a ready list */
> > diff --git a/util/aio-posix.c b/util/aio-posix.c
> > index 259827c7ad..2251871c61 100644
> > --- a/util/aio-posix.c
> > +++ b/util/aio-posix.c
> > @@ -579,13 +579,19 @@ static bool run_poll_handlers(AioContext *ctx, AioHandlerList *ready_list,
> >  static bool try_poll_mode(AioContext *ctx, AioHandlerList *ready_list,
> >                            int64_t *timeout)
> >  {
> > +    AioHandler *node;
> >      int64_t max_ns;
> >  
> >      if (QLIST_EMPTY_RCU(&ctx->poll_aio_handlers)) {
> >          return false;
> >      }
> >  
> > -    max_ns = qemu_soonest_timeout(*timeout, ctx->poll.ns);
> > +    max_ns = 0;
> > +    QLIST_FOREACH(node, &ctx->poll_aio_handlers, node_poll) {
> > +        max_ns = MAX(max_ns, node->poll.ns);
> > +    }
> > +    max_ns = qemu_soonest_timeout(*timeout, max_ns);
> > +
> >      if (max_ns && !ctx->fdmon_ops->need_wait(ctx)) {
> >          /*
> >           * Enable poll mode. It pairs with the poll_set_started() in
> > @@ -721,8 +727,14 @@ bool aio_poll(AioContext *ctx, bool blocking)
> >  
> >      /* Adjust polling time */
> >      if (ctx->poll_max_ns) {
> > +        AioHandler *node;
> >          int64_t block_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start;
> > -        adjust_polling_time(ctx, &ctx->poll, block_ns);
> > +
> > +        QLIST_FOREACH(node, &ctx->poll_aio_handlers, node_poll) {
> > +            if (QLIST_IS_INSERTED(node, node_ready)) {
> > +                adjust_polling_time(ctx, &node->poll, block_ns);
> > +            }
> > +        }
> >      }
> >  
> >      progress |= aio_bh_poll(ctx);
> > @@ -772,10 +784,16 @@ void aio_context_use_g_source(AioContext *ctx)
> >  void aio_context_set_poll_params(AioContext *ctx, int64_t max_ns,
> >                                   int64_t grow, int64_t shrink, Error **errp)
> >  {
> > +    AioHandler *node;
> > +
> >      /* No thread synchronization here, it doesn't matter if an incorrect value
> >       * is used once.
> >       */
>
> If you respin this series:
>
> This comment is confusing now that qemu_lockcnt_inc() is being used.
> Lockcnt tells other threads in aio_set_fd_handler() not to remove nodes
> from the aio_handlers list (because we're traversing the list).
>
> The comment is about the poll state though, not about the aio_handlers
> list. Moving it down to where poll_max_ns etc. are assigned would make
> it clearer.
Yes, I can do that while applying the series.
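
Something like this, I assume (just a sketch, assuming the part of the
hunk that was cut off above resets each handler's poll.ns while holding
list_lock):

    qemu_lockcnt_inc(&ctx->list_lock);
    QLIST_FOREACH(node, &ctx->aio_handlers, node) {
        node->poll.ns = 0;
    }
    qemu_lockcnt_dec(&ctx->list_lock);

    /* No thread synchronization here, it doesn't matter if an incorrect
     * value is used once. */
    ctx->poll_max_ns = max_ns;
    ctx->poll_grow = grow;
    ctx->poll_shrink = shrink;
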
Should I add your R-b after making the change?
Kevin