From: Nicolas Saenz Julienne <nsaenzju@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: kwolf@redhat.com, fam@euphon.net, berrange@redhat.com,
qemu-block@nongnu.org, michael.roth@amd.com, mtosatti@redhat.com,
qemu-devel@nongnu.org, armbru@redhat.com, eduardo@habkost.net,
hreitz@redhat.com, pbonzini@redhat.com, eblake@redhat.com
Subject: Re: [PATCH 3/3] util/event-loop: Introduce options to set the thread pool size
Date: Mon, 28 Feb 2022 20:20:33 +0100
Message-ID: <bf4b58738cd5dc2273dca781867bb5e135d57596.camel@redhat.com>
In-Reply-To: <YhdgtPe2FRnPp2J7@stefanha-x1.localdomain>
On Thu, 2022-02-24 at 10:40 +0000, Stefan Hajnoczi wrote:
> On Mon, Feb 21, 2022 at 06:08:45PM +0100, Nicolas Saenz Julienne wrote:
> > The thread pool regulates itself: when idle, it kills threads until
> > empty; when in demand, it creates new threads until full. This behaviour
> > doesn't play well with latency-sensitive workloads where the price of
> > creating a new thread is too high. For example, when paired with qemu's
> > '-mlock', or when using safety features like SafeStack, creating a new
> > thread has been measured to take multiple milliseconds.
> >
> > In order to mitigate this, let's introduce a new 'EventLoopBackend'
> > property to set the thread pool size. The threads will be created during
> > the pool's initialization, remain available during its lifetime
> > regardless of demand, and destroyed upon freeing it. A properly
> > characterized workload will then be able to configure the pool to avoid
> > any latency spike.
> >
> > Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
> > ---
> >  include/block/aio.h | 11 +++++++++++
> >  qapi/qom.json       |  4 +++-
> >  util/async.c        |  3 +++
> >  util/event-loop.c   | 15 ++++++++++++++-
> >  util/event-loop.h   |  4 ++++
> >  util/main-loop.c    | 13 +++++++++++++
> >  util/thread-pool.c  | 41 +++++++++++++++++++++++++++++++++++++----
> >  7 files changed, 85 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/block/aio.h b/include/block/aio.h
> > index 5634173b12..331483d1d1 100644
> > --- a/include/block/aio.h
> > +++ b/include/block/aio.h
> > @@ -192,6 +192,8 @@ struct AioContext {
> >      QSLIST_HEAD(, Coroutine) scheduled_coroutines;
> >      QEMUBH *co_schedule_bh;
> >
> > +    int pool_min;
> > +    int pool_max;
>
> Are these fields protected by ThreadPool->lock? Please document. This is
> a clue that maybe these fields belong in ThreadPool.
Yes they are. I'll document it properly.
> Regarding the field names: the AioContext thread pool field is called
> thread_pool and the user-visible parameters are thread-pool-min/max. I
> suggest calling the fields thread_pool_min/max too so it's clear which
> pool we're talking about and there is a correspondence to user-visible
> parameters.
Noted.
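Something like this, then (just a sketch to check we agree; final field
names and comment wording to be confirmed when I respin):

    struct AioContext {
        ...
        /* Thread pool size limits, protected by ThreadPool->lock */
        int thread_pool_min;
        int thread_pool_max;
        ...
    };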
> > @@ -350,3 +358,28 @@ void thread_pool_free(ThreadPool *pool)
> >      qemu_mutex_destroy(&pool->lock);
> >      g_free(pool);
> >  }
> > +
> > +void aio_context_set_thread_pool_params(AioContext *ctx, uint64_t min,
> > +                                        uint64_t max, Error **errp)
> > +{
> > +    ThreadPool *pool = ctx->thread_pool;
> > +
> > +    if (min > max || !max) {
>
> ctx->pool_min/max are int while the min/max arguments are uint64_t.
> Please add an INT_MAX check to detect overflow.
Noted.
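For the record, I'll fold in something along these lines (sketch only,
exact error wording aside):

    if (min > INT_MAX || max > INT_MAX || min > max || !max) {
        error_setg(errp, "bad thread-pool-min/thread-pool-max values");
        return;
    }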
> > +        error_setg(errp, "bad thread-pool-min/thread-pool-max values");
> > +        return;
> > +    }
> > +
> > +    if (pool) {
> > +        qemu_mutex_lock(&pool->lock);
> > +    }
>
> This code belongs in util/thread-pool.c. I guess the reason for keeping
> the fields in AioContext instead of ThreadPool is because the ThreadPool
> is created on demand and we'd have nowhere to store the parameter value.
Indeed.
> I suggest we bite the bullet and keep an extra copy of the variables in
> AioContext with a clean ThreadPool interface (thread_pool_set_params())
> instead of letting AioContext and ThreadPool access each other's
> internals.
OK!
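So roughly this (only a sketch to make sure we're on the same page; the
min_threads/max_threads fields are placeholders I'd add to ThreadPool,
while cur_threads and spawn_thread() already exist):

    /* util/thread-pool.c */
    void thread_pool_set_params(ThreadPool *pool, int min, int max)
    {
        qemu_mutex_lock(&pool->lock);
        pool->min_threads = min;
        pool->max_threads = max;
        /* Grow eagerly so callers never pay thread-creation latency later */
        for (int i = pool->cur_threads; i < pool->min_threads; i++) {
            spawn_thread(pool);
        }
        qemu_mutex_unlock(&pool->lock);
    }

and aio_context_set_thread_pool_params() would just validate the input,
update the AioContext copies and call into it.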
> > +
> > +    ctx->pool_min = min;
> > +    ctx->pool_max = max;
> > +
> > +    if (pool) {
> > +        for (int i = pool->cur_threads; i < ctx->pool_min; i++) {
> > +            spawn_thread(pool);
> > +        }
>
> What about the reverse: when min is lowered and there are a bunch of
> idle worker threads we could wake them up so they terminate until
> ->pool_min is reached again?
Makes sense, I'll look into it.
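Something in this direction, I suppose (very rough; the function name is a
placeholder and it assumes idle workers can be woken up to re-check the
limits, whatever the exact wait mechanism in the worker loop ends up being):

    /* Caller holds pool->lock. */
    static void thread_pool_shrink_idle(ThreadPool *pool)
    {
        if (pool->idle_threads > 0 && pool->cur_threads > pool->min_threads) {
            /*
             * Wake the idle workers; each one re-checks the limits and
             * terminates if the pool is still above min_threads.
             */
            qemu_cond_broadcast(&pool->request_cond);
        }
    }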
--
Nicolás Sáenz