From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-devel@nongnu.org, Kevin Wolf <kwolf@redhat.com>,
Sanjay Rao <srao@redhat.com>,
Boaz Ben Shabat <bbenshab@redhat.com>,
Joe Mario <jmario@redhat.com>
Subject: Re: [PATCH] coroutine: cap per-thread local pool size
Date: Tue, 19 Mar 2024 20:10:49 +0000 [thread overview]
Message-ID: <ZfnxSd4lseZuWoQ5@redhat.com> (raw)
In-Reply-To: <20240319175510.GA1127203@fedora>
On Tue, Mar 19, 2024 at 01:55:10PM -0400, Stefan Hajnoczi wrote:
> On Tue, Mar 19, 2024 at 01:43:32PM +0000, Daniel P. Berrangé wrote:
> > On Mon, Mar 18, 2024 at 02:34:29PM -0400, Stefan Hajnoczi wrote:
> > > diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
> > > index 5fd2dbaf8b..2790959eaf 100644
> > > --- a/util/qemu-coroutine.c
> > > +++ b/util/qemu-coroutine.c
> >
> > > +static unsigned int get_global_pool_hard_max_size(void)
> > > +{
> > > +#ifdef __linux__
> > > + g_autofree char *contents = NULL;
> > > + int max_map_count;
> > > +
> > > + /*
> > > + * Linux processes can have up to max_map_count virtual memory areas
> > > + * (VMAs). mmap(2), mprotect(2), etc fail with ENOMEM beyond this limit. We
> > > + * must limit the coroutine pool to a safe size to avoid running out of
> > > + * VMAs.
> > > + */
> > > + if (g_file_get_contents("/proc/sys/vm/max_map_count", &contents, NULL,
> > > + NULL) &&
> > > + qemu_strtoi(contents, NULL, 10, &max_map_count) == 0) {
> > > + /*
> > > + * This is a conservative upper bound that avoids exceeding
> > > + * max_map_count. Leave half for non-coroutine users like library
> > > + * dependencies, vhost-user, etc. Each coroutine takes up 2 VMAs so
> > > + * halve the amount again.
Leaving half for loaded libraries, etc, is quite conservative
if max_map_count is the small-ish 64k default.
That reservation could perhaps be a fixed number, like 5,000?
> > > + */
> > > + return max_map_count / 4;
> >
> > That's 256,000 coroutines, which still sounds incredibly large
> > to me.
>
> Any ideas for tweaking this heuristic?
The awkward thing about this limit is that it's hardcoded, and
since it is indeed a "heuristic", we know it is going to be
sub-optimal for some use cases / scenarios.
The worst case upper limit is

  num virtio-blk * num threads * num queues
Reducing the number of devices isn't practical if the guest
genuinely needs that many volumes.
Reducing the threads or queues artificially limits the peak
performance of a single disk handled in isolation, while the
other disks are idle, so that's not desirable either.
So there's no way to cap the worst case scenario, while
still maximising the single disk performance possibilities.
With large VMs with many CPUs and many disks, it could be
reasonable to not expect a real guest to need to maximise
I/O on every disk at the same time, and thus want to put
some cap there to control worst case resource usage.
It feels like this leans towards being able to control the
coroutine pool limit explicitly, as a CLI option, to override
the default heuristic.
> > > + }
> > > +#endif
> > > +
> > > + return UINT_MAX;
> >
> > Why UINT_MAX as a default ? If we can't read procfs, we should
> > assume some much smaller sane default IMHO, that corresponds to
> > what current linux default max_map_count would be.
>
> This line is not Linux-specific. I don't know if other OSes have an
> equivalent to max_map_count.
>
> I agree with defaulting to 64k-ish on Linux.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|