From: "Daniel P. Berrangé" <berrange@redhat.com>
To: Jon Kohler <jon@nutanix.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>, qemu-devel@nongnu.org
Subject: Re: [PATCH v2] util/oslib-posix: increase memprealloc thread count to 32
Date: Thu, 6 Nov 2025 15:53:50 +0000
Message-ID: <aQzEjov7dGPQeR3f@redhat.com>
In-Reply-To: <20251106163143.4185468-1-jon@nutanix.com>
On Thu, Nov 06, 2025 at 09:31:43AM -0700, Jon Kohler wrote:
> Increase MAX_MEM_PREALLOC_THREAD_COUNT from 16 to 32. This was last
> touched in 2017 [1] and, since then, physical machine sizes and the
> VMs running on them have continued to grow, both on average and at
> the extremes.
>
> For very large VMs, using 16 threads to preallocate memory can be a
> non-trivial bottleneck during VM start-up and migration. Increasing
> this limit to 32 threads reduces the time taken for these operations.
>
> Test results from a quad-socket Intel 8490H (4x 60 cores) show a
> roughly linear ~2x speedup (50% reduction in start-up time) from the
> 2x thread count increase.
>
> ---------------------------------------------
> Idle Guest w/ 2M HugePages | Start-up time
> ---------------------------------------------
> 240 vCPU, 7.5TB (16 threads) | 2m41.955s
> ---------------------------------------------
> 240 vCPU, 7.5TB (32 threads) | 1m19.404s
> ---------------------------------------------
>
> Note: Going above 32 threads appears to hit diminishing returns,
> where memory bandwidth and context-switching costs appear to become
> the limiting factors for linear scaling. For posterity, on the same
> system as above:
> - 32 threads: 1m19s
> - 48 threads: 1m4s
> - 64 threads: 59s
> - 240 threads: 50s
>
> Higher thread counts also matter less when the amount of memory to
> be preallocated is smaller. Putting that all together, 32 threads
> appears to be a sane limit with a solid speedup on fairly modern
> hardware. To go faster, we'd need either better hardware (CPU/memory)
> or a more efficient clear_pages_*() implementation on the kernel
> side.
>
> [1] 1e356fc14bea ("mem-prealloc: reduce large guest start-up and migration time.")
>
> Signed-off-by: Jon Kohler <jon@nutanix.com>
> ---
> util/oslib-posix.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
>
> diff --git a/util/oslib-posix.c b/util/oslib-posix.c
> index 3c14b72665..dc001da66d 100644
> --- a/util/oslib-posix.c
> +++ b/util/oslib-posix.c
> @@ -61,7 +61,7 @@
> #include "qemu/memalign.h"
> #include "qemu/mmap-alloc.h"
>
> -#define MAX_MEM_PREALLOC_THREAD_COUNT 16
> +#define MAX_MEM_PREALLOC_THREAD_COUNT 32
>
> struct MemsetThread;
>
> --
> 2.43.0
>
>
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Thread overview: 2 messages
2025-11-06 16:31 [PATCH v2] util/oslib-posix: increase memprealloc thread count to 32 Jon Kohler
2025-11-06 15:53 ` Daniel P. Berrangé [this message]