qemu-devel.nongnu.org archive mirror
From: "Daniel P. Berrangé" <berrange@redhat.com>
To: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Michal Privoznik <mprivozn@redhat.com>,
	jmario@redhat.com, qemu-devel@nongnu.org, david@redhat.com
Subject: Re: [PATCH] util: NUMA aware memory preallocation
Date: Wed, 11 May 2022 10:20:52 +0100	[thread overview]
Message-ID: <Ynt/9K/FUFBtLinm@redhat.com> (raw)
In-Reply-To: <Ynt0/9jfeUPg4JxN@work-vm>

On Wed, May 11, 2022 at 09:34:07AM +0100, Dr. David Alan Gilbert wrote:
> * Michal Privoznik (mprivozn@redhat.com) wrote:
> > When allocating large amounts of memory, the task is offloaded
> > onto threads. These threads then use various techniques to
> > fully allocate the memory (madvise(), writing into the memory).
> > However, these threads are free to run on any CPU, which becomes
> > problematic on NUMA machines because a thread may end up
> > running on a node distant from the memory it is touching.
> > 
> > Ideally, this is something that a management application would
> > resolve, but we are not anywhere close to that. Firstly, memory
> > allocation happens before the monitor socket is even available. But
> > okay, that's what -preconfig is for. The problem then is that
> > 'object-add' would not return until all memory is preallocated.
> > 
> > Long story short, the management application has no way of learning
> > the TIDs of the allocator threads, so it can't make them run NUMA
> > aware.
> > 
> > What we can do, however, is propagate the 'host-nodes' attribute
> > of the MemoryBackend object down to where the preallocation threads
> > are created and set their affinity according to it.
> 
> Joe (cc'd) sent me some numbers for this which emphasise how useful it
> is:
>  | On systems with 4 physical numa nodes and 2-6 Tb of memory, this numa-aware
>  |preallocation provided about a 25% speedup in touching the pages.
>  |The speedup gets larger as the numa node count and memory sizes grow.
> ....
>  | In a simple parallel 1Gb page-zeroing test on a very large system (32-numa
>  | nodes and 47Tb of memory), the numa-aware preallocation was 2.3X faster
>  | than letting the threads float wherever.
>  | We're working with someone whose large guest normally takes 4.5 hours to
>  | boot.  With Michal P's initial patch to parallelize the preallocation, that
>  | time dropped to about 1 hour.  Including this numa-aware preallocation
>  | would reduce the guest boot time to less than 1/2 hour.
> 
> so chopping *half an hour* off the startup time seems a worthy
> optimisation (even if most of us aren't fortunate enough to have 47T of
> ram).

I presume this test was done with bare QEMU though, not libvirt-managed
QEMU, as IIUC the latter would not be able to set its affinity and so
would never see this benefit.


With regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




Thread overview: 19+ messages
2022-05-10  6:55 [PATCH] util: NUMA aware memory preallocation Michal Privoznik
2022-05-10  9:12 ` Daniel P. Berrangé
2022-05-10 10:27   ` Dr. David Alan Gilbert
2022-05-11 13:16   ` Michal Prívozník
2022-05-11 14:50     ` David Hildenbrand
2022-05-11 15:08     ` Daniel P. Berrangé
2022-05-11 16:41       ` David Hildenbrand
2022-05-11  8:34 ` Dr. David Alan Gilbert
2022-05-11  9:20   ` Daniel P. Berrangé [this message]
2022-05-11  9:19 ` Daniel P. Berrangé
2022-05-11  9:31   ` David Hildenbrand
2022-05-11  9:34     ` Daniel P. Berrangé
2022-05-11 10:03       ` David Hildenbrand
2022-05-11 10:10         ` Daniel P. Berrangé
2022-05-11 11:07           ` Paolo Bonzini
2022-05-11 16:54             ` Daniel P. Berrangé
2022-05-12  7:41               ` Paolo Bonzini
2022-05-12  8:15                 ` Daniel P. Berrangé
2022-06-08 10:34       ` David Hildenbrand
