From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Alan Maguire <alan.maguire@oracle.com>
Cc: Ihor Solodrai <ihor.solodrai@pm.me>,
dwarves@vger.kernel.org, eddyz87@gmail.com, andrii@kernel.org,
mykolal@fb.com, bpf@vger.kernel.org
Subject: Re: [PATCH dwarves] dwarves: set cu->obstack chunk size to 128Kb
Date: Thu, 9 Jan 2025 13:03:26 -0300 [thread overview]
Message-ID: <Z3_zTvfgnOt-fhLZ@x1> (raw)
In-Reply-To: <bf09b28d-e1b6-4de8-8eb2-410b017679ff@oracle.com>
On Wed, Jan 08, 2025 at 05:22:56PM +0000, Alan Maguire wrote:
> On 08/01/2025 16:38, Ihor Solodrai wrote:
> > On Wednesday, January 8th, 2025 at 5:55 AM, Alan Maguire <alan.maguire@oracle.com> wrote:
> >
> >>
> >>
> >> On 21/12/2024 03:04, Ihor Solodrai wrote:
> >>
> >>> In dwarf_loader, as nr_jobs grows, the wall-clock time of BTF
> >>> encoding starts to worsen after a certain point [1].
> >>>
> >>> While some overhead of additional threads is expected, it's not
> >>> supposed to be noticeable unless nr_jobs is set to an unreasonably big
> >>> value.
> >>>
> >>> It turns out that when there are "too many" threads decoding
> >>> DWARF, they start competing for memory allocation: a significant
> >>> number of cycles is spent in osq_lock, deep inside malloc calls
> >>> made from cu__zalloc, which suggests that many threads are trying
> >>> to allocate memory at the same time.
> >>>
> >>> See an example on a perf flamegraph for a run with -j240 [2]. This
> >>> is a 12-core machine, so the effect is small there; on machines
> >>> with more cores the problem is worse.
> >>>
> >>> Increasing the chunk size of obstacks associated with CUs helps to
> >>> reduce the performance penalty caused by this race condition.
> >>
> >>
> >> Is this because starting with a larger obstack size means we don't have
> >> to keep reallocating as the obstack grows?
> >
> > Yes. A bigger obstack chunk size leads to fewer malloc calls. In the
> > case of DWARF decoding, the mallocs tend to happen at the same time
> > across threads.
> >
> > Curiously, setting a much higher obstack chunk size (like 1Mb) does
> > not improve the overall wall-clock time, and can even make it worse.
> > This happens because the kernel takes a different code path to
> > allocate bigger chunks of memory. Also, most CUs are not big (at
> > least in the case of vmlinux), so a bigger chunk size probably
> > increases wasted memory.
> >
> > 128Kb seems to be close to a sweet spot for vmlinux.
> > The default is 4Kb.
> >
>
> Thanks for the additional details!
>
> Reviewed-by: Alan Maguire <alan.maguire@oracle.com>
I'm adding these details and your reviewed-by tag to that cset.
Thanks!
- Arnaldo
Thread overview: 6+ messages
2024-12-21 3:04 [PATCH dwarves] dwarves: set cu->obstack chunk size to 128Kb Ihor Solodrai
2024-12-21 3:08 ` Ihor Solodrai
2025-01-08 13:55 ` Alan Maguire
2025-01-08 16:38 ` Ihor Solodrai
2025-01-08 17:22 ` Alan Maguire
2025-01-09 16:03 ` Arnaldo Carvalho de Melo [this message]