From: Ingo Molnar <mingo@kernel.org>
To: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>,
David Ahern <dsahern@gmail.com>,
linux-kernel@vger.kernel.org,
Frederic Weisbecker <fweisbec@gmail.com>,
Namhyung Kim <namhyung@kernel.org>
Subject: Re: [PATCH] perf top: Make -g refer to callchains
Date: Mon, 18 Nov 2013 15:26:53 +0100
Message-ID: <20131118142653.GA27191@gmail.com>
In-Reply-To: <20131118132510.GB24375@krava.brq.redhat.com>
* Jiri Olsa <jolsa@redhat.com> wrote:
> On Mon, Nov 18, 2013 at 09:59:45AM -0300, Arnaldo Carvalho de Melo wrote:
> > Em Fri, Nov 15, 2013 at 06:46:09AM +0100, Ingo Molnar escreveu:
> > > btw., here's some 'perf top' call graph performance and profiling
> > > quality feedback, with the latest perf code:
> > >
> > > 'perf top --call-graph fp' now works very well, using just 0.2%
> > > of CPU time on a fast system:
> > >
> > >    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > >   4676 mingo     20   0  612m  56m 9948 S    1  0.2  0:00.68 perf
> > >
> > > 'perf top --call-graph dwarf' on the other hand is horrendously
> > > slow, using 20% of CPU time on a 4 GHz CPU:
> > >
> > >    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
> > >   4646 mingo     20   0  658m  81m  12m R   19  0.3  0:18.17 perf
> > >
> > > On another system with a 2.4 GHz CPU it's taking up 100% of CPU
> > > time (!):
> > >
> > >    PID USER      PR  NI    VIRT   RES  SHR S %CPU %MEM    TIME+  COMMAND
> > >   8018 mingo     20   0  290320 45220 8520 R 99.5  0.3  0:58.81 perf
> > >
> > > Profiling 'perf top' shows all sorts of very high dwarf
> > > processing overhead:
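
(For context: the cost difference is structural. With --call-graph fp
the kernel chases saved frame pointers at sample time, essentially a
pointer chase; with --call-graph dwarf it copies a chunk of the user
stack on every sample and the tool later re-unwinds it in user space
via libunwind. A minimal user-space analogue of the fp walk - all
names invented for illustration, this is not perf code:)

/*
 * With -fno-omit-frame-pointer every frame saves the caller's %rbp,
 * so a backtrace is just a linked-list walk.
 */
#include <stdio.h>

struct frame {
	struct frame *next;	/* saved frame pointer of the caller */
	void *ret;		/* return address pushed by the call */
};

static void backtrace_fp(void)
{
	struct frame *fp = __builtin_frame_address(0);
	int depth;

	/* A real walker also sanity-checks each pointer it follows. */
	for (depth = 0; fp && depth < 16; depth++) {
		printf("#%d  %p\n", depth, fp->ret);
		/* stack grows down: the caller's frame must be higher */
		if (fp->next <= fp)
			break;
		fp = fp->next;
	}
}

static __attribute__((noinline)) void leaf(void) { backtrace_fp(); }
static __attribute__((noinline)) void mid(void)  { leaf(); }

int main(void)
{
	mid();	/* build with: gcc -O0 -fno-omit-frame-pointer */
	return 0;
}

Built that way, this prints the return addresses up the stack; per
sample, that pointer chase is essentially all the work fp mode does.
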
> >
> > Yeah, the top dwarf callchain has so far been a proof of concept;
> > it exacerbates problems that can be seen in 'report', but since
> > top is live, we can see them more clearly.
> >
> > The work on improving callchain processing (rb_tree'ing, the new
> > comm infrastructure) alleviated the problem a bit.
> >
> > Tuning the stack size requested from the kernel and using
> > --max-stack can help when it is really needed, but yes, work on it
> > is *badly* needed.
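
(Concretely, that tuning looks like the following - assuming 'perf top'
accepts the same --call-graph dwarf,<size> syntax that 'perf record'
uses for the stack dump size, which needs checking:)

  # ask the kernel for a smaller per-sample stack snapshot
  # (the default dump size is 8192 bytes)
  perf top --call-graph dwarf,2048

  # bound how many frames are resolved per callchain
  perf top --call-graph dwarf --max-stack 16
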
>
> agreed ;-)
>
> also there's a new remote unwind interface recently added to libdw,
> which seems to be faster than libunwind.
>
> I plan on adding this soon.
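
(For reference, a sketch of the libdw flow in question - written
against elfutils >= 0.158, untested here, with error handling omitted:)

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <inttypes.h>
#include <elfutils/libdwfl.h>

static int frame_cb(Dwfl_Frame *frame, void *arg)
{
	Dwarf_Addr pc;
	bool isactivation;

	if (!dwfl_frame_pc(frame, &pc, &isactivation))
		return DWARF_CB_ABORT;
	printf("  pc = %#" PRIx64 "\n", pc);
	return DWARF_CB_OK;
}

int main(int argc, char **argv)
{
	static const Dwfl_Callbacks cb = {
		.find_elf	= dwfl_linux_proc_find_elf,
		.find_debuginfo	= dwfl_standard_find_debuginfo,
	};
	pid_t pid = atoi(argv[1]);
	Dwfl *dwfl = dwfl_begin(&cb);

	dwfl_linux_proc_report(dwfl, pid);
	dwfl_report_end(dwfl, NULL, NULL);
	dwfl_linux_proc_attach(dwfl, pid, false);
	dwfl_getthread_frames(dwfl, pid, frame_cb, NULL);
	dwfl_end(dwfl);
	return 0;
}

dwfl_frame_pc() also reports whether the frame is an activation; for
return addresses a real consumer maps pc - 1 back to the call site.
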
If the main source of overhead is libunwind (which needs independent
confirmation), would it make sense to implement DWARF stack unwinding
support ourselves?
I think SysProf does that, and it appears to be faster: its unwind.c
is only about 400 lines long, as it implements just the small subset
needed to walk the stack, AFAICS.
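
Something like the following toy walker captures the idea - the rule
table, addresses and stack snapshot are all synthetic, purely to show
the loop a real unwinder would run over perf's copied stack:

/*
 * Toy sketch of a minimal unwinder: no full DWARF expression
 * interpreter, just precomputed "CFA = register + offset" rules per
 * PC range, with the return address assumed at CFA - 8 (x86-64).
 */
#include <stdio.h>
#include <stdint.h>

enum { REG_RSP, REG_RBP, NR_REGS };

struct cfa_rule {
	uint64_t start, end;	/* PC range the rule covers */
	int	 reg;		/* base register */
	int64_t	 off;		/* CFA = regs[reg] + off */
};

/* What a real tool would precompute from .eh_frame. */
static const struct cfa_rule rules[] = {
	{ 0x1000, 0x1100, REG_RSP,  8 },	/* leaf, no frame yet */
	{ 0x2000, 0x2100, REG_RBP, 16 },	/* frame established */
};

static const struct cfa_rule *lookup_rule(uint64_t pc)
{
	for (unsigned i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
		if (pc >= rules[i].start && pc < rules[i].end)
			return &rules[i];
	return NULL;
}

int main(void)
{
	/* Synthetic stack snapshot, like the one perf copies out per
	 * sample with --call-graph dwarf.  Base address 0x7000. */
	uint64_t stack_base = 0x7000;
	uint64_t stack[8] = { 0 };
	uint64_t regs[NR_REGS];
	uint64_t pc = 0x1020;
	int depth;

	stack[0] = 0x2050;	/* leaf's return address */
	stack[2] = 0x7030;	/* saved %rbp in the 0x2000 frame */
	stack[3] = 0;		/* outermost return address: stop */
	regs[REG_RSP] = 0x7000;
	regs[REG_RBP] = 0x7010;

	for (depth = 0; pc; depth++) {
		const struct cfa_rule *r = lookup_rule(pc);
		uint64_t cfa, idx;

		printf("#%d  pc=0x%llx\n", depth, (unsigned long long)pc);
		if (!r)
			break;
		cfa = regs[r->reg] + r->off;
		idx = (cfa - 8 - stack_base) / 8;
		if (idx >= sizeof(stack) / sizeof(stack[0]))
			break;
		if (r->reg == REG_RBP)	/* restore caller's %rbp */
			regs[REG_RBP] = stack[(regs[REG_RBP] - stack_base) / 8];
		pc = stack[idx];	/* return address at CFA - 8 */
		regs[REG_RSP] = cfa;
	}
	return 0;
}

Since compiler-generated CFI almost always reduces to
"CFA = register + offset", a precomputed table like this covers the
common case without a full DWARF expression interpreter.
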
Thanks,
Ingo
Thread overview: 13+ messages
2013-11-15 3:51 [PATCH] perf top: Make -g refer to callchains David Ahern
2013-11-15 5:28 ` Ingo Molnar
2013-11-15 5:46 ` Ingo Molnar
2013-11-18 12:59 ` Arnaldo Carvalho de Melo
2013-11-18 13:25 ` Jiri Olsa
2013-11-18 14:26 ` Ingo Molnar [this message]
2013-11-18 17:49 ` Jiri Olsa
2013-11-18 19:17 ` Ingo Molnar
2013-11-18 20:16 ` Jan Kratochvil
2013-11-19 9:26 ` Jean Pihet
2013-11-19 9:33 ` Jan Kratochvil
2013-11-19 9:24 ` Jean Pihet
2013-11-30 12:49 ` [tip:perf/core] " tip-bot for David Ahern