From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Ingo Molnar <mingo@kernel.org>
Cc: linux-kernel@vger.kernel.org, Jiri Olsa <jolsa@redhat.com>,
Jiri Olsa <jolsa@kernel.org>, David Ahern <dsahern@gmail.com>,
Namhyung Kim <namhyung@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Wang Nan <wangnan0@huawei.com>,
Arnaldo Carvalho de Melo <acme@redhat.com>
Subject: [PATCH 2/7] perf callchain: Add order support for libunwind DWARF unwinder
Date: Mon, 23 Nov 2015 18:53:49 -0300
Message-ID: <1448315634-13592-3-git-send-email-acme@kernel.org>
In-Reply-To: <1448315634-13592-1-git-send-email-acme@kernel.org>
From: Jiri Olsa <jolsa@redhat.com>
As reported by Milian, for DWARF unwinding (both libdw and libunwind) we
currently display the callchain in callee order only.

Add support for following the callchain order setup to the libunwind
DWARF unwinder, so that we can get the following output from 'perf report':
$ perf record --call-graph dwarf ls
...
$ perf report --no-children --stdio
    39.26%  ls  libc-2.21.so  [.] __strcoll_l
            |
            ---__strcoll_l
               mpsort_with_tmp
               mpsort_with_tmp
               sort_files
               main
               __libc_start_main
               _start
               0
$ perf report -g caller --no-children --stdio
...
    39.26%  ls  libc-2.21.so  [.] __strcoll_l
            |
            ---0
               _start
               __libc_start_main
               main
               sort_files
               mpsort_with_tmp
               mpsort_with_tmp
               __strcoll_l
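
The heart of the change is a two-pass scheme: buffer every unwound IP
into an array first, then emit the entries in whichever order
callchain_param.order asks for. A minimal standalone sketch of that
idea (emit_entry, replay and the sample addresses below are made-up
illustrations, not perf's actual API):

#include <stdio.h>

enum order { ORDER_CALLEE, ORDER_CALLER };

static void emit_entry(unsigned long ip)
{
	printf("%#lx\n", ip);
}

static void replay(const unsigned long *ips, int nr, enum order order)
{
	int i;

	for (i = 0; i < nr; i++) {
		/* ORDER_CALLER walks the buffered entries backwards. */
		int j = order == ORDER_CALLER ? nr - i - 1 : i;

		emit_entry(ips[j]);
	}
}

int main(void)
{
	/* Pretend these IPs came from unw_step()/unw_get_reg(). */
	unsigned long ips[] = { 0x400500, 0x400440, 0x400390 };

	replay(ips, 3, ORDER_CALLEE);	/* innermost (callee) first */
	replay(ips, 3, ORDER_CALLER);	/* outermost (caller) first */
	return 0;
}

ORDER_CALLEE replays the buffer as collected, innermost frame first;
ORDER_CALLER simply walks the same buffer backwards, which is what the
j = max_stack - i - 1 index in the patch below does.
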
Based-on-patch-by: Milian Wolff <milian.wolff@kdab.com>
Reported-by: Milian Wolff <milian.wolff@kdab.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tested-by: Wang Nan <wangnan0@huawei.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/20151118075247.GA5416@krava.brq.redhat.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
---
tools/perf/util/unwind-libunwind.c | 47 ++++++++++++++++++++++++--------------
1 file changed, 30 insertions(+), 17 deletions(-)
diff --git a/tools/perf/util/unwind-libunwind.c b/tools/perf/util/unwind-libunwind.c
index 0ae8844fe7a6..3c258a0e4092 100644
--- a/tools/perf/util/unwind-libunwind.c
+++ b/tools/perf/util/unwind-libunwind.c
@@ -615,34 +615,47 @@ static int get_entries(struct unwind_info *ui, unwind_entry_cb_t cb,
void *arg, int max_stack)
{
u64 val;
+ unw_word_t ips[max_stack];
unw_addr_space_t addr_space;
unw_cursor_t c;
- int ret;
+ int ret, i = 0;
ret = perf_reg_value(&val, &ui->sample->user_regs, PERF_REG_IP);
if (ret)
return ret;
- ret = entry(val, ui->thread, cb, arg);
- if (ret)
- return -ENOMEM;
+ ips[i++] = (unw_word_t) val;
- if (--max_stack == 0)
- return 0;
-
- addr_space = thread__priv(ui->thread);
- if (addr_space == NULL)
- return -1;
+ /*
+ * If we need more than one entry, do the DWARF
+ * unwind itself.
+ */
+ if (max_stack - 1 > 0) {
+ addr_space = thread__priv(ui->thread);
+ if (addr_space == NULL)
+ return -1;
+
+ ret = unw_init_remote(&c, addr_space, ui);
+ if (ret)
+ display_error(ret);
+
+ while (!ret && (unw_step(&c) > 0) && i < max_stack) {
+ unw_get_reg(&c, UNW_REG_IP, &ips[i]);
+ ++i;
+ }
- ret = unw_init_remote(&c, addr_space, ui);
- if (ret)
- display_error(ret);
+ max_stack = i;
+ }
- while (!ret && (unw_step(&c) > 0) && max_stack--) {
- unw_word_t ip;
+ /*
+ * Display what we got based on the order setup.
+ */
+ for (i = 0; i < max_stack && !ret; i++) {
+ int j = i;
- unw_get_reg(&c, UNW_REG_IP, &ip);
- ret = ip ? entry(ip, ui->thread, cb, arg) : 0;
+ if (callchain_param.order == ORDER_CALLER)
+ j = max_stack - i - 1;
+ ret = ips[j] ? entry(ips[j], ui->thread, cb, arg) : 0;
}
return ret;
--
2.1.0