From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: James Clark <james.clark@linaro.org>,
"linux-perf-users@vger.kernel.org"
<linux-perf-users@vger.kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Namhyung Kim <namhyung@kernel.org>
Subject: Re: Perf doesn't display kernel symbols anymore (bisected commit 659ad3492b91 ("perf maps: Switch from rbtree to lazily sorted array for addresses"))
Date: Thu, 2 Jan 2025 15:19:02 -0300
Message-ID: <Z3bYltoidQpqtyJ_@x1>
In-Reply-To: <5217124a-f033-4085-b9f5-a477c96728d6@csgroup.eu>
On Thu, Jan 02, 2025 at 03:41:17PM +0100, Christophe Leroy wrote:
> > On 26/12/2024 at 16:51, Arnaldo Carvalho de Melo wrote:
> > On Tue, Dec 17, 2024 at 02:18:07PM +0000, James Clark wrote:
> > > On 16/12/2024 7:01 am, Christophe Leroy wrote:
> > > > I noticed with 6.12 LTS Kernel that perf top and perf record/report
> > > > don't display kernel symbols anymore, instead it displays the raw
> > > > address with [unknown] as object.
> > > > After bisect I see that the problem appears with commit 659ad3492b91
> > > > ("perf maps: Switch from rbtree to lazily sorted array for addresses").
> > > You might want to try applying 0b90dfda222e3 as it claims to fix this
> > > commit. I doubt that will fix your issue but it's worth being sure.
> > > There was also another fix recently that could be related: 23c44f6c83
> > > Did you try the perf-tools-next branch? Maybe something that's already fixed
> > > needs to be backported.
> > Right, I tried reproducing this on perf-tools-next and couldn't, so
> > please test it there.
> I tested it again on latest perf-tools-next (ed60738a9b7e ("perf stat:
> Document and clarify outstate members")) and still have the same problem,
> all kernel symbols appear as [unknown]:
> PerfTop: 4163 irqs/sec kernel:28.3% exact: 0.0% lost: 0/0 drop: 0/14199 [4000Hz cpu-clock:ppp], (all, 1 CPU)
So this seems to be on a virtualized environment? Or have you explicitly
used '-e cpu-clock:ppp'?
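If you didn't ask for it explicitly, perf falls back to the software
cpu-clock event when no usable hardware cycles event is found, which is
typical of VMs without a virtual PMU. Just as a rough check (standard
commands, the exact output will of course differ on your machine):

  # Is a hardware 'cycles' event advertised at all?
  $ perf list hw | grep -i cycles

  # Did a PMU driver probe during boot? (message text varies by arch/driver)
  $ dmesg | grep -i pmu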
Also, can you please run with -v or -vv to see if we can get some more
clues?
Maybe it is somehow not able to read kallsyms or find a suitable
vmlinux?
Also what distro is this?
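For the verbose run, something along these lines usually shows what perf
decided about its kernel symbol sources; the log path here is just an
example, and most of the verbose messages go to stderr:

  $ timeout 5s perf top -vv --stdio 2> /tmp/perf-top-vv.log
  $ grep -iE 'kallsyms|vmlinux' /tmp/perf-top-vv.log

  # kptr_restrict=1/2 hides kernel addresses (2 even from root), which also
  # makes kernel samples show up as [unknown]
  $ cat /proc/sys/kernel/kptr_restrict
  $ head -3 /proc/kallsyms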
I did one more test, this time using --stdio on a Raspberry Pi and
explicitly using '-e cpu-cycles:ppp' to get as close to your report as I
could:
root@raspberrypi:~# uname -a
Linux raspberrypi 6.6.51+rpt-rpi-v7 #1 SMP Raspbian 1:6.6.51-1+rpt3 (2024-10-08) armv7l GNU/Linux
root@raspberrypi:~# ~acme/bin/perf --version
perf version 6.13.rc2.ged60738a9b7e
root@raspberrypi:~# timeout 5s ~acme/bin/perf top -e cpu-cycles:ppp --stdio
PerfTop: 926 irqs/sec kernel:79.8% exact: 0.0% lost: 0/0 drop: 0/0 [4000Hz cpu-cycles:ppp], (all, 4 CPUs)
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
12.08% perf [.] io__get_char
8.54% perf [.] kallsyms__parse
5.35% perf [.] __symbols__insert
3.87% [kernel] [k] module_get_kallsym
3.74% [kernel] [k] default_idle_call
3.69% perf [.] io__get_hex
3.50% [kernel] [k] kallsyms_expand_symbol.constprop.0
3.21% perf [.] rb_next
2.84% [kernel] [k] format_decode
2.53% [kernel] [k] number
2.29% [kernel] [k] vsnprintf
2.07% libc.so.6 [.] __libc_calloc
1.96% libc.so.6 [.] strchr
1.76% [kernel] [k] string
1.47% perf [.] symbol__new
1.42% perf [.] dso__load_sym_internal
1.36% libarmmem-v7l.so [.] strlen
1.26% [kernel] [k] memcpy
1.15% libc.so.6 [.] _int_malloc
1.00% perf [.] strlist__node_cmp
0.93% perf [.] rb_insert_color
0.91% [kernel] [k] update_iter
0.82% perf [.] eprintf
0.80% perf [.] rblist__find
0.79% libc.so.6 [.] memset
0.74% perf [.] map__process_kallsym_symbol
0.73% [kernel] [k] finish_task_switch
0.69% perf [.] veprintf
0.57% [kernel] [k] strscpy
0.55% perf [.] perf_hpp__is_dynamic_entry
0.49% libelf-0.188.so [.] gelf_getsym
0.48% [kernel] [k] seq_printf
0.48% libelf-0.188.so [.] gelf_getphdr
0.46% libc.so.6 [.] strcmp
0.45% [kernel] [k] memset
0.40% [kernel] [k] s_show
0.40% [kernel] [k] pointer
0.37% [kernel] [k] seq_read_iter
0.35% perf [.] hist_entry__cmp
root@raspberrypi:~#