From: Michael Petlan <mpetlan@redhat.com>
To: linux-perf-users@vger.kernel.org
Cc: vmolnaro@redhat.com, kan.liang@linux.intel.com,
irogers@google.com, acme@redhat.com, ak@linux.intel.com,
alexander.shishkin@linux.intel.com
Subject: perf stat VERSUS perf stat report :: TopDown Metrics
Date: Tue, 21 May 2024 18:54:35 +0200 (CEST)
Message-ID: <alpine.LRH.2.20.2405211842450.4040@Diego>
Hello!
I have a test for the perf-stat record/report functionality which compares the outputs,
basically checking whether `perf stat report` is able to reconstruct the same results
that `perf stat` printed. On Intel environments with TopDown events/metrics, the test
started failing because perf-stat-report handles the metrics differently.
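Roughly, the comparison looks like this (a simplified sketch, not the actual test script):

  perf stat record ls 2> stat.out        # prints the counters to stderr and writes perf.data
  perf stat report  > report.out 2>&1    # re-prints the counters from perf.data
  diff stat.out report.out

With TopDown events/metrics, the two outputs differ like this: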
================================ perf stat ================================

 Performance counter stats for 'ls':

              0.69 msec task-clock                #    0.477 CPUs utilized
                 0      context-switches          #    0.000 /sec
                 0      cpu-migrations            #    0.000 /sec
                97      page-faults               #  139.648 K/sec
         1,776,212      cycles                    #    2.557 GHz
         1,970,435      instructions              #    1.11  insn per cycle
           403,140      branches                  #  580.389 M/sec
            11,837      branch-misses             #    2.94% of all branches
                        TopdownL1                 #     30.0 %  tma_backend_bound
                                                  #     13.0 %  tma_bad_speculation
                                                  #     37.5 %  tma_frontend_bound
                                                  #     19.5 %  tma_retiring
                        TopdownL2                 #     12.1 %  tma_branch_mispredicts
                                                  #     11.7 %  tma_core_bound
                                                  #     13.2 %  tma_fetch_bandwidth
                                                  #     24.3 %  tma_fetch_latency
                                                  #      3.1 %  tma_heavy_operations
                                                  #     16.3 %  tma_light_operations
                                                  #      1.0 %  tma_machine_clears
                                                  #     18.3 %  tma_memory_bound

       0.001456908 seconds time elapsed

       0.000000000 seconds user
       0.001647000 seconds sys
================================ perf stat report ================================

 Performance counter stats for '/usr/bin/perf stat record ls':

              0.69 msec task-clock                #    0.477 CPUs utilized
                 0      context-switches          #    0.000 /sec
                 0      cpu-migrations            #    0.000 /sec
                97      page-faults               #  139.648 K/sec
         1,776,212      cycles                    #    2.557 GHz
         1,970,435      instructions              #    1.11  insn per cycle
           403,140      branches                  #  580.389 M/sec
            11,837      branch-misses             #    2.94% of all branches
        10,657,272      TOPDOWN.SLOTS
         2,089,661      topdown-retiring
         4,053,942      topdown-fe-bound
         1,964,281      topdown-mem-bound
         3,218,078      topdown-be-bound
           334,345      topdown-heavy-ops
         1,295,589      topdown-br-mispredict
         2,632,973      topdown-fetch-lat
         1,379,176      topdown-bad-spec
            21,248      INT_MISC.UOP_DROPPING     #   30.590 M/sec

       0.001456908 seconds time elapsed
While perf-stat (and perf-stat-record) calculates the percentages, perf-stat-report
just prints the raw numbers. Thinking about it, it might be useful to see the raw
numbers too, but rather via an option; by default, shouldn't both print the same
thing? Is perf-stat-report missing some metric postprocessing?
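For what it's worth, simply dividing the raw topdown counts by TOPDOWN.SLOTS already
gives roughly the L1 percentages that perf-stat printed (ignoring whatever small
corrections the full TMA formulas apply), so the data for the metrics seems to be there:

  topdown-retiring / TOPDOWN.SLOTS = 2,089,661 / 10,657,272 ~= 19.6 %   (perf stat: 19.5 % tma_retiring)
  topdown-fe-bound / TOPDOWN.SLOTS = 4,053,942 / 10,657,272 ~= 38.0 %   (perf stat: 37.5 % tma_frontend_bound)
  topdown-be-bound / TOPDOWN.SLOTS = 3,218,078 / 10,657,272 ~= 30.2 %   (perf stat: 30.0 % tma_backend_bound)
  topdown-bad-spec / TOPDOWN.SLOTS = 1,379,176 / 10,657,272 ~= 12.9 %   (perf stat: 13.0 % tma_bad_speculation)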
Thanks!
Michael