From: Namhyung Kim <namhyung@kernel.org>
To: Ian Rogers <irogers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>,
Adrian Hunter <adrian.hunter@intel.com>,
James Clark <james.clark@linaro.org>, Xu Yang <xu.yang_2@nxp.com>,
Chun-Tse Shao <ctshao@google.com>,
Thomas Richter <tmricht@linux.ibm.com>,
Sumanth Korikkar <sumanthk@linux.ibm.com>,
Collin Funk <collin.funk1@gmail.com>,
Thomas Falcon <thomas.falcon@intel.com>,
Howard Chu <howardchu95@gmail.com>,
Dapeng Mi <dapeng1.mi@linux.intel.com>,
Levi Yun <yeoreum.yun@arm.com>,
Yang Li <yang.lee@linux.alibaba.com>,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: Re: [PATCH v1 00/22] Switch the default perf stat metrics to json
Date: Mon, 3 Nov 2025 20:47:19 -0800 [thread overview]
Message-ID: <aQmFV7fqURMXQNHC@google.com> (raw)
In-Reply-To: <20251024175857.808401-1-irogers@google.com>
Hi Ian,
On Fri, Oct 24, 2025 at 10:58:35AM -0700, Ian Rogers wrote:
> Prior to this series stat-shadow would produce hard coded metrics if
> certain events appeared in the evlist. This series produces equivalent
> json metrics and cleans up the consequences in tests and display
> output. A before and after of the default display output on a
> tigerlake is:
>
> Before:
> ```
> $ perf stat -a sleep 1
>
> Performance counter stats for 'system wide':
>
> 16,041,816,418 cpu-clock # 15.995 CPUs utilized
> 5,749 context-switches # 358.376 /sec
> 121 cpu-migrations # 7.543 /sec
> 1,806 page-faults # 112.581 /sec
> 825,965,204 instructions # 0.70 insn per cycle
> 1,180,799,101 cycles # 0.074 GHz
> 168,945,109 branches # 10.532 M/sec
> 4,629,567 branch-misses # 2.74% of all branches
> # 30.2 % tma_backend_bound
> # 7.8 % tma_bad_speculation
> # 47.1 % tma_frontend_bound
> # 14.9 % tma_retiring
> ```
>
> After:
> ```
> $ perf stat -a sleep 1
>
> Performance counter stats for 'system wide':
>
> 2,890 context-switches # 179.9 cs/sec cs_per_second
> 16,061,923,339 cpu-clock # 16.0 CPUs CPUs_utilized
> 43 cpu-migrations # 2.7 migrations/sec migrations_per_second
> 5,645 page-faults # 351.5 faults/sec page_faults_per_second
> 5,708,413 branch-misses # 1.4 % branch_miss_rate (88.83%)
> 429,978,120 branches # 26.8 K/sec branch_frequency (88.85%)
> 1,626,915,897 cpu-cycles # 0.1 GHz cycles_frequency (88.84%)
> 2,556,805,534 instructions # 1.5 instructions insn_per_cycle (88.86%)
> TopdownL1 # 20.1 % tma_backend_bound
> # 40.5 % tma_bad_speculation (88.90%)
> # 17.2 % tma_frontend_bound (78.05%)
> # 22.2 % tma_retiring (88.89%)
>
> 1.002994394 seconds time elapsed
> ```
While this looks nicer, I worry about the changes to the output format. And
I'm curious why only the "After" output shows the multiplexing percentages.
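For reference, my understanding is that the percentage in parentheses is the
event's running/enabled ratio under multiplexing. A minimal sketch of the
usual scaling semantics (not the actual tool code; the numbers below are
made up to mimic the "After" output):

```python
# Sketch of perf's multiplexing scaling: when an event is multiplexed,
# the raw count is scaled up by time_enabled / time_running, and the
# running percentage shown in parentheses is time_running / time_enabled.

def scale_count(raw_count, time_enabled, time_running):
    """Return (scaled_count, running_percent) as perf stat would display them."""
    if time_running == 0:
        return 0, 0.0
    scaled = raw_count * time_enabled / time_running
    running_percent = 100.0 * time_running / time_enabled
    return scaled, running_percent

# An event running ~88.83% of the time, like branch-misses above:
scaled, pct = scale_count(5_708_413, 1_000_000, 888_300)
```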
>
> Having the metrics in json brings greater uniformity, allows events to
> be shared by metrics, and it also allows descriptions like:
> ```
> $ perf list cs_per_second
> ...
> cs_per_second
> [Context switches per CPU second]
> ```
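For illustration, I'd expect a common-metrics entry to look roughly like
this (field names follow the existing jevents JSON schema; the expression
and event names here are stand-ins, not the series' actual definition):

```json
{
    "MetricName": "cs_per_second",
    "BriefDescription": "Context switches per CPU second",
    "MetricExpr": "context_switches / duration_time",
    "MetricGroup": "Default",
    "ScaleUnit": "1cs/sec"
}
```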
>
> A thorn in the side of doing this work was that the hard coded metrics
> were used by perf script with '-F metric'. This functionality didn't
> work for me (I was testing `perf record -e instructions,cycles` and
> then `perf script -F metric` but saw nothing but empty lines)
The documentation says:
With the metric option perf script can compute metrics for
sampling periods, similar to perf stat. This requires
specifying a group with multiple events defining metrics with the :S option
for perf record. perf will sample on the first event, and
print computed metrics for all the events in the group. Please note
that the metric computed is averaged over the whole sampling
period (since the last sample), not just for the sample point.
So I guess the events should be in a group with the ':S' modifier.
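Something like the following should exercise it (illustrative only; the
event choice and workload are arbitrary):

```shell
# Record a sampling group with the ':S' modifier so perf script can
# compute per-period metrics from all events in the group.
perf record -e '{instructions,cycles}:S' -- sleep 1

# Print the computed metrics for each sampling period.
perf script -F metric
```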
> but anyway I decided to fix it to the best of my ability in this
> series. So the script side counters were removed and the regular ones
> associated with the evsel used. The json metrics were all searched
> looking for ones that have a subset of events matching those in the
> perf script session, and all metrics are printed. This is kind of
> weird as the counters are being set by the period of samples, but I
> carried the behavior forward. I suspect there needs to be follow up
> work to make this better, but what is in the series is superior to
> what is currently in the tree. Follow up work could include finding
> metrics for the machine in the perf.data rather than using the host,
> allowing multiple metrics even if the metric ids of the events differ,
> fixing pre-existing `perf stat record/report` issues, etc.
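If I follow, the event-subset matching described above works roughly like
this (a toy model, not the actual metricgroup code; the metric and event
names are illustrative):

```python
# Toy model of the matching rule: a json metric is usable for a perf
# script session when every event the metric needs is present in the
# session's evlist.

def find_matching_metrics(session_events, metrics):
    """Return names of metrics whose required events are a subset of the session's."""
    return sorted(name for name, events in metrics.items()
                  if events <= session_events)

session_events = {"instructions", "cpu-cycles"}
metrics = {
    "insn_per_cycle": {"instructions", "cpu-cycles"},
    "branch_miss_rate": {"branch-misses", "branches"},
    "cycles_frequency": {"cpu-cycles"},
}
# Metrics whose events all appear in the session match; branch_miss_rate
# does not, since the session sampled no branch events.
print(find_matching_metrics(session_events, metrics))
```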
>
> There are a lot of stat tests that, for example, assume '-e
> instructions,cycles' will produce an IPC metric. These needed
> tidying as the metric must now be explicitly asked for, and when doing
> this, variants using software events were preferred to increase
> compatibility. As the test updates were numerous, they are distinct from
> the patches updating the functionality, causing points in the series
> where not all tests pass. If this is undesirable, the test fixes
> can be squashed into the functionality updates.
Hmm.. how many of them? I think it'd be better to have the test changes
land at the same time as the functionality changes, so that we can verify
the test success count after each change. Can the test changes be squashed
into one or two commits?
Thanks,
Namhyung
>
> Ian Rogers (22):
> perf evsel: Remove unused metric_events variable
> perf metricgroup: Update comment on location of metric_event list
> perf metricgroup: Missed free on error path
> perf metricgroup: When copy metrics copy default information
> perf metricgroup: Add care to picking the evsel for displaying a
> metric
> perf jevents: Make all tables static
> perf expr: Add #target_cpu literal
> perf jevents: Add set of common metrics based on default ones
> perf jevents: Add metric DefaultShowEvents
> perf stat: Add detail -d,-dd,-ddd metrics
> perf script: Change metric format to use json metrics
> perf stat: Remove hard coded shadow metrics
> perf stat: Fix default metricgroup display on hybrid
> perf stat: Sort default events/metrics
> perf stat: Remove "unit" workarounds for metric-only
> perf test stat+json: Improve metric-only testing
> perf test stat: Ignore failures in Default[234] metricgroups
> perf test stat: Update std_output testing metric expectations
> perf test metrics: Update all metrics for possibly failing default
> metrics
> perf test stat: Update shadow test to use metrics
> perf test stat: Update test expectations and events
> perf test stat csv: Update test expectations and events
>
> tools/perf/builtin-script.c | 238 ++++++++++-
> tools/perf/builtin-stat.c | 154 ++-----
> .../arch/common/common/metrics.json | 151 +++++++
> tools/perf/pmu-events/empty-pmu-events.c | 139 ++++--
> tools/perf/pmu-events/jevents.py | 34 +-
> tools/perf/pmu-events/pmu-events.h | 2 +
> .../tests/shell/lib/perf_json_output_lint.py | 4 +-
> tools/perf/tests/shell/lib/stat_output.sh | 2 +-
> tools/perf/tests/shell/stat+csv_output.sh | 2 +-
> tools/perf/tests/shell/stat+json_output.sh | 2 +-
> tools/perf/tests/shell/stat+shadow_stat.sh | 4 +-
> tools/perf/tests/shell/stat+std_output.sh | 4 +-
> tools/perf/tests/shell/stat.sh | 6 +-
> .../perf/tests/shell/stat_all_metricgroups.sh | 3 +
> tools/perf/tests/shell/stat_all_metrics.sh | 7 +-
> tools/perf/util/evsel.c | 2 -
> tools/perf/util/evsel.h | 2 +-
> tools/perf/util/expr.c | 3 +
> tools/perf/util/metricgroup.c | 95 ++++-
> tools/perf/util/metricgroup.h | 2 +-
> tools/perf/util/stat-display.c | 55 +--
> tools/perf/util/stat-shadow.c | 402 +-----------------
> tools/perf/util/stat.h | 2 +-
> 23 files changed, 672 insertions(+), 643 deletions(-)
> create mode 100644 tools/perf/pmu-events/arch/common/common/metrics.json
>
> --
> 2.51.1.821.gb6fe4d2222-goog
>