From: Ian Rogers <irogers@google.com>
To: Perry Taylor <perry.taylor@intel.com>,
Samantha Alt <samantha.alt@intel.com>,
Caleb Biggers <caleb.biggers@intel.com>,
Weilin Wang <weilin.wang@intel.com>,
Edward Baker <edward.baker@intel.com>,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
Namhyung Kim <namhyung@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
James Clark <james.clark@arm.com>,
John Garry <john.g.garry@oracle.com>,
linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
Stephane Eranian <eranian@google.com>
Subject: [PATCH v3 05/20] perf jevents: Add tsx metric group for Intel models
Date: Wed, 13 Mar 2024 22:59:04 -0700
Message-ID: <20240314055919.1979781-6-irogers@google.com>
In-Reply-To: <20240314055919.1979781-1-irogers@google.com>
Allow the duplicated metric to be dropped from the json files. Detect
whether TSX is supported by a model using the json events, but use the
sysfs events at runtime, as hypervisors, etc. may disable TSX.
Add CheckPmu to metric to determine which PMUs have been associated
with the loaded events.
Signed-off-by: Ian Rogers <irogers@google.com>
---
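Note (not part of the commit message): the sketch below is a minimal,
self-contained illustration of the detection pattern Tsx() uses. It
assumes Event() in metric.py raises when the named event is absent from
the loaded json; the MissingEvent/lookup names are placeholders, not the
real metric.py API.

```python
#!/usr/bin/env python3
# Sketch of "detect via json, prefer sysfs at runtime" from Tsx() below.
from typing import Optional

# Pretend contents of the loaded json event files for a TSX-capable model.
_json_events = {"RTM_RETIRED.START", "cpu/tx-start/"}

class MissingEvent(Exception):
    """Raised when an event name is not present in the loaded json."""

def lookup(name: str) -> str:
    # Stand-in for metric.Event(): fail if the event isn't in the json.
    if name not in _json_events:
        raise MissingEvent(name)
    return name

def tsx_transaction_event() -> Optional[str]:
    try:
        # Probe the json encoding first to learn whether the model has TSX,
        # then return the sysfs name so that a hypervisor disabling TSX is
        # detected at runtime rather than at metric-generation time.
        lookup("RTM_RETIRED.START")
        return "cpu/tx-start/"
    except MissingEvent:
        return None  # model without TSX: the whole metric group is dropped

print(tsx_transaction_event())  # -> cpu/tx-start/ on a TSX-capable model
```

On a machine where the group is generated and TSX is enabled, it should be
usable with something like `perf stat -M transaction -a -- sleep 1`.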
tools/perf/pmu-events/intel_metrics.py | 52 +++++++++++++++++++++++++-
1 file changed, 51 insertions(+), 1 deletion(-)
diff --git a/tools/perf/pmu-events/intel_metrics.py b/tools/perf/pmu-events/intel_metrics.py
index f34b4230a4ee..58e243695f0a 100755
--- a/tools/perf/pmu-events/intel_metrics.py
+++ b/tools/perf/pmu-events/intel_metrics.py
@@ -1,12 +1,13 @@
#!/usr/bin/env python3
# SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
-from metric import (d_ratio, has_event, max, Event, JsonEncodeMetric,
+from metric import (d_ratio, has_event, max, CheckPmu, Event, JsonEncodeMetric,
JsonEncodeMetricGroupDescriptions, LoadEvents, Metric,
MetricGroup, MetricRef, Select)
import argparse
import json
import math
import os
+from typing import Optional
# Global command line arguments.
_args = None
@@ -74,6 +75,54 @@ def Smi() -> MetricGroup:
], description = 'System Management Interrupt metrics')
+def Tsx() -> Optional[MetricGroup]:
+ pmu = "cpu_core" if CheckPmu("cpu_core") else "cpu"
+ cycles = Event('cycles')
+ cycles_in_tx = Event(f'{pmu}/cycles\-t/')
+ cycles_in_tx_cp = Event(f'{pmu}/cycles\-ct/')
+ try:
+ # Test if the tsx event is present in the json, prefer the
+ # sysfs version so that we can detect its presence at runtime.
+ transaction_start = Event("RTM_RETIRED.START")
+ transaction_start = Event(f'{pmu}/tx\-start/')
+ except:
+ return None
+
+ elision_start = None
+ try:
+ # Elision start isn't supported by all models, but we'll not
+ # generate the tsx_cycles_per_elision metric in that
+ # case. Again, prefer the sysfs encoding of the event.
+ elision_start = Event("HLE_RETIRED.START")
+ elision_start = Event(f'{pmu}/el\-start/')
+ except:
+ pass
+
+ return MetricGroup('transaction', [
+ Metric('tsx_transactional_cycles',
+ 'Percentage of cycles within a transaction region.',
+ Select(cycles_in_tx / cycles, has_event(cycles_in_tx), 0),
+ '100%'),
+ Metric('tsx_aborted_cycles', 'Percentage of cycles in aborted transactions.',
+ Select(max(cycles_in_tx - cycles_in_tx_cp, 0) / cycles,
+ has_event(cycles_in_tx),
+ 0),
+ '100%'),
+ Metric('tsx_cycles_per_transaction',
+ 'Number of cycles within a transaction divided by the number of transactions.',
+ Select(cycles_in_tx / transaction_start,
+ has_event(cycles_in_tx),
+ 0),
+ "cycles / transaction"),
+ Metric('tsx_cycles_per_elision',
+ 'Number of cycles within a transaction divided by the number of elisions.',
+ Select(cycles_in_tx / elision_start,
+ has_event(elision_start),
+ 0),
+ "cycles / elision") if elision_start else None,
+ ], description="Breakdown of transactional memory statistics")
+
+
def main() -> None:
global _args
@@ -100,6 +149,7 @@ def main() -> None:
Idle(),
Rapl(),
Smi(),
+ Tsx(),
])
--
2.44.0.278.ge034bb2e1d-goog