From: Ian Rogers <irogers@google.com>
To: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	 Arnaldo Carvalho de Melo <acme@kernel.org>,
	Namhyung Kim <namhyung@kernel.org>,
	 Mark Rutland <mark.rutland@arm.com>,
	 Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>,
	 Adrian Hunter <adrian.hunter@intel.com>,
	linux-perf-users@vger.kernel.org,  linux-kernel@vger.kernel.org,
	Perry Taylor <perry.taylor@intel.com>,
	 Samantha Alt <samantha.alt@intel.com>,
	Caleb Biggers <caleb.biggers@intel.com>,
	 Weilin Wang <weilin.wang@intel.com>,
	Edward Baker <edward.baker@intel.com>
Subject: Re: [PATCH v4 06/22] perf jevents: Add tsx metric group for Intel models
Date: Wed, 6 Nov 2024 10:15:07 -0800
Message-ID: <CAP-5=fW1dACyxesnjpMQLAgomnRH+nA1sVphbpLyCFN3A79xSQ@mail.gmail.com>
In-Reply-To: <244b4c80-2ab2-4248-b930-22fea9ed6429@linux.intel.com>

On Wed, Nov 6, 2024 at 9:53 AM Liang, Kan <kan.liang@linux.intel.com> wrote:
>
>
>
> On 2024-09-26 1:50 p.m., Ian Rogers wrote:
> > Allow duplicated metrics to be dropped from json files. Detect whether TSX
> > is supported by a model by using the json events, but use sysfs events at
> > runtime as hypervisors, etc. may disable TSX.
> >
> > Add CheckPmu to metric to determine which PMUs have been associated
> > with the loaded events.
> >
> > Signed-off-by: Ian Rogers <irogers@google.com>
> > ---
> >  tools/perf/pmu-events/intel_metrics.py | 52 +++++++++++++++++++++++++-
> >  1 file changed, 51 insertions(+), 1 deletion(-)
> >
> > diff --git a/tools/perf/pmu-events/intel_metrics.py b/tools/perf/pmu-events/intel_metrics.py
> > index f34b4230a4ee..58e243695f0a 100755
> > --- a/tools/perf/pmu-events/intel_metrics.py
> > +++ b/tools/perf/pmu-events/intel_metrics.py
> > @@ -1,12 +1,13 @@
> >  #!/usr/bin/env python3
> >  # SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
> > -from metric import (d_ratio, has_event, max, Event, JsonEncodeMetric,
> > +from metric import (d_ratio, has_event, max, CheckPmu, Event, JsonEncodeMetric,
> >                      JsonEncodeMetricGroupDescriptions, LoadEvents, Metric,
> >                      MetricGroup, MetricRef, Select)
> >  import argparse
> >  import json
> >  import math
> >  import os
> > +from typing import Optional
> >
> >  # Global command line arguments.
> >  _args = None
> > @@ -74,6 +75,54 @@ def Smi() -> MetricGroup:
> >      ], description = 'System Management Interrupt metrics')
> >
> >
> > +def Tsx() -> Optional[MetricGroup]:
> > +  pmu = "cpu_core" if CheckPmu("cpu_core") else "cpu"
> > +  cycles = Event('cycles')
>
> Isn't the pmu prefix required for cycles as well?

Makes sense.
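Presumably something like (untested sketch, not yet in the patch):

  cycles = Event(f'{pmu}/cycles/')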

> > +  cycles_in_tx = Event(f'{pmu}/cycles\-t/')
> > +  cycles_in_tx_cp = Event(f'{pmu}/cycles\-ct/')
> > +  try:
> > +    # Test if the tsx event is present in the json, prefer the
> > +    # sysfs version so that we can detect its presence at runtime.
> > +    transaction_start = Event("RTM_RETIRED.START")
> > +    transaction_start = Event(f'{pmu}/tx\-start/')
>
> What's the difference between this check and the later has_event() check?
>
> All the tsx related events are model-specific events. We should check
> them all before using them.

So if there is a PMU in the Event name then the Event logic assumes you
are using sysfs and doesn't check that the event exists in the json. As
you say, I needed a way to detect whether a given model supports TSX. I
wanted to avoid a model lookup table, so I used the existence of
RTM_RETIRED.START in a model's json as the way to determine whether the
model supports TSX. Once we know the model supports TSX we switch to
the sysfs event name and the has_event check, so that if TSX and hence
the event have been disabled the metric doesn't fail parsing.

So, the first check is a compile-time check of "does this model have
TSX?". The "has_event" check is a runtime check where we want to see
whether the event exists in sysfs, in case TSX was disabled, say, in
the BIOS.
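
In other words, the pattern is roughly this (a sketch reusing the names
from the patch, with comments added; not the exact code):

  try:
      # "Compile" time: Event() with a plain json name raises if the
      # loaded model json has no RTM_RETIRED.START, i.e. no TSX.
      transaction_start = Event("RTM_RETIRED.START")
      # Then prefer the sysfs-style name for runtime detection.
      transaction_start = Event(f'{pmu}/tx\-start/')
  except:
      return None
  # Runtime: has_event() guards the metric so it still parses and just
  # evaluates to 0 when TSX (and hence the event) has been disabled.
  Select(cycles_in_tx / transaction_start, has_event(cycles_in_tx), 0)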

Thanks,
Ian

>
> Thanks,
> Kan
> > +  except:
> > +    return None
> > +
> > +  elision_start = None
> > +  try:
> > +    # Elision start isn't supported by all models, but we'll not
> > +    # generate the tsx_cycles_per_elision metric in that
> > +    # case. Again, prefer the sysfs encoding of the event.
> > +    elision_start = Event("HLE_RETIRED.START")
> > +    elision_start = Event(f'{pmu}/el\-start/')
> > +  except:
> > +    pass
> > +
> > +  return MetricGroup('transaction', [
> > +      Metric('tsx_transactional_cycles',
> > +             'Percentage of cycles within a transaction region.',
> > +             Select(cycles_in_tx / cycles, has_event(cycles_in_tx), 0),
> > +             '100%'),
> > +      Metric('tsx_aborted_cycles', 'Percentage of cycles in aborted transactions.',
> > +             Select(max(cycles_in_tx - cycles_in_tx_cp, 0) / cycles,
> > +                    has_event(cycles_in_tx),
> > +                    0),
> > +             '100%'),
> > +      Metric('tsx_cycles_per_transaction',
> > +             'Number of cycles within a transaction divided by the number of transactions.',
> > +             Select(cycles_in_tx / transaction_start,
> > +                    has_event(cycles_in_tx),
> > +                    0),
> > +             "cycles / transaction"),
> > +      Metric('tsx_cycles_per_elision',
> > +             'Number of cycles within a transaction divided by the number of elisions.',
> > +             Select(cycles_in_tx / elision_start,
> > +                    has_event(elision_start),
> > +                    0),
> > +             "cycles / elision") if elision_start else None,
> > +  ], description="Breakdown of transactional memory statistics")
> > +
> > +
> >  def main() -> None:
> >    global _args
> >
> > @@ -100,6 +149,7 @@ def main() -> None:
> >        Idle(),
> >        Rapl(),
> >        Smi(),
> > +      Tsx(),
> >    ])
> >
> >
>


Thread overview: 42+ messages
2024-09-26 17:50 [PATCH v4 00/22] Python generated Intel metrics Ian Rogers
2024-09-26 17:50 ` [PATCH v4 01/22] perf jevents: Add RAPL metrics for all Intel models Ian Rogers
2024-09-26 17:50 ` [PATCH v4 02/22] perf jevents: Add idle metric for " Ian Rogers
2024-11-06 17:01   ` Liang, Kan
2024-11-06 17:08     ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 03/22] perf jevents: Add smi metric group " Ian Rogers
2024-11-06 17:32   ` Liang, Kan
2024-11-06 17:42     ` Ian Rogers
2024-11-06 18:29       ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 04/22] perf jevents: Add CheckPmu to see if a PMU is in loaded json events Ian Rogers
2024-09-26 17:50 ` [PATCH v4 05/22] perf jevents: Mark metrics with experimental events as experimental Ian Rogers
2024-09-26 17:50 ` [PATCH v4 06/22] perf jevents: Add tsx metric group for Intel models Ian Rogers
2024-11-06 17:52   ` Liang, Kan
2024-11-06 18:15     ` Ian Rogers [this message]
2024-11-06 18:48       ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 07/22] perf jevents: Add br metric group for branch statistics on Intel Ian Rogers
2024-11-07 14:35   ` Liang, Kan
2024-11-07 17:19     ` Ian Rogers
2024-09-26 17:50 ` [PATCH v4 08/22] perf jevents: Add software prefetch (swpf) metric group for Intel Ian Rogers
2024-09-26 17:50 ` [PATCH v4 09/22] perf jevents: Add ports metric group giving utilization on Intel Ian Rogers
2024-11-07 15:00   ` Liang, Kan
2024-11-07 17:12     ` Ian Rogers
2024-11-07 19:36       ` Liang, Kan
2024-11-07 21:00         ` Ian Rogers
2024-11-08 16:45           ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 10/22] perf jevents: Add L2 metrics for Intel Ian Rogers
2024-09-26 17:50 ` [PATCH v4 11/22] perf jevents: Add load store breakdown metrics ldst " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 12/22] perf jevents: Add ILP metrics " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 13/22] perf jevents: Add context switch " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 14/22] perf jevents: Add FPU " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 15/22] perf jevents: Add Miss Level Parallelism (MLP) metric " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 16/22] perf jevents: Add mem_bw " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 17/22] perf jevents: Add local/remote "mem" breakdown metrics " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 18/22] perf jevents: Add dir " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 19/22] perf jevents: Add C-State metrics from the PCU PMU " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 20/22] perf jevents: Add local/remote miss latency metrics " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 21/22] perf jevents: Add upi_bw metric " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 22/22] perf jevents: Add mesh bandwidth saturation " Ian Rogers
2024-09-27 18:33 ` [PATCH v4 00/22] Python generated Intel metrics Liang, Kan
2024-10-09 16:02   ` Ian Rogers
2024-11-06 16:46     ` Liang, Kan
2024-11-13 23:40       ` Ian Rogers
