From: "Liang, Kan" <kan.liang@linux.intel.com>
To: Ian Rogers <irogers@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@redhat.com>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Namhyung Kim <namhyung@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>,
	Adrian Hunter <adrian.hunter@intel.com>,
	linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
	Perry Taylor <perry.taylor@intel.com>,
	Samantha Alt <samantha.alt@intel.com>,
	Caleb Biggers <caleb.biggers@intel.com>,
	Weilin Wang <weilin.wang@intel.com>,
	Edward Baker <edward.baker@intel.com>
Subject: Re: [PATCH v4 09/22] perf jevents: Add ports metric group giving utilization on Intel
Date: Thu, 7 Nov 2024 14:36:23 -0500	[thread overview]
Message-ID: <c48a6f46-5991-40dc-abac-f66f2706c84e@linux.intel.com> (raw)
In-Reply-To: <CAP-5=fWGGQh_Kwr5mWPQv6RO=o8bk2mmShJ6MjR9i1v42e0Ziw@mail.gmail.com>



On 2024-11-07 12:12 p.m., Ian Rogers wrote:
> On Thu, Nov 7, 2024 at 7:00 AM Liang, Kan <kan.liang@linux.intel.com> wrote:
>>
>>
>> On 2024-09-26 1:50 p.m., Ian Rogers wrote:
>>> The ports metric group contains a metric for each port giving its
>>> utilization as a ratio of cycles. The metrics are created by looking
>>> for UOPS_DISPATCHED.PORT events.
>>>
>>> Signed-off-by: Ian Rogers <irogers@google.com>
>>> ---
>>>  tools/perf/pmu-events/intel_metrics.py | 33 ++++++++++++++++++++++++--
>>>  1 file changed, 31 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/tools/perf/pmu-events/intel_metrics.py b/tools/perf/pmu-events/intel_metrics.py
>>> index f4707e964f75..3ef4eb868580 100755
>>> --- a/tools/perf/pmu-events/intel_metrics.py
>>> +++ b/tools/perf/pmu-events/intel_metrics.py
>>> @@ -1,12 +1,13 @@
>>>  #!/usr/bin/env python3
>>>  # SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause)
>>>  from metric import (d_ratio, has_event, max, CheckPmu, Event, JsonEncodeMetric,
>>> -                    JsonEncodeMetricGroupDescriptions, LoadEvents, Metric,
>>> -                    MetricGroup, MetricRef, Select)
>>> +                    JsonEncodeMetricGroupDescriptions, Literal, LoadEvents,
>>> +                    Metric, MetricGroup, MetricRef, Select)
>>>  import argparse
>>>  import json
>>>  import math
>>>  import os
>>> +import re
>>>  from typing import Optional
>>>
>>>  # Global command line arguments.
>>> @@ -260,6 +261,33 @@ def IntelBr():
>>>                       description="breakdown of retired branch instructions")
>>>
>>>
>>> +def IntelPorts() -> Optional[MetricGroup]:
>>> +  pipeline_events = json.load(open(f"{_args.events_path}/x86/{_args.model}/pipeline.json"))
>>> +
>>> +  core_cycles = Event("CPU_CLK_UNHALTED.THREAD_P_ANY",
>>> +                      "CPU_CLK_UNHALTED.DISTRIBUTED",
>>> +                      "cycles")
>>> +  # Number of CPU cycles scaled for SMT.
>>> +  smt_cycles = Select(core_cycles / 2, Literal("#smt_on"), core_cycles)
>>> +
>>> +  metrics = []
>>> +  for x in pipeline_events:
>>> +    if "EventName" in x and re.search("^UOPS_DISPATCHED.PORT", x["EventName"]):
>>> +      name = x["EventName"]
>>> +      port = re.search(r"(PORT_[0-9].*)", name).group(0).lower()
>>> +      if name.endswith("_CORE"):
>>> +        cyc = core_cycles
>>> +      else:
>>> +        cyc = smt_cycles
>>> +      metrics.append(Metric(port, f"{port} utilization (higher is better)",
>>> +                            d_ratio(Event(name), cyc), "100%"))
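(As an illustrative aside, not part of the quoted patch: a minimal, self-contained
sketch of how the regexes above map a pipeline event name to the lower-case
per-port metric name. The event names in the asserts are examples and may differ
per model.)

  #!/usr/bin/env python3
  # Sketch only: mirrors the regex logic in the quoted hunk.
  import re
  from typing import Optional

  def port_metric_name(event_name: str) -> Optional[str]:
      # Only UOPS_DISPATCHED.PORT* events contribute a per-port metric.
      if not re.search("^UOPS_DISPATCHED.PORT", event_name):
          return None
      # e.g. "UOPS_DISPATCHED.PORT_2_3" -> "port_2_3"
      return re.search(r"(PORT_[0-9].*)", event_name).group(0).lower()

  assert port_metric_name("UOPS_DISPATCHED.PORT_0") == "port_0"
  assert port_metric_name("UOPS_DISPATCHED.PORT_2_3") == "port_2_3"
  assert port_metric_name("UOPS_EXECUTED.CORE") is None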
>>
>> The generated metrics depend heavily on the event names, which makes them
>> very fragile. We will probably have the same event in a new generation,
>> but with a different name, so long-term maintenance could be a problem.
>> Is there an idea for how to keep the event names in sync for new generations?
> I agree that it is fragile, but it is also strangely robust: as you say,
> new generations will gain support if they follow the same naming
> convention. We have tests that load-bearing metrics exist on our
> platforms, so maybe the appropriate place to test for existence is in
> Weilin's metrics test.
> 
> 
>> Maybe we should improve the event generation script and add an automatic
>> check to tell which metrics are missing. Then we can decide whether to
>> update to the new event name, drop the metric, or add a different metric.
> So I'm not sure it is a bug to not have the metric; if it were, we could
> just throw rather than return None. We're going to run the script for
> every model, including old models like Nehalem, so I've generally kept it
> as None. I think doing future work on testing is probably best. It would
> also indicate use of the metric if people notice it missing (not that the
> script aims for that 🙂 ).
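(Sketch of the kind of automatic check being discussed, not existing code: since
the generators return None rather than throwing when events are missing, a small
report step could make renamed events visible as gaps. The helper name below is
hypothetical.)

  from typing import Callable, Dict, Optional

  def report_missing_groups(model: str,
                            generators: Dict[str, Callable[[], Optional[object]]]) -> None:
      # Print a note for each metric-group generator that produced nothing,
      # e.g. because the expected event names were not found for this model.
      for name, gen in generators.items():
          if gen() is None:
              print(f"note: {model}: '{name}' metric group not generated")

  # Hypothetical usage inside the script:
  #   report_missing_groups(_args.model, {"ports": IntelPorts, "br": IntelBr})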

The maintenance is still a concern, even if we have a way to test it.
There is already an "official" set of metrics published on GitHub, which is
maintained by Intel. To be honest, I don't think there is enough energy to
also maintain these "non-official" metrics.

I don't think the absence of these metrics should be treated as a bug, so
it's very likely that such an issue would not be addressed right away. If we
cannot keep these metrics updated for future platforms, I can't find a
reason to have them.

Thanks,
Kan

Thread overview: 42+ messages
2024-09-26 17:50 [PATCH v4 00/22] Python generated Intel metrics Ian Rogers
2024-09-26 17:50 ` [PATCH v4 01/22] perf jevents: Add RAPL metrics for all Intel models Ian Rogers
2024-09-26 17:50 ` [PATCH v4 02/22] perf jevents: Add idle metric for " Ian Rogers
2024-11-06 17:01   ` Liang, Kan
2024-11-06 17:08     ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 03/22] perf jevents: Add smi metric group " Ian Rogers
2024-11-06 17:32   ` Liang, Kan
2024-11-06 17:42     ` Ian Rogers
2024-11-06 18:29       ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 04/22] perf jevents: Add CheckPmu to see if a PMU is in loaded json events Ian Rogers
2024-09-26 17:50 ` [PATCH v4 05/22] perf jevents: Mark metrics with experimental events as experimental Ian Rogers
2024-09-26 17:50 ` [PATCH v4 06/22] perf jevents: Add tsx metric group for Intel models Ian Rogers
2024-11-06 17:52   ` Liang, Kan
2024-11-06 18:15     ` Ian Rogers
2024-11-06 18:48       ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 07/22] perf jevents: Add br metric group for branch statistics on Intel Ian Rogers
2024-11-07 14:35   ` Liang, Kan
2024-11-07 17:19     ` Ian Rogers
2024-09-26 17:50 ` [PATCH v4 08/22] perf jevents: Add software prefetch (swpf) metric group for Intel Ian Rogers
2024-09-26 17:50 ` [PATCH v4 09/22] perf jevents: Add ports metric group giving utilization on Intel Ian Rogers
2024-11-07 15:00   ` Liang, Kan
2024-11-07 17:12     ` Ian Rogers
2024-11-07 19:36       ` Liang, Kan [this message]
2024-11-07 21:00         ` Ian Rogers
2024-11-08 16:45           ` Liang, Kan
2024-09-26 17:50 ` [PATCH v4 10/22] perf jevents: Add L2 metrics for Intel Ian Rogers
2024-09-26 17:50 ` [PATCH v4 11/22] perf jevents: Add load store breakdown metrics ldst " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 12/22] perf jevents: Add ILP metrics " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 13/22] perf jevents: Add context switch " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 14/22] perf jevents: Add FPU " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 15/22] perf jevents: Add Miss Level Parallelism (MLP) metric " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 16/22] perf jevents: Add mem_bw " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 17/22] perf jevents: Add local/remote "mem" breakdown metrics " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 18/22] perf jevents: Add dir " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 19/22] perf jevents: Add C-State metrics from the PCU PMU " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 20/22] perf jevents: Add local/remote miss latency metrics " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 21/22] perf jevents: Add upi_bw metric " Ian Rogers
2024-09-26 17:50 ` [PATCH v4 22/22] perf jevents: Add mesh bandwidth saturation " Ian Rogers
2024-09-27 18:33 ` [PATCH v4 00/22] Python generated Intel metrics Liang, Kan
2024-10-09 16:02   ` Ian Rogers
2024-11-06 16:46     ` Liang, Kan
2024-11-13 23:40       ` Ian Rogers
