From: "Falcon, Thomas" <thomas.falcon@intel.com>
To: "alexander.shishkin@linux.intel.com"
<alexander.shishkin@linux.intel.com>,
"Biggers, Caleb" <caleb.biggers@intel.com>,
"Hunter, Adrian" <adrian.hunter@intel.com>,
"Taylor, Perry" <perry.taylor@intel.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"mingo@redhat.com" <mingo@redhat.com>,
"irogers@google.com" <irogers@google.com>,
"linux-perf-users@vger.kernel.org"
<linux-perf-users@vger.kernel.org>,
"kan.liang@linux.intel.com" <kan.liang@linux.intel.com>,
"manivannan.sadhasivam@linaro.org"
<manivannan.sadhasivam@linaro.org>,
"peterz@infradead.org" <peterz@infradead.org>,
"alexandre.torgue@foss.st.com" <alexandre.torgue@foss.st.com>,
"Wang, Weilin" <weilin.wang@intel.com>,
"acme@kernel.org" <acme@kernel.org>,
"afaerber@suse.de" <afaerber@suse.de>,
"jolsa@kernel.org" <jolsa@kernel.org>,
"mcoquelin.stm32@gmail.com" <mcoquelin.stm32@gmail.com>,
"namhyung@kernel.org" <namhyung@kernel.org>,
"mark.rutland@arm.com" <mark.rutland@arm.com>
Subject: Re: [PATCH v1 03/35] perf vendor events: Update arrowlake events/metrics
Date: Tue, 25 Mar 2025 00:16:51 +0000
Message-ID: <48eb46bce14d895b61560907cb7f0df9038df57d.camel@intel.com>
In-Reply-To: <20250322063403.364981-4-irogers@google.com>
On Fri, 2025-03-21 at 23:33 -0700, Ian Rogers wrote:
> Update events from v1.07 to v1.08.
> Update event topics, metrics to be generated from the TMA spreadsheet
> and other small clean ups.
>
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
> .../arch/x86/arrowlake/arl-metrics.json | 566 +++++++++---------
> .../pmu-events/arch/x86/arrowlake/cache.json | 148 +++++
> .../pmu-events/arch/x86/arrowlake/memory.json | 11 +
> .../pmu-events/arch/x86/arrowlake/other.json | 193 ------
> .../arch/x86/arrowlake/pipeline.json | 163 ++++-
> tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +-
> 6 files changed, 608 insertions(+), 475 deletions(-)
>
>
...
> @@ -1086,18 +1086,18 @@
> "MetricExpr": "cpu_core@MEMORY_STALLS.MEM@ / tma_info_thread_clks",
> "MetricGroup": "MemoryBound;TmaL3mem;TopdownL3;tma_L3_group;tma_memory_bound_group",
> "MetricName": "tma_dram_bound",
> - "MetricThreshold": "tma_dram_bound > 0.1 & tma_memory_bound > 0.2 & tma_backend_bound > 0.2",
> + "MetricThreshold": "tma_dram_bound > 0.1 & (tma_memory_bound > 0.2 & tma_backend_bound > 0.2)",
> "PublicDescription": "This metric estimates how often the CPU was stalled on accesses to external memory (DRAM) by loads. Better caching can improve the latency and increase performance. Sample with: MEM_LOAD_RETIRED.L3_MISS",
> "ScaleUnit": "100%",
> "Unit": "cpu_core"
> },
> {
> "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline",
> - "MetricExpr": "(cpu_core@IDQ.DSB_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ + cpu_core@IDQ.DSB_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",
> + "MetricExpr": "(cpu@IDQ.DSB_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ + cpu_core@IDQ.DSB_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",
> "MetricGroup": "DSB;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
> "MetricName": "tma_dsb",
> "MetricThreshold": "tma_dsb > 0.15 & tma_fetch_bandwidth > 0.2",
> - "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here",
> + "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to DSB (decoded uop cache) fetch pipeline. For example; inefficient utilization of the DSB cache structure or bank conflict when reading from it; are categorized here.",
> "ScaleUnit": "100%",
> "Unit": "cpu_core"
> },
>
I'm seeing some errors for the tma_dsb, tma_lsd, and tma_mite metrics on
arrowlake. I think cpu should be cpu_core here:
$ sudo ./perf stat -M tma_dsb
event syntax error: '...THREAD!3/,cpu/IDQ.DSB_UOPS,cmask=0x8,inv=0x1,metric-
id=cpu!3IDQ.DSB_UOPS!0cmask!20x8!0inv!20x1!3/,cpu_core/topdown-fe-bound,metric-id=cpu_core!3t..'
\___ Bad event or PMU
Unable to find PMU or event on a PMU of 'cpu'
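(As far as I can tell, hybrid parts like arrowlake only expose cpu_core and
cpu_atom PMUs, with no plain "cpu" PMU, which would explain the parse failure
above.) If I'm reading the hunk right, restoring the cpu_core prefix, i.e. the
pre-patch expression, should fix it:

  "MetricExpr": "(cpu_core@IDQ.DSB_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ + cpu_core@IDQ.DSB_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",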
> "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit",
> - "MetricExpr": "cpu_core@LSD.UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / tma_info_thread_clks",
> + "MetricExpr": "cpu@LSD.UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / tma_info_thread_clks",
> "MetricGroup": "FetchBW;LSD;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
> "MetricName": "tma_lsd",
> "MetricThreshold": "tma_lsd > 0.15 & tma_fetch_bandwidth > 0.2",
> - "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit. LSD typically does well sustaining Uop supply. However; in some rare cases; optimal uop-delivery could not be reached for small loops whose size (in terms of number of uops) does not suit well the LSD structure",
> + "PublicDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to LSD (Loop Stream Detector) unit. LSD typically does well sustaining Uop supply. However; in some rare cases; optimal uop-delivery could not be reached for small loops whose size (in terms of number of uops) does not suit well the LSD structure.",
> "ScaleUnit": "100%",
> "Unit": "cpu_core"
> },
>
...here:
$ sudo ./perf stat -M tma_lsd
event syntax error: '...THREAD!3/,cpu/LSD.UOPS,cmask=0x8,inv=0x1,metric-
id=cpu!3LSD.UOPS!0cmask!20x8!0inv!20x1!3/,cpu_core/topdown-fe-bound,metric-id=cpu_core!3topdown!1..'
\___ Bad event or PMU
Unable to find PMU or event on a PMU of 'cpu'
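Same here; presumably the expression should read:

  "MetricExpr": "cpu_core@LSD.UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / tma_info_thread_clks",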
> },
> {
> "BriefDescription": "This metric represents Core fraction of cycles in which CPU was likely limited due to the MITE pipeline (the legacy decode pipeline)",
> - "MetricExpr": "(cpu_core@IDQ.MITE_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / tma_info_thread_clks + cpu_core@IDQ.MITE_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",
> + "MetricExpr": "(cpu@IDQ.MITE_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / 2 + cpu_core@IDQ.MITE_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",
> "MetricGroup": "DSBmiss;FetchBW;TopdownL3;tma_L3_group;tma_fetch_bandwidth_group",
> "MetricName": "tma_mite",
> "MetricThreshold": "tma_mite > 0.1 & tma_fetch_bandwidth > 0.2",
...and here?
$ sudo ./perf stat -M tma_mite
event syntax error: '..etiring!3/,cpu/IDQ.MITE_UOPS,cmask=0x8,inv=0x1,metric-
id=cpu!3IDQ.MITE_UOPS!0cmask!20x8!0inv!20x1!3/,cpu_core/topdown-bad-spec,metric-id=cpu_core!..'
\___ Bad event or PMU
Unable to find PMU or event on a PMU of 'cpu'
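For tma_mite the patch also changes the first term's divisor from
tma_info_thread_clks to 2, so if only the PMU name is wrong, my guess at the
intended expression would be:

  "MetricExpr": "(cpu_core@IDQ.MITE_UOPS\\,cmask\\=0x8\\,inv\\=0x1@ / 2 + cpu_core@IDQ.MITE_UOPS@ / (cpu_core@IDQ.DSB_UOPS@ + cpu_core@IDQ.MITE_UOPS@) * (cpu_core@IDQ_BUBBLES.CYCLES_0_UOPS_DELIV.CORE@ - cpu_core@IDQ_BUBBLES.FETCH_LATENCY@)) / tma_info_thread_clks",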
Thanks,
Tom