From: Arnaldo Carvalho de Melo <acme@kernel.org>
To: Sandipan Das <sandipan.das@amd.com>
Cc: linux-perf-users@vger.kernel.org, linux-kernel@vger.kernel.org,
Peter Zijlstra <peterz@infradead.org>,
Ingo Molnar <mingo@redhat.com>,
Namhyung Kim <namhyung@kernel.org>,
Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
James Clark <james.clark@linaro.org>,
Kan Liang <kan.liang@linux.intel.com>,
Caleb Biggers <caleb.biggers@intel.com>,
Stephane Eranian <eranian@google.com>,
Ravi Bangoria <ravi.bangoria@amd.com>,
Ananth Narayan <ananth.narayan@amd.com>
Subject: Re: [PATCH 2/4] perf vendor events amd: Add Zen 6 uncore events
Date: Tue, 13 Jan 2026 17:27:06 -0300 [thread overview]
Message-ID: <aWaqmlLZ9NGpY6Ys@x1> (raw)
In-Reply-To: <b00e27727d8aea8cf49c5e6e08b29357d04bc2b8.1767858676.git.sandipan.das@amd.com>
On Thu, Jan 08, 2026 at 01:22:15PM +0530, Sandipan Das wrote:
> Add uncore events taken from Section 1.6 "L3 Cache Performance Monitor
> Counters" and Section 2.2 "UMC Performance Monitor Events" of the
> Performance Monitor Counters for AMD Family 1Ah Model 50h-57h Processors
> document available at the link below.
>
> These events capture L3 cache and UMC command activity.
LD /tmp/build/perf-tools-next/perf-util-in.o
AR /tmp/build/perf-tools-next/libperf-util.a
CC /tmp/build/perf-tools-next/pmu-events/pmu-events.o
/tmp/build/perf-tools-next/pmu-events/pmu-events.c:30902:37: error: ‘pmu_events__amdzen6’ defined but not used [-Werror=unused-const-variable=]
30902 | static const struct pmu_table_entry pmu_events__amdzen6[] = {
| ^~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[3]: *** [pmu-events/Build:89: /tmp/build/perf-tools-next/pmu-events/pmu-events.o] Error 1
make[2]: *** [Makefile.perf:772: /tmp/build/perf-tools-next/pmu-events/pmu-events-in.o] Error 2
make[1]: *** [Makefile.perf:288: sub-make] Error 2
make: *** [Makefile:119: install-bin] Error 2
make: Leaving directory '/home/acme/git/perf-tools-next/tools/perf'
⬢ [acme@toolbx perf-tools-next]$

When building at this patch (2/4) the build breaks. Let me see if it
works by the end of the series, but even then it can't stay this way,
as it breaks bisection...

Then, on 3/4 we get:
CC /tmp/build/perf-tools-next/pmu-events/pmu-events.o
/tmp/build/perf-tools-next/pmu-events/pmu-events.c:30992:37: error: ‘pmu_metrics__amdzen6’ defined but not used [-Werror=unused-const-variable=]
30992 | static const struct pmu_table_entry pmu_metrics__amdzen6[] = {
| ^~~~~~~~~~~~~~~~~~~~
/tmp/build/perf-tools-next/pmu-events/pmu-events.c:30908:37: error: ‘pmu_events__amdzen6’ defined but not used [-Werror=unused-const-variable=]
30908 | static const struct pmu_table_entry pmu_events__amdzen6[] = {
| ^~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[3]: *** [pmu-events/Build:89: /tmp/build/perf-tools-next/pmu-events/pmu-events.o] Error 1
make[2]: *** [Makefile.perf:772: /tmp/build/perf-tools-next/pmu-events/pmu-events-in.o] Error 2
make[1]: *** [Makefile.perf:288: sub-make] Error 2
make: *** [Makefile:119: install-bin] Error 2
make: Leaving directory '/home/acme/git/perf-tools-next/tools/perf'
⬢ [acme@toolbx perf-tools-next]$
Finally, on 4/4 everything builds.

Can you please take a look and check how we can keep bisection working?

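One common way to keep a series bisectable is to build-test every
commit before posting, e.g. with `git rebase --exec`. A self-contained
sketch of the idea (the throwaway repo stands in for perf-tools-next,
and `test -f f1` stands in for `make -C tools/perf`):

```shell
# Sketch: 'git rebase --exec' re-runs a command after replaying each
# commit of a series, so a commit that breaks the build (and hence
# bisection) is caught before posting.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email reviewer@example.com
git config user.name reviewer
git commit -q --allow-empty -m "base"

# Create a 4-patch series, like the one under review.
for i in 1 2 3 4; do
    echo "$i" > "f$i"
    git add "f$i"
    git commit -q -m "patch $i/4"
done

# Run the check after each of the 4 commits; rebase stops at the first
# failure, pointing at exactly the commit that would break bisection.
git rebase --exec 'test -f f1' HEAD~4 && echo "every commit builds"
```

In the real tree this would be something like
`git rebase --exec 'make -C tools/perf' <base>`, which stops at the
first commit whose build fails.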
- Arnaldo
> Link: https://bugzilla.kernel.org/attachment.cgi?id=309149
> Signed-off-by: Sandipan Das <sandipan.das@amd.com>
> ---
> .../pmu-events/arch/x86/amdzen6/l3-cache.json | 177 ++++++++++++++++++
> .../arch/x86/amdzen6/memory-controller.json | 101 ++++++++++
> 2 files changed, 278 insertions(+)
> create mode 100644 tools/perf/pmu-events/arch/x86/amdzen6/l3-cache.json
> create mode 100644 tools/perf/pmu-events/arch/x86/amdzen6/memory-controller.json
>
> diff --git a/tools/perf/pmu-events/arch/x86/amdzen6/l3-cache.json b/tools/perf/pmu-events/arch/x86/amdzen6/l3-cache.json
> new file mode 100644
> index 000000000000..9b9804317da7
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/amdzen6/l3-cache.json
> @@ -0,0 +1,177 @@
> +[
> + {
> + "EventName": "l3_lookup_state.l3_miss",
> + "EventCode": "0x04",
> + "BriefDescription": "L3 cache misses.",
> + "UMask": "0x01",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_lookup_state.l3_hit",
> + "EventCode": "0x04",
> + "BriefDescription": "L3 cache hits.",
> + "UMask": "0xfe",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_lookup_state.all_coherent_accesses_to_l3",
> + "EventCode": "0x04",
> + "BriefDescription": "L3 cache requests for all coherent accesses.",
> + "UMask": "0xff",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency.dram_near",
> + "EventCode": "0xac",
> + "BriefDescription": "Average sampled latency for L3 requests where data is returned from DRAM in the same NUMA node.",
> + "UMask": "0x01",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency.dram_far",
> + "EventCode": "0xac",
> + "BriefDescription": "Average sampled latency for L3 requests where data is returned from DRAM in a different NUMA node.",
> + "UMask": "0x02",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency.near_cache",
> + "EventCode": "0xac",
> + "BriefDescription": "Average sampled latency for L3 requests where data is returned from cache of another CCX in the same NUMA node.",
> + "UMask": "0x04",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency.far_cache",
> + "EventCode": "0xac",
> + "BriefDescription": "Average sampled latency for L3 requests where data is returned from cache of another CCX in a different NUMA node.",
> + "UMask": "0x08",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency.ext_near",
> + "EventCode": "0xac",
> + "BriefDescription": "Average sampled latency for L3 requests where data is returned from extension memory (CXL) in the same NUMA node.",
> + "UMask": "0x10",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency.ext_far",
> + "EventCode": "0xac",
> + "BriefDescription": "Average sampled latency for L3 requests where data is returned from extension memory (CXL) in a different NUMA node.",
> + "UMask": "0x20",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency.all",
> + "EventCode": "0xac",
> + "BriefDescription": "Average sampled latency for L3 requests where data is returned from all types of sources.",
> + "UMask": "0x3f",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency_requests.dram_near",
> + "EventCode": "0xad",
> + "BriefDescription": "Average sampled L3 requests where data is returned from DRAM in the same NUMA node.",
> + "UMask": "0x01",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency_requests.dram_far",
> + "EventCode": "0xad",
> + "BriefDescription": "Average sampled L3 requests where data is returned from DRAM in a different NUMA node.",
> + "UMask": "0x02",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency_requests.near_cache",
> + "EventCode": "0xad",
> + "BriefDescription": "Average sampled L3 requests where data is returned from cache of another CCX in the same NUMA node.",
> + "UMask": "0x04",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency_requests.far_cache",
> + "EventCode": "0xad",
> + "BriefDescription": "Average sampled L3 requests where data is returned from cache of another CCX in a different NUMA node.",
> + "UMask": "0x08",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency_requests.ext_near",
> + "EventCode": "0xad",
> + "BriefDescription": "Average sampled L3 requests where data is returned from extension memory (CXL) in the same NUMA node.",
> + "UMask": "0x10",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency_requests.ext_far",
> + "EventCode": "0xad",
> + "BriefDescription": "Average sampled L3 requests where data is returned from extension memory (CXL) in a different NUMA node.",
> + "UMask": "0x20",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + },
> + {
> + "EventName": "l3_xi_sampled_latency_requests.all",
> + "EventCode": "0xad",
> + "BriefDescription": "Average sampled L3 requests where data is returned from all types of sources.",
> + "UMask": "0x3f",
> + "EnAllCores": "0x1",
> + "EnAllSlices": "0x1",
> + "SliceId": "0x3",
> + "ThreadMask": "0x3",
> + "Unit": "L3PMC"
> + }
> +]
> diff --git a/tools/perf/pmu-events/arch/x86/amdzen6/memory-controller.json b/tools/perf/pmu-events/arch/x86/amdzen6/memory-controller.json
> new file mode 100644
> index 000000000000..649a60b09e1b
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/amdzen6/memory-controller.json
> @@ -0,0 +1,101 @@
> +[
> + {
> + "EventName": "umc_mem_clk",
> + "PublicDescription": "Memory clock (MEMCLK) cycles.",
> + "EventCode": "0x00",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_act_cmd.all",
> + "PublicDescription": "ACTIVATE commands sent.",
> + "EventCode": "0x05",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_act_cmd.rd",
> + "PublicDescription": "ACTIVATE commands sent for reads.",
> + "EventCode": "0x05",
> + "RdWrMask": "0x1",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_act_cmd.wr",
> + "PublicDescription": "ACTIVATE commands sent for writes.",
> + "EventCode": "0x05",
> + "RdWrMask": "0x2",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_pchg_cmd.all",
> + "PublicDescription": "PRECHARGE commands sent.",
> + "EventCode": "0x06",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_pchg_cmd.rd",
> + "PublicDescription": "PRECHARGE commands sent for reads.",
> + "EventCode": "0x06",
> + "RdWrMask": "0x1",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_pchg_cmd.wr",
> + "PublicDescription": "PRECHARGE commands sent for writes.",
> + "EventCode": "0x06",
> + "RdWrMask": "0x2",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_cas_cmd.all",
> + "PublicDescription": "CAS commands sent.",
> + "EventCode": "0x0a",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_cas_cmd.rd",
> + "PublicDescription": "CAS commands sent for reads.",
> + "EventCode": "0x0a",
> + "RdWrMask": "0x1",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_cas_cmd.wr",
> + "PublicDescription": "CAS commands sent for writes.",
> + "EventCode": "0x0a",
> + "RdWrMask": "0x2",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_data_slot_clks.all",
> + "PublicDescription": "Clock cycles where the data bus is utilized.",
> + "EventCode": "0x14",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_data_slot_clks.rd",
> + "PublicDescription": "Clock cycles where the data bus is utilized for reads.",
> + "EventCode": "0x14",
> + "RdWrMask": "0x1",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + },
> + {
> + "EventName": "umc_data_slot_clks.wr",
> + "PublicDescription": "Clock cycles where the data bus is utilized for writes.",
> + "EventCode": "0x14",
> + "RdWrMask": "0x2",
> + "PerPkg": "1",
> + "Unit": "UMCPMC"
> + }
> +]
> --
> 2.43.0
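For what it's worth, once the whole series (including the 4/4 mapping)
is applied and builds, the new events should be selectable by name from
perf. A hedged usage sketch — exact name resolution depends on the 4/4
mapping and requires a Zen 6 machine:

```shell
# System-wide L3 lookup counts for 1s; these are uncore (L3PMC/UMCPMC)
# events, so count across all CPUs with -a.
perf stat -a -e l3_lookup_state.l3_miss,l3_lookup_state.l3_hit sleep 1

# UMC CAS commands split into reads and writes.
perf stat -a -e umc_cas_cmd.rd,umc_cas_cmd.wr sleep 1
```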
Thread overview:
2026-01-08 7:52 [PATCH 0/4] perf vendor events amd: Add Zen 6 events and metrics Sandipan Das
2026-01-08 7:52 ` [PATCH 1/4] perf vendor events amd: Add Zen 6 core events Sandipan Das
2026-01-08 7:52 ` [PATCH 2/4] perf vendor events amd: Add Zen 6 uncore events Sandipan Das
2026-01-13 20:27 ` Arnaldo Carvalho de Melo [this message]
2026-01-13 20:41 ` Ian Rogers
2026-01-13 21:19 ` Arnaldo Carvalho de Melo
2026-01-08 7:52 ` [PATCH 3/4] perf vendor events amd: Add Zen 6 metrics Sandipan Das
2026-01-08 7:52 ` [PATCH 4/4] perf vendor events amd: Add Zen 6 mapping Sandipan Das
2026-01-08 17:41 ` [PATCH 0/4] perf vendor events amd: Add Zen 6 events and metrics Ian Rogers