From: Ian Rogers <irogers@google.com>
To: "Thomas Falcon" <thomas.falcon@intel.com>,
"Peter Zijlstra" <peterz@infradead.org>,
"Ingo Molnar" <mingo@redhat.com>,
"Arnaldo Carvalho de Melo" <acme@kernel.org>,
"Namhyung Kim" <namhyung@kernel.org>,
"Alexander Shishkin" <alexander.shishkin@linux.intel.com>,
"Jiri Olsa" <jolsa@kernel.org>, "Ian Rogers" <irogers@google.com>,
"Adrian Hunter" <adrian.hunter@intel.com>,
"Kan Liang" <kan.liang@linux.intel.com>,
"Andreas Färber" <afaerber@suse.de>,
"Manivannan Sadhasivam" <mani@kernel.org>,
"Caleb Biggers" <caleb.biggers@intel.com>,
linux-kernel@vger.kernel.org, linux-perf-users@vger.kernel.org
Subject: [PATCH v1 10/10] perf vendor events intel: Update sierraforest events to v1.12
Date: Tue, 23 Sep 2025 23:02:29 -0700
Message-ID: <20250924060229.375718-11-irogers@google.com>
In-Reply-To: <20250924060229.375718-1-irogers@google.com>

Update sierraforest events to v1.12 released in:
https://github.com/intel/perfmon/commit/8279984b0b2eef35412c0281983ef59ae74f19ed

Event json automatically generated by:
https://github.com/intel/perfmon/blob/main/scripts/create_perf_json.py

Signed-off-by: Ian Rogers <irogers@google.com>
---
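For reference (example usage only, not part of the generated files):
after rebuilding perf with the updated JSON, the new events such as
MEM_LOAD_UOPS_L3_MISS_RETIRED.ALL should be listable and usable in the
usual way, e.g.:

  $ perf list | grep -i mem_load_uops_l3_miss_retired
  $ perf stat -e MEM_LOAD_UOPS_L3_MISS_RETIRED.ALL -a sleep 1
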
tools/perf/pmu-events/arch/x86/mapfile.csv | 2 +-
.../arch/x86/sierraforest/cache.json | 61 +++++++++--
.../x86/sierraforest/uncore-interconnect.json | 10 +-
.../arch/x86/sierraforest/uncore-io.json | 1 -
.../arch/x86/sierraforest/uncore-memory.json | 103 ++++++++++++------
5 files changed, 133 insertions(+), 44 deletions(-)
diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
index 3938555d661e..32093bded949 100644
--- a/tools/perf/pmu-events/arch/x86/mapfile.csv
+++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
@@ -30,7 +30,7 @@ GenuineIntel-6-CC,v1.00,pantherlake,core
GenuineIntel-6-A7,v1.04,rocketlake,core
GenuineIntel-6-2A,v19,sandybridge,core
GenuineIntel-6-8F,v1.35,sapphirerapids,core
-GenuineIntel-6-AF,v1.11,sierraforest,core
+GenuineIntel-6-AF,v1.12,sierraforest,core
GenuineIntel-6-(37|4A|4C|4D|5A),v15,silvermont,core
GenuineIntel-6-(4E|5E|8E|9E|A5|A6),v59,skylake,core
GenuineIntel-6-55-[01234],v1.37,skylakex,core
diff --git a/tools/perf/pmu-events/arch/x86/sierraforest/cache.json b/tools/perf/pmu-events/arch/x86/sierraforest/cache.json
index 877052db1490..b2650e8ae252 100644
--- a/tools/perf/pmu-events/arch/x86/sierraforest/cache.json
+++ b/tools/perf/pmu-events/arch/x86/sierraforest/cache.json
@@ -162,6 +162,14 @@
"SampleAfterValue": "1000003",
"UMask": "0x6"
},
+ {
+ "BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a demand load miss which hit in the LLC, no snoop was required. LLC provides the data. If the core has access to an L3 cache, an LLC hit refers to an L3 cache hit, otherwise it counts zeros.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0x34",
+ "EventName": "MEM_BOUND_STALLS_LOAD.LLC_HIT_NOSNOOP",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
{
"BriefDescription": "Counts the number of unhalted cycles when the core is stalled due to a demand load miss which missed all the local caches. If the core has access to an L3 cache, an LLC miss refers to an L3 cache miss, otherwise it is an L2 cache miss.",
"Counter": "0,1,2,3,4,5,6,7",
@@ -178,6 +186,14 @@
"SampleAfterValue": "1000003",
"UMask": "0x80"
},
+ {
+ "BriefDescription": "Counts the total number of load ops retired that miss the L3 cache.",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_UOPS_L3_MISS_RETIRED.ALL",
+ "SampleAfterValue": "1000003",
+ "UMask": "0xff"
+ },
{
"BriefDescription": "Counts the number of load ops retired that miss the L3 cache and hit in DRAM",
"Counter": "0,1,2,3,4,5,6,7",
@@ -186,6 +202,31 @@
"SampleAfterValue": "1000003",
"UMask": "0x1"
},
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss the L3 cache and hit in a Remote DRAM",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_DRAM_OR_NOFWD",
+ "PublicDescription": "Counts the number of load ops retired that miss the L3 cache and hit in a Remote DRAM, OR had a Remote snoop miss/no fwd and hit in the Local Dram",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x2"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss the L3 cache and hit in a Remote Cache and modified data was forwarded",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_HITM",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x8"
+ },
+ {
+ "BriefDescription": "Counts the number of load ops retired that miss the L3 cache and hit in a Remote Cache and non-modified data was forwarded",
+ "Counter": "0,1,2,3,4,5,6,7",
+ "EventCode": "0xd3",
+ "EventName": "MEM_LOAD_UOPS_L3_MISS_RETIRED.REMOTE_FWD_NONM",
+ "SampleAfterValue": "1000003",
+ "UMask": "0x4"
+ },
{
"BriefDescription": "Counts the number of load ops retired that hit the L1 data cache.",
"Counter": "0,1,2,3,4,5,6,7",
@@ -286,7 +327,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_1024",
@@ -297,7 +338,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
@@ -308,7 +349,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
@@ -319,7 +360,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_2048",
@@ -330,7 +371,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
@@ -341,7 +382,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
@@ -352,7 +393,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
@@ -363,7 +404,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
@@ -374,7 +415,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
@@ -385,7 +426,7 @@
},
{
"BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled.",
- "Counter": "0,1",
+ "Counter": "0,1,2,3,4,5,6,7",
"Data_LA": "1",
"EventCode": "0xd0",
"EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
diff --git a/tools/perf/pmu-events/arch/x86/sierraforest/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/sierraforest/uncore-interconnect.json
index 952b6de3fefc..251e5d20fefe 100644
--- a/tools/perf/pmu-events/arch/x86/sierraforest/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/sierraforest/uncore-interconnect.json
@@ -839,11 +839,19 @@
"Counter": "0,1,2,3",
"EventCode": "0x1F",
"EventName": "UNC_I_MISC1.LOST_FWD",
- "Experimental": "1",
"PerPkg": "1",
"UMask": "0x10",
"Unit": "IRP"
},
+ {
+ "BriefDescription": "Misc Events - Set 1 : Received Invalid : Secondary received a transfer that did not have sufficient MESI state",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x1F",
+ "EventName": "UNC_I_MISC1.SEC_RCVD_INVLD",
+ "PerPkg": "1",
+ "UMask": "0x20",
+ "Unit": "IRP"
+ },
{
"BriefDescription": "Snoop Hit E/S responses",
"Counter": "0,1,2,3",
diff --git a/tools/perf/pmu-events/arch/x86/sierraforest/uncore-io.json b/tools/perf/pmu-events/arch/x86/sierraforest/uncore-io.json
index f4f956966e16..2ea2637df3fb 100644
--- a/tools/perf/pmu-events/arch/x86/sierraforest/uncore-io.json
+++ b/tools/perf/pmu-events/arch/x86/sierraforest/uncore-io.json
@@ -1321,7 +1321,6 @@
"FCMask": "0x01",
"PerPkg": "1",
"PortMask": "0x0FF",
- "PublicDescription": "-",
"UMask": "0x4",
"Unit": "IIO"
},
diff --git a/tools/perf/pmu-events/arch/x86/sierraforest/uncore-memory.json b/tools/perf/pmu-events/arch/x86/sierraforest/uncore-memory.json
index c7e9dbe02eb0..a9fd7a34b24b 100644
--- a/tools/perf/pmu-events/arch/x86/sierraforest/uncore-memory.json
+++ b/tools/perf/pmu-events/arch/x86/sierraforest/uncore-memory.json
@@ -56,6 +56,33 @@
"UMask": "0xcf",
"Unit": "IMC"
},
+ {
+ "BriefDescription": "CAS count for SubChannel 0 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_NON_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc3",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 auto-precharge reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_PRE_REG",
+ "PerPkg": "1",
+ "UMask": "0xc2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 0 auto-precharge underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_PRE_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc8",
+ "Unit": "IMC"
+ },
{
"BriefDescription": "CAS count for SubChannel 0 regular reads",
"Counter": "0,1,2,3",
@@ -74,6 +101,15 @@
"UMask": "0xc4",
"Unit": "IMC"
},
+ {
+ "BriefDescription": "CAS count for SubChannel 0 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x05",
+ "EventName": "UNC_M_CAS_COUNT_SCH0.RD_UNDERFILL_ALL",
+ "PerPkg": "1",
+ "UMask": "0xcc",
+ "Unit": "IMC"
+ },
{
"BriefDescription": "CAS count for SubChannel 0, all writes",
"Counter": "0,1,2,3",
@@ -121,6 +157,33 @@
"UMask": "0xcf",
"Unit": "IMC"
},
+ {
+ "BriefDescription": "CAS count for SubChannel 1 regular reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_NON_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc3",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 auto-precharge reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_PRE_REG",
+ "PerPkg": "1",
+ "UMask": "0xc2",
+ "Unit": "IMC"
+ },
+ {
+ "BriefDescription": "CAS count for SubChannel 1 auto-precharge underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_PRE_UNDERFILL",
+ "PerPkg": "1",
+ "UMask": "0xc8",
+ "Unit": "IMC"
+ },
{
"BriefDescription": "CAS count for SubChannel 1 regular reads",
"Counter": "0,1,2,3",
@@ -139,6 +202,15 @@
"UMask": "0xc4",
"Unit": "IMC"
},
+ {
+ "BriefDescription": "CAS count for SubChannel 1 underfill reads",
+ "Counter": "0,1,2,3",
+ "EventCode": "0x06",
+ "EventName": "UNC_M_CAS_COUNT_SCH1.RD_UNDERFILL_ALL",
+ "PerPkg": "1",
+ "UMask": "0xcc",
+ "Unit": "IMC"
+ },
{
"BriefDescription": "CAS count for SubChannel 1, all writes",
"Counter": "0,1,2,3",
@@ -195,7 +267,6 @@
"EventName": "UNC_M_MR4_2XREF_CYCLES.SCH0_DIMM0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -206,7 +277,6 @@
"EventName": "UNC_M_MR4_2XREF_CYCLES.SCH0_DIMM1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -217,7 +287,6 @@
"EventName": "UNC_M_MR4_2XREF_CYCLES.SCH1_DIMM0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x4",
"Unit": "IMC"
},
@@ -228,7 +297,6 @@
"EventName": "UNC_M_MR4_2XREF_CYCLES.SCH1_DIMM1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x8",
"Unit": "IMC"
},
@@ -239,7 +307,6 @@
"EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH0_DIMM0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -250,7 +317,6 @@
"EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH0_DIMM1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -261,7 +327,6 @@
"EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH1_DIMM0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x4",
"Unit": "IMC"
},
@@ -272,7 +337,6 @@
"EventName": "UNC_M_PDC_MR4ACTIVE_CYCLES.SCH1_DIMM1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x8",
"Unit": "IMC"
},
@@ -283,7 +347,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -294,7 +357,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -305,7 +367,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK2",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x4",
"Unit": "IMC"
},
@@ -316,7 +377,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH0_RANK3",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x8",
"Unit": "IMC"
},
@@ -327,7 +387,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x10",
"Unit": "IMC"
},
@@ -338,7 +397,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x20",
"Unit": "IMC"
},
@@ -349,7 +407,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK2",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x40",
"Unit": "IMC"
},
@@ -360,7 +417,6 @@
"EventName": "UNC_M_POWERDOWN_CYCLES.SCH1_RANK3",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x80",
"Unit": "IMC"
},
@@ -371,7 +427,6 @@
"EventName": "UNC_M_POWER_CHANNEL_PPD_CYCLES",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"Unit": "IMC"
},
{
@@ -381,7 +436,6 @@
"EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES.SLOT0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -392,7 +446,6 @@
"EventName": "UNC_M_POWER_CRITICAL_THROTTLE_CYCLES.SLOT1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -423,7 +476,6 @@
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.MR4BLKEN",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x8",
"Unit": "IMC"
},
@@ -434,7 +486,6 @@
"EventName": "UNC_M_POWER_THROTTLE_CYCLES.RAPLBLK",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x4",
"Unit": "IMC"
},
@@ -617,7 +668,6 @@
"EventName": "UNC_M_SELF_REFRESH.ENTER_SUCCESS",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "UNC_M_SELF_REFRESH.ENTER_SUCCESS",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -628,7 +678,6 @@
"EventName": "UNC_M_SELF_REFRESH.ENTER_SUCCESS_CYCLES",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -639,7 +688,6 @@
"EventName": "UNC_M_THROTTLE_CRIT_CYCLES.SLOT0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -650,7 +698,6 @@
"EventName": "UNC_M_THROTTLE_CRIT_CYCLES.SLOT1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -661,7 +708,6 @@
"EventName": "UNC_M_THROTTLE_HIGH_CYCLES.SLOT0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -672,7 +718,6 @@
"EventName": "UNC_M_THROTTLE_HIGH_CYCLES.SLOT1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -683,7 +728,6 @@
"EventName": "UNC_M_THROTTLE_LOW_CYCLES.SLOT0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -694,7 +738,6 @@
"EventName": "UNC_M_THROTTLE_LOW_CYCLES.SLOT1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
@@ -705,7 +748,6 @@
"EventName": "UNC_M_THROTTLE_MID_CYCLES.SLOT0",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x1",
"Unit": "IMC"
},
@@ -716,7 +758,6 @@
"EventName": "UNC_M_THROTTLE_MID_CYCLES.SLOT1",
"Experimental": "1",
"PerPkg": "1",
- "PublicDescription": "-",
"UMask": "0x2",
"Unit": "IMC"
},
--
2.51.0.534.gc79095c0ca-goog