* [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations
@ 2026-05-15 6:11 Dapeng Mi
2026-05-15 6:11 ` [PATCH 01/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ICX Dapeng Mi
` (10 more replies)
0 siblings, 11 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Currently, the Intel perf code defines several hard-coded event
configurations for each platform. These configurations include event
constraints, extra MSR settings, and predefined extra MSR values for
certain cache events.
For example, the following five hard-coded event configurations are
defined for Sapphire Rapids:
- intel_glc_event_constraints[]: Non-PEBS event constraints.
- intel_glc_pebs_event_constraints[]: PEBS event constraints.
- intel_glc_extra_regs[]: Event-to-extra MSR mapping for events requiring
extra MSR access.
- glc_hw_cache_event_ids[]: Cache event IDs.
- glc_hw_cache_extra_regs[]: Extra MSR values for L3 and node events,
mainly for OCR/OMR events.
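For readers less familiar with these tables, below is a rough sketch of
what representative entries look like. The values are illustrative only,
taken from hunks later in this series, and are not a complete or
authoritative listing:
  /* Pin the TOPDOWN.SLOTS pseudo event (0x0400) to fixed counter 3. */
  FIXED_EVENT_CONSTRAINT(0x0400, 3),
  /* Restrict event 0xa3 with umask 0x01 to general counters 0-3. */
  INTEL_UEVENT_CONSTRAINT(0x01a3, 0xf),
  /* OCR event 0x01b7 additionally needs the MSR_OFFCORE_RSP_0 extra MSR. */
  INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0),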
However, these hard-coded configurations can become outdated or incorrect
as perfmon events are continuously updated
(see: https://github.com/intel/perfmon). This can result in events being
scheduled on incorrect hardware counters, leading to inaccurate counts,
especially for legacy cache events such as llc-load-misses and
llc-store-misses. While these legacy events are less commonly used since
the introduction of JSON-based cache events, it is still important to keep
them accurate.
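This matters because the legacy cache aliases are resolved entirely
through these hard-coded tables at event open time. Roughly (a
simplified sketch for illustration, not the exact kernel code):
  /* Resolve e.g. llc-load-misses from the hard-coded tables. */
  val = hw_cache_event_ids[C(LL)][C(OP_READ)][C(RESULT_MISS)];
  if (val == 0 || val == (u64)-1)
          return -EINVAL;              /* alias not supported */
  hwc->config |= val;                  /* event select + umask */
  /* OCR-style entries also carry the extra (offcore response) MSR value. */
  attr->config1 = hw_cache_extra_regs[C(LL)][C(OP_READ)][C(RESULT_MISS)];
A stale entry therefore silently programs the wrong event encoding or
extra MSR value, which is exactly the inaccuracy this series fixes.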
This patchset addresses all identified mismatches on mainstream platforms,
including server platforms (ICX, SPR, EMR, GNR, DMR, SRF, and CWF) and
client platforms (ADL, MTL, LNL, ARL, PTL, and NVL).
Note: Due to issues in the 7.1-rc2 release that cause boot-up hangs on
Intel hybrid platforms, this patchset was developed and tested against the
7.0 release.
Testing:
All tests below were run on the platforms mentioned above, with no issues
found:
1. Perf counting test:
$ perf test 114
2. Perf sampling test:
$ perf test 148
3. Legacy LLC cache event counting test:
$ perf stat -e llc-loads,llc-load-misses,llc-stores,llc-store-misses -a
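As an additional cross-check (not one of the runs above), the legacy
alias can be compared against the corresponding JSON event from the
perfmon repository; on the same workload the two counts should land in
the same ballpark. The event name below follows the perfmon JSON and may
need a PMU prefix (e.g. cpu_core//) on hybrid parts:
$ perf stat -e llc-load-misses -e OCR.DEMAND_DATA_RD.L3_MISS -a -- sleep 10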
Dapeng Mi (11):
perf/x86/intel: Update event constraints and cache_extra_regs[] for
ICX
perf/x86/intel: Update event constraints and cache_extra_regs[] for
SPR
perf/x86/intel: Update event constraints for DMR
perf/x86/intel: Update event constraints and cache_extra_regs[] for
ADL
perf/x86/intel: Update event constraints and cache_extra_regs[] for
MTL
perf/x86/intel: Update event constraints and cache_extra_regs[] for
LNL
perf/x86/intel: Update event constraints and cache_extra_regs[] for
ARL
perf/x86/intel: Update event constraints for PTL
perf/x86/intel: Update event constraints and cache_extra_regs[] for
NVL
perf/x86/intel: Update event constraints and cache_extra_regs[] for
SRF
perf/x86/intel: Update event constraints and cache_extra_regs[] for
CWF
arch/x86/events/intel/core.c | 476 +++++++++++++++++++++++++++++------
arch/x86/events/intel/ds.c | 23 +-
arch/x86/events/perf_event.h | 4 +-
3 files changed, 421 insertions(+), 82 deletions(-)
base-commit: 028ef9c96e96197026887c0f092424679298aae8
--
2.34.1
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 01/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ICX
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 02/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SPR Dapeng Mi
` (9 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Icelake server according to the latest ICX perfmon events (v1.30).
Since the cache extra register values differ from previous generations,
introduce a new snc_hw_cache_extra_regs[] array to hold the extra
register values for ICX.
ICX perfmon events:
https://github.com/intel/perfmon/blob/main/ICX/events/icelakex_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 48 ++++++++++++++++++++++++++++++++----
1 file changed, 43 insertions(+), 5 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 793335c3ce78..1390d1da985b 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -310,10 +310,11 @@ static struct extra_reg intel_skl_extra_regs[] __read_mostly = {
static struct event_constraint intel_icl_event_constraints[] = {
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x01c0, 0), /* old INST_RETIRED.PREC_DIST */
- FIXED_EVENT_CONSTRAINT(0x0100, 0), /* INST_RETIRED.PREC_DIST */
+ FIXED_EVENT_CONSTRAINT(0x0100, 0), /* pseudo INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
- FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
- FIXED_EVENT_CONSTRAINT(0x0400, 3), /* SLOTS */
+ FIXED_EVENT_CONSTRAINT(0x0200, 1), /* pseudo CPU_CLK_UNHALTED.THREAD */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF_TSC */
+ FIXED_EVENT_CONSTRAINT(0x0400, 3), /* pseudo TOPDOWN.SLOTS */
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_RETIRING, 0),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_BAD_SPEC, 1),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_FE_BOUND, 2),
@@ -1019,6 +1020,41 @@ static __initconst const u64 skl_hw_cache_extra_regs
},
};
+static __initconst const u64 snc_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] =
+{
+ [ C(LL ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x3FBFC00001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x3F3FFC0002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x3F3FC00002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ [ C(OP_PREFETCH) ] = {
+ [ C(RESULT_ACCESS) ] = 0x0,
+ [ C(RESULT_MISS) ] = 0x0,
+ },
+ },
+ [ C(NODE) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x104000001, /* OCR.DEMAND_DATA_RD.LOCAL_DRAM */
+ [ C(RESULT_MISS) ] = 0x730000001, /* OCR.DEMAND_DATA_RD.REMOTE_DRAM */
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x104000002, /* OCR.DEMAND_RFO.LOCAL_DRAM */
+ [ C(RESULT_MISS) ] = 0x730000002, /* OCR.DEMAND_RFO.REMOTE_DRAM */
+ },
+ [ C(OP_PREFETCH) ] = {
+ [ C(RESULT_ACCESS) ] = 0x0,
+ [ C(RESULT_MISS) ] = 0x0,
+ },
+ },
+};
+
#define SNB_DMND_DATA_RD (1ULL << 0)
#define SNB_DMND_RFO (1ULL << 1)
#define SNB_DMND_IFETCH (1ULL << 2)
@@ -8119,17 +8155,19 @@ __init int intel_pmu_init(void)
case INTEL_ICELAKE_X:
case INTEL_ICELAKE_D:
+ memcpy(hw_cache_extra_regs, snc_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
x86_pmu.pebs_ept = 1;
pmem = true;
- fallthrough;
+ goto snc_common;
case INTEL_ICELAKE_L:
case INTEL_ICELAKE:
case INTEL_TIGERLAKE_L:
case INTEL_TIGERLAKE:
case INTEL_ROCKETLAKE:
+ memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+ snc_common:
x86_pmu.late_ack = true;
memcpy(hw_cache_event_ids, skl_hw_cache_event_ids, sizeof(hw_cache_event_ids));
- memcpy(hw_cache_extra_regs, skl_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
hw_cache_event_ids[C(ITLB)][C(OP_READ)][C(RESULT_ACCESS)] = -1;
intel_pmu_lbr_init_skl();
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 02/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SPR
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
2026-05-15 6:11 ` [PATCH 01/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ICX Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 03/11] perf/x86/intel: Update event constraints for DMR Dapeng Mi
` (8 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Sapphire Rapids according to the latest SPR perfmon events (v1.39).
Emerald Rapids (EMR) and Granite Rapids (GNR) share exactly the same
event constraints and extra MSR values with SPR, so no extra changes are
needed for EMR and GNR.
Please note this change could temporarily impact other platforms which
share the hard-coded data structures; this is fixed by the subsequent
patches in this series.
SPR perfmon events:
https://github.com/intel/perfmon/blob/main/SPR/events/sapphirerapids_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 1390d1da985b..b3ccc785a4f6 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -356,11 +356,12 @@ static struct extra_reg intel_glc_extra_regs[] __read_mostly = {
static struct event_constraint intel_glc_event_constraints[] = {
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
- FIXED_EVENT_CONSTRAINT(0x0100, 0), /* INST_RETIRED.PREC_DIST */
+ FIXED_EVENT_CONSTRAINT(0x0100, 0), /* pseudo INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
- FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+ FIXED_EVENT_CONSTRAINT(0x0200, 1), /* pseudo CPU_CLK_UNHALTED.THREAD */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF_TSC */
FIXED_EVENT_CONSTRAINT(0x013c, 2), /* CPU_CLK_UNHALTED.REF_TSC_P */
- FIXED_EVENT_CONSTRAINT(0x0400, 3), /* SLOTS */
+ FIXED_EVENT_CONSTRAINT(0x0400, 3), /* pseudo TOPDOWN.SLOTS */
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_RETIRING, 0),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_BAD_SPEC, 1),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_FE_BOUND, 2),
@@ -380,9 +381,13 @@ static struct event_constraint intel_glc_event_constraints[] = {
INTEL_UEVENT_CONSTRAINT(0x01a3, 0xf),
INTEL_UEVENT_CONSTRAINT(0x02a3, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x05a3, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x06a3, 0xf),
INTEL_UEVENT_CONSTRAINT(0x08a3, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x0ca3, 0xf),
INTEL_UEVENT_CONSTRAINT(0x04a4, 0x1),
INTEL_UEVENT_CONSTRAINT(0x08a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x01cd, 0xfe),
INTEL_UEVENT_CONSTRAINT(0x02cd, 0x1),
INTEL_EVENT_CONSTRAINT(0xce, 0x1),
INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xdf, 0xf),
@@ -714,18 +719,18 @@ static __initconst const u64 glc_hw_cache_extra_regs
{
[ C(LL ) ] = {
[ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x10001,
- [ C(RESULT_MISS) ] = 0x3fbfc00001,
+ [ C(RESULT_ACCESS) ] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x3fbfc00001, /* OCR.DEMAND_DATA_RD.L3_MISS */
},
[ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = 0x3f3ffc0002,
- [ C(RESULT_MISS) ] = 0x3f3fc00002,
+ [ C(RESULT_ACCESS) ] = 0x3f3ffc0002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x3f3fc00002, /* OCR.DEMAND_RFO.L3_MISS */
},
},
[ C(NODE) ] = {
[ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x10c000001,
- [ C(RESULT_MISS) ] = 0x3fb3000001,
+ [ C(RESULT_ACCESS) ] = 0x104000001, /* OCR.DEMAND_DATA_RD.LOCAL_DRAM */
+ [ C(RESULT_MISS) ] = 0x730000001, /* OCR.DEMAND_DATA_RD.REMOTE_DRAM */
},
},
};
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 03/11] perf/x86/intel: Update event constraints for DMR
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
2026-05-15 6:11 ` [PATCH 01/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ICX Dapeng Mi
2026-05-15 6:11 ` [PATCH 02/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SPR Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 04/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ADL Dapeng Mi
` (7 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Add the missing event constraint for the 0x0200 event and add comments
showing the event names in pnc_hw_cache_extra_regs[].
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index b3ccc785a4f6..0d0edc2d1b90 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -466,11 +466,12 @@ static struct extra_reg intel_lnc_extra_regs[] __read_mostly = {
static struct event_constraint intel_pnc_event_constraints[] = {
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
- FIXED_EVENT_CONSTRAINT(0x0100, 0), /* INST_RETIRED.PREC_DIST */
+ FIXED_EVENT_CONSTRAINT(0x0100, 0), /* pseudo INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
- FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+ FIXED_EVENT_CONSTRAINT(0x0200, 1), /* pseudo CPU_CLK_UNHALTED.THREAD */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF_TSC */
FIXED_EVENT_CONSTRAINT(0x013c, 2), /* CPU_CLK_UNHALTED.REF_TSC_P */
- FIXED_EVENT_CONSTRAINT(0x0400, 3), /* SLOTS */
+ FIXED_EVENT_CONSTRAINT(0x0400, 3), /* pseudo TOPDOWN.SLOTS */
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_RETIRING, 0),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_BAD_SPEC, 1),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_FE_BOUND, 2),
@@ -821,12 +822,12 @@ static __initconst const u64 pnc_hw_cache_extra_regs
{
[ C(LL ) ] = {
[ C(OP_READ) ] = {
- [ C(RESULT_ACCESS) ] = 0x4000000000000001,
- [ C(RESULT_MISS) ] = 0xFFFFF000000001,
+ [ C(RESULT_ACCESS) ] = 0x4000000000000001, /* OMR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0xFFFFF000000001, /* OMR.DEMAND_DATA_RD.L3_MISS */
},
[ C(OP_WRITE) ] = {
- [ C(RESULT_ACCESS) ] = 0x4000000000000002,
- [ C(RESULT_MISS) ] = 0xFFFFF000000002,
+ [ C(RESULT_ACCESS) ] = 0x4000000000000002, /* OMR.DEMAND_RFO.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0xFFFFF000000002, /* OMR.DEMAND_RFO.L3_MISS */
},
},
};
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 04/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ADL
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (2 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 03/11] perf/x86/intel: Update event constraints for DMR Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:38 ` sashiko-bot
2026-05-15 6:11 ` [PATCH 05/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for MTL Dapeng Mi
` (6 subsequent siblings)
10 siblings, 1 reply; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Alderlake according to the latest ADL perfmon events (V1.39).
One important note is that ADL differs from the SPR server on the
L3/node related OCR events even though it shares the same uarch: ADL has
different extra MSR values and no node events. So some variant
structures and functions, such as adl_glc_hw_cache_event_ids[],
adl_glc_hw_cache_extra_regs[] and intel_pmu_init_glc_hybrid(), are
introduced to reflect these differences.
Please note these changes could temporarily impact other platforms like
MTL/ARL-U which share the hard-coded event structures; this is fixed by
subsequent patches in this series.
ADL perfmon events:
https://github.com/intel/perfmon/blob/main/ADL/events/alderlake_goldencove_core.json
https://github.com/intel/perfmon/blob/main/ADL/events/alderlake_gracemont_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 137 +++++++++++++++++++++++++++++++++--
arch/x86/events/intel/ds.c | 2 +-
2 files changed, 131 insertions(+), 8 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 0d0edc2d1b90..7948e3afc291 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -215,8 +215,10 @@ static struct event_constraint intel_slm_event_constraints[] __read_mostly =
static struct event_constraint intel_grt_event_constraints[] __read_mostly = {
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+ FIXED_EVENT_CONSTRAINT(0x0100, 0), /* pseudo INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
- FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF */
+ FIXED_EVENT_CONSTRAINT(0x0200, 1), /* pseudo CPU_CLK_UNHALTED.THREAD */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF_TSC */
FIXED_EVENT_CONSTRAINT(0x013c, 2), /* CPU_CLK_UNHALTED.REF_TSC_P */
EVENT_CONSTRAINT_END
};
@@ -713,6 +715,80 @@ static __initconst const u64 glc_hw_cache_event_ids
},
};
+/* ADL P-core (Golden cove) specific event code. */
+static __initconst const u64 adl_glc_hw_cache_event_ids
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] =
+{
+ [ C(L1D ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x81d0,
+ [ C(RESULT_MISS) ] = 0xe124,
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x82d0,
+ },
+ },
+ [ C(L1I ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_MISS) ] = 0xe424,
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = -1,
+ [ C(RESULT_MISS) ] = -1,
+ },
+ },
+ [ C(LL ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x12a,
+ [ C(RESULT_MISS) ] = 0x12a,
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x12a,
+ [ C(RESULT_MISS) ] = 0x12a,
+ },
+ },
+ [ C(DTLB) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x81d0,
+ [ C(RESULT_MISS) ] = 0xe12,
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x82d0,
+ [ C(RESULT_MISS) ] = 0xe13,
+ },
+ },
+ [ C(ITLB) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = -1,
+ [ C(RESULT_MISS) ] = 0xe11,
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = -1,
+ [ C(RESULT_MISS) ] = -1,
+ },
+ [ C(OP_PREFETCH) ] = {
+ [ C(RESULT_ACCESS) ] = -1,
+ [ C(RESULT_MISS) ] = -1,
+ },
+ },
+ [ C(BPU ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x4c4,
+ [ C(RESULT_MISS) ] = 0x4c5,
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = -1,
+ [ C(RESULT_MISS) ] = -1,
+ },
+ [ C(OP_PREFETCH) ] = {
+ [ C(RESULT_ACCESS) ] = -1,
+ [ C(RESULT_MISS) ] = -1,
+ },
+ },
+};
+
static __initconst const u64 glc_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -736,6 +812,24 @@ static __initconst const u64 glc_hw_cache_extra_regs
},
};
+/* ADL P-core (Golden cove) specific extra regs value. */
+static __initconst const u64 adl_glc_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] =
+{
+ [ C(LL ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x3fbfc00001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x10002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x3fbfc00002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
+
static __initconst const u64 pnc_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -2384,6 +2478,23 @@ static __initconst const u64 tnt_hw_cache_extra_regs
},
};
+static __initconst const u64 grt_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x3F84400001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x10002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x3F84400002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
+
+
static __initconst const u64 arw_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -2434,9 +2545,12 @@ static struct attribute *grt_mem_attrs[] = {
};
static struct extra_reg intel_grt_extra_regs[] __read_mostly = {
- /* must define OFFCORE_RSP_X first, see intel_fixup_er() */
- INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x3fffffffffull, RSP_0),
- INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0x3fffffffffull, RSP_1),
+ /*
+ * Must define OFFCORE_RSP_X first, see intel_fixup_er().
+ * Bit 63 only valid on OFFCORE_RSP_0 MSR.
+ */
+ INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x8003f03fffffffffull, RSP_0),
+ INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0x3f03fffffffffull, RSP_1),
INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x5d0),
EVENT_EXTRA_END
};
@@ -7499,6 +7613,15 @@ static __always_inline void intel_pmu_init_glc(struct pmu *pmu)
intel_pmu_ref_cycles_ext();
}
+static __always_inline void intel_pmu_init_glc_hybrid(struct pmu *pmu)
+{
+ intel_pmu_init_glc(pmu);
+
+ /* ADL has different extra MSR values from Server for the L3 or node OCR/OMR events. */
+ memcpy(hybrid_var(pmu, hw_cache_event_ids), adl_glc_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+ memcpy(hybrid_var(pmu, hw_cache_extra_regs), adl_glc_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+}
+
static __always_inline void intel_pmu_init_grt(struct pmu *pmu)
{
x86_pmu.mid_ack = true;
@@ -7511,7 +7634,7 @@ static __always_inline void intel_pmu_init_grt(struct pmu *pmu)
x86_pmu.flags |= PMU_FL_INSTR_LATENCY;
memcpy(hybrid_var(pmu, hw_cache_event_ids), glp_hw_cache_event_ids, sizeof(hw_cache_event_ids));
- memcpy(hybrid_var(pmu, hw_cache_extra_regs), tnt_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+ memcpy(hybrid_var(pmu, hw_cache_extra_regs), grt_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
hybrid_var(pmu, hw_cache_event_ids)[C(ITLB)][C(OP_READ)][C(RESULT_ACCESS)] = -1;
hybrid(pmu, event_constraints) = intel_grt_event_constraints;
hybrid(pmu, pebs_constraints) = intel_grt_pebs_event_constraints;
@@ -8269,7 +8392,7 @@ __init int intel_pmu_init(void)
/* Initialize big core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
- intel_pmu_init_glc(&pmu->pmu);
+ intel_pmu_init_glc_hybrid(&pmu->pmu);
if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU)) {
pmu->cntr_mask64 <<= 2;
pmu->cntr_mask64 |= 0x3;
@@ -8326,7 +8449,7 @@ __init int intel_pmu_init(void)
/* Initialize big core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
- intel_pmu_init_glc(&pmu->pmu);
+ intel_pmu_init_glc_hybrid(&pmu->pmu);
pmu->extra_regs = intel_rwc_extra_regs;
/* Initialize Atom core specific PerfMon capabilities.*/
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 7f0d515c07c5..efab3cb47885 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1292,7 +1292,7 @@ struct event_constraint intel_glm_pebs_event_constraints[] = {
struct event_constraint intel_grt_pebs_event_constraints[] = {
/* Allow all events as PEBS with no flags */
INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0x3),
- INTEL_HYBRID_LAT_CONSTRAINT(0x6d0, 0xf),
+ INTEL_HYBRID_LAT_CONSTRAINT(0x6d0, 0x3f),
EVENT_CONSTRAINT_END
};
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 05/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for MTL
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (3 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 04/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ADL Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 06/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for LNL Dapeng Mi
` (5 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Meteor Lake according to the latest MTL perfmon events (V1.21).
The MTL P-core (Redwood Cove) inherits the same perf event list from the
previous generation (Golden Cove), but the E-core (Crestmont) differs
somewhat from Gracemont in its perf event list. So apply the changes for
the Crestmont core.
MTL perfmon events:
https://github.com/intel/perfmon/blob/main/MTL/events/meteorlake_redwoodcove_core.json
https://github.com/intel/perfmon/blob/main/MTL/events/meteorlake_crestmont_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 27 +++++++++++++++++++++++++--
arch/x86/events/intel/ds.c | 7 +++++++
arch/x86/events/perf_event.h | 2 ++
3 files changed, 34 insertions(+), 2 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 7948e3afc291..5d99cfd7e701 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2494,6 +2494,21 @@ static __initconst const u64 grt_hw_cache_extra_regs
},
};
+static __initconst const u64 cmt_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x3fbfc00001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x10002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x3fbfc00002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
static __initconst const u64 arw_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_MAX]
@@ -7643,6 +7658,15 @@ static __always_inline void intel_pmu_init_grt(struct pmu *pmu)
intel_pmu_ref_cycles_ext();
}
+static __always_inline void intel_pmu_init_cmt(struct pmu *pmu)
+{
+ intel_pmu_init_grt(pmu);
+ memcpy(hybrid_var(pmu, hw_cache_extra_regs),
+ cmt_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+ hybrid(pmu, pebs_constraints) = intel_cmt_pebs_event_constraints;
+ hybrid(pmu, extra_regs) = intel_cmt_extra_regs;
+}
+
static __always_inline void intel_pmu_init_lnc(struct pmu *pmu)
{
intel_pmu_init_glc(pmu);
@@ -8454,8 +8478,7 @@ __init int intel_pmu_init(void)
/* Initialize Atom core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
- intel_pmu_init_grt(&pmu->pmu);
- pmu->extra_regs = intel_cmt_extra_regs;
+ intel_pmu_init_cmt(&pmu->pmu);
intel_pmu_pebs_data_source_mtl();
pr_cont("Meteorlake Hybrid events, ");
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index efab3cb47885..75b7f6f6d8bc 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1296,6 +1296,13 @@ struct event_constraint intel_grt_pebs_event_constraints[] = {
EVENT_CONSTRAINT_END
};
+struct event_constraint intel_cmt_pebs_event_constraints[] = {
+ /* Allow all events as PEBS with no flags */
+ INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0x3),
+ INTEL_HYBRID_LAT_CONSTRAINT(0x6d0, 0xff),
+ EVENT_CONSTRAINT_END
+};
+
struct event_constraint intel_arw_pebs_event_constraints[] = {
/* Allow all events as PEBS with no flags */
INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0xff),
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fad87d3c8b2c..fad99183f4d8 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1702,6 +1702,8 @@ extern struct event_constraint intel_glp_pebs_event_constraints[];
extern struct event_constraint intel_grt_pebs_event_constraints[];
+extern struct event_constraint intel_cmt_pebs_event_constraints[];
+
extern struct event_constraint intel_arw_pebs_event_constraints[];
extern struct event_constraint intel_nehalem_pebs_event_constraints[];
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 06/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for LNL
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (4 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 05/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for MTL Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 07/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ARL Dapeng Mi
` (4 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Lunarlake according to the latest LNL perfmon events (V1.22).
LNL introduces new extra register values for the OCR L3 cache events,
so add lnc_hw_cache_extra_regs[] and skt_hw_cache_extra_regs[] to
reflect the changes.
LNL perfmon events:
https://github.com/intel/perfmon/blob/main/LNL/events/lunarlake_lioncove_core.json
https://github.com/intel/perfmon/blob/main/LNL/events/lunarlake_skymont_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 62 +++++++++++++++++++++++++++++-------
arch/x86/events/intel/ds.c | 8 +++++
2 files changed, 59 insertions(+), 11 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 5d99cfd7e701..d9e421d4b3ed 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -225,12 +225,17 @@ static struct event_constraint intel_grt_event_constraints[] __read_mostly = {
static struct event_constraint intel_skt_event_constraints[] __read_mostly = {
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+ FIXED_EVENT_CONSTRAINT(0x0100, 0), /* pseudo INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
- FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF */
+ FIXED_EVENT_CONSTRAINT(0x0200, 1), /* pseudo CPU_CLK_UNHALTED.THREAD */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF_TSC */
FIXED_EVENT_CONSTRAINT(0x013c, 2), /* CPU_CLK_UNHALTED.REF_TSC_P */
FIXED_EVENT_CONSTRAINT(0x0073, 4), /* TOPDOWN_BAD_SPECULATION.ALL */
+ FIXED_EVENT_CONSTRAINT(0x0500, 4), /* pseudo TOPDOWN_BAD_SPECULATION.ALL */
FIXED_EVENT_CONSTRAINT(0x019c, 5), /* TOPDOWN_FE_BOUND.ALL */
+ FIXED_EVENT_CONSTRAINT(0x0600, 5), /* pseudo TOPDOWN_FE_BOUND.ALL */
FIXED_EVENT_CONSTRAINT(0x02c2, 6), /* TOPDOWN_RETIRING.ALL */
+ FIXED_EVENT_CONSTRAINT(0x0700, 6), /* pseudo TOPDOWN_RETIRING.ALL */
EVENT_CONSTRAINT_END
};
@@ -415,11 +420,12 @@ static struct extra_reg intel_rwc_extra_regs[] __read_mostly = {
static struct event_constraint intel_lnc_event_constraints[] = {
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
- FIXED_EVENT_CONSTRAINT(0x0100, 0), /* INST_RETIRED.PREC_DIST */
+ FIXED_EVENT_CONSTRAINT(0x0100, 0), /* pseudo INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
- FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */
+ FIXED_EVENT_CONSTRAINT(0x0200, 1), /* pseudo CPU_CLK_UNHALTED.THREAD */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF_TSC */
FIXED_EVENT_CONSTRAINT(0x013c, 2), /* CPU_CLK_UNHALTED.REF_TSC_P */
- FIXED_EVENT_CONSTRAINT(0x0400, 3), /* SLOTS */
+ FIXED_EVENT_CONSTRAINT(0x0400, 3), /* pseudo TOPDOWN.SLOTS */
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_RETIRING, 0),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_BAD_SPEC, 1),
METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_FE_BOUND, 2),
@@ -431,8 +437,6 @@ static struct event_constraint intel_lnc_event_constraints[] = {
INTEL_EVENT_CONSTRAINT(0x20, 0xf),
- INTEL_UEVENT_CONSTRAINT(0x012a, 0xf),
- INTEL_UEVENT_CONSTRAINT(0x012b, 0xf),
INTEL_UEVENT_CONSTRAINT(0x0148, 0x4),
INTEL_UEVENT_CONSTRAINT(0x0175, 0x4),
@@ -443,15 +447,14 @@ static struct event_constraint intel_lnc_event_constraints[] = {
INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4),
INTEL_UEVENT_CONSTRAINT(0x04a4, 0x1),
INTEL_UEVENT_CONSTRAINT(0x08a4, 0x1),
- INTEL_UEVENT_CONSTRAINT(0x10a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x10a4, 0x8),
INTEL_UEVENT_CONSTRAINT(0x01b1, 0x8),
INTEL_UEVENT_CONSTRAINT(0x01cd, 0x3fc),
INTEL_UEVENT_CONSTRAINT(0x02cd, 0x3),
+ INTEL_UEVENT_CONSTRAINT(0x87d0, 0x3ff),
INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xdf, 0xf),
- INTEL_UEVENT_CONSTRAINT(0x00e0, 0xf),
-
EVENT_CONSTRAINT_END
};
@@ -830,6 +833,23 @@ static __initconst const u64 adl_glc_hw_cache_extra_regs
},
};
+static __initconst const u64 lnc_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] =
+{
+ [ C(LL ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x9E7FA000001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x10002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0x9E7FA000002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
+
static __initconst const u64 pnc_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -2510,6 +2530,22 @@ static __initconst const u64 cmt_hw_cache_extra_regs
},
};
+static __initconst const u64 skt_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x13FBFC00001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x10002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x13FBFC00002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
+
static __initconst const u64 arw_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -7673,6 +7709,9 @@ static __always_inline void intel_pmu_init_lnc(struct pmu *pmu)
hybrid(pmu, event_constraints) = intel_lnc_event_constraints;
hybrid(pmu, pebs_constraints) = intel_lnc_pebs_event_constraints;
hybrid(pmu, extra_regs) = intel_lnc_extra_regs;
+
+ memcpy(hybrid_var(pmu, hw_cache_event_ids), adl_glc_hw_cache_event_ids, sizeof(hw_cache_event_ids));
+ memcpy(hybrid_var(pmu, hw_cache_extra_regs), lnc_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
}
static __always_inline void intel_pmu_init_pnc(struct pmu *pmu)
@@ -7691,9 +7730,10 @@ static __always_inline void intel_pmu_init_pnc(struct pmu *pmu)
static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
{
- intel_pmu_init_grt(pmu);
+ intel_pmu_init_cmt(pmu);
hybrid(pmu, event_constraints) = intel_skt_event_constraints;
- hybrid(pmu, extra_regs) = intel_cmt_extra_regs;
+ memcpy(hybrid_var(pmu, hw_cache_extra_regs),
+ skt_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
static_call_update(intel_pmu_enable_acr_event, intel_pmu_enable_acr);
}
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 75b7f6f6d8bc..ce23b50f449a 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1507,6 +1507,13 @@ struct event_constraint intel_lnc_pebs_event_constraints[] = {
INTEL_FLAGS_UEVENT_CONSTRAINT(0x100, 0x100000000ULL), /* INST_RETIRED.PREC_DIST */
INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL),
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x012a, 0x1), /* OCR.* events */
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x012b, 0x1), /* OCR.* events */
+
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x04a4, 0x1), /* TOPDOWN.BAD_SPEC_SLOTS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x08a4, 0x1), /* TOPDOWN.BR_MISPREDICT_SLOTS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x10a4, 0x8), /* TOPDOWN.MEMORY_BOUND_SLOTS */
+
INTEL_HYBRID_LDLAT_CONSTRAINT(0x1cd, 0x3fc),
INTEL_HYBRID_STLAT_CONSTRAINT(0x2cd, 0x3),
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
@@ -1516,6 +1523,7 @@ struct event_constraint intel_lnc_pebs_event_constraints[] = {
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x42d0, 0xf), /* MEM_INST_RETIRED.SPLIT_STORES */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x81d0, 0xf), /* MEM_INST_RETIRED.ALL_LOADS */
INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x82d0, 0xf), /* MEM_INST_RETIRED.ALL_STORES */
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x87d0, 0x3ff), /* MEM_INST_RETIRED.ANY */
INTEL_FLAGS_EVENT_CONSTRAINT_DATALA_LD_RANGE(0xd1, 0xd4, 0xf),
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 07/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ARL
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (5 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 06/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for LNL Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:40 ` sashiko-bot
2026-05-15 6:11 ` [PATCH 08/11] perf/x86/intel: Update event constraints for PTL Dapeng Mi
` (3 subsequent siblings)
10 siblings, 1 reply; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Arrowlake according to the latest ARL perfmon events (V1.17).
ARL shares almost the same event constraints and extra MSR configuration
with LNL, with two exceptions:
- The ARL P-core has different extra MSR values for
OCR.DEMAND_DATA_RD.L3_MISS and OCR.DEMAND_RFO.L3_MISS, so introduce
arl_lnc_hw_cache_extra_regs[] to reflect the difference.
- ARL-H has extra LPE cores which use the Crestmont architecture, so add
Crestmont-specific event constraints and hw_cache_extra_regs[] for the
LPE cores.
ARL perfmon events:
https://github.com/intel/perfmon/blob/main/ARL/events/arrowlake_lioncove_core.json
https://github.com/intel/perfmon/blob/main/ARL/events/arrowlake_skymont_core.json
https://github.com/intel/perfmon/blob/main/ARL/events/arrowlake_crestmont_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 56 ++++++++++++++++++++++++++++++------
1 file changed, 48 insertions(+), 8 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index d9e421d4b3ed..dc5ab18888ea 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -850,6 +850,24 @@ static __initconst const u64 lnc_hw_cache_extra_regs
},
};
+/* ARL specific lioncove hw_cache_extra_regs[] variant. */
+static __initconst const u64 arl_lnc_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] =
+{
+ [ C(LL ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0xFE7F8000001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x10002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0xFE7F8000002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
+
static __initconst const u64 pnc_hw_cache_event_ids
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -8529,16 +8547,41 @@ __init int intel_pmu_init(void)
case INTEL_WILDCATLAKE_L:
pr_cont("Pantherlake Hybrid events, ");
name = "pantherlake_hybrid";
+
+ intel_pmu_init_hybrid(hybrid_big_small);
+
+ /* Initialize big core specific PerfMon capabilities.*/
+ pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
+ intel_pmu_init_lnc(&pmu->pmu);
+
goto lnl_common;
- case INTEL_LUNARLAKE_M:
case INTEL_ARROWLAKE:
+ pr_cont("Arrowlake Hybrid events, ");
+ name = "arrowlake_hybrid";
+
+ intel_pmu_init_hybrid(hybrid_big_small);
+
+ /* Initialize big core specific PerfMon capabilities.*/
+ pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
+ intel_pmu_init_lnc(&pmu->pmu);
+ memcpy(hybrid_var(&pmu->pmu, hw_cache_extra_regs),
+ arl_lnc_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+
+ goto lnl_common;
+
+ case INTEL_LUNARLAKE_M:
pr_cont("Lunarlake Hybrid events, ");
name = "lunarlake_hybrid";
- lnl_common:
intel_pmu_init_hybrid(hybrid_big_small);
+ /* Initialize big core specific PerfMon capabilities.*/
+ pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
+ intel_pmu_init_lnc(&pmu->pmu);
+
+ lnl_common:
+
x86_pmu.pebs_latency_data = lnl_latency_data;
x86_pmu.get_event_constraints = mtl_get_event_constraints;
x86_pmu.hw_config = adl_hw_config;
@@ -8549,10 +8592,6 @@ __init int intel_pmu_init(void)
extra_attr = boot_cpu_has(X86_FEATURE_RTM) ?
mtl_hybrid_extra_attr_rtm : mtl_hybrid_extra_attr;
- /* Initialize big core specific PerfMon capabilities.*/
- pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
- intel_pmu_init_lnc(&pmu->pmu);
-
/* Initialize Atom core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
intel_pmu_init_skt(&pmu->pmu);
@@ -8576,6 +8615,8 @@ __init int intel_pmu_init(void)
/* Initialize big core specific PerfMon capabilities. */
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
intel_pmu_init_lnc(&pmu->pmu);
+ memcpy(hybrid_var(&pmu->pmu, hw_cache_extra_regs),
+ arl_lnc_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
/* Initialize Atom core specific PerfMon capabilities. */
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
@@ -8583,8 +8624,7 @@ __init int intel_pmu_init(void)
/* Initialize Lower Power Atom specific PerfMon capabilities. */
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_TINY_IDX];
- intel_pmu_init_grt(&pmu->pmu);
- pmu->extra_regs = intel_cmt_extra_regs;
+ intel_pmu_init_cmt(&pmu->pmu);
intel_pmu_pebs_data_source_arl_h();
pr_cont("ArrowLake-H Hybrid events, ");
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 08/11] perf/x86/intel: Update event constraints for PTL
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (6 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 07/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ARL Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 09/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for NVL Dapeng Mi
` (2 subsequent siblings)
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints for Pantherlake according to
the latest PTL perfmon events (V1.05).
PTL has almost the same perf event list as LNL except for some E-core
PEBS event constraints (the P-core list is exactly the same). Define
intel_dkt_pebs_event_constraints[] to reflect the PTL E-core specific
PEBS event constraints.
PTL perfmon events:
https://github.com/intel/perfmon/blob/main/PTL/events/pantherlake_cougarcove_core.json
https://github.com/intel/perfmon/blob/main/PTL/events/pantherlake_darkmont_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 20 ++++++++++++++++----
arch/x86/events/intel/ds.c | 7 +++++++
arch/x86/events/perf_event.h | 2 ++
3 files changed, 25 insertions(+), 4 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index dc5ab18888ea..b281402c3753 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -7755,6 +7755,13 @@ static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
static_call_update(intel_pmu_enable_acr_event, intel_pmu_enable_acr);
}
+/* Hybrid client variant. */
+static __always_inline void intel_pmu_init_dkt_hybrid(struct pmu *pmu)
+{
+ intel_pmu_init_skt(pmu);
+ hybrid(pmu, pebs_constraints) = intel_dkt_pebs_event_constraints;
+}
+
static __always_inline void intel_pmu_init_arw(struct pmu *pmu)
{
intel_pmu_init_grt(pmu);
@@ -8553,6 +8560,9 @@ __init int intel_pmu_init(void)
/* Initialize big core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
intel_pmu_init_lnc(&pmu->pmu);
+ /* Initialize Atom core specific PerfMon capabilities.*/
+ pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
+ intel_pmu_init_dkt_hybrid(&pmu->pmu);
goto lnl_common;
@@ -8567,6 +8577,9 @@ __init int intel_pmu_init(void)
intel_pmu_init_lnc(&pmu->pmu);
memcpy(hybrid_var(&pmu->pmu, hw_cache_extra_regs),
arl_lnc_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+ /* Initialize Atom core specific PerfMon capabilities.*/
+ pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
+ intel_pmu_init_skt(&pmu->pmu);
goto lnl_common;
@@ -8579,6 +8592,9 @@ __init int intel_pmu_init(void)
/* Initialize big core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
intel_pmu_init_lnc(&pmu->pmu);
+ /* Initialize Atom core specific PerfMon capabilities.*/
+ pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
+ intel_pmu_init_skt(&pmu->pmu);
lnl_common:
@@ -8592,10 +8608,6 @@ __init int intel_pmu_init(void)
extra_attr = boot_cpu_has(X86_FEATURE_RTM) ?
mtl_hybrid_extra_attr_rtm : mtl_hybrid_extra_attr;
- /* Initialize Atom core specific PerfMon capabilities.*/
- pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
- intel_pmu_init_skt(&pmu->pmu);
-
intel_pmu_pebs_data_source_lnl();
break;
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index ce23b50f449a..5159adabb9a2 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1303,6 +1303,13 @@ struct event_constraint intel_cmt_pebs_event_constraints[] = {
EVENT_CONSTRAINT_END
};
+struct event_constraint intel_dkt_pebs_event_constraints[] = {
+ /* Allow all events as PEBS with no flags */
+ INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0xff),
+ INTEL_HYBRID_LAT_CONSTRAINT(0x6d0, 0xff),
+ EVENT_CONSTRAINT_END
+};
+
struct event_constraint intel_arw_pebs_event_constraints[] = {
/* Allow all events as PEBS with no flags */
INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0xff),
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index fad99183f4d8..f9ea07d60930 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1704,6 +1704,8 @@ extern struct event_constraint intel_grt_pebs_event_constraints[];
extern struct event_constraint intel_cmt_pebs_event_constraints[];
+extern struct event_constraint intel_dkt_pebs_event_constraints[];
+
extern struct event_constraint intel_arw_pebs_event_constraints[];
extern struct event_constraint intel_nehalem_pebs_event_constraints[];
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 09/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for NVL
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (7 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 08/11] perf/x86/intel: Update event constraints for PTL Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 10/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SRF Dapeng Mi
2026-05-15 6:11 ` [PATCH 11/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for CWF Dapeng Mi
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Novalake according to the latest NVL perfmon events.
The four PRECISE_OMR events (0xd4) are broken on Arcticwolf and will be
removed from the upcoming event list release, so delete them from the
event constraints and extra_regs arrays accordingly.
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 55 +++++++++++++++++++++++-------------
arch/x86/events/intel/ds.c | 11 --------
arch/x86/events/perf_event.h | 2 --
3 files changed, 36 insertions(+), 32 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index b281402c3753..587167dbb98f 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -241,20 +241,21 @@ static struct event_constraint intel_skt_event_constraints[] __read_mostly = {
static struct event_constraint intel_arw_event_constraints[] __read_mostly = {
FIXED_EVENT_CONSTRAINT(0x00c0, 0), /* INST_RETIRED.ANY */
+ FIXED_EVENT_CONSTRAINT(0x0100, 0), /* pseudo INST_RETIRED.ANY */
FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */
- FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF */
+ FIXED_EVENT_CONSTRAINT(0x0200, 1), /* pseudo CPU_CLK_UNHALTED.THREAD */
+ FIXED_EVENT_CONSTRAINT(0x0300, 2), /* pseudo CPU_CLK_UNHALTED.REF_TSC */
FIXED_EVENT_CONSTRAINT(0x013c, 2), /* CPU_CLK_UNHALTED.REF_TSC_P */
FIXED_EVENT_CONSTRAINT(0x0073, 4), /* TOPDOWN_BAD_SPECULATION.ALL */
+ FIXED_EVENT_CONSTRAINT(0x0500, 4), /* pseudo TOPDOWN_BAD_SPECULATION.ALL */
FIXED_EVENT_CONSTRAINT(0x019c, 5), /* TOPDOWN_FE_BOUND.ALL */
+ FIXED_EVENT_CONSTRAINT(0x0600, 5), /* pseudo TOPDOWN_FE_BOUND.ALL */
FIXED_EVENT_CONSTRAINT(0x02c2, 6), /* TOPDOWN_RETIRING.ALL */
+ FIXED_EVENT_CONSTRAINT(0x0700, 6), /* pseudo TOPDOWN_RETIRING.ALL */
INTEL_UEVENT_CONSTRAINT(0x01b7, 0x1),
INTEL_UEVENT_CONSTRAINT(0x02b7, 0x2),
INTEL_UEVENT_CONSTRAINT(0x04b7, 0x4),
INTEL_UEVENT_CONSTRAINT(0x08b7, 0x8),
- INTEL_UEVENT_CONSTRAINT(0x01d4, 0x1),
- INTEL_UEVENT_CONSTRAINT(0x02d4, 0x2),
- INTEL_UEVENT_CONSTRAINT(0x04d4, 0x4),
- INTEL_UEVENT_CONSTRAINT(0x08d4, 0x8),
INTEL_UEVENT_CONSTRAINT(0x0175, 0x1),
INTEL_UEVENT_CONSTRAINT(0x0275, 0x2),
INTEL_UEVENT_CONSTRAINT(0x21d3, 0x1),
@@ -964,6 +965,23 @@ static __initconst const u64 pnc_hw_cache_extra_regs
},
};
+static __initconst const u64 cyc_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] =
+{
+ [ C(LL ) ] = {
+ [ C(OP_READ) ] = {
+ [ C(RESULT_ACCESS) ] = 0x4000000000000001, /* OMR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0xFF03F000000001, /* OMR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [ C(OP_WRITE) ] = {
+ [ C(RESULT_ACCESS) ] = 0x4000000000000002, /* OMR.DEMAND_RFO.ANY_RESPONSE */
+ [ C(RESULT_MISS) ] = 0xFF03F000000002, /* OMR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
+
/*
* Notes on the events:
* - data reads do not include code reads (comparable to earlier tables)
@@ -2570,16 +2588,12 @@ static __initconst const u64 arw_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
[C(LL)] = {
[C(OP_READ)] = {
- [C(RESULT_ACCESS)] = 0x4000000000000001,
- [C(RESULT_MISS)] = 0xFFFFF000000001,
+ [C(RESULT_ACCESS)] = 0x4000000000000009, /* OMR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0xFF03F000000009, /* OMR.DEMAND_DATA_RD.L3_MISS */
},
[C(OP_WRITE)] = {
- [C(RESULT_ACCESS)] = 0x4000000000000002,
- [C(RESULT_MISS)] = 0xFFFFF000000002,
- },
- [C(OP_PREFETCH)] = {
- [C(RESULT_ACCESS)] = 0x0,
- [C(RESULT_MISS)] = 0x0,
+ [C(RESULT_ACCESS)] = 0x400000000000000A, /* OMR.DEMAND_RFO.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0xFF03F00000000A, /* OMR.DEMAND_RFO.L3_MISS */
},
},
};
@@ -2651,10 +2665,6 @@ static struct extra_reg intel_arw_extra_regs[] __read_mostly = {
INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OMR_1, 0xc0ffffffffffffffull, OMR_1),
INTEL_UEVENT_EXTRA_REG(0x04b7, MSR_OMR_2, 0xc0ffffffffffffffull, OMR_2),
INTEL_UEVENT_EXTRA_REG(0x08b7, MSR_OMR_3, 0xc0ffffffffffffffull, OMR_3),
- INTEL_UEVENT_EXTRA_REG(0x01d4, MSR_OMR_0, 0xc0ffffffffffffffull, OMR_0),
- INTEL_UEVENT_EXTRA_REG(0x02d4, MSR_OMR_1, 0xc0ffffffffffffffull, OMR_1),
- INTEL_UEVENT_EXTRA_REG(0x04d4, MSR_OMR_2, 0xc0ffffffffffffffull, OMR_2),
- INTEL_UEVENT_EXTRA_REG(0x08d4, MSR_OMR_3, 0xc0ffffffffffffffull, OMR_3),
INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x5d0),
INTEL_UEVENT_EXTRA_REG(0x0127, MSR_SNOOP_RSP_0, 0xffffffffffffffffull, SNOOP_0),
INTEL_UEVENT_EXTRA_REG(0x0227, MSR_SNOOP_RSP_1, 0xffffffffffffffffull, SNOOP_1),
@@ -7746,6 +7756,13 @@ static __always_inline void intel_pmu_init_pnc(struct pmu *pmu)
hybrid(pmu, extra_regs) = intel_pnc_extra_regs;
}
+static __always_inline void intel_pmu_init_cyc(struct pmu *pmu)
+{
+ intel_pmu_init_pnc(pmu);
+ memcpy(hybrid_var(pmu, hw_cache_extra_regs),
+ cyc_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+}
+
static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
{
intel_pmu_init_cmt(pmu);
@@ -7770,7 +7787,7 @@ static __always_inline void intel_pmu_init_arw(struct pmu *pmu)
memcpy(hybrid_var(pmu, hw_cache_extra_regs),
arw_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
hybrid(pmu, event_constraints) = intel_arw_event_constraints;
- hybrid(pmu, pebs_constraints) = intel_arw_pebs_event_constraints;
+ hybrid(pmu, pebs_constraints) = intel_dkt_pebs_event_constraints;
hybrid(pmu, extra_regs) = intel_arw_extra_regs;
static_call_update(intel_pmu_enable_acr_event, intel_pmu_enable_acr);
}
@@ -8661,7 +8678,7 @@ __init int intel_pmu_init(void)
/* Initialize big core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
- intel_pmu_init_pnc(&pmu->pmu);
+ intel_pmu_init_cyc(&pmu->pmu);
/* Initialize Atom core specific PerfMon capabilities.*/
pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_ATOM_IDX];
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 5159adabb9a2..cb72af9b61ce 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -1310,17 +1310,6 @@ struct event_constraint intel_dkt_pebs_event_constraints[] = {
EVENT_CONSTRAINT_END
};
-struct event_constraint intel_arw_pebs_event_constraints[] = {
- /* Allow all events as PEBS with no flags */
- INTEL_HYBRID_LAT_CONSTRAINT(0x5d0, 0xff),
- INTEL_HYBRID_LAT_CONSTRAINT(0x6d0, 0xff),
- INTEL_FLAGS_UEVENT_CONSTRAINT(0x01d4, 0x1),
- INTEL_FLAGS_UEVENT_CONSTRAINT(0x02d4, 0x2),
- INTEL_FLAGS_UEVENT_CONSTRAINT(0x04d4, 0x4),
- INTEL_FLAGS_UEVENT_CONSTRAINT(0x08d4, 0x8),
- EVENT_CONSTRAINT_END
-};
-
struct event_constraint intel_nehalem_pebs_event_constraints[] = {
INTEL_PLD_CONSTRAINT(0x100b, 0xf), /* MEM_INST_RETIRED.* */
INTEL_FLAGS_EVENT_CONSTRAINT(0x0f, 0xf), /* MEM_UNCORE_RETIRED.* */
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index f9ea07d60930..a4525589bec1 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -1706,8 +1706,6 @@ extern struct event_constraint intel_cmt_pebs_event_constraints[];
extern struct event_constraint intel_dkt_pebs_event_constraints[];
-extern struct event_constraint intel_arw_pebs_event_constraints[];
-
extern struct event_constraint intel_nehalem_pebs_event_constraints[];
extern struct event_constraint intel_westmere_pebs_event_constraints[];
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 10/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SRF
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (8 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 09/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for NVL Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
2026-05-15 6:11 ` [PATCH 11/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for CWF Dapeng Mi
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Sierra Forest according to the latest SRF perfmon events (V1.17).
SRF has the same uarch (Crestmont) as the MTL E-core and shares the same
perf events, so directly apply the Crestmont perf events.
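For context, a rough sketch of what the Crestmont init helper is expected to
provide (simplified; the exact body in this tree may differ), illustrating why
the open-coded Gracemont init plus the separate extra_regs assignment below
collapses into a single call:

  /* Simplified sketch, not the literal tree contents. */
  static __always_inline void intel_pmu_init_cmt(struct pmu *pmu)
  {
          intel_pmu_init_grt(pmu);        /* Gracemont baseline */
          hybrid(pmu, event_constraints) = intel_cmt_event_constraints;
          hybrid(pmu, pebs_constraints) = intel_cmt_pebs_event_constraints;
          hybrid(pmu, extra_regs) = intel_cmt_extra_regs;
  }

With the Crestmont constraints and extra_regs hanging off the helper, SRF can
simply call intel_pmu_init_cmt(NULL) instead of open-coding the pieces.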
SRF perfmon events:
https://github.com/intel/perfmon/blob/main/SRF/events/sierraforest_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 587167dbb98f..e1c6fb127f10 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -8101,8 +8101,7 @@ __init int intel_pmu_init(void)
case INTEL_ATOM_CRESTMONT:
case INTEL_ATOM_CRESTMONT_X:
- intel_pmu_init_grt(NULL);
- x86_pmu.extra_regs = intel_cmt_extra_regs;
+ intel_pmu_init_cmt(NULL);
intel_pmu_pebs_data_source_cmt();
x86_pmu.pebs_latency_data = cmt_latency_data;
x86_pmu.get_event_constraints = cmt_get_event_constraints;
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 11/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for CWF
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
` (9 preceding siblings ...)
2026-05-15 6:11 ` [PATCH 10/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SRF Dapeng Mi
@ 2026-05-15 6:11 ` Dapeng Mi
10 siblings, 0 replies; 14+ messages in thread
From: Dapeng Mi @ 2026-05-15 6:11 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Namhyung Kim, Ian Rogers, Adrian Hunter, Alexander Shishkin,
Andi Kleen, Eranian Stephane
Cc: linux-kernel, linux-perf-users, Dapeng Mi, Zide Chen,
Falcon Thomas, Xudong Hao, Dapeng Mi
Update perf hard-coded event constraints and cache_extra_regs[] for
Clearwater Forest according to the latest CWF perfmon events (V1.02).
An important difference is that CWF introduces new extra register values
for the L3 cache OCR events, so define a Darkmont-specific
dkt_hw_cache_extra_regs[] array.
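For readers mapping the table to behaviour: a simplified sketch of how the
generic x86 perf code consumes hw_cache_extra_regs[] when a legacy cache event
such as llc-load-misses is created (condensed from set_ext_hw_attr() in
arch/x86/events/core.c; details may differ in this tree). This is why a stale
extra-reg value silently miscounts instead of failing:

          unsigned int cache_type, cache_op, cache_result;
          u64 config, val;

          /* Decode the PERF_TYPE_HW_CACHE config into table indices. */
          config       = attr->config;
          cache_type   = (config >>  0) & 0xff;   /* e.g. C(LL)          */
          cache_op     = (config >>  8) & 0xff;   /* e.g. C(OP_READ)     */
          cache_result = (config >> 16) & 0xff;   /* e.g. C(RESULT_MISS) */

          val = hybrid_var(event->pmu,
                           hw_cache_event_ids)[cache_type][cache_op][cache_result];
          if (val == 0)
                  return -ENOENT;
          if (val == -1)
                  return -EINVAL;

          hwc->config |= val;
          /* The OCR extra-reg value (e.g. 0x33FBFC00002) lands in config1 ... */
          attr->config1 = hybrid_var(event->pmu,
                                     hw_cache_extra_regs)[cache_type][cache_op][cache_result];
          /* ... and is later programmed into the matching OFFCORE_RSP MSR. */
          return x86_pmu_extra_regs(val, event);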
CWF perfmon events:
https://github.com/intel/perfmon/blob/main/CWF/events/clearwaterforest_core.json
Signed-off-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
---
arch/x86/events/intel/core.c | 30 +++++++++++++++++++++++++++++-
1 file changed, 29 insertions(+), 1 deletion(-)
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index e1c6fb127f10..eaa25239f2ec 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2582,6 +2582,22 @@ static __initconst const u64 skt_hw_cache_extra_regs
},
};
+static __initconst const u64 dkt_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x10001, /* OCR.DEMAND_DATA_RD.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x33FBFC00001, /* OCR.DEMAND_DATA_RD.L3_MISS */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x10002, /* OCR.DEMAND_RFO.ANY_RESPONSE */
+ [C(RESULT_MISS)] = 0x33FBFC00002, /* OCR.DEMAND_RFO.L3_MISS */
+ },
+ },
+};
+
static __initconst const u64 arw_hw_cache_extra_regs
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -7779,6 +7795,18 @@ static __always_inline void intel_pmu_init_dkt_hybrid(struct pmu *pmu)
hybrid(pmu, pebs_constraints) = intel_dkt_pebs_event_constraints;
}
+/*
+ * Darkmont is used by the CWF and PTL E-cores, but their L3 OCR
+ * events require different extra MSR values. Keep a separate init
+ * function for the non-hybrid server variant.
+ */
+static __always_inline void intel_pmu_init_dkt(struct pmu *pmu)
+{
+ intel_pmu_init_dkt_hybrid(pmu);
+ memcpy(hybrid_var(pmu, hw_cache_extra_regs),
+ dkt_hw_cache_extra_regs, sizeof(hw_cache_extra_regs));
+}
+
static __always_inline void intel_pmu_init_arw(struct pmu *pmu)
{
intel_pmu_init_grt(pmu);
@@ -8113,7 +8141,7 @@ __init int intel_pmu_init(void)
break;
case INTEL_ATOM_DARKMONT_X:
- intel_pmu_init_skt(NULL);
+ intel_pmu_init_dkt(NULL);
intel_pmu_pebs_data_source_cmt();
x86_pmu.pebs_latency_data = cmt_latency_data;
x86_pmu.get_event_constraints = cmt_get_event_constraints;
--
2.34.1
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 04/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ADL
2026-05-15 6:11 ` [PATCH 04/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ADL Dapeng Mi
@ 2026-05-15 6:38 ` sashiko-bot
0 siblings, 0 replies; 14+ messages in thread
From: sashiko-bot @ 2026-05-15 6:38 UTC (permalink / raw)
To: Dapeng Mi; +Cc: linux-perf-users
Thank you for your contribution! Sashiko AI review found 2 potential issue(s) to consider:
- [Medium] The commit introduces a bisectability regression for MTL and ARL-U platforms by temporarily applying incorrect Alderlake-specific cache event structures.
- [Low] Unnecessary duplication of the Golden Cove hardware cache event IDs array.
--
commit 58bdcbf21858fcf6337dcba2e5e091a86ddafd3c
Author: Dapeng Mi <dapeng1.mi@linux.intel.com>
perf/x86/intel: Update event constraints and cache_extra_regs[] for ADL
This commit updates the hard-coded event constraints and cache_extra_regs[]
for Alderlake according to the latest ADL perfmon events. It introduces
ADL-specific variants of structures and functions to reflect differences
in L3/node-related OCR events.
> Please note these changes would temporarily impact other platforms like
> MTL/ARL-U which shares hard-coded event structures, but it would be
> fixed soon in subsequent patches.
Does this introduce a bisectability regression for Meteorlake and Arrowlake-U
platforms?
Every commit needs to be self-contained and functionally correct to keep the
tree bisectable. I noticed the fixes are included later in the series.
Could the patch series be reorganized to avoid leaving any platform in a
regressed state during bisection?
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 0d0edc2d1b906..7948e3afc2916 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
[ ... ]
> @@ -713,6 +715,80 @@ static __initconst const u64 glc_hw_cache_event_ids
> },
> };
>
> +/* ADL P-core (Golden cove) specific event code. */
> +static __initconst const u64 adl_glc_hw_cache_event_ids
> + [PERF_COUNT_HW_CACHE_MAX]
> + [PERF_COUNT_HW_CACHE_OP_MAX]
> + [PERF_COUNT_HW_CACHE_RESULT_MAX] =
> +{
> + [ C(L1D ) ] = {
> + [ C(OP_READ) ] = {
> + [ C(RESULT_ACCESS) ] = 0x81d0,
> + [ C(RESULT_MISS) ] = 0xe124,
> + },
> + [ C(OP_WRITE) ] = {
> + [ C(RESULT_ACCESS) ] = 0x82d0,
> + },
> + },
[ ... ]
> + [ C(BPU ) ] = {
> + [ C(OP_READ) ] = {
> + [ C(RESULT_ACCESS) ] = 0x4c4,
> + [ C(RESULT_MISS) ] = 0x4c5,
> + },
> + [ C(OP_WRITE) ] = {
> + [ C(RESULT_ACCESS) ] = -1,
> + [ C(RESULT_MISS) ] = -1,
> + },
> + [ C(OP_PREFETCH) ] = {
> + [ C(RESULT_ACCESS) ] = -1,
> + [ C(RESULT_MISS) ] = -1,
> + },
> + },
> +};
This array appears to be identical to the existing glc_hw_cache_event_ids
array, except that the C(NODE) entries are omitted and default to zero.
Instead of duplicating the entire array and risking silent divergence when
future updates are made, could we dynamically clear the C(NODE) entries for
Alderlake during initialization?
A similar pattern of localized overrides is already utilized in
intel_pmu_init_grt() for the C(ITLB) event.
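A rough sketch of what that alternative could look like (the
intel_pmu_init_adl_glc() name is made up here; only the C(NODE)-clearing idea
is the point):

  /*
   * Hypothetical sketch: reuse glc_hw_cache_event_ids and clear the NODE
   * entries for ADL at init time instead of duplicating the whole array.
   */
  static __always_inline void intel_pmu_init_adl_glc(struct pmu *pmu)
  {
          intel_pmu_init_glc(pmu);        /* copies glc_hw_cache_event_ids */
          memset(hybrid_var(pmu, hw_cache_event_ids)[C(NODE)], 0,
                 sizeof(hw_cache_event_ids[C(NODE)]));
  }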
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260515061143.338553-1-dapeng1.mi@linux.intel.com?part=4
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 07/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ARL
2026-05-15 6:11 ` [PATCH 07/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ARL Dapeng Mi
@ 2026-05-15 6:40 ` sashiko-bot
0 siblings, 0 replies; 14+ messages in thread
From: sashiko-bot @ 2026-05-15 6:40 UTC (permalink / raw)
To: Dapeng Mi; +Cc: linux-perf-users
Thank you for your contribution! Sashiko AI review found 1 potential issue(s) to consider:
- [High] Unhandled memory allocation failure in intel_pmu_init_hybrid() leads to a NULL pointer dereference.
--
commit e86d9a7a48c7a21c02757243f3a40ac6b6e47297
Author: Dapeng Mi <dapeng1.mi@linux.intel.com>
perf/x86/intel: Update event constraints and cache_extra_regs[] for ARL
Updates perf hard-coded event constraints and cache_extra_regs[] for
Arrowlake. Introduces ARL-specific arl_lnc_hw_cache_extra_regs and adds
crestmont specific constraints for LPE cores.
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
[ ... ]
> @@ -8529,16 +8547,41 @@ __init int intel_pmu_init(void)
> case INTEL_WILDCATLAKE_L:
> pr_cont("Pantherlake Hybrid events, ");
> name = "pantherlake_hybrid";
> +
> + intel_pmu_init_hybrid(hybrid_big_small);
> +
> + /* Initialize big core specific PerfMon capabilities.*/
> + pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
> + intel_pmu_init_lnc(&pmu->pmu);
> +
> goto lnl_common;
>
> - case INTEL_LUNARLAKE_M:
> case INTEL_ARROWLAKE:
> + pr_cont("Arrowlake Hybrid events, ");
> + name = "arrowlake_hybrid";
> +
> + intel_pmu_init_hybrid(hybrid_big_small);
> +
> + /* Initialize big core specific PerfMon capabilities.*/
> + pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
> + intel_pmu_init_lnc(&pmu->pmu);
Does this code risk a NULL pointer dereference if the memory allocation in
intel_pmu_init_hybrid() fails?
Looking at intel_pmu_init_hybrid(), it allocates x86_pmu.hybrid_pmu and
returns -ENOMEM if kzalloc_objs() fails.
If that allocation fails here, the return value is ignored. The code
immediately proceeds to calculate the pmu pointer using x86_pmu.hybrid_pmu
and passes it to intel_pmu_init_lnc(), which would dereference it.
While this pattern existed before this patch, it is being propagated into
the new Pantherlake and Arrowlake initialization blocks. Could we add error
handling here to catch the allocation failure before accessing the array?
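A rough sketch of the error handling being asked for (the unwind path on
failure is an assumption; it depends on what intel_pmu_init() can safely do at
this point in the setup):

          /* 'err' assumed to be declared earlier in intel_pmu_init(). */
          err = intel_pmu_init_hybrid(hybrid_big_small);
          if (err)
                  return err;     /* or jump to a common unwind label */

          /* Initialize big core specific PerfMon capabilities. */
          pmu = &x86_pmu.hybrid_pmu[X86_HYBRID_PMU_CORE_IDX];
          intel_pmu_init_lnc(&pmu->pmu);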
--
Sashiko AI review · https://sashiko.dev/#/patchset/20260515061143.338553-1-dapeng1.mi@linux.intel.com?part=7
^ permalink raw reply [flat|nested] 14+ messages in thread
end of thread, other threads:[~2026-05-15 6:40 UTC | newest]
Thread overview: 14+ messages
2026-05-15 6:11 [PATCH 00/11] perf/x86/intel: Fix inaccurate hard-coded event configurations Dapeng Mi
2026-05-15 6:11 ` [PATCH 01/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ICX Dapeng Mi
2026-05-15 6:11 ` [PATCH 02/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SPR Dapeng Mi
2026-05-15 6:11 ` [PATCH 03/11] perf/x86/intel: Update event constraints for DMR Dapeng Mi
2026-05-15 6:11 ` [PATCH 04/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ADL Dapeng Mi
2026-05-15 6:38 ` sashiko-bot
2026-05-15 6:11 ` [PATCH 05/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for MTL Dapeng Mi
2026-05-15 6:11 ` [PATCH 06/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for LNL Dapeng Mi
2026-05-15 6:11 ` [PATCH 07/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for ARL Dapeng Mi
2026-05-15 6:40 ` sashiko-bot
2026-05-15 6:11 ` [PATCH 08/11] perf/x86/intel: Update event constraints for PTL Dapeng Mi
2026-05-15 6:11 ` [PATCH 09/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for NVL Dapeng Mi
2026-05-15 6:11 ` [PATCH 10/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for SRF Dapeng Mi
2026-05-15 6:11 ` [PATCH 11/11] perf/x86/intel: Update event constraints and cache_extra_regs[] for CWF Dapeng Mi