* [PATCH v1 2/2] perf vendor events intel: Fix spelling mistakes
2023-08-24 20:24 [PATCH v1 1/2] perf vendor events intel: Add lunarlake v1.0 Ian Rogers
@ 2023-08-24 20:24 ` Ian Rogers
2023-08-24 20:36 ` Ian Rogers
2023-08-25 13:28 ` [PATCH v1 1/2] perf vendor events intel: Add lunarlake v1.0 Liang, Kan
2023-09-06 16:00 ` Arnaldo Carvalho de Melo
2 siblings, 1 reply; 6+ messages in thread
From: Ian Rogers @ 2023-08-24 20:24 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
Ian Rogers, Adrian Hunter, Maxime Coquelin, Alexandre Torgue,
Kan Liang, Zhengjun Xing, linux-kernel, linux-perf-users,
Colin Ian King, Edward Baker
Update perf JSON files with spelling fixes contributed by Colin Ian King
<colin.i.king@gmail.com> in:
https://github.com/intel/perfmon/pull/96
("Fix various spelling mistakes and typos as found using codespell" #96)
Signed-off-by: Ian Rogers <irogers@google.com>
---
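(After-the-cut note: the upstream fixes were found with codespell. A
typical invocation over these files might look like the following
sketch; the exact command used for the PR isn't shown here:

    $ codespell tools/perf/pmu-events/arch/x86/
)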
.../arch/x86/alderlake/adl-metrics.json | 6 +++---
.../arch/x86/alderlake/pipeline.json | 2 +-
.../arch/x86/alderlaken/adln-metrics.json | 6 +++---
.../x86/broadwellde/uncore-interconnect.json | 18 +++++++++---------
.../x86/broadwellx/uncore-interconnect.json | 18 +++++++++---------
.../pmu-events/arch/x86/haswell/memory.json | 2 +-
.../pmu-events/arch/x86/haswellx/memory.json | 2 +-
.../arch/x86/haswellx/uncore-interconnect.json | 18 +++++++++---------
.../arch/x86/ivytown/uncore-interconnect.json | 18 +++++++++---------
.../arch/x86/jaketown/uncore-interconnect.json | 18 +++++++++---------
.../arch/x86/nehalemep/floating-point.json | 2 +-
.../arch/x86/nehalemex/floating-point.json | 2 +-
.../arch/x86/westmereep-dp/floating-point.json | 2 +-
.../arch/x86/westmereep-sp/floating-point.json | 2 +-
.../arch/x86/westmereex/floating-point.json | 2 +-
15 files changed, 59 insertions(+), 59 deletions(-)
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
index c6780d5c456b..8b6bed3bc766 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
@@ -395,13 +395,13 @@
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instructions per Branch (lower number means higher occurance rate)",
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
"MetricName": "tma_info_inst_mix_ipbranch",
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Instruction per (near) call (lower number means higher occurance rate)",
+ "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.CALL",
"MetricName": "tma_info_inst_mix_ipcall",
"Unit": "cpu_atom"
@@ -726,7 +726,7 @@
"Unit": "cpu_atom"
},
{
- "BriefDescription": "Counts the numer of issue slots that result in retirement slots.",
+ "BriefDescription": "Counts the number of issue slots that result in retirement slots.",
"DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "TOPDOWN_RETIRING.ALL / tma_info_core_slots",
"MetricGroup": "Default;TopdownL1;tma_L1_group",
diff --git a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json b/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
index cb5b8611064b..a92013cdf136 100644
--- a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
+++ b/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
@@ -1145,7 +1145,7 @@
"BriefDescription": "TMA slots wasted due to incorrect speculation by branch mispredictions",
"EventCode": "0xa4",
"EventName": "TOPDOWN.BR_MISPREDICT_SLOTS",
- "PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of specualtive operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
+ "PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of speculative operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
"SampleAfterValue": "10000003",
"UMask": "0x8",
"Unit": "cpu_core"
diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
index 06e67e34e1bf..c150c14ac6ed 100644
--- a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
+++ b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
@@ -328,12 +328,12 @@
"MetricName": "tma_info_inst_mix_idiv_uop_ratio"
},
{
- "BriefDescription": "Instructions per Branch (lower number means higher occurance rate)",
+ "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
"MetricName": "tma_info_inst_mix_ipbranch"
},
{
- "BriefDescription": "Instruction per (near) call (lower number means higher occurance rate)",
+ "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
"MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.CALL",
"MetricName": "tma_info_inst_mix_ipcall"
},
@@ -616,7 +616,7 @@
"ScaleUnit": "100%"
},
{
- "BriefDescription": "Counts the numer of issue slots that result in retirement slots.",
+ "BriefDescription": "Counts the number of issue slots that result in retirement slots.",
"DefaultMetricgroupName": "TopdownL1",
"MetricExpr": "TOPDOWN_RETIRING.ALL / tma_info_core_slots",
"MetricGroup": "Default;TopdownL1;tma_L1_group",
diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
index 8a327e0f1441..910395977a6e 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
@@ -253,7 +253,7 @@
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -261,7 +261,7 @@
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -269,7 +269,7 @@
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -277,7 +277,7 @@
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -285,7 +285,7 @@
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -293,7 +293,7 @@
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -301,7 +301,7 @@
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -309,7 +309,7 @@
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -317,7 +317,7 @@
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
index e61a23f68899..b9fb216bee16 100644
--- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
@@ -271,7 +271,7 @@
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -279,7 +279,7 @@
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -287,7 +287,7 @@
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -295,7 +295,7 @@
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -303,7 +303,7 @@
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -311,7 +311,7 @@
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -319,7 +319,7 @@
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -327,7 +327,7 @@
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -335,7 +335,7 @@
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/haswell/memory.json b/tools/perf/pmu-events/arch/x86/haswell/memory.json
index 2fc25e22a42a..df44c28efeeb 100644
--- a/tools/perf/pmu-events/arch/x86/haswell/memory.json
+++ b/tools/perf/pmu-events/arch/x86/haswell/memory.json
@@ -62,7 +62,7 @@
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
- "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
+ "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data in-flight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/memory.json b/tools/perf/pmu-events/arch/x86/haswellx/memory.json
index 2d212cf59e92..d66e465ce41a 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/memory.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/memory.json
@@ -62,7 +62,7 @@
"BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
"EventCode": "0xC3",
"EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
- "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
+ "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data in-flight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
"SampleAfterValue": "100003",
"UMask": "0x2"
},
diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
index 954e8198c7a5..bef1f5ef6f31 100644
--- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
@@ -271,7 +271,7 @@
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -279,7 +279,7 @@
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -287,7 +287,7 @@
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -295,7 +295,7 @@
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -303,7 +303,7 @@
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -311,7 +311,7 @@
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -319,7 +319,7 @@
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -327,7 +327,7 @@
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -335,7 +335,7 @@
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
index ccf451534d16..f4d11da01383 100644
--- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
@@ -140,7 +140,7 @@
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -148,21 +148,21 @@
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -170,21 +170,21 @@
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -192,14 +192,14 @@
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
index 874f15ea8228..0fc907e5cf3c 100644
--- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
+++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
@@ -140,7 +140,7 @@
"EventCode": "0x4",
"EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -148,21 +148,21 @@
"EventCode": "0x1",
"EventName": "UNC_I_RxR_BL_DRS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x7",
"EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x5",
"EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -170,21 +170,21 @@
"EventCode": "0x2",
"EventName": "UNC_I_RxR_BL_NCB_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x8",
"EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x6",
"EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
"PerPkg": "1",
- "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
@@ -192,14 +192,14 @@
"EventCode": "0x3",
"EventName": "UNC_I_RxR_BL_NCS_INSERTS",
"PerPkg": "1",
- "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
"EventCode": "0x9",
"EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
"PerPkg": "1",
- "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
+ "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
"Unit": "IRP"
},
{
diff --git a/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json b/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
index c03f8990fa82..196ae1d9b157 100644
--- a/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
@@ -8,7 +8,7 @@
"UMask": "0x1"
},
{
- "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
+ "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
"EventCode": "0xF7",
"EventName": "FP_ASSIST.INPUT",
"PEBS": "1",
diff --git a/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json b/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
index c03f8990fa82..196ae1d9b157 100644
--- a/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
@@ -8,7 +8,7 @@
"UMask": "0x1"
},
{
- "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
+ "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
"EventCode": "0xF7",
"EventName": "FP_ASSIST.INPUT",
"PEBS": "1",
diff --git a/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
index c03f8990fa82..196ae1d9b157 100644
--- a/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
@@ -8,7 +8,7 @@
"UMask": "0x1"
},
{
- "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
+ "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
"EventCode": "0xF7",
"EventName": "FP_ASSIST.INPUT",
"PEBS": "1",
diff --git a/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
index c03f8990fa82..196ae1d9b157 100644
--- a/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
@@ -8,7 +8,7 @@
"UMask": "0x1"
},
{
- "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
+ "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
"EventCode": "0xF7",
"EventName": "FP_ASSIST.INPUT",
"PEBS": "1",
diff --git a/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
index c03f8990fa82..196ae1d9b157 100644
--- a/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
+++ b/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
@@ -8,7 +8,7 @@
"UMask": "0x1"
},
{
- "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
+ "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
"EventCode": "0xF7",
"EventName": "FP_ASSIST.INPUT",
"PEBS": "1",
--
2.42.0.rc2.253.gd59a3bf2b4-goog
* Re: [PATCH v1 2/2] perf vendor events intel: Fix spelling mistakes
2023-08-24 20:24 ` [PATCH v1 2/2] perf vendor events intel: Fix spelling mistakes Ian Rogers
@ 2023-08-24 20:36 ` Ian Rogers
2023-08-24 20:40 ` Colin King (gmail)
0 siblings, 1 reply; 6+ messages in thread
From: Ian Rogers @ 2023-08-24 20:36 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
Ian Rogers, Adrian Hunter, Maxime Coquelin, Alexandre Torgue,
Kan Liang, Zhengjun Xing, linux-kernel, linux-perf-users,
Colin Ian King, Edward Baker
On Thu, Aug 24, 2023 at 1:24 PM Ian Rogers <irogers@google.com> wrote:
>
> Update perf JSON files with spelling fixes contributed by Colin Ian King
> <colin.i.king@gmail.com> in:
> https://github.com/intel/perfmon/pull/96
> ("Fix various spelling mistakes and typos as found using codespell" #96)
>
> Signed-off-by: Ian Rogers <irogers@google.com>
I think it would be more correct if Colin Ian King
<colin.i.king@gmail.com> were listed as the author here. I generated the
changes from their work, and they have posted similar changes to LKML in
the past. I'm not sure what the policy is, and I didn't have Colin's
permission to list them as author, but it would feel more correct to me
for the authorship to be changed.
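If that's preferred, the usual mechanisms would be for whoever applies
this to amend the authorship, e.g.:

    $ git commit --amend --author="Colin Ian King <colin.i.king@gmail.com>"

or for me to resend with a leading "From: Colin Ian King
<colin.i.king@gmail.com>" line in the patch body, which git am records
as the author. (Just a sketch of the options; I'd defer to the
maintainers on which, if either, to use.)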
Thanks,
Ian
> [...]
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -303,7 +303,7 @@
> "EventCode": "0x2",
> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -311,7 +311,7 @@
> "EventCode": "0x8",
> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -319,7 +319,7 @@
> "EventCode": "0x6",
> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -327,7 +327,7 @@
> "EventCode": "0x3",
> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -335,7 +335,7 @@
> "EventCode": "0x9",
> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
> index ccf451534d16..f4d11da01383 100644
> --- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
> +++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
> @@ -140,7 +140,7 @@
> "EventCode": "0x4",
> "EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -148,21 +148,21 @@
> "EventCode": "0x1",
> "EventName": "UNC_I_RxR_BL_DRS_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x7",
> "EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x5",
> "EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -170,21 +170,21 @@
> "EventCode": "0x2",
> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x8",
> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x6",
> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -192,14 +192,14 @@
> "EventCode": "0x3",
> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x9",
> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
> index 874f15ea8228..0fc907e5cf3c 100644
> --- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
> +++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
> @@ -140,7 +140,7 @@
> "EventCode": "0x4",
> "EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -148,21 +148,21 @@
> "EventCode": "0x1",
> "EventName": "UNC_I_RxR_BL_DRS_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x7",
> "EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x5",
> "EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -170,21 +170,21 @@
> "EventCode": "0x2",
> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x8",
> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x6",
> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> @@ -192,14 +192,14 @@
> "EventCode": "0x3",
> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
> "PerPkg": "1",
> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> "EventCode": "0x9",
> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
> "PerPkg": "1",
> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
> "Unit": "IRP"
> },
> {
> diff --git a/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json b/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
> index c03f8990fa82..196ae1d9b157 100644
> --- a/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
> +++ b/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
> @@ -8,7 +8,7 @@
> "UMask": "0x1"
> },
> {
> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
> "EventCode": "0xF7",
> "EventName": "FP_ASSIST.INPUT",
> "PEBS": "1",
> diff --git a/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json b/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
> index c03f8990fa82..196ae1d9b157 100644
> --- a/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
> +++ b/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
> @@ -8,7 +8,7 @@
> "UMask": "0x1"
> },
> {
> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
> "EventCode": "0xF7",
> "EventName": "FP_ASSIST.INPUT",
> "PEBS": "1",
> diff --git a/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
> index c03f8990fa82..196ae1d9b157 100644
> --- a/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
> +++ b/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
> @@ -8,7 +8,7 @@
> "UMask": "0x1"
> },
> {
> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
> "EventCode": "0xF7",
> "EventName": "FP_ASSIST.INPUT",
> "PEBS": "1",
> diff --git a/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
> index c03f8990fa82..196ae1d9b157 100644
> --- a/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
> +++ b/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
> @@ -8,7 +8,7 @@
> "UMask": "0x1"
> },
> {
> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
> "EventCode": "0xF7",
> "EventName": "FP_ASSIST.INPUT",
> "PEBS": "1",
> diff --git a/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
> index c03f8990fa82..196ae1d9b157 100644
> --- a/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
> +++ b/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
> @@ -8,7 +8,7 @@
> "UMask": "0x1"
> },
> {
> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
> "EventCode": "0xF7",
> "EventName": "FP_ASSIST.INPUT",
> "PEBS": "1",
> --
> 2.42.0.rc2.253.gd59a3bf2b4-goog
>
* Re: [PATCH v1 2/2] perf vendor events intel: Fix spelling mistakes
2023-08-24 20:36 ` Ian Rogers
@ 2023-08-24 20:40 ` Colin King (gmail)
0 siblings, 0 replies; 6+ messages in thread
From: Colin King (gmail) @ 2023-08-24 20:40 UTC (permalink / raw)
To: Ian Rogers, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
Adrian Hunter, Maxime Coquelin, Alexandre Torgue, Kan Liang,
Zhengjun Xing, linux-kernel, linux-perf-users, Edward Baker
On 24/08/2023 21:36, Ian Rogers wrote:
> On Thu, Aug 24, 2023 at 1:24 PM Ian Rogers <irogers@google.com> wrote:
>>
>> Update perf json files with spelling fixes by Colin Ian King
>> <colin.i.king@gmail.com> contributed in:
>> https://github.com/intel/perfmon/pull/96
>> Fix various spelling mistakes and typos as found using codespell #96
>>
>> Signed-off-by: Ian Rogers <irogers@google.com>
>
> I think it would be more correct if Colin Ian King
> <colin.i.king@gmail.com> were listed as the author here. I generated
> the changes from their work and they posted similar changes to LKML
> in the past. I'm not sure what the policy is, and I didn't have
> Colin's permission to list them as author, but it'd feel more correct
> to me for the author to be changed.
I don't mind either way, whatever is easiest.
>
> Thanks,
> Ian
>
>> ---
>> .../arch/x86/alderlake/adl-metrics.json | 6 +++---
>> .../arch/x86/alderlake/pipeline.json | 2 +-
>> .../arch/x86/alderlaken/adln-metrics.json | 6 +++---
>> .../x86/broadwellde/uncore-interconnect.json | 18 +++++++++---------
>> .../x86/broadwellx/uncore-interconnect.json | 18 +++++++++---------
>> .../pmu-events/arch/x86/haswell/memory.json | 2 +-
>> .../pmu-events/arch/x86/haswellx/memory.json | 2 +-
>> .../arch/x86/haswellx/uncore-interconnect.json | 18 +++++++++---------
>> .../arch/x86/ivytown/uncore-interconnect.json | 18 +++++++++---------
>> .../arch/x86/jaketown/uncore-interconnect.json | 18 +++++++++---------
>> .../arch/x86/nehalemep/floating-point.json | 2 +-
>> .../arch/x86/nehalemex/floating-point.json | 2 +-
>> .../arch/x86/westmereep-dp/floating-point.json | 2 +-
>> .../arch/x86/westmereep-sp/floating-point.json | 2 +-
>> .../arch/x86/westmereex/floating-point.json | 2 +-
>> 15 files changed, 59 insertions(+), 59 deletions(-)
>>
>> diff --git a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
>> index c6780d5c456b..8b6bed3bc766 100644
>> --- a/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
>> +++ b/tools/perf/pmu-events/arch/x86/alderlake/adl-metrics.json
>> @@ -395,13 +395,13 @@
>> "Unit": "cpu_atom"
>> },
>> {
>> - "BriefDescription": "Instructions per Branch (lower number means higher occurance rate)",
>> + "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
>> "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
>> "MetricName": "tma_info_inst_mix_ipbranch",
>> "Unit": "cpu_atom"
>> },
>> {
>> - "BriefDescription": "Instruction per (near) call (lower number means higher occurance rate)",
>> + "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
>> "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.CALL",
>> "MetricName": "tma_info_inst_mix_ipcall",
>> "Unit": "cpu_atom"
>> @@ -726,7 +726,7 @@
>> "Unit": "cpu_atom"
>> },
>> {
>> - "BriefDescription": "Counts the numer of issue slots that result in retirement slots.",
>> + "BriefDescription": "Counts the number of issue slots that result in retirement slots.",
>> "DefaultMetricgroupName": "TopdownL1",
>> "MetricExpr": "TOPDOWN_RETIRING.ALL / tma_info_core_slots",
>> "MetricGroup": "Default;TopdownL1;tma_L1_group",
>> diff --git a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json b/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
>> index cb5b8611064b..a92013cdf136 100644
>> --- a/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
>> +++ b/tools/perf/pmu-events/arch/x86/alderlake/pipeline.json
>> @@ -1145,7 +1145,7 @@
>> "BriefDescription": "TMA slots wasted due to incorrect speculation by branch mispredictions",
>> "EventCode": "0xa4",
>> "EventName": "TOPDOWN.BR_MISPREDICT_SLOTS",
>> - "PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of specualtive operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
>> + "PublicDescription": "Number of TMA slots that were wasted due to incorrect speculation by (any type of) branch mispredictions. This event estimates number of speculative operations that were issued but not retired as well as the out-of-order engine recovery past a branch misprediction.",
>> "SampleAfterValue": "10000003",
>> "UMask": "0x8",
>> "Unit": "cpu_core"
>> diff --git a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
>> index 06e67e34e1bf..c150c14ac6ed 100644
>> --- a/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
>> +++ b/tools/perf/pmu-events/arch/x86/alderlaken/adln-metrics.json
>> @@ -328,12 +328,12 @@
>> "MetricName": "tma_info_inst_mix_idiv_uop_ratio"
>> },
>> {
>> - "BriefDescription": "Instructions per Branch (lower number means higher occurance rate)",
>> + "BriefDescription": "Instructions per Branch (lower number means higher occurrence rate)",
>> "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.ALL_BRANCHES",
>> "MetricName": "tma_info_inst_mix_ipbranch"
>> },
>> {
>> - "BriefDescription": "Instruction per (near) call (lower number means higher occurance rate)",
>> + "BriefDescription": "Instruction per (near) call (lower number means higher occurrence rate)",
>> "MetricExpr": "INST_RETIRED.ANY / BR_INST_RETIRED.CALL",
>> "MetricName": "tma_info_inst_mix_ipcall"
>> },
>> @@ -616,7 +616,7 @@
>> "ScaleUnit": "100%"
>> },
>> {
>> - "BriefDescription": "Counts the numer of issue slots that result in retirement slots.",
>> + "BriefDescription": "Counts the number of issue slots that result in retirement slots.",
>> "DefaultMetricgroupName": "TopdownL1",
>> "MetricExpr": "TOPDOWN_RETIRING.ALL / tma_info_core_slots",
>> "MetricGroup": "Default;TopdownL1;tma_L1_group",
>> diff --git a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
>> index 8a327e0f1441..910395977a6e 100644
>> --- a/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
>> +++ b/tools/perf/pmu-events/arch/x86/broadwellde/uncore-interconnect.json
>> @@ -253,7 +253,7 @@
>> "EventCode": "0x4",
>> "EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -261,7 +261,7 @@
>> "EventCode": "0x1",
>> "EventName": "UNC_I_RxR_BL_DRS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -269,7 +269,7 @@
>> "EventCode": "0x7",
>> "EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -277,7 +277,7 @@
>> "EventCode": "0x5",
>> "EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -285,7 +285,7 @@
>> "EventCode": "0x2",
>> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -293,7 +293,7 @@
>> "EventCode": "0x8",
>> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -301,7 +301,7 @@
>> "EventCode": "0x6",
>> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -309,7 +309,7 @@
>> "EventCode": "0x3",
>> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -317,7 +317,7 @@
>> "EventCode": "0x9",
>> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> diff --git a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
>> index e61a23f68899..b9fb216bee16 100644
>> --- a/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
>> +++ b/tools/perf/pmu-events/arch/x86/broadwellx/uncore-interconnect.json
>> @@ -271,7 +271,7 @@
>> "EventCode": "0x4",
>> "EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -279,7 +279,7 @@
>> "EventCode": "0x1",
>> "EventName": "UNC_I_RxR_BL_DRS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -287,7 +287,7 @@
>> "EventCode": "0x7",
>> "EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -295,7 +295,7 @@
>> "EventCode": "0x5",
>> "EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -303,7 +303,7 @@
>> "EventCode": "0x2",
>> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -311,7 +311,7 @@
>> "EventCode": "0x8",
>> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -319,7 +319,7 @@
>> "EventCode": "0x6",
>> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -327,7 +327,7 @@
>> "EventCode": "0x3",
>> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -335,7 +335,7 @@
>> "EventCode": "0x9",
>> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> diff --git a/tools/perf/pmu-events/arch/x86/haswell/memory.json b/tools/perf/pmu-events/arch/x86/haswell/memory.json
>> index 2fc25e22a42a..df44c28efeeb 100644
>> --- a/tools/perf/pmu-events/arch/x86/haswell/memory.json
>> +++ b/tools/perf/pmu-events/arch/x86/haswell/memory.json
>> @@ -62,7 +62,7 @@
>> "BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
>> "EventCode": "0xC3",
>> "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
>> - "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
>> + "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data in-flight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
>> "SampleAfterValue": "100003",
>> "UMask": "0x2"
>> },
>> diff --git a/tools/perf/pmu-events/arch/x86/haswellx/memory.json b/tools/perf/pmu-events/arch/x86/haswellx/memory.json
>> index 2d212cf59e92..d66e465ce41a 100644
>> --- a/tools/perf/pmu-events/arch/x86/haswellx/memory.json
>> +++ b/tools/perf/pmu-events/arch/x86/haswellx/memory.json
>> @@ -62,7 +62,7 @@
>> "BriefDescription": "Counts the number of machine clears due to memory order conflicts.",
>> "EventCode": "0xC3",
>> "EventName": "MACHINE_CLEARS.MEMORY_ORDERING",
>> - "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data inflight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
>> + "PublicDescription": "This event counts the number of memory ordering machine clears detected. Memory ordering machine clears can result from memory address aliasing or snoops from another hardware thread or core to data in-flight in the pipeline. Machine clears can have a significant performance impact if they are happening frequently.",
>> "SampleAfterValue": "100003",
>> "UMask": "0x2"
>> },
>> diff --git a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
>> index 954e8198c7a5..bef1f5ef6f31 100644
>> --- a/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
>> +++ b/tools/perf/pmu-events/arch/x86/haswellx/uncore-interconnect.json
>> @@ -271,7 +271,7 @@
>> "EventCode": "0x4",
>> "EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -279,7 +279,7 @@
>> "EventCode": "0x1",
>> "EventName": "UNC_I_RxR_BL_DRS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -287,7 +287,7 @@
>> "EventCode": "0x7",
>> "EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -295,7 +295,7 @@
>> "EventCode": "0x5",
>> "EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -303,7 +303,7 @@
>> "EventCode": "0x2",
>> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -311,7 +311,7 @@
>> "EventCode": "0x8",
>> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -319,7 +319,7 @@
>> "EventCode": "0x6",
>> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -327,7 +327,7 @@
>> "EventCode": "0x3",
>> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -335,7 +335,7 @@
>> "EventCode": "0x9",
>> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> diff --git a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
>> index ccf451534d16..f4d11da01383 100644
>> --- a/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
>> +++ b/tools/perf/pmu-events/arch/x86/ivytown/uncore-interconnect.json
>> @@ -140,7 +140,7 @@
>> "EventCode": "0x4",
>> "EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -148,21 +148,21 @@
>> "EventCode": "0x1",
>> "EventName": "UNC_I_RxR_BL_DRS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x7",
>> "EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x5",
>> "EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -170,21 +170,21 @@
>> "EventCode": "0x2",
>> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x8",
>> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x6",
>> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -192,14 +192,14 @@
>> "EventCode": "0x3",
>> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x9",
>> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> diff --git a/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json b/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
>> index 874f15ea8228..0fc907e5cf3c 100644
>> --- a/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
>> +++ b/tools/perf/pmu-events/arch/x86/jaketown/uncore-interconnect.json
>> @@ -140,7 +140,7 @@
>> "EventCode": "0x4",
>> "EventName": "UNC_I_RxR_BL_DRS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -148,21 +148,21 @@
>> "EventCode": "0x1",
>> "EventName": "UNC_I_RxR_BL_DRS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x7",
>> "EventName": "UNC_I_RxR_BL_DRS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x5",
>> "EventName": "UNC_I_RxR_BL_NCB_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -170,21 +170,21 @@
>> "EventCode": "0x2",
>> "EventName": "UNC_I_RxR_BL_NCB_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x8",
>> "EventName": "UNC_I_RxR_BL_NCB_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x6",
>> "EventName": "UNC_I_RxR_BL_NCS_CYCLES_FULL",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of cycles when the BL Ingress is full. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> @@ -192,14 +192,14 @@
>> "EventCode": "0x3",
>> "EventName": "UNC_I_RxR_BL_NCS_INSERTS",
>> "PerPkg": "1",
>> - "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Counts the number of allocations into the BL Ingress. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> "EventCode": "0x9",
>> "EventName": "UNC_I_RxR_BL_NCS_OCCUPANCY",
>> "PerPkg": "1",
>> - "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requets as well as outbound MMIO writes.",
>> + "PublicDescription": "Accumulates the occupancy of the BL Ingress in each cycles. This queue is where the IRP receives data from R2PCIe (the ring). It is used for data returns from read requests as well as outbound MMIO writes.",
>> "Unit": "IRP"
>> },
>> {
>> diff --git a/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json b/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
>> index c03f8990fa82..196ae1d9b157 100644
>> --- a/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
>> +++ b/tools/perf/pmu-events/arch/x86/nehalemep/floating-point.json
>> @@ -8,7 +8,7 @@
>> "UMask": "0x1"
>> },
>> {
>> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
>> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
>> "EventCode": "0xF7",
>> "EventName": "FP_ASSIST.INPUT",
>> "PEBS": "1",
>> diff --git a/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json b/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
>> index c03f8990fa82..196ae1d9b157 100644
>> --- a/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
>> +++ b/tools/perf/pmu-events/arch/x86/nehalemex/floating-point.json
>> @@ -8,7 +8,7 @@
>> "UMask": "0x1"
>> },
>> {
>> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
>> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
>> "EventCode": "0xF7",
>> "EventName": "FP_ASSIST.INPUT",
>> "PEBS": "1",
>> diff --git a/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
>> index c03f8990fa82..196ae1d9b157 100644
>> --- a/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
>> +++ b/tools/perf/pmu-events/arch/x86/westmereep-dp/floating-point.json
>> @@ -8,7 +8,7 @@
>> "UMask": "0x1"
>> },
>> {
>> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
>> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
>> "EventCode": "0xF7",
>> "EventName": "FP_ASSIST.INPUT",
>> "PEBS": "1",
>> diff --git a/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
>> index c03f8990fa82..196ae1d9b157 100644
>> --- a/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
>> +++ b/tools/perf/pmu-events/arch/x86/westmereep-sp/floating-point.json
>> @@ -8,7 +8,7 @@
>> "UMask": "0x1"
>> },
>> {
>> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
>> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
>> "EventCode": "0xF7",
>> "EventName": "FP_ASSIST.INPUT",
>> "PEBS": "1",
>> diff --git a/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json b/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
>> index c03f8990fa82..196ae1d9b157 100644
>> --- a/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
>> +++ b/tools/perf/pmu-events/arch/x86/westmereex/floating-point.json
>> @@ -8,7 +8,7 @@
>> "UMask": "0x1"
>> },
>> {
>> - "BriefDescription": "X87 Floating poiint assists for invalid input value (Precise Event)",
>> + "BriefDescription": "X87 Floating point assists for invalid input value (Precise Event)",
>> "EventCode": "0xF7",
>> "EventName": "FP_ASSIST.INPUT",
>> "PEBS": "1",
>> --
>> 2.42.0.rc2.253.gd59a3bf2b4-goog
>>
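
A quick way to audit the remaining event files for the same class of mistake is to scan every description string. A minimal sketch, assuming Python 3 and a kernel checkout with tools/perf/pmu-events present; the typo list is just the two misspellings visible in the hunks above:

    import json
    import pathlib

    TYPOS = ("requets", "poiint")  # misspellings corrected by this patch
    root = pathlib.Path("tools/perf/pmu-events/arch/x86")
    for path in sorted(root.rglob("*.json")):
        data = json.loads(path.read_text())
        if not isinstance(data, list):
            continue  # skip any file that is not a flat event list
        for event in data:
            for field in ("BriefDescription", "PublicDescription"):
                text = event.get(field, "").lower()
                for typo in TYPOS:
                    if typo in text:
                        print(path, event.get("EventName", "?"), typo)
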
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v1 1/2] perf vendor events intel: Add lunarlake v1.0
2023-08-24 20:24 [PATCH v1 1/2] perf vendor events intel: Add lunarlake v1.0 Ian Rogers
2023-08-24 20:24 ` [PATCH v1 2/2] perf vendor events intel: Fix spelling mistakes Ian Rogers
@ 2023-08-25 13:28 ` Liang, Kan
2023-09-06 16:00 ` Arnaldo Carvalho de Melo
2 siblings, 0 replies; 6+ messages in thread
From: Liang, Kan @ 2023-08-25 13:28 UTC (permalink / raw)
To: Ian Rogers, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim,
Adrian Hunter, Maxime Coquelin, Alexandre Torgue, linux-kernel,
linux-perf-users, Colin Ian King, Edward Baker
On 2023-08-24 4:24 p.m., Ian Rogers wrote:
> Add lunarlake events that were added at Intel's perfmon site in:
> https://github.com/intel/perfmon/pull/97
> LNL: Release initial events
>
> Signed-off-by: Ian Rogers <irogers@google.com>
Thanks Ian and Colin. The series looks good to me.
Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
Thanks,
Kan
> ---
> .../pmu-events/arch/x86/lunarlake/cache.json | 219 ++++++++++++++++++
> .../arch/x86/lunarlake/frontend.json | 27 +++
> .../pmu-events/arch/x86/lunarlake/memory.json | 183 +++++++++++++++
> .../pmu-events/arch/x86/lunarlake/other.json | 62 +++++
> .../arch/x86/lunarlake/pipeline.json | 217 +++++++++++++++++
> .../arch/x86/lunarlake/virtual-memory.json | 56 +++++
> tools/perf/pmu-events/arch/x86/mapfile.csv | 1 +
> 7 files changed, 765 insertions(+)
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/cache.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/frontend.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/memory.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/other.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json
>
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/cache.json b/tools/perf/pmu-events/arch/x86/lunarlake/cache.json
> new file mode 100644
> index 000000000000..1823149067b5
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/cache.json
> @@ -0,0 +1,219 @@
> +[
> + {
> + "BriefDescription": "Counts the number of L2 Cache Accesses Counts the total number of L2 Cache Accesses - sum of hits, misses, rejects front door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only, per core event",
> + "EventCode": "0x24",
> + "EventName": "L2_REQUEST.ALL",
> + "PublicDescription": "Counts the number of L2 Cache Accesses Counts the total number of L2 Cache Accesses - sum of hits, misses, rejects front door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x7",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.MISS",
> + "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
> + "SampleAfterValue": "200003",
> + "UMask": "0x41",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.MISS",
> + "PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x41",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.REFERENCE",
> + "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x4f",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.REFERENCE",
> + "PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x4f",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Retired load instructions.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_INST_RETIRED.ALL_LOADS",
> + "PEBS": "1",
> + "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x81",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Retired store instructions.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_INST_RETIRED.ALL_STORES",
> + "PEBS": "1",
> + "PublicDescription": "Counts all retired store instructions.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x82",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of load uops retired.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
> + "PEBS": "1",
> + "SampleAfterValue": "200003",
> + "UMask": "0x81",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of store uops retired.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.ALL_STORES",
> + "PEBS": "1",
> + "SampleAfterValue": "200003",
> + "UMask": "0x82",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_1024",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x400",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x80",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x10",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_2048",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x800",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x100",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x20",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x4",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x200",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x40",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x8",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of stores uops retired same as MEM_UOPS_RETIRED.ALL_STORES",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x6",
> + "Unit": "cpu_atom"
> + }
> +]
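
One detail worth noting in the load-latency events above: the MSRValue programmed via MSR 0x3F6 is simply the cycle threshold from the event name, in hex (0x80 for GT_128, 0x400 for GT_1024, and so on). A minimal sketch reconstructing that mapping:

    # MEM_UOPS_RETIRED.LOAD_LATENCY_GT_<N> programs MSR 0x3F6 with N,
    # the minimum first-dispatch-to-completion latency in cycles.
    for n in (4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048):
        print(f"MEM_UOPS_RETIRED.LOAD_LATENCY_GT_{n}: MSRValue = {n:#x}")
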
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/frontend.json b/tools/perf/pmu-events/arch/x86/lunarlake/frontend.json
> new file mode 100644
> index 000000000000..5e4ef81b43d6
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/frontend.json
> @@ -0,0 +1,27 @@
> +[
> + {
> + "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump.",
> + "EventCode": "0x80",
> + "EventName": "ICACHE.ACCESSES",
> + "SampleAfterValue": "200003",
> + "UMask": "0x3",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump and the instruction cache registers bytes are not present. -",
> + "EventCode": "0x80",
> + "EventName": "ICACHE.MISSES",
> + "SampleAfterValue": "200003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "This event counts a subset of the Topdown Slots event that were no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.",
> + "EventCode": "0x9c",
> + "EventName": "IDQ_BUBBLES.CORE",
> + "PublicDescription": "This event counts a subset of the Topdown Slots event that were no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.\nSoftware can use this event as the numerator for the Frontend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + }
> +]
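
The IDQ_BUBBLES.CORE description above names its intended use directly. A minimal sketch of that ratio, assuming raw counts for the two events (TOPDOWN.SLOTS appears in pipeline.json below):

    # Top-down level 1, Frontend Bound, per the description above:
    def frontend_bound(idq_bubbles_core: int, topdown_slots: int) -> float:
        return idq_bubbles_core / topdown_slots
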
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/memory.json b/tools/perf/pmu-events/arch/x86/lunarlake/memory.json
> new file mode 100644
> index 000000000000..51d70ba00bd4
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/memory.json
> @@ -0,0 +1,183 @@
> +[
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x400",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "53",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x80",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "1009",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x10",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "20011",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_2048",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x800",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "23",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x100",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "503",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x20",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "100007",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x4",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x200",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "101",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x40",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "2003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x8",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "50021",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Retired memory store access operations. A PDist event for PEBS Store Latency Facility.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.STORE_SAMPLE",
> + "PEBS": "2",
> + "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts cacheable demand data reads were not supplied by the L3 cache.",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts demand reads for ownership, including SWPREFETCHW which is an RFO were not supplied by the L3 cache.",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_RFO.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_RFO.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + }
> +]
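
Comparing MSRValue fields across the OCR events here and in other.json below suggests how the encoding composes: the low bits select the request type (0x1 for demand data reads, 0x2 for demand RFOs) and the upper bits select the response filter. A hedged sketch; the mask names are my own labels read off the events, not Intel's:

    DEMAND_DATA_RD = 0x1           # request-type bits
    DEMAND_RFO     = 0x2
    L3_MISS        = 0x3FBFC00000  # response filters
    ANY_RESPONSE   = 0x10000
    DRAM           = 0x184000000
    assert DEMAND_DATA_RD | L3_MISS == 0x3FBFC00001
    assert DEMAND_RFO | L3_MISS == 0x3FBFC00002
    assert DEMAND_DATA_RD | ANY_RESPONSE == 0x10001
    assert DEMAND_DATA_RD | DRAM == 0x184000001
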
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/other.json b/tools/perf/pmu-events/arch/x86/lunarlake/other.json
> new file mode 100644
> index 000000000000..69adaed5686d
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/other.json
> @@ -0,0 +1,62 @@
> +[
> + {
> + "BriefDescription": "Counts cacheable demand data reads Catch all value for any response types - this includes response types not define in the OCR. If this is set all other response types will be ignored",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand data reads that have any type of response.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts cacheable demand data reads were supplied by DRAM.",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_DATA_RD.DRAM",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x184000001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_DATA_RD.DRAM",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x184000001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts demand reads for ownership, including SWPREFETCHW which is an RFO Catch all value for any response types - this includes response types not define in the OCR. If this is set all other response types will be ignored",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + }
> +]
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json b/tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json
> new file mode 100644
> index 000000000000..2bde664fdc0f
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json
> @@ -0,0 +1,217 @@
> +[
> + {
> + "BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
> + "EventCode": "0xc4",
> + "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
> + "SampleAfterValue": "200003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "All branch instructions retired.",
> + "EventCode": "0xc4",
> + "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts all branch instructions retired.",
> + "SampleAfterValue": "400009",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
> + "EventCode": "0xc5",
> + "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
> + "SampleAfterValue": "200003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "All mispredicted branch instructions retired.",
> + "EventCode": "0xc5",
> + "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
> + "SampleAfterValue": "400009",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles",
> + "EventName": "CPU_CLK_UNHALTED.CORE",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.CORE_P",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of unhalted reference clock cycles",
> + "EventName": "CPU_CLK_UNHALTED.REF_TSC",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x3",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Reference cycles when the core is not in halt state.",
> + "EventName": "CPU_CLK_UNHALTED.REF_TSC",
> + "PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x3",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Reference cycles when the core is not in halt state.",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
> + "PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Core cycles when the thread is not in halt state",
> + "EventName": "CPU_CLK_UNHALTED.THREAD",
> + "PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.THREAD_P",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Thread cycles when thread is not in halt state",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.THREAD_P",
> + "PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of instructions retired",
> + "EventName": "INST_RETIRED.ANY",
> + "PEBS": "1",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
> + "EventName": "INST_RETIRED.ANY",
> + "PEBS": "1",
> + "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of instructions retired",
> + "EventCode": "0xc0",
> + "EventName": "INST_RETIRED.ANY_P",
> + "PEBS": "1",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Number of instructions retired. General Counter - architectural event",
> + "EventCode": "0xc0",
> + "EventName": "INST_RETIRED.ANY_P",
> + "PEBS": "1",
> + "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of occurrences a retired load gets blocked because its address partially overlaps with an older store (size mismatch) - unknown_sta/bad_forward",
> + "EventCode": "0x03",
> + "EventName": "LD_BLOCKS.STORE_FORWARD",
> + "PEBS": "1",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
> + "EventCode": "0x03",
> + "EventName": "LD_BLOCKS.STORE_FORWARD",
> + "PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x82",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions.",
> + "EventCode": "0xa4",
> + "EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
> + "PublicDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions.\nSoftware can use this event as the numerator for the Backend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "10000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
> + "EventName": "TOPDOWN.SLOTS",
> + "PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
> + "SampleAfterValue": "10000003",
> + "UMask": "0x4",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
> + "EventCode": "0xa4",
> + "EventName": "TOPDOWN.SLOTS_P",
> + "PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "10000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
> + "EventName": "TOPDOWN_BAD_SPECULATION.ALL",
> + "PublicDescription": "Fixed Counter: Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the IQ. Also, includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls",
> + "EventCode": "0xa4",
> + "EventName": "TOPDOWN_BE_BOUND.ALL",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of retirement slots not consumed due to front end stalls",
> + "EventName": "TOPDOWN_FE_BOUND.ALL",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x6",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of consumed retirement slots. Similar to UOPS_RETIRED.ALL",
> + "EventName": "TOPDOWN_RETIRING.ALL",
> + "PEBS": "1",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x7",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance for example, as measured by the instructions-per-cycle metric.",
> + "EventCode": "0xc2",
> + "EventName": "UOPS_RETIRED.SLOTS",
> + "PublicDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance for example, as measured by the instructions-per-cycle metric.\nSoftware can use this event as the numerator for the Retiring metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + }
> +]
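
The cpu_core topdown events above each name their role: TOPDOWN.SLOTS is the denominator, while IDQ_BUBBLES.CORE, TOPDOWN.BACKEND_BOUND_SLOTS and UOPS_RETIRED.SLOTS are the Frontend Bound, Backend Bound and Retiring numerators. A sketch of the level-1 breakdown; deriving Bad Speculation as the remainder is the usual TMA convention, not something stated in this file:

    def tma_level1(slots, fe_bubbles, be_bound_slots, retiring_slots):
        frontend_bound = fe_bubbles / slots       # IDQ_BUBBLES.CORE
        backend_bound  = be_bound_slots / slots   # TOPDOWN.BACKEND_BOUND_SLOTS
        retiring       = retiring_slots / slots   # UOPS_RETIRED.SLOTS
        bad_speculation = 1.0 - frontend_bound - backend_bound - retiring
        return frontend_bound, backend_bound, retiring, bad_speculation
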
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json b/tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json
> new file mode 100644
> index 000000000000..bb9458799f1c
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json
> @@ -0,0 +1,56 @@
> +[
> + {
> + "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to any page size.",
> + "EventCode": "0x08",
> + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
> + "SampleAfterValue": "200003",
> + "UMask": "0xe",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
> + "EventCode": "0x12",
> + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
> + "SampleAfterValue": "100003",
> + "UMask": "0xe",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to any page size.",
> + "EventCode": "0x49",
> + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0xe",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
> + "EventCode": "0x13",
> + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
> + "SampleAfterValue": "100003",
> + "UMask": "0xe",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
> + "EventCode": "0x85",
> + "EventName": "ITLB_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0xe",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
> + "EventCode": "0x11",
> + "EventName": "ITLB_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
> + "SampleAfterValue": "100003",
> + "UMask": "0xe",
> + "Unit": "cpu_core"
> + }
> +]
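
A common way to consume the WALK_COMPLETED counts above is as completed walks per kilo-instruction, pairing them with INST_RETIRED.ANY from pipeline.json. A minimal sketch:

    def walks_per_kilo_inst(walk_completed: int, inst_retired: int) -> float:
        # e.g. DTLB_LOAD_MISSES.WALK_COMPLETED vs. INST_RETIRED.ANY
        return 1000.0 * walk_completed / inst_retired
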
> diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
> index 3a8770e29fe8..de9582f183cb 100644
> --- a/tools/perf/pmu-events/arch/x86/mapfile.csv
> +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
> @@ -19,6 +19,7 @@ GenuineIntel-6-3A,v24,ivybridge,core
> GenuineIntel-6-3E,v23,ivytown,core
> GenuineIntel-6-2D,v23,jaketown,core
> GenuineIntel-6-(57|85),v10,knightslanding,core
> +GenuineIntel-6-BD,v1.00,lunarlake,core
> GenuineIntel-6-A[AC],v1.04,meteorlake,core
> GenuineIntel-6-1[AEF],v3,nehalemep,core
> GenuineIntel-6-2E,v3,nehalemex,core
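
For context on the mapfile change: the first column is matched as a regular expression against the CPU's vendor-family-model string (the GenuineIntel-6-A[AC] row above makes that visible), so the new row selects model 0xBD. A rough sketch of that lookup, not perf's actual code:

    import re

    ROWS = [  # (cpuid pattern, version, directory, type), from the hunk above
        ("GenuineIntel-6-BD", "v1.00", "lunarlake", "core"),
        ("GenuineIntel-6-A[AC]", "v1.04", "meteorlake", "core"),
    ]

    def events_dir(cpuid: str):
        for pattern, _version, directory, _type in ROWS:
            if re.fullmatch(pattern, cpuid):
                return directory
        return None

    print(events_dir("GenuineIntel-6-BD"))  # -> lunarlake
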
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [PATCH v1 1/2] perf vendor events intel: Add lunarlake v1.0
2023-08-24 20:24 [PATCH v1 1/2] perf vendor events intel: Add lunarlake v1.0 Ian Rogers
2023-08-24 20:24 ` [PATCH v1 2/2] perf vendor events intel: Fix spelling mistakes Ian Rogers
2023-08-25 13:28 ` [PATCH v1 1/2] perf vendor events intel: Add lunarlake v1.0 Liang, Kan
@ 2023-09-06 16:00 ` Arnaldo Carvalho de Melo
2 siblings, 0 replies; 6+ messages in thread
From: Arnaldo Carvalho de Melo @ 2023-09-06 16:00 UTC (permalink / raw)
To: Ian Rogers
Cc: Peter Zijlstra, Ingo Molnar, Mark Rutland, Alexander Shishkin,
Jiri Olsa, Namhyung Kim, Adrian Hunter, Maxime Coquelin,
Alexandre Torgue, Kan Liang, Zhengjun Xing, linux-kernel,
linux-perf-users, Colin Ian King, Edward Baker
On Thu, Aug 24, 2023 at 01:24:01PM -0700, Ian Rogers wrote:
> Add lunarlake events that were added at Intel's perfmon site in:
> https://github.com/intel/perfmon/pull/97
> LNL: Release initial events
Thanks, applied the series.
- Arnaldo
> Signed-off-by: Ian Rogers <irogers@google.com>
> ---
> .../pmu-events/arch/x86/lunarlake/cache.json | 219 ++++++++++++++++++
> .../arch/x86/lunarlake/frontend.json | 27 +++
> .../pmu-events/arch/x86/lunarlake/memory.json | 183 +++++++++++++++
> .../pmu-events/arch/x86/lunarlake/other.json | 62 +++++
> .../arch/x86/lunarlake/pipeline.json | 217 +++++++++++++++++
> .../arch/x86/lunarlake/virtual-memory.json | 56 +++++
> tools/perf/pmu-events/arch/x86/mapfile.csv | 1 +
> 7 files changed, 765 insertions(+)
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/cache.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/frontend.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/memory.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/other.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json
> create mode 100644 tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json
>
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/cache.json b/tools/perf/pmu-events/arch/x86/lunarlake/cache.json
> new file mode 100644
> index 000000000000..1823149067b5
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/cache.json
> @@ -0,0 +1,219 @@
> +[
> + {
> + "BriefDescription": "Counts the number of L2 Cache Accesses Counts the total number of L2 Cache Accesses - sum of hits, misses, rejects front door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only, per core event",
> + "EventCode": "0x24",
> + "EventName": "L2_REQUEST.ALL",
> + "PublicDescription": "Counts the number of L2 Cache Accesses Counts the total number of L2 Cache Accesses - sum of hits, misses, rejects front door requests for CRd/DRd/RFO/ItoM/L2 Prefetches only.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x7",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of cacheable memory requests that miss in the LLC. Counts on a per core basis.",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.MISS",
> + "PublicDescription": "Counts the number of cacheable memory requests that miss in the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
> + "SampleAfterValue": "200003",
> + "UMask": "0x41",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Core-originated cacheable requests that missed L3 (Except hardware prefetches to the L3)",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.MISS",
> + "PublicDescription": "Counts core-originated cacheable requests that miss the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x41",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of cacheable memory requests that access the LLC. Counts on a per core basis.",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.REFERENCE",
> + "PublicDescription": "Counts the number of cacheable memory requests that access the Last Level Cache (LLC). Requests include demand loads, reads for ownership (RFO), instruction fetches and L1 HW prefetches. If the platform has an L3 cache, the LLC is the L3 cache, otherwise it is the L2 cache. Counts on a per core basis.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x4f",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Core-originated cacheable requests that refer to L3 (Except hardware prefetches to the L3)",
> + "EventCode": "0x2e",
> + "EventName": "LONGEST_LAT_CACHE.REFERENCE",
> + "PublicDescription": "Counts core-originated cacheable requests to the L3 cache (Longest Latency cache). Requests include data and code reads, Reads-for-Ownership (RFOs), speculative accesses and hardware prefetches to the L1 and L2. It does not include hardware prefetches to the L3, and may not count other types of requests to the L3.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x4f",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Retired load instructions.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_INST_RETIRED.ALL_LOADS",
> + "PEBS": "1",
> + "PublicDescription": "Counts all retired load instructions. This event accounts for SW prefetch instructions of PREFETCHNTA or PREFETCHT0/1/2 or PREFETCHW.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x81",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Retired store instructions.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_INST_RETIRED.ALL_STORES",
> + "PEBS": "1",
> + "PublicDescription": "Counts all retired store instructions.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x82",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of load uops retired.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.ALL_LOADS",
> + "PEBS": "1",
> + "SampleAfterValue": "200003",
> + "UMask": "0x81",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of store uops retired.",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.ALL_STORES",
> + "PEBS": "1",
> + "SampleAfterValue": "200003",
> + "UMask": "0x82",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_1024",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x400",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_128",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x80",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_16",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x10",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_2048",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x800",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_256",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x100",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_32",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x20",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_4",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x4",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_512",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x200",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_64",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x40",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of tagged load uops retired that exceed the latency threshold defined in MEC_CR_PEBS_LD_LAT_THRESHOLD - Only counts with PEBS enabled",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.LOAD_LATENCY_GT_8",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x8",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of stores uops retired same as MEM_UOPS_RETIRED.ALL_STORES",
> + "Data_LA": "1",
> + "EventCode": "0xd0",
> + "EventName": "MEM_UOPS_RETIRED.STORE_LATENCY",
> + "PEBS": "2",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x6",
> + "Unit": "cpu_atom"
> + }
> +]
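
The retired load/store events above are usable by name once perf picks up
these files; a minimal usage sketch, with ./a.out standing in for a real
workload (perf lower-cases the JSON event names):

  # Count retired loads and stores on both hybrid core types.
  $ perf stat -e cpu_core/mem_inst_retired.all_loads/ \
              -e cpu_core/mem_inst_retired.all_stores/ \
              -e cpu_atom/mem_uops_retired.all_loads/ \
              -e cpu_atom/mem_uops_retired.all_stores/ -- ./a.out
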
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/frontend.json b/tools/perf/pmu-events/arch/x86/lunarlake/frontend.json
> new file mode 100644
> index 000000000000..5e4ef81b43d6
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/frontend.json
> @@ -0,0 +1,27 @@
> +[
> + {
> + "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump.",
> + "EventCode": "0x80",
> + "EventName": "ICACHE.ACCESSES",
> + "SampleAfterValue": "200003",
> + "UMask": "0x3",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts every time the code stream enters into a new cache line by walking sequential from the previous line or being redirected by a jump and the instruction cache registers bytes are not present. -",
> + "EventCode": "0x80",
> + "EventName": "ICACHE.MISSES",
> + "SampleAfterValue": "200003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "This event counts a subset of the Topdown Slots event that were no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.",
> + "EventCode": "0x9c",
> + "EventName": "IDQ_BUBBLES.CORE",
> + "PublicDescription": "This event counts a subset of the Topdown Slots event that were no operation was delivered to the back-end pipeline due to instruction fetch limitations when the back-end could have accepted more operations. Common examples include instruction cache misses or x86 instruction decode limitations.\nSoftware can use this event as the numerator for the Frontend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + }
> +]
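
The ICACHE pair above gives a straightforward front-end sanity check on the
atom cores; a sketch (same placeholder workload):

  # icache miss ratio = icache.misses / icache.accesses
  $ perf stat -e cpu_atom/icache.accesses/,cpu_atom/icache.misses/ -- ./a.out
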
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/memory.json b/tools/perf/pmu-events/arch/x86/lunarlake/memory.json
> new file mode 100644
> index 000000000000..51d70ba00bd4
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/memory.json
> @@ -0,0 +1,183 @@
> +[
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_1024",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x400",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 1024 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "53",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_128",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x80",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 128 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "1009",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_16",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x10",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 16 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "20011",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_2048",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x800",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 2048 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "23",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_256",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x100",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 256 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "503",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_32",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x20",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 32 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "100007",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_4",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x4",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 4 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_512",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x200",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 512 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "101",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_64",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x40",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 64 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "2003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.LOAD_LATENCY_GT_8",
> + "MSRIndex": "0x3F6",
> + "MSRValue": "0x8",
> + "PEBS": "2",
> + "PublicDescription": "Counts randomly selected loads when the latency from first dispatch to completion is greater than 8 cycles. Reported latency may be longer than just the memory latency.",
> + "SampleAfterValue": "50021",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Retired memory store access operations. A PDist event for PEBS Store Latency Facility.",
> + "Data_LA": "1",
> + "EventCode": "0xcd",
> + "EventName": "MEM_TRANS_RETIRED.STORE_SAMPLE",
> + "PEBS": "2",
> + "PublicDescription": "Counts Retired memory accesses with at least 1 store operation. This PEBS event is the precisely-distributed (PDist) trigger covering all stores uops for sampling by the PEBS Store Latency Facility. The facility is described in Intel SDM Volume 3 section 19.9.8",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts cacheable demand data reads were not supplied by the L3 cache.",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand data reads that were not supplied by the L3 cache.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_DATA_RD.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts demand reads for ownership, including SWPREFETCHW which is an RFO were not supplied by the L3 cache.",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_RFO.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that were not supplied by the L3 cache.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_RFO.L3_MISS",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x3FBFC00002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + }
> +]
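
The LOAD_LATENCY_GT_* events take their threshold through MSR 0x3F6, which
is what 'perf mem record --ldlat' programs; a sketch of both ways of
sampling slow loads (exact PEBS and modifier support depends on the kernel):

  # Sample loads taking more than 128 cycles, then report them
  $ perf mem record --ldlat 128 -- ./a.out
  $ perf mem report
  # Or sample the named event directly as a precise event
  $ perf record -e cpu_core/mem_trans_retired.load_latency_gt_128/pp -- ./a.out

MEM_TRANS_RETIRED.STORE_SAMPLE plays the corresponding role on the store
side as the PDist trigger for the PEBS Store Latency Facility.
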
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/other.json b/tools/perf/pmu-events/arch/x86/lunarlake/other.json
> new file mode 100644
> index 000000000000..69adaed5686d
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/other.json
> @@ -0,0 +1,62 @@
> +[
> + {
> + "BriefDescription": "Counts cacheable demand data reads Catch all value for any response types - this includes response types not define in the OCR. If this is set all other response types will be ignored",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand data reads that have any type of response.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_DATA_RD.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts cacheable demand data reads were supplied by DRAM.",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_DATA_RD.DRAM",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x184000001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand data reads that were supplied by DRAM.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_DATA_RD.DRAM",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x184000001",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts demand reads for ownership, including SWPREFETCHW which is an RFO Catch all value for any response types - this includes response types not define in the OCR. If this is set all other response types will be ignored",
> + "EventCode": "0xB7",
> + "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts demand read for ownership (RFO) requests and software prefetches for exclusive ownership (PREFETCHW) that have any type of response.",
> + "EventCode": "0x2A,0x2B",
> + "EventName": "OCR.DEMAND_RFO.ANY_RESPONSE",
> + "MSRIndex": "0x1a6,0x1a7",
> + "MSRValue": "0x10002",
> + "SampleAfterValue": "100003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + }
> +]
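
For the OCR events, MSRValue is the request/response filter that lands in
the MSR_OFFCORE_RSP pair (0x1a6/0x1a7); perf programs that from the JSON, so
the events work by name on either PMU. Sketch:

  # DRAM-supplied demand reads on both core types
  $ perf stat -e cpu_core/ocr.demand_data_rd.dram/ \
              -e cpu_atom/ocr.demand_data_rd.dram/ -- ./a.out
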
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json b/tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json
> new file mode 100644
> index 000000000000..2bde664fdc0f
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/pipeline.json
> @@ -0,0 +1,217 @@
> +[
> + {
> + "BriefDescription": "Counts the total number of branch instructions retired for all branch types.",
> + "EventCode": "0xc4",
> + "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts the total number of instructions in which the instruction pointer (IP) of the processor is resteered due to a branch instruction and the branch instruction successfully retires. All branch type instructions are accounted for.",
> + "SampleAfterValue": "200003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "All branch instructions retired.",
> + "EventCode": "0xc4",
> + "EventName": "BR_INST_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts all branch instructions retired.",
> + "SampleAfterValue": "400009",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the total number of mispredicted branch instructions retired for all branch types.",
> + "EventCode": "0xc5",
> + "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts the total number of mispredicted branch instructions retired. All branch type instructions are accounted for. Prediction of the branch target address enables the processor to begin executing instructions before the non-speculative execution path is known. The branch prediction unit (BPU) predicts the target address based on the instruction pointer (IP) of the branch and on the execution path through which execution reached this IP. A branch misprediction occurs when the prediction is wrong, and results in discarding all instructions executed in the speculative path and re-fetching from the correct path.",
> + "SampleAfterValue": "200003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "All mispredicted branch instructions retired.",
> + "EventCode": "0xc5",
> + "EventName": "BR_MISP_RETIRED.ALL_BRANCHES",
> + "PEBS": "1",
> + "PublicDescription": "Counts all the retired branch instructions that were mispredicted by the processor. A branch misprediction occurs when the processor incorrectly predicts the destination of the branch. When the misprediction is discovered at execution, all the instructions executed in the wrong (speculative) path must be discarded, and the processor must start fetching from the correct path.",
> + "SampleAfterValue": "400009",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of unhalted core clock cycles",
> + "EventName": "CPU_CLK_UNHALTED.CORE",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.THREAD_P]",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.CORE_P",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of unhalted reference clock cycles",
> + "EventName": "CPU_CLK_UNHALTED.REF_TSC",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x3",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Reference cycles when the core is not in halt state.",
> + "EventName": "CPU_CLK_UNHALTED.REF_TSC",
> + "PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x3",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Reference cycles when the core is not in halt state.",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.REF_TSC_P",
> + "PublicDescription": "Counts the number of reference cycles when the core is not in a halt state. The core enters the halt state when it is running the HLT instruction or the MWAIT instruction. This event is not affected by core frequency changes (for example, P states, TM2 transitions) but has the same incrementing frequency as the time stamp counter. This event can approximate elapsed time while the core was not in a halt state. Note: On all current platforms this event stops counting during 'throttling (TM)' states duty off periods the processor is 'halted'. The counter update is done at a lower clock rate then the core clock the overflow status bit for this counter may appear 'sticky'. After the counter has overflowed and software clears the overflow status bit and resets the counter to less than MAX. The reset value to the counter is not clocked immediately so the overflow status bit will flip 'high (1)' and generate another PMI (if enabled) after which the reset value gets clocked into the counter. Therefore, software will get the interrupt, read the overflow status bit '1 for bit 34 while the counter value is less than MAX. Software should ignore this case.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Core cycles when the thread is not in halt state",
> + "EventName": "CPU_CLK_UNHALTED.THREAD",
> + "PublicDescription": "Counts the number of core cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. This event is a component in many key event ratios. The core frequency may change from time to time due to transitions associated with Enhanced Intel SpeedStep Technology or TM2. For this reason this event may have a changing ratio with regards to time. When the core frequency is constant, this event can approximate elapsed time while the core was not in the halt state. It is counted on a dedicated fixed counter, leaving the eight programmable counters available for other events.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of unhalted core clock cycles [This event is alias to CPU_CLK_UNHALTED.CORE_P]",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.THREAD_P",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Thread cycles when thread is not in halt state",
> + "EventCode": "0x3c",
> + "EventName": "CPU_CLK_UNHALTED.THREAD_P",
> + "PublicDescription": "This is an architectural event that counts the number of thread cycles while the thread is not in a halt state. The thread enters the halt state when it is running the HLT instruction. The core frequency may change from time to time due to power or thermal throttling. For this reason, this event may have a changing ratio with regards to wall clock time.",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of instructions retired",
> + "EventName": "INST_RETIRED.ANY",
> + "PEBS": "1",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x1",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Number of instructions retired. Fixed Counter - architectural event",
> + "EventName": "INST_RETIRED.ANY",
> + "PEBS": "1",
> + "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of instructions retired",
> + "EventCode": "0xc0",
> + "EventName": "INST_RETIRED.ANY_P",
> + "PEBS": "1",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Number of instructions retired. General Counter - architectural event",
> + "EventCode": "0xc0",
> + "EventName": "INST_RETIRED.ANY_P",
> + "PEBS": "1",
> + "PublicDescription": "Counts the number of X86 instructions retired - an Architectural PerfMon event. Counting continues during hardware interrupts, traps, and inside interrupt handlers. Notes: INST_RETIRED.ANY is counted by a designated fixed counter freeing up programmable counters to count other events. INST_RETIRED.ANY_P is counted by a programmable counter.",
> + "SampleAfterValue": "2000003",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of occurrences a retired load gets blocked because its address partially overlaps with an older store (size mismatch) - unknown_sta/bad_forward",
> + "EventCode": "0x03",
> + "EventName": "LD_BLOCKS.STORE_FORWARD",
> + "PEBS": "1",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Loads blocked due to overlapping with a preceding store that cannot be forwarded.",
> + "EventCode": "0x03",
> + "EventName": "LD_BLOCKS.STORE_FORWARD",
> + "PublicDescription": "Counts the number of times where store forwarding was prevented for a load operation. The most common case is a load blocked due to the address of memory access (partially) overlapping with a preceding uncompleted store. Note: See the table of not supported store forwards in the Optimization Guide.",
> + "SampleAfterValue": "100003",
> + "UMask": "0x82",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions.",
> + "EventCode": "0xa4",
> + "EventName": "TOPDOWN.BACKEND_BOUND_SLOTS",
> + "PublicDescription": "This event counts a subset of the Topdown Slots event that were not consumed by the back-end pipeline due to lack of back-end resources, as a result of memory subsystem delays, execution units limitations, or other conditions.\nSoftware can use this event as the numerator for the Backend Bound metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "10000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "TMA slots available for an unhalted logical processor. Fixed counter - architectural event",
> + "EventName": "TOPDOWN.SLOTS",
> + "PublicDescription": "Number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method (TMA). Software can use this event as the denominator for the top-level metrics of the TMA method. This architectural event is counted on a designated fixed counter (Fixed Counter 3).",
> + "SampleAfterValue": "10000003",
> + "UMask": "0x4",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "TMA slots available for an unhalted logical processor. General counter - architectural event",
> + "EventCode": "0xa4",
> + "EventName": "TOPDOWN.SLOTS_P",
> + "PublicDescription": "Counts the number of available slots for an unhalted logical processor. The event increments by machine-width of the narrowest pipeline as employed by the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "10000003",
> + "UMask": "0x1",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear.",
> + "EventName": "TOPDOWN_BAD_SPECULATION.ALL",
> + "PublicDescription": "Fixed Counter: Counts the number of issue slots that were not consumed by the backend because allocation is stalled due to a mispredicted jump or a machine clear. Counts all issue slots blocked during this recovery window including relevant microcode flows and while uops are not yet available in the IQ. Also, includes the issue slots that were consumed by the backend but were thrown away because they were younger than the mispredict or machine clear.",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x5",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Counts the number of retirement slots not consumed due to backend stalls",
> + "EventCode": "0xa4",
> + "EventName": "TOPDOWN_BE_BOUND.ALL",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x2",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of retirement slots not consumed due to front end stalls",
> + "EventName": "TOPDOWN_FE_BOUND.ALL",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x6",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Fixed Counter: Counts the number of consumed retirement slots. Similar to UOPS_RETIRED.ALL",
> + "EventName": "TOPDOWN_RETIRING.ALL",
> + "PEBS": "1",
> + "SampleAfterValue": "1000003",
> + "UMask": "0x7",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance for example, as measured by the instructions-per-cycle metric.",
> + "EventCode": "0xc2",
> + "EventName": "UOPS_RETIRED.SLOTS",
> + "PublicDescription": "This event counts a subset of the Topdown Slots event that are utilized by operations that eventually get retired (committed) by the processor pipeline. Usually, this event positively correlates with higher performance for example, as measured by the instructions-per-cycle metric.\nSoftware can use this event as the numerator for the Retiring metric (or top-level category) of the Top-down Microarchitecture Analysis method.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0x2",
> + "Unit": "cpu_core"
> + }
> +]
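
On the core side the level-1 top-down fractions fall out of the slots
events above: Frontend Bound = IDQ_BUBBLES.CORE / TOPDOWN.SLOTS, Backend
Bound = TOPDOWN.BACKEND_BOUND_SLOTS / TOPDOWN.SLOTS, Retiring =
UOPS_RETIRED.SLOTS / TOPDOWN.SLOTS, with Bad Speculation as the remainder.
A sketch of collecting the raw counts (fixed-counter grouping constraints
may apply):

  $ perf stat -e cpu_core/topdown.slots/ \
              -e cpu_core/idq_bubbles.core/ \
              -e cpu_core/topdown.backend_bound_slots/ \
              -e cpu_core/uops_retired.slots/ -- ./a.out
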
> diff --git a/tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json b/tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json
> new file mode 100644
> index 000000000000..bb9458799f1c
> --- /dev/null
> +++ b/tools/perf/pmu-events/arch/x86/lunarlake/virtual-memory.json
> @@ -0,0 +1,56 @@
> +[
> + {
> + "BriefDescription": "Counts the number of page walks completed due to load DTLB misses to any page size.",
> + "EventCode": "0x08",
> + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts the number of page walks completed due to loads (including SW prefetches) whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
> + "SampleAfterValue": "200003",
> + "UMask": "0xe",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Load miss in all TLB levels causes a page walk that completes. (All page sizes)",
> + "EventCode": "0x12",
> + "EventName": "DTLB_LOAD_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data loads. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
> + "SampleAfterValue": "100003",
> + "UMask": "0xe",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of page walks completed due to store DTLB misses to any page size.",
> + "EventCode": "0x49",
> + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts the number of page walks completed due to stores whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0xe",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Store misses in all TLB levels causes a page walk that completes. (All page sizes)",
> + "EventCode": "0x13",
> + "EventName": "DTLB_STORE_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts completed page walks (all page sizes) caused by demand data stores. This implies it missed in the DTLB and further levels of TLB. The page walk can end with or without a fault.",
> + "SampleAfterValue": "100003",
> + "UMask": "0xe",
> + "Unit": "cpu_core"
> + },
> + {
> + "BriefDescription": "Counts the number of page walks completed due to instruction fetch misses to any page size.",
> + "EventCode": "0x85",
> + "EventName": "ITLB_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts the number of page walks completed due to instruction fetches whose address translations missed in all Translation Lookaside Buffer (TLB) levels and were mapped to any page size. Includes page walks that page fault.",
> + "SampleAfterValue": "2000003",
> + "UMask": "0xe",
> + "Unit": "cpu_atom"
> + },
> + {
> + "BriefDescription": "Code miss in all TLB levels causes a page walk that completes. (All page sizes)",
> + "EventCode": "0x11",
> + "EventName": "ITLB_MISSES.WALK_COMPLETED",
> + "PublicDescription": "Counts completed page walks (all page sizes) caused by a code fetch. This implies it missed in the ITLB (Instruction TLB) and further levels of TLB. The page walk can end with or without a fault.",
> + "SampleAfterValue": "100003",
> + "UMask": "0xe",
> + "Unit": "cpu_core"
> + }
> +]
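
And the completed-walk events give a quick per-source view of TLB
behaviour; sketch:

  $ perf stat -e cpu_core/dtlb_load_misses.walk_completed/ \
              -e cpu_core/dtlb_store_misses.walk_completed/ \
              -e cpu_core/itlb_misses.walk_completed/ -- ./a.out
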
> diff --git a/tools/perf/pmu-events/arch/x86/mapfile.csv b/tools/perf/pmu-events/arch/x86/mapfile.csv
> index 3a8770e29fe8..de9582f183cb 100644
> --- a/tools/perf/pmu-events/arch/x86/mapfile.csv
> +++ b/tools/perf/pmu-events/arch/x86/mapfile.csv
> @@ -19,6 +19,7 @@ GenuineIntel-6-3A,v24,ivybridge,core
> GenuineIntel-6-3E,v23,ivytown,core
> GenuineIntel-6-2D,v23,jaketown,core
> GenuineIntel-6-(57|85),v10,knightslanding,core
> +GenuineIntel-6-BD,v1.00,lunarlake,core
> GenuineIntel-6-A[AC],v1.04,meteorlake,core
> GenuineIntel-6-1[AEF],v3,nehalemep,core
> GenuineIntel-6-2E,v3,nehalemex,core
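
For the mapfile entry: perf matches the 'GenuineIntel-6-BD' key against the
CPUID vendor-family-model string, so this covers family 6, model 0xBD,
which is 189 decimal in /proc/cpuinfo. A quick check on a target machine
(gawk-style sketch; prints 0xBD on a matching part):

  $ awk '/^model[ \t]/ { printf "0x%X\n", $3; exit }' /proc/cpuinfo
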
> --
> 2.42.0.rc2.253.gd59a3bf2b4-goog
>
--
- Arnaldo
^ permalink raw reply [flat|nested] 6+ messages in thread