* [PATCH v2 0/6] arm64: perf: Broadcom Vulcan PMU support
@ 2016-03-24 12:52 Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 1/6] arm64/perf: Changed events naming as per ARM ARM Ashok Kumar
` (5 more replies)
0 siblings, 6 replies; 11+ messages in thread
From: Ashok Kumar @ 2016-03-24 12:52 UTC (permalink / raw)
To: linux-arm-kernel
Cleaned up event naming convention as per ARM ARM.
Added macros for complete ARMv8 recommended implementation defined events.
Common architectural and micro-architectural events exported to /sys
are now filtered using PMCEIDn_EL0.
Added support for Broadcom Vulcan PMU.
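For illustration, a minimal sketch of the PMCEIDn_EL0 based filtering
described above (hypothetical helper, not code from this series;
PMCEID0_EL0 bits [31:0] advertise the common events 0x00-0x1F and
PMCEID1_EL0 bits [31:0] advertise 0x20-0x3F; BIT() is from
<linux/bitops.h>):

static bool armv8pmu_common_event_supported(u32 pmceid0, u32 pmceid1,
					    unsigned int evt)
{
	/* Only common events 0x00-0x3F are advertised in PMCEIDn_EL0. */
	if (evt >= 0x40)
		return false;
	if (evt < 32)
		return pmceid0 & BIT(evt);
	return pmceid1 & BIT(evt - 32);
}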
changes since v1 [1]:
Incorporated the following review comments from Will.
* Cleaned up event naming convention as per ARM ARM.
* Filtered common events based on PMCEIDn_EL0.
* Removed exposing implementation defined events to /sys.
[1] http://www.spinics.net/lists/arm-kernel/msg490954.html
Ashok Kumar (6):
arm64/perf: Changed events naming as per ARM ARM
arm64/perf: Define complete ARMv8 recommended implementation defined
events
arm64/perf: Filter common events based on PMCEIDn_EL0
arm64/perf: Add Broadcom Vulcan PMU support
arm64: dts: Add Broadcom Vulcan PMU in dts
Documentation: arm64: pmu: Add Broadcom Vulcan PMU binding
Documentation/devicetree/bindings/arm/pmu.txt | 3 +-
arch/arm64/boot/dts/broadcom/vulcan.dtsi | 2 +-
arch/arm64/kernel/perf_event.c | 717 +++++++++++++++++---------
3 files changed, 485 insertions(+), 237 deletions(-)
--
2.1.0
* [PATCH v2 1/6] arm64/perf: Changed events naming as per ARM ARM
2016-03-24 12:52 [PATCH v2 0/6] arm64: perf: Broadcom Vulcan PMU support Ashok Kumar
@ 2016-03-24 12:52 ` Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 2/6] arm64/perf: Define complete ARMv8 recommended implementation defined events Ashok Kumar
` (4 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Ashok Kumar @ 2016-03-24 12:52 UTC (permalink / raw)
To: linux-arm-kernel
Changed all the event name definitions to follow the ARM ARM
naming convention.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
---
arch/arm64/kernel/perf_event.c | 302 ++++++++++++++++++++---------------------
1 file changed, 151 insertions(+), 151 deletions(-)
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 1cc61fc..d1a93cf 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -31,43 +31,43 @@
*/
/* Required events. */
-#define ARMV8_PMUV3_PERFCTR_PMNC_SW_INCR 0x00
-#define ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL 0x03
-#define ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS 0x04
-#define ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED 0x10
-#define ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES 0x11
-#define ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED 0x12
+#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x00
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x03
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x04
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x10
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x11
+#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x12
/* At least one of the following is required. */
-#define ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED 0x08
-#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x1B
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x08
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x1B
/* Common architectural events. */
-#define ARMV8_PMUV3_PERFCTR_MEM_READ 0x06
-#define ARMV8_PMUV3_PERFCTR_MEM_WRITE 0x07
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x06
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x07
#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x09
-#define ARMV8_PMUV3_PERFCTR_EXC_EXECUTED 0x0A
-#define ARMV8_PMUV3_PERFCTR_CID_WRITE 0x0B
-#define ARMV8_PMUV3_PERFCTR_PC_WRITE 0x0C
-#define ARMV8_PMUV3_PERFCTR_PC_IMM_BRANCH 0x0D
-#define ARMV8_PMUV3_PERFCTR_PC_PROC_RETURN 0x0E
-#define ARMV8_PMUV3_PERFCTR_MEM_UNALIGNED_ACCESS 0x0F
-#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE 0x1C
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x0A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x0B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x0C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x0D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x0E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x0F
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x1C
#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x21
/* Common microarchitectural events. */
-#define ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL 0x01
-#define ARMV8_PMUV3_PERFCTR_ITLB_REFILL 0x02
-#define ARMV8_PMUV3_PERFCTR_DTLB_REFILL 0x05
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x01
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x02
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x05
#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x13
-#define ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS 0x14
-#define ARMV8_PMUV3_PERFCTR_L1_DCACHE_WB 0x15
-#define ARMV8_PMUV3_PERFCTR_L2_CACHE_ACCESS 0x16
-#define ARMV8_PMUV3_PERFCTR_L2_CACHE_REFILL 0x17
-#define ARMV8_PMUV3_PERFCTR_L2_CACHE_WB 0x18
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x14
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x15
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x16
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x17
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x18
#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x19
-#define ARMV8_PMUV3_PERFCTR_MEM_ERROR 0x1A
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x1A
#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x1D
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x1F
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x20
@@ -83,71 +83,71 @@
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x2B
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x2C
#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x2D
-#define ARMV8_PMUV3_PERFCTR_L21_TLB_REFILL 0x2E
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x2E
#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x2F
-#define ARMV8_PMUV3_PERFCTR_L21_TLB 0x30
-
-/* ARMv8 implementation defined event types. */
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_LD 0x40
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_ST 0x41
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_LD 0x42
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_ST 0x43
-#define ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_LD 0x4C
-#define ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_ST 0x4D
-#define ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_LD 0x4E
-#define ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_ST 0x4F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x30
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x40
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x41
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x42
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x43
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x4C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x4D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x4E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x4F
/* ARMv8 Cortex-A53 specific event types. */
#define ARMV8_A53_PERFCTR_PREFETCH_LINEFILL 0xC2
/* ARMv8 Cavium ThunderX specific event types. */
-#define ARMV8_THUNDER_PERFCTR_L1_DCACHE_MISS_ST 0xE9
-#define ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_ACCESS 0xEA
-#define ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_MISS 0xEB
-#define ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_ACCESS 0xEC
-#define ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_MISS 0xED
+#define ARMV8_THUNDER_PERFCTR_L1D_CACHE_MISS_ST 0xE9
+#define ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_ACCESS 0xEA
+#define ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_MISS 0xEB
+#define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS 0xEC
+#define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS 0xED
/* PMUv3 HW events mapping. */
static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
/* ARM Cortex-A53 HW events mapping. */
static const unsigned armv8_a53_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
[PERF_COUNT_HW_BUS_CYCLES] = ARMV8_PMUV3_PERFCTR_BUS_CYCLES,
};
/* ARM Cortex-A57 and Cortex-A72 events mapping. */
static const unsigned armv8_a57_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
[PERF_COUNT_HW_BUS_CYCLES] = ARMV8_PMUV3_PERFCTR_BUS_CYCLES,
};
static const unsigned armv8_thunder_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = ARMV8_PMUV3_PERFCTR_STALL_FRONTEND,
[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = ARMV8_PMUV3_PERFCTR_STALL_BACKEND,
};
@@ -157,15 +157,15 @@ static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
static const unsigned armv8_a53_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
@@ -173,21 +173,21 @@ static const unsigned armv8_a53_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
[C(L1D)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_A53_PERFCTR_PREFETCH_LINEFILL,
- [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS,
- [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL,
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
- [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_ITLB_REFILL,
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
static const unsigned armv8_a57_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
@@ -195,23 +195,23 @@ static const unsigned armv8_a57_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_LD,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_LD,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_ST,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_ST,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR,
- [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS,
- [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL,
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
- [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_LD,
- [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_ST,
+ [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
- [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_ITLB_REFILL,
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
static const unsigned armv8_thunder_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
@@ -219,29 +219,29 @@ static const unsigned armv8_thunder_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_LD,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_LD,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_ST,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1_DCACHE_MISS_ST,
- [C(L1D)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_ACCESS,
- [C(L1D)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_MISS,
-
- [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS,
- [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL,
- [C(L1I)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_ACCESS,
- [C(L1I)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_MISS,
-
- [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_LD,
- [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_LD,
- [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_ST,
- [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_ST,
-
- [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_ITLB_REFILL,
-
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1D_CACHE_MISS_ST,
+ [C(L1D)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_ACCESS,
+ [C(L1D)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_MISS,
+
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
+ [C(L1I)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS,
+ [C(L1I)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS,
+
+ [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
+ [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
+
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
+
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
#define ARMV8_EVENT_ATTR_RESOLVE(m) #m
@@ -249,35 +249,35 @@ static const unsigned armv8_thunder_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
PMU_EVENT_ATTR_STRING(name, armv8_event_attr_##name, \
"event=" ARMV8_EVENT_ATTR_RESOLVE(config))
-ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_PMNC_SW_INCR);
-ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL);
-ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_ITLB_REFILL);
-ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL);
-ARMV8_EVENT_ATTR(l1d_cache, ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS);
-ARMV8_EVENT_ATTR(l1d_tlb_refill, ARMV8_PMUV3_PERFCTR_DTLB_REFILL);
-ARMV8_EVENT_ATTR(ld_retired, ARMV8_PMUV3_PERFCTR_MEM_READ);
-ARMV8_EVENT_ATTR(st_retired, ARMV8_PMUV3_PERFCTR_MEM_WRITE);
-ARMV8_EVENT_ATTR(inst_retired, ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED);
+ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR);
+ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL);
+ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL);
+ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL);
+ARMV8_EVENT_ATTR(l1d_cache, ARMV8_PMUV3_PERFCTR_L1D_CACHE);
+ARMV8_EVENT_ATTR(l1d_tlb_refill, ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL);
+ARMV8_EVENT_ATTR(ld_retired, ARMV8_PMUV3_PERFCTR_LD_RETIRED);
+ARMV8_EVENT_ATTR(st_retired, ARMV8_PMUV3_PERFCTR_ST_RETIRED);
+ARMV8_EVENT_ATTR(inst_retired, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
ARMV8_EVENT_ATTR(exc_taken, ARMV8_PMUV3_PERFCTR_EXC_TAKEN);
-ARMV8_EVENT_ATTR(exc_return, ARMV8_PMUV3_PERFCTR_EXC_EXECUTED);
-ARMV8_EVENT_ATTR(cid_write_retired, ARMV8_PMUV3_PERFCTR_CID_WRITE);
-ARMV8_EVENT_ATTR(pc_write_retired, ARMV8_PMUV3_PERFCTR_PC_WRITE);
-ARMV8_EVENT_ATTR(br_immed_retired, ARMV8_PMUV3_PERFCTR_PC_IMM_BRANCH);
-ARMV8_EVENT_ATTR(br_return_retired, ARMV8_PMUV3_PERFCTR_PC_PROC_RETURN);
-ARMV8_EVENT_ATTR(unaligned_ldst_retired, ARMV8_PMUV3_PERFCTR_MEM_UNALIGNED_ACCESS);
-ARMV8_EVENT_ATTR(br_mis_pred, ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED);
-ARMV8_EVENT_ATTR(cpu_cycles, ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES);
-ARMV8_EVENT_ATTR(br_pred, ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED);
+ARMV8_EVENT_ATTR(exc_return, ARMV8_PMUV3_PERFCTR_EXC_RETURN);
+ARMV8_EVENT_ATTR(cid_write_retired, ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED);
+ARMV8_EVENT_ATTR(pc_write_retired, ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED);
+ARMV8_EVENT_ATTR(br_immed_retired, ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED);
+ARMV8_EVENT_ATTR(br_return_retired, ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED);
+ARMV8_EVENT_ATTR(unaligned_ldst_retired, ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED);
+ARMV8_EVENT_ATTR(br_mis_pred, ARMV8_PMUV3_PERFCTR_BR_MIS_PRED);
+ARMV8_EVENT_ATTR(cpu_cycles, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+ARMV8_EVENT_ATTR(br_pred, ARMV8_PMUV3_PERFCTR_BR_PRED);
ARMV8_EVENT_ATTR(mem_access, ARMV8_PMUV3_PERFCTR_MEM_ACCESS);
-ARMV8_EVENT_ATTR(l1i_cache, ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS);
-ARMV8_EVENT_ATTR(l1d_cache_wb, ARMV8_PMUV3_PERFCTR_L1_DCACHE_WB);
-ARMV8_EVENT_ATTR(l2d_cache, ARMV8_PMUV3_PERFCTR_L2_CACHE_ACCESS);
-ARMV8_EVENT_ATTR(l2d_cache_refill, ARMV8_PMUV3_PERFCTR_L2_CACHE_REFILL);
-ARMV8_EVENT_ATTR(l2d_cache_wb, ARMV8_PMUV3_PERFCTR_L2_CACHE_WB);
+ARMV8_EVENT_ATTR(l1i_cache, ARMV8_PMUV3_PERFCTR_L1I_CACHE);
+ARMV8_EVENT_ATTR(l1d_cache_wb, ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB);
+ARMV8_EVENT_ATTR(l2d_cache, ARMV8_PMUV3_PERFCTR_L2D_CACHE);
+ARMV8_EVENT_ATTR(l2d_cache_refill, ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL);
+ARMV8_EVENT_ATTR(l2d_cache_wb, ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB);
ARMV8_EVENT_ATTR(bus_access, ARMV8_PMUV3_PERFCTR_BUS_ACCESS);
-ARMV8_EVENT_ATTR(memory_error, ARMV8_PMUV3_PERFCTR_MEM_ERROR);
-ARMV8_EVENT_ATTR(inst_spec, ARMV8_PMUV3_PERFCTR_OP_SPEC);
-ARMV8_EVENT_ATTR(ttbr_write_retired, ARMV8_PMUV3_PERFCTR_TTBR_WRITE);
+ARMV8_EVENT_ATTR(memory_error, ARMV8_PMUV3_PERFCTR_MEMORY_ERROR);
+ARMV8_EVENT_ATTR(inst_spec, ARMV8_PMUV3_PERFCTR_INST_SPEC);
+ARMV8_EVENT_ATTR(ttbr_write_retired, ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED);
ARMV8_EVENT_ATTR(bus_cycles, ARMV8_PMUV3_PERFCTR_BUS_CYCLES);
ARMV8_EVENT_ATTR(chain, ARMV8_PMUV3_PERFCTR_CHAIN);
ARMV8_EVENT_ATTR(l1d_cache_allocate, ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE);
@@ -295,9 +295,9 @@ ARMV8_EVENT_ATTR(l3d_cache_refill, ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL);
ARMV8_EVENT_ATTR(l3d_cache, ARMV8_PMUV3_PERFCTR_L3D_CACHE);
ARMV8_EVENT_ATTR(l3d_cache_wb, ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB);
ARMV8_EVENT_ATTR(l2d_tlb_refill, ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL);
-ARMV8_EVENT_ATTR(l21_tlb_refill, ARMV8_PMUV3_PERFCTR_L21_TLB_REFILL);
+ARMV8_EVENT_ATTR(l2i_tlb_refill, ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL);
ARMV8_EVENT_ATTR(l2d_tlb, ARMV8_PMUV3_PERFCTR_L2D_TLB);
-ARMV8_EVENT_ATTR(l21_tlb, ARMV8_PMUV3_PERFCTR_L21_TLB);
+ARMV8_EVENT_ATTR(l2i_tlb, ARMV8_PMUV3_PERFCTR_L2I_TLB);
static struct attribute *armv8_pmuv3_event_attrs[] = {
&armv8_event_attr_sw_incr.attr.attr,
@@ -346,9 +346,9 @@ static struct attribute *armv8_pmuv3_event_attrs[] = {
&armv8_event_attr_l3d_cache.attr.attr,
&armv8_event_attr_l3d_cache_wb.attr.attr,
&armv8_event_attr_l2d_tlb_refill.attr.attr,
- &armv8_event_attr_l21_tlb_refill.attr.attr,
+ &armv8_event_attr_l2i_tlb_refill.attr.attr,
&armv8_event_attr_l2d_tlb.attr.attr,
- &armv8_event_attr_l21_tlb.attr.attr,
+ &armv8_event_attr_l2i_tlb.attr.attr,
NULL,
};
@@ -719,7 +719,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
unsigned long evtype = hwc->config_base & ARMV8_EVTYPE_EVENT;
/* Always place a cycle counter into the cycle counter. */
- if (evtype == ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES) {
+ if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
if (test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
return -EAGAIN;
--
2.1.0
* [PATCH v2 2/6] arm64/perf: Define complete ARMv8 recommended implementation defined events
2016-03-24 12:52 [PATCH v2 0/6] arm64: perf: Broadcom Vulcan PMU support Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 1/6] arm64/perf: Changed events naming as per ARM ARM Ashok Kumar
@ 2016-03-24 12:52 ` Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0 Ashok Kumar
` (3 subsequent siblings)
5 siblings, 0 replies; 11+ messages in thread
From: Ashok Kumar @ 2016-03-24 12:52 UTC (permalink / raw)
To: linux-arm-kernel
Defined all the ARMv8 recommended implementation defined events
from J3, "ARM recommendations for IMPLEMENTATION DEFINED event numbers",
in the ARMv8 ARM.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
---
arch/arm64/kernel/perf_event.c | 79 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 79 insertions(+)
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index d1a93cf..5358587 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -92,10 +92,89 @@
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x41
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x42
#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x43
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x44
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x45
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x46
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x47
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x48
+
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x4C
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x4D
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x4E
#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x4F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x50
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x51
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x52
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x53
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x56
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x57
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x58
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x5C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x5D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x5E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x5F
+
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x60
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x61
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x62
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x63
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x64
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x65
+
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x66
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x67
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x68
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x69
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x6A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x6C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x6D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x6E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x6F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x70
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x71
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x72
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x73
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x74
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x75
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x76
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x77
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x78
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x79
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x7A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x7C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x7D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x7E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x81
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x82
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x83
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x84
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x86
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x87
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x88
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x8A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x8B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x8C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x8D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x8E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x8F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x90
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x91
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0xA0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0xA1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0xA2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0xA3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0xA6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0xA7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0xA8
/* ARMv8 Cortex-A53 specific event types. */
#define ARMV8_A53_PERFCTR_PREFETCH_LINEFILL 0xC2
--
2.1.0
* [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0
2016-03-24 12:52 [PATCH v2 0/6] arm64: perf: Broadcom Vulcan PMU support Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 1/6] arm64/perf: Changed events naming as per ARM ARM Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 2/6] arm64/perf: Define complete ARMv8 recommended implementation defined events Ashok Kumar
@ 2016-03-24 12:52 ` Ashok Kumar
2016-03-24 15:28 ` Suzuki K. Poulose
2016-03-24 16:14 ` Mark Rutland
2016-03-24 12:52 ` [PATCH v2 4/6] arm64/perf: Add Broadcom Vulcan PMU support Ashok Kumar
` (2 subsequent siblings)
5 siblings, 2 replies; 11+ messages in thread
From: Ashok Kumar @ 2016-03-24 12:52 UTC (permalink / raw)
To: linux-arm-kernel
The complete set of common architectural and micro-architectural
events is filtered based on PMCEIDn_EL0 and copied to a new
structure, which is exposed through /sys.
The function that derives the event bitmap from PMCEIDn_EL0 is
executed on the CPUs backing the PMU being initialized, to support
heterogeneous PMUs.
The armv8_pmuv3_event_attrs array is now indexed by event number,
so its entries must be kept in event number order.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
---
arch/arm64/kernel/perf_event.c | 281 ++++++++++++++++++++++++++++-------------
1 file changed, 195 insertions(+), 86 deletions(-)
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index 5358587..a025ec2 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -378,62 +378,106 @@ ARMV8_EVENT_ATTR(l2i_tlb_refill, ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL);
ARMV8_EVENT_ATTR(l2d_tlb, ARMV8_PMUV3_PERFCTR_L2D_TLB);
ARMV8_EVENT_ATTR(l2i_tlb, ARMV8_PMUV3_PERFCTR_L2I_TLB);
-static struct attribute *armv8_pmuv3_event_attrs[] = {
- &armv8_event_attr_sw_incr.attr.attr,
- &armv8_event_attr_l1i_cache_refill.attr.attr,
- &armv8_event_attr_l1i_tlb_refill.attr.attr,
- &armv8_event_attr_l1d_cache_refill.attr.attr,
- &armv8_event_attr_l1d_cache.attr.attr,
- &armv8_event_attr_l1d_tlb_refill.attr.attr,
- &armv8_event_attr_ld_retired.attr.attr,
- &armv8_event_attr_st_retired.attr.attr,
- &armv8_event_attr_inst_retired.attr.attr,
- &armv8_event_attr_exc_taken.attr.attr,
- &armv8_event_attr_exc_return.attr.attr,
- &armv8_event_attr_cid_write_retired.attr.attr,
- &armv8_event_attr_pc_write_retired.attr.attr,
- &armv8_event_attr_br_immed_retired.attr.attr,
- &armv8_event_attr_br_return_retired.attr.attr,
- &armv8_event_attr_unaligned_ldst_retired.attr.attr,
- &armv8_event_attr_br_mis_pred.attr.attr,
- &armv8_event_attr_cpu_cycles.attr.attr,
- &armv8_event_attr_br_pred.attr.attr,
- &armv8_event_attr_mem_access.attr.attr,
- &armv8_event_attr_l1i_cache.attr.attr,
- &armv8_event_attr_l1d_cache_wb.attr.attr,
- &armv8_event_attr_l2d_cache.attr.attr,
- &armv8_event_attr_l2d_cache_refill.attr.attr,
- &armv8_event_attr_l2d_cache_wb.attr.attr,
- &armv8_event_attr_bus_access.attr.attr,
- &armv8_event_attr_memory_error.attr.attr,
- &armv8_event_attr_inst_spec.attr.attr,
- &armv8_event_attr_ttbr_write_retired.attr.attr,
- &armv8_event_attr_bus_cycles.attr.attr,
- &armv8_event_attr_chain.attr.attr,
- &armv8_event_attr_l1d_cache_allocate.attr.attr,
- &armv8_event_attr_l2d_cache_allocate.attr.attr,
- &armv8_event_attr_br_retired.attr.attr,
- &armv8_event_attr_br_mis_pred_retired.attr.attr,
- &armv8_event_attr_stall_frontend.attr.attr,
- &armv8_event_attr_stall_backend.attr.attr,
- &armv8_event_attr_l1d_tlb.attr.attr,
- &armv8_event_attr_l1i_tlb.attr.attr,
- &armv8_event_attr_l2i_cache.attr.attr,
- &armv8_event_attr_l2i_cache_refill.attr.attr,
- &armv8_event_attr_l3d_cache_allocate.attr.attr,
- &armv8_event_attr_l3d_cache_refill.attr.attr,
- &armv8_event_attr_l3d_cache.attr.attr,
- &armv8_event_attr_l3d_cache_wb.attr.attr,
- &armv8_event_attr_l2d_tlb_refill.attr.attr,
- &armv8_event_attr_l2i_tlb_refill.attr.attr,
- &armv8_event_attr_l2d_tlb.attr.attr,
- &armv8_event_attr_l2i_tlb.attr.attr,
- NULL,
-};
-
-static struct attribute_group armv8_pmuv3_events_attr_group = {
- .name = "events",
- .attrs = armv8_pmuv3_event_attrs,
+#define ARMV8_PMUV3_MAX_COMMON_EVENTS 0x40
+static struct attribute *armv8_pmuv3_event_attrs[ARMV8_PMUV3_MAX_COMMON_EVENTS] = {
+ [ARMV8_PMUV3_PERFCTR_SW_INCR] =
+ &armv8_event_attr_sw_incr.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL] =
+ &armv8_event_attr_l1i_cache_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL] =
+ &armv8_event_attr_l1i_tlb_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL] =
+ &armv8_event_attr_l1d_cache_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1D_CACHE] =
+ &armv8_event_attr_l1d_cache.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL] =
+ &armv8_event_attr_l1d_tlb_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_LD_RETIRED] =
+ &armv8_event_attr_ld_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_ST_RETIRED] =
+ &armv8_event_attr_st_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_INST_RETIRED] =
+ &armv8_event_attr_inst_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_EXC_TAKEN] =
+ &armv8_event_attr_exc_taken.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_EXC_RETURN] =
+ &armv8_event_attr_exc_return.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED] =
+ &armv8_event_attr_cid_write_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED] =
+ &armv8_event_attr_pc_write_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED] =
+ &armv8_event_attr_br_immed_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED] =
+ &armv8_event_attr_br_return_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED] =
+ &armv8_event_attr_unaligned_ldst_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BR_MIS_PRED] =
+ &armv8_event_attr_br_mis_pred.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_CPU_CYCLES] =
+ &armv8_event_attr_cpu_cycles.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BR_PRED] =
+ &armv8_event_attr_br_pred.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_MEM_ACCESS] =
+ &armv8_event_attr_mem_access.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1I_CACHE] =
+ &armv8_event_attr_l1i_cache.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB] =
+ &armv8_event_attr_l1d_cache_wb.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2D_CACHE] =
+ &armv8_event_attr_l2d_cache.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL] =
+ &armv8_event_attr_l2d_cache_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB] =
+ &armv8_event_attr_l2d_cache_wb.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BUS_ACCESS] =
+ &armv8_event_attr_bus_access.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_MEMORY_ERROR] =
+ &armv8_event_attr_memory_error.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_INST_SPEC] =
+ &armv8_event_attr_inst_spec.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED] =
+ &armv8_event_attr_ttbr_write_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BUS_CYCLES] =
+ &armv8_event_attr_bus_cycles.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_CHAIN] =
+ &armv8_event_attr_chain.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE] =
+ &armv8_event_attr_l1d_cache_allocate.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE] =
+ &armv8_event_attr_l2d_cache_allocate.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BR_RETIRED] =
+ &armv8_event_attr_br_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_BR_MIS_PRED_RETIRED] =
+ &armv8_event_attr_br_mis_pred_retired.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_STALL_FRONTEND] =
+ &armv8_event_attr_stall_frontend.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_STALL_BACKEND] =
+ &armv8_event_attr_stall_backend.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1D_TLB] =
+ &armv8_event_attr_l1d_tlb.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L1I_TLB] =
+ &armv8_event_attr_l1i_tlb.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2I_CACHE] =
+ &armv8_event_attr_l2i_cache.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2I_CACHE_REFILL] =
+ &armv8_event_attr_l2i_cache_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L3D_CACHE_ALLOCATE] =
+ &armv8_event_attr_l3d_cache_allocate.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL] =
+ &armv8_event_attr_l3d_cache_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L3D_CACHE] =
+ &armv8_event_attr_l3d_cache.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB] =
+ &armv8_event_attr_l3d_cache_wb.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL] =
+ &armv8_event_attr_l2d_tlb_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL] =
+ &armv8_event_attr_l2i_tlb_refill.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2D_TLB] =
+ &armv8_event_attr_l2d_tlb.attr.attr,
+ [ARMV8_PMUV3_PERFCTR_L2I_TLB] =
+ &armv8_event_attr_l2i_tlb.attr.attr,
};
PMU_FORMAT_ATTR(event, "config:0-9");
@@ -448,12 +492,6 @@ static struct attribute_group armv8_pmuv3_format_attr_group = {
.attrs = armv8_pmuv3_format_attrs,
};
-static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
- &armv8_pmuv3_events_attr_group,
- &armv8_pmuv3_format_attr_group,
- NULL,
-};
-
/*
* Perf Events' indices
*/
@@ -522,6 +560,18 @@ static inline void armv8pmu_pmcr_write(u32 val)
asm volatile("msr pmcr_el0, %0" :: "r" (val));
}
+static inline u32 armv8pmu_pmceidn_read(int reg)
+{
+ u32 val = 0;
+
+ if (reg == 0)
+ asm volatile("mrs %0, pmceid0_el0" : "=r" (val));
+ else if (reg == 1)
+ asm volatile("mrs %0, pmceid1_el0" : "=r" (val));
+
+ return val;
+}
+
static inline int armv8pmu_has_overflowed(u32 pmovsr)
{
return pmovsr & ARMV8_OVERFLOWED_MASK;
@@ -890,26 +940,53 @@ static int armv8_thunder_map_event(struct perf_event *event)
ARMV8_EVTYPE_EVENT);
}
-static void armv8pmu_read_num_pmnc_events(void *info)
+static unsigned int armv8pmu_read_num_pmnc_events(void)
{
- int *nb_cnt = info;
-
+ unsigned int nb_cnt;
/* Read the nb of CNTx counters supported from PMNC */
- *nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
+ nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMCR_N_SHIFT) & ARMV8_PMCR_N_MASK;
/* Add the CPU cycles counter */
- *nb_cnt += 1;
+ nb_cnt += 1;
+
+ return nb_cnt;
+}
+
+static void armv8pmu_read_common_events_bitmap(unsigned long *bmp)
+{
+ u32 reg[2];
+
+ reg[0] = armv8pmu_pmceidn_read(0);
+ reg[1] = armv8pmu_pmceidn_read(1);
+
+ bitmap_from_u32array(bmp, ARMV8_PMUV3_MAX_COMMON_EVENTS,
+ reg, ARRAY_SIZE(reg));
+ return;
}
-static int armv8pmu_probe_num_events(struct arm_pmu *arm_pmu)
+struct armv8pmu_probe_pmu_data {
+ int num_evt_cntrs;
+ DECLARE_BITMAP(events_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
+};
+
+static void armv8pmu_probe_pmu(void *info)
{
- return smp_call_function_any(&arm_pmu->supported_cpus,
- armv8pmu_read_num_pmnc_events,
- &arm_pmu->num_events, 1);
+ struct armv8pmu_probe_pmu_data *data = info;
+
+ data->num_evt_cntrs = armv8pmu_read_num_pmnc_events();
+ armv8pmu_read_common_events_bitmap(data->events_bitmap);
}
-static void armv8_pmu_init(struct arm_pmu *cpu_pmu)
+static int armv8_pmu_init(struct arm_pmu *cpu_pmu)
{
+ u32 evt, idx = 0;
+ struct attribute **event_attrs;
+ struct attribute_group *events_attr_group;
+ const struct attribute_group **attr_groups;
+ struct device *dev = &cpu_pmu->plat_device->dev;
+ struct armv8pmu_probe_pmu_data pmu_data;
+ int error;
+
cpu_pmu->handle_irq = armv8pmu_handle_irq,
cpu_pmu->enable = armv8pmu_enable_event,
cpu_pmu->disable = armv8pmu_disable_event,
@@ -921,50 +998,82 @@ static void armv8_pmu_init(struct arm_pmu *cpu_pmu)
cpu_pmu->reset = armv8pmu_reset,
cpu_pmu->max_period = (1LLU << 32) - 1,
cpu_pmu->set_event_filter = armv8pmu_set_event_filter;
+
+ error = smp_call_function_any(&cpu_pmu->supported_cpus,
+ armv8pmu_probe_pmu,
+ &pmu_data, 1);
+ if (error)
+ goto out;
+
+ cpu_pmu->num_events = pmu_data.num_evt_cntrs;
+
+ event_attrs = devm_kcalloc(dev, bitmap_weight(pmu_data.events_bitmap,
+ ARMV8_PMUV3_MAX_COMMON_EVENTS) + 1,
+ sizeof(*event_attrs), GFP_KERNEL);
+ if (!event_attrs)
+ goto mem_out;
+
+ events_attr_group = devm_kzalloc(dev, sizeof(*events_attr_group),
+ GFP_KERNEL);
+ if (!events_attr_group)
+ goto mem_out;
+
+ attr_groups = devm_kcalloc(dev, 3, sizeof(*attr_groups), GFP_KERNEL);
+ if (!attr_groups)
+ goto mem_out;
+
+ for_each_set_bit(evt, pmu_data.events_bitmap,
+ ARMV8_PMUV3_MAX_COMMON_EVENTS)
+ event_attrs[idx++] = armv8_pmuv3_event_attrs[evt];
+
+ events_attr_group->name = "events";
+ events_attr_group->attrs = event_attrs;
+
+ attr_groups[0] = events_attr_group;
+ attr_groups[1] = &armv8_pmuv3_format_attr_group;
+
+ cpu_pmu->pmu.attr_groups = attr_groups;
+
+ return 0;
+mem_out:
+ error = -ENOMEM;
+out:
+ return error;
}
static int armv8_pmuv3_init(struct arm_pmu *cpu_pmu)
{
- armv8_pmu_init(cpu_pmu);
cpu_pmu->name = "armv8_pmuv3";
cpu_pmu->map_event = armv8_pmuv3_map_event;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8_pmu_init(cpu_pmu);
}
static int armv8_a53_pmu_init(struct arm_pmu *cpu_pmu)
{
- armv8_pmu_init(cpu_pmu);
cpu_pmu->name = "armv8_cortex_a53";
cpu_pmu->map_event = armv8_a53_map_event;
- cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8_pmu_init(cpu_pmu);
}
static int armv8_a57_pmu_init(struct arm_pmu *cpu_pmu)
{
- armv8_pmu_init(cpu_pmu);
cpu_pmu->name = "armv8_cortex_a57";
cpu_pmu->map_event = armv8_a57_map_event;
- cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8_pmu_init(cpu_pmu);
}
static int armv8_a72_pmu_init(struct arm_pmu *cpu_pmu)
{
- armv8_pmu_init(cpu_pmu);
cpu_pmu->name = "armv8_cortex_a72";
cpu_pmu->map_event = armv8_a57_map_event;
- cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8_pmu_init(cpu_pmu);
}
static int armv8_thunder_pmu_init(struct arm_pmu *cpu_pmu)
{
- armv8_pmu_init(cpu_pmu);
cpu_pmu->name = "armv8_cavium_thunder";
cpu_pmu->map_event = armv8_thunder_map_event;
- cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8_pmu_init(cpu_pmu);
}
static const struct of_device_id armv8_pmu_of_device_ids[] = {
--
2.1.0
* [PATCH v2 4/6] arm64/perf: Add Broadcom Vulcan PMU support
2016-03-24 12:52 [PATCH v2 0/6] arm64: perf: Broadcom Vulcan PMU support Ashok Kumar
` (2 preceding siblings ...)
2016-03-24 12:52 ` [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0 Ashok Kumar
@ 2016-03-24 12:52 ` Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 5/6] arm64: dts: Add Broadcom Vulcan PMU in dts Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 6/6] Documentation: arm64: pmu: Add Broadcom Vulcan PMU binding Ashok Kumar
5 siblings, 0 replies; 11+ messages in thread
From: Ashok Kumar @ 2016-03-24 12:52 UTC (permalink / raw)
To: linux-arm-kernel
Broadcom Vulcan uses ARMv8 PMUv3 and supports most of
the ARMv8 recommended implementation defined events.
Added Vulcan event mappings for the perf map and the perf cache map.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
---
arch/arm64/kernel/perf_event.c | 59 ++++++++++++++++++++++++++++++++++++++++++
1 file changed, 59 insertions(+)
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index a025ec2..e8b985c 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -231,6 +231,20 @@ static const unsigned armv8_thunder_perf_map[PERF_COUNT_HW_MAX] = {
[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = ARMV8_PMUV3_PERFCTR_STALL_BACKEND,
};
+/* Broadcom Vulcan events mapping */
+static const unsigned armv8_vulcan_perf_map[PERF_COUNT_HW_MAX] = {
+ PERF_MAP_ALL_UNSUPPORTED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_BR_RETIRED,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [PERF_COUNT_HW_BUS_CYCLES] = ARMV8_PMUV3_PERFCTR_BUS_CYCLES,
+ [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = ARMV8_PMUV3_PERFCTR_STALL_FRONTEND,
+ [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = ARMV8_PMUV3_PERFCTR_STALL_BACKEND,
+};
+
static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
@@ -323,6 +337,36 @@ static const unsigned armv8_thunder_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
+static const unsigned armv8_vulcan_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ PERF_CACHE_MAP_ALL_UNSUPPORTED,
+
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR,
+
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
+
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
+ [C(ITLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB,
+
+ [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
+ [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
+
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+
+ [C(NODE)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD,
+ [C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR,
+};
+
#define ARMV8_EVENT_ATTR_RESOLVE(m) #m
#define ARMV8_EVENT_ATTR(name, config) \
PMU_EVENT_ATTR_STRING(name, armv8_event_attr_##name, \
@@ -940,6 +984,13 @@ static int armv8_thunder_map_event(struct perf_event *event)
ARMV8_EVTYPE_EVENT);
}
+static int armv8_vulcan_map_event(struct perf_event *event)
+{
+ return armpmu_map_event(event, &armv8_vulcan_perf_map,
+ &armv8_vulcan_perf_cache_map,
+ ARMV8_EVTYPE_EVENT);
+}
+
static unsigned int armv8pmu_read_num_pmnc_events(void)
{
unsigned int nb_cnt;
@@ -1076,12 +1127,20 @@ static int armv8_thunder_pmu_init(struct arm_pmu *cpu_pmu)
return armv8_pmu_init(cpu_pmu);
}
+static int armv8_vulcan_pmu_init(struct arm_pmu *cpu_pmu)
+{
+ cpu_pmu->name = "armv8_brcm_vulcan";
+ cpu_pmu->map_event = armv8_vulcan_map_event;
+ return armv8_pmu_init(cpu_pmu);
+}
+
static const struct of_device_id armv8_pmu_of_device_ids[] = {
{.compatible = "arm,armv8-pmuv3", .data = armv8_pmuv3_init},
{.compatible = "arm,cortex-a53-pmu", .data = armv8_a53_pmu_init},
{.compatible = "arm,cortex-a57-pmu", .data = armv8_a57_pmu_init},
{.compatible = "arm,cortex-a72-pmu", .data = armv8_a72_pmu_init},
{.compatible = "cavium,thunder-pmu", .data = armv8_thunder_pmu_init},
+ {.compatible = "brcm,vulcan-pmu", .data = armv8_vulcan_pmu_init},
{},
};
--
2.1.0
* [PATCH v2 5/6] arm64: dts: Add Broadcom Vulcan PMU in dts
2016-03-24 12:52 [PATCH v2 0/6] arm64: perf: Broadcom Vulcan PMU support Ashok Kumar
` (3 preceding siblings ...)
2016-03-24 12:52 ` [PATCH v2 4/6] arm64/perf: Add Broadcom Vulcan PMU support Ashok Kumar
@ 2016-03-24 12:52 ` Ashok Kumar
2016-03-24 12:52 ` [PATCH v2 6/6] Documentation: arm64: pmu: Add Broadcom Vulcan PMU binding Ashok Kumar
5 siblings, 0 replies; 11+ messages in thread
From: Ashok Kumar @ 2016-03-24 12:52 UTC (permalink / raw)
To: linux-arm-kernel
Add "brcm,vulcan-pmu" compatible string for Broadcom Vulcan PMU.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
---
arch/arm64/boot/dts/broadcom/vulcan.dtsi | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/arm64/boot/dts/broadcom/vulcan.dtsi b/arch/arm64/boot/dts/broadcom/vulcan.dtsi
index 85820e2..34e11a9 100644
--- a/arch/arm64/boot/dts/broadcom/vulcan.dtsi
+++ b/arch/arm64/boot/dts/broadcom/vulcan.dtsi
@@ -86,7 +86,7 @@
};
pmu {
- compatible = "arm,armv8-pmuv3";
+ compatible = "brcm,vulcan-pmu", "arm,armv8-pmuv3";
interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_HIGH>; /* PMU overflow */
};
--
2.1.0
* [PATCH v2 6/6] Documentation: arm64: pmu: Add Broadcom Vulcan PMU binding
2016-03-24 12:52 [PATCH v2 0/6] arm64: perf: Broadcom Vulcan PMU support Ashok Kumar
` (4 preceding siblings ...)
2016-03-24 12:52 ` [PATCH v2 5/6] arm64: dts: Add Broadcom Vulcan PMU in dts Ashok Kumar
@ 2016-03-24 12:52 ` Ashok Kumar
5 siblings, 0 replies; 11+ messages in thread
From: Ashok Kumar @ 2016-03-24 12:52 UTC (permalink / raw)
To: linux-arm-kernel
Document the compatible string for the Broadcom Vulcan PMU.
Also arrange the list in alphabetical order.
Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
---
Documentation/devicetree/bindings/arm/pmu.txt | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/Documentation/devicetree/bindings/arm/pmu.txt b/Documentation/devicetree/bindings/arm/pmu.txt
index d3999a1..b73a7c7 100644
--- a/Documentation/devicetree/bindings/arm/pmu.txt
+++ b/Documentation/devicetree/bindings/arm/pmu.txt
@@ -22,10 +22,11 @@ Required properties:
"arm,arm11mpcore-pmu"
"arm,arm1176-pmu"
"arm,arm1136-pmu"
+ "brcm,vulcan-pmu"
+ "cavium,thunder-pmu"
"qcom,scorpion-pmu"
"qcom,scorpion-mp-pmu"
"qcom,krait-pmu"
- "cavium,thunder-pmu"
- interrupts : 1 combined interrupt or 1 per core. If the interrupt is a per-cpu
interrupt (PPI) then 1 interrupt should be specified.
--
2.1.0
* [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0
2016-03-24 12:52 ` [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0 Ashok Kumar
@ 2016-03-24 15:28 ` Suzuki K. Poulose
2016-03-28 12:31 ` Ashok Sekar
2016-03-24 16:14 ` Mark Rutland
1 sibling, 1 reply; 11+ messages in thread
From: Suzuki K. Poulose @ 2016-03-24 15:28 UTC (permalink / raw)
To: linux-arm-kernel
On 24/03/16 12:52, Ashok Kumar wrote:
> The complete set of common architectural and micro-architectural
> events is filtered based on PMCEIDn_EL0 and copied to a new
> structure, which is exposed through /sys.
>
> The function that derives the event bitmap from PMCEIDn_EL0 is
> executed on the CPUs backing the PMU being initialized, to support
> heterogeneous PMUs.
>
> The armv8_pmuv3_event_attrs array is now indexed by event number,
> so its entries must be kept in event number order.
>
> Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
> ---
> arch/arm64/kernel/perf_event.c | 281 ++++++++++++++++++++++++++++-------------
> 1 file changed, 195 insertions(+), 86 deletions(-)
>
> +static inline u32 armv8pmu_pmceidn_read(int reg)
> +{
> + u32 val = 0;
> +
> + if (reg == 0)
> + asm volatile("mrs %0, pmceid0_el0" : "=r" (val));
> + else if (reg == 1)
> + asm volatile("mrs %0, pmceid1_el0" : "=r" (val));
> +
> + return val;
> +}
> + reg[0] = armv8pmu_pmceidn_read(0);
> + reg[1] = armv8pmu_pmceidn_read(1);
Could we not add definitions for SYS_PMCEID0_EL0 and SYS_PMCEID1_EL0
in asm/sysreg.h and simply do
reg[0] = read_cpuid(SYS_PMCEID0_EL0);
reg[1] = read_cpuid(SYS_PMCEID1_EL0);
?
Thanks
Suzuki
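For reference, the definitions Suzuki suggests would look roughly like
the following in asm/sysreg.h (register encodings taken from the ARM ARM;
treat this as a sketch, not code from this series):

#define SYS_PMCEID0_EL0			sys_reg(3, 3, 9, 12, 6)
#define SYS_PMCEID1_EL0			sys_reg(3, 3, 9, 12, 7)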
* [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0
2016-03-24 12:52 ` [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0 Ashok Kumar
2016-03-24 15:28 ` Suzuki K. Poulose
@ 2016-03-24 16:14 ` Mark Rutland
2016-03-28 12:25 ` Ashok Sekar
1 sibling, 1 reply; 11+ messages in thread
From: Mark Rutland @ 2016-03-24 16:14 UTC (permalink / raw)
To: linux-arm-kernel
Hi,
On Thu, Mar 24, 2016 at 05:52:37AM -0700, Ashok Kumar wrote:
> The complete set of common architectural and micro-architectural
> events is filtered based on PMCEIDn_EL0 and copied to a new
> structure, which is exposed through /sys.
>
> The function that derives the event bitmap from PMCEIDn_EL0 is
> executed on the CPUs backing the PMU being initialized, to support
> heterogeneous PMUs.
I would prefer it if we could instead filter the list at run time by
implementing attribute_group::is_visible() for the events attribute
group, and share a common set of attributes and attr groups. That would
avoid the filtering, copying, and associated memory allocation.
e.g. have an array of:
struct event_attribute {
	struct attribute attr;
	int pmceid_idx;
};

Then cache the pmceid value at probe time, and have something like:

umode_t event_attr_is_visible(struct kobject *kobj, struct attribute *attr, int unused)
{
	struct arm_pmu *arm_pmu;
	struct event_attribute *e_attr;

	arm_pmu = pmu_kobj_to_armpmu(kobj);
	e_attr = container_of(attr, struct event_attribute, attr);

	if (test_bit(e_attr->pmceid_idx, &arm_pmu->cached_pmceid))
		return 0444;

	return 0;
}
Thanks,
Mark.
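As a sketch of how such a callback would be wired up (reusing the
existing armv8_pmuv3_events_attr_group and the hypothetical
event_attr_is_visible above; not code from this series):

static struct attribute_group armv8_pmuv3_events_attr_group = {
	.name		= "events",
	.attrs		= armv8_pmuv3_event_attrs,
	/* the sysfs core calls this per attribute; returning 0 hides it */
	.is_visible	= event_attr_is_visible,
};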
* [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0
2016-03-24 16:14 ` Mark Rutland
@ 2016-03-28 12:25 ` Ashok Sekar
0 siblings, 0 replies; 11+ messages in thread
From: Ashok Sekar @ 2016-03-28 12:25 UTC (permalink / raw)
To: linux-arm-kernel
Hi Mark,
On Thu, Mar 24, 2016 at 9:44 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> Hi,
>
> On Thu, Mar 24, 2016 at 05:52:37AM -0700, Ashok Kumar wrote:
>> The complete set of common architectural and micro-architectural
>> events is filtered based on PMCEIDn_EL0 and copied to a new
>> structure, which is exposed through /sys.
>>
>> The function that derives the event bitmap from PMCEIDn_EL0 is
>> executed on the CPUs backing the PMU being initialized, to support
>> heterogeneous PMUs.
>
> I would prefer it if we could instead filter the list at run time by
> implementing attribute_group::is_visible() for the events attribute
> group, and share a common set of attributes and attr groups. That would
> avoid the filtering, copying, and associated memory allocation.
>
> e.g. have an array of:
>
> struct event_attribute {
> 	struct attribute attr;
> 	int pmceid_idx;
> };
>
> Then cache the pmceid value at probe time, and have something like:
>
> umode_t event_attr_is_visible(struct kobject *kobj, struct attribute *attr, int unused)
> {
> 	struct arm_pmu *arm_pmu;
> 	struct event_attribute *e_attr;
>
> 	arm_pmu = pmu_kobj_to_armpmu(kobj);
> 	e_attr = container_of(attr, struct event_attribute, attr);
>
> 	if (test_bit(e_attr->pmceid_idx, &arm_pmu->cached_pmceid))
> 		return 0444;
>
> 	return 0;
> }
This looks better - posted a v3 based on your suggestion.
Thanks,
Ashok
>
> Thanks,
> Mark.
* [PATCH v2 3/6] arm64/perf: Filter common events based on PMCEIDn_EL0
2016-03-24 15:28 ` Suzuki K. Poulose
@ 2016-03-28 12:31 ` Ashok Sekar
0 siblings, 0 replies; 11+ messages in thread
From: Ashok Sekar @ 2016-03-28 12:31 UTC (permalink / raw)
To: linux-arm-kernel
Hi Suzuki,
On Thu, Mar 24, 2016 at 8:58 PM, Suzuki K. Poulose
<Suzuki.Poulose@arm.com> wrote:
> On 24/03/16 12:52, Ashok Kumar wrote:
>>
>> The complete set of common architectural and micro-architectural
>> events is filtered based on PMCEIDn_EL0 and copied to a new
>> structure, which is exposed through /sys.
>>
>> The function that derives the event bitmap from PMCEIDn_EL0 is
>> executed on the CPUs backing the PMU being initialized, to support
>> heterogeneous PMUs.
>>
>> The armv8_pmuv3_event_attrs array is now indexed by event number,
>> so its entries must be kept in event number order.
>>
>> Signed-off-by: Ashok Kumar <ashoks@broadcom.com>
>> ---
>> arch/arm64/kernel/perf_event.c | 281
>> ++++++++++++++++++++++++++++-------------
>> 1 file changed, 195 insertions(+), 86 deletions(-)
>>
>
>> +static inline u32 armv8pmu_pmceidn_read(int reg)
>> +{
>> + u32 val = 0;
>> +
>> + if (reg == 0)
>> + asm volatile("mrs %0, pmceid0_el0" : "=r" (val));
>> + else if (reg == 1)
>> + asm volatile("mrs %0, pmceid1_el0" : "=r" (val));
>> +
>> + return val;
>> +}
>
>
>> + reg[0] = armv8pmu_pmceidn_read(0);
>> + reg[1] = armv8pmu_pmceidn_read(1);
>
>
> Could we not add definitions for SYS_PMCEID0_EL0 and SYS_PMCEID1_EL0
> in asm/sysreg.h and simply do
>
> reg[0] = read_cpuid(SYS_PMCEID0_EL0);
> reg[1] = read_cpuid(SYS_PMCEID1_EL0);
>
> ?
Yes, it is possible, but it looks like read_cpuid is used only in
cpuid-related contexts.
Maybe we need to add a separate definition for the pmu/general
context and use it?
perf_event.c has other instances where the pmu registers are read
directly, and those would need to be converted to this format as
well if we do this. Let me know your suggestion.
For now I have posted v3 adhering to the convention of the already
existing pmu register reads.
Thanks,
Ashok
>
> Thanks
> Suzuki