* [PATCH V12 00/10] arm64/perf: Enable branch stack sampling
@ 2023-06-15 13:32 Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 01/10] drivers: perf: arm_pmu: Add new sched_task() callback Anshuman Khandual
` (10 more replies)
0 siblings, 11 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This series enables perf branch stack sampling support on arm64 platform
via a new arch feature called Branch Record Buffer Extension (BRBE). All
relevant register definitions could be accessed here.
https://developer.arm.com/documentation/ddi0601/2021-12/AArch64-Registers
This series applies on 6.4-rc6.
Changes in V12:
- Replaced branch types with complete DIRECT/INDIRECT prefixes/suffixes
- Replaced branch types with complete INSN/ALIGN prefixes/suffixes
- Replaced return branch types as simple RET/ERET
- Replaced time field GST_PHYSICAL as GUEST_PHYSICAL
- Added 0 padding for BRBIDR0_EL1.NUMREC enum values
- Dropped helper arm_pmu_branch_stack_supported()
- Renamed armv8pmu_branch_valid() as armv8pmu_branch_attr_valid()
- Separated perf_task_ctx_cache setup from arm_pmu private allocation
- Collected changes to branch_records_alloc() in a single patch [5/10]
- Reworked and cleaned up branch_records_alloc()
- Reworked armv8pmu_branch_read() with new loop iterations in patch [6/10]
- Reworked capture_brbe_regset() with new loop iterations in patch [8/10]
- Updated the comment in branch_type_to_brbcr()
- Fixed the comment before stitch_stored_live_entries()
- Fixed BRBINFINJ_EL1 definition for VALID_FULL enum field
- Factored out helper __read_brbe_regset() from capture_brbe_regset()
- Dropped the helper copy_brbe_regset()
- Simplified stitch_stored_live_entries() with memcpy(), memmove()
- Reworked armv8pmu_probe_pmu() to bail out early with !probe.present
- Rework brbe_attributes_probe() without 'struct brbe_hw_attr'
- Dropped 'struct brbe_hw_attr' argument from capture_brbe_regset()
- Dropped 'struct brbe_hw_attr' argument from brbe_branch_save()
- Dropped arm_pmu->private and added arm_pmu->reg_trbidr instead
Changes in V11:
https://lore.kernel.org/all/20230531040428.501523-1-anshuman.khandual@arm.com/
- Fixed the crash for per-cpu events without event->pmu_ctx->task_ctx_data
Changes in V10:
https://lore.kernel.org/all/20230517022410.722287-1-anshuman.khandual@arm.com/
- Rebased the series on v6.4-rc2
- Moved ARMV8 PMUV3 changes inside drivers/perf/arm_pmuv3.c
- Moved BRBE driver changes inside drivers/perf/arm_brbe.[c|h]
- Moved the WARN_ON() inside the if condition in armv8pmu_handle_irq()
Changes in V9:
https://lore.kernel.org/all/20230315051444.1683170-1-anshuman.khandual@arm.com/
- Fixed build problem with has_branch_stack() in arm64 header
- BRBINF_EL1 definition has been changed from 'Sysreg' to 'SysregFields'
- Renamed all BRBINF_EL1 call sites as BRBINFx_EL1
- Dropped static const char branch_filter_error_msg[]
- Implemented a positive list check for BRBE supported perf branch filters
- Added a comment in armv8pmu_handle_irq()
- Implemented per-cpu allocation for struct branch_record records
- Skipped looping through bank 1 if an invalid record is detected in bank 0
- Added comment in armv8pmu_branch_read() explaining prohibited region etc
- Added comment warning about erroneously marking transactions as aborted
- Replaced the first argument (perf_branch_entry) in capture_brbe_flags()
- Dropped the last argument (idx) in capture_brbe_flags()
- Dropped the brbcr argument from capture_brbe_flags()
- Used perf_sample_save_brstack() to capture branch records for perf_sample_data
- Added comment explaining rationale for setting BRBCR_EL1_FZP for user only traces
- Dropped BRBE prohibited state mechanism while in armv8pmu_branch_read()
- Implemented event task context based branch records save mechanism
Changes in V8:
https://lore.kernel.org/all/20230123125956.1350336-1-anshuman.khandual@arm.com/
- Replaced arm_pmu->features as arm_pmu->has_branch_stack, updated its helper
- Added a comment and line break before arm_pmu->private element
- Added WARN_ON_ONCE() in helpers i.e armv8pmu_branch_[read|valid|enable|disable]()
- Dropped comments in armv8pmu_enable_event() and armv8pmu_disable_event()
- Replaced open bank encoding in BRBFCR_EL1 with SYS_FIELD_PREP()
- Changed brbe_hw_attr->brbe_version from 'bool' to 'int'
- Updated pr_warn() as pr_warn_once() with values in brbe_get_perf_[type|priv]()
- Replaced all pr_warn_once() as pr_debug_once() in armv8pmu_branch_valid()
- Added a comment in branch_type_to_brbcr() for the BRBCR_EL1 privilege settings
- Modified the comment related to BRBINFx_EL1.LASTFAILED in capture_brbe_flags()
- Modified brbe_get_perf_entry_type() as brbe_set_perf_entry_type()
- Renamed brbe_valid() as brbe_record_is_complete()
- Renamed brbe_source() as brbe_record_is_source_only()
- Renamed brbe_target() as brbe_record_is_target_only()
- Inverted checks for !brbe_record_is_[target|source]_only() for info capture
- Replaced 'fetch' with 'get' in all helpers that extract field value
- Dropped 'static int brbe_current_bank' optimization in select_brbe_bank()
- Dropped select_brbe_bank_index() completely, added capture_branch_entry()
- Process captured branch entries in two separate loops one for each BRBE bank
- Moved branch_records_alloc() inside armv8pmu_probe_pmu()
- Added a forward declaration for the helper has_branch_stack()
- Added new callbacks armv8pmu_private_alloc() and armv8pmu_private_free()
- Updated armv8pmu_probe_pmu() to allocate the private structure before SMP call
Changes in V7:
https://lore.kernel.org/all/20230105031039.207972-1-anshuman.khandual@arm.com/
- Folded [PATCH 7/7] into [PATCH 3/7] which enables branch stack sampling event
- Defined BRBFCR_EL1_BRANCH_FILTERS, BRBCR_EL1_DEFAULT_CONFIG in the header
- Defined BRBFCR_EL1_DEFAULT_CONFIG in the header
- Updated BRBCR_EL1_DEFAULT_CONFIG with BRBCR_EL1_FZP
- Defined BRBCR_EL1_DEFAULT_TS in the header
- Updated BRBCR_EL1_DEFAULT_CONFIG with BRBCR_EL1_DEFAULT_TS
- Moved BRBCR_EL1_DEFAULT_CONFIG check inside branch_type_to_brbcr()
- Moved down BRBCR_EL1_CC, BRBCR_EL1_MPRED later in branch_type_to_brbcr()
- Also set BRBE in paused state in armv8pmu_branch_disable()
- Dropped brbe_paused(), set_brbe_paused() helpers
- Extracted error string via branch_filter_error_msg[] for armv8pmu_branch_valid()
- Replaced brbe_v1p1 with brbe_version in struct brbe_hw_attr
- Added valid_brbe_[cc, format, version]() helpers
- Split a separate brbe_attributes_probe() from armv8pmu_branch_probe()
- Capture event->attr.branch_sample_type earlier in armv8pmu_branch_valid()
- Defined enum brbe_bank_idx with possible values for BRBE bank indices
- Changed armpmu->hw_attr into armpmu->private
- Added missing space in stub definition for armv8pmu_branch_valid()
- Replaced both kmalloc() with kzalloc()
- Added BRBE_BANK_MAX_ENTRIES
- Updated comment for capture_brbe_flags()
- Updated comment for struct brbe_hw_attr
- Dropped space after type cast in couple of places
- Replaced inverse with negation for testing BRBCR_EL1_FZP in armv8pmu_branch_read()
- Captured cpuc->branches->branch_entries[idx] in a local variable
- Dropped saved_priv from armv8pmu_branch_read()
- Reorganize PERF_SAMPLE_BRANCH_NO_[CYCLES|NO_FLAGS] related configuration
- Replaced with FIELD_GET() and FIELD_PREP() wherever applicable
- Replaced BRBCR_EL1_TS_PHYSICAL with BRBCR_EL1_TS_VIRTUAL
- Moved valid_brbe_nr(), valid_brbe_cc(), valid_brbe_format(), valid_brbe_version()
select_brbe_bank(), select_brbe_bank_index() helpers inside the C implementation
- Reorganized brbe_valid_nr() and dropped the pr_warn() message
- Changed probe sequence in brbe_attributes_probe()
- Added 'brbcr' argument into capture_brbe_flags() to ascertain correct state
- Disable BRBE before disabling the PMU event counter
- Enable PERF_SAMPLE_BRANCH_HV filters when is_kernel_in_hyp_mode()
- Guard armv8pmu_reset() & armv8pmu_sched_task() with arm_pmu_branch_stack_supported()
Changes in V6:
https://lore.kernel.org/linux-arm-kernel/20221208084402.863310-1-anshuman.khandual@arm.com/
- Restore the exception level privilege after reading the branch records
- Unpause the buffer after reading the branch records
- Decouple BRBCR_EL1_EXCEPTION/ERTN from perf event privilege level
- Reworked BRBE implementation and branch stack sampling support on arm pmu
- BRBE implementation is now part of overall ARMV8 PMU implementation
- BRBE implementation moved from drivers/perf/ to inside arch/arm64/kernel/
- CONFIG_ARM_BRBE_PMU renamed as CONFIG_ARM64_BRBE in arch/arm64/Kconfig
- File moved - drivers/perf/arm_pmu_brbe.c -> arch/arm64/kernel/brbe.c
- File moved - drivers/perf/arm_pmu_brbe.h -> arch/arm64/kernel/brbe.h
- BRBE name has been dropped from struct arm_pmu and struct hw_pmu_events
- BRBE name has been abstracted out as 'branches' in arm_pmu and hw_pmu_events
- BRBE name has been abstracted out as 'branches' in ARMV8 PMU implementation
- Added sched_task() callback into struct arm_pmu
- Added 'hw_attr' into struct arm_pmu encapsulating possible PMU HW attributes
- Dropped explicit attributes brbe_(v1p1, nr, cc, format) from struct arm_pmu
- Dropped brbfcr, brbcr, registers scratch area from struct hw_pmu_events
- Dropped brbe_users, brbe_context tracking in struct hw_pmu_events
- Added 'features' tracking into struct arm_pmu with ARM_PMU_BRANCH_STACK flag
- armpmu->hw_attr maps into 'struct brbe_hw_attr' inside BRBE implementation
- Set ARM_PMU_BRANCH_STACK in 'arm_pmu->features' after successful BRBE probe
- Added armv8pmu_branch_reset() inside armv8pmu_branch_enable()
- Dropped brbe_supported() as events will be rejected via ARM_PMU_BRANCH_STACK
- Dropped set_brbe_disabled() as well
- Reformated armv8pmu_branch_valid() warnings while rejecting unsupported events
Changes in V5:
https://lore.kernel.org/linux-arm-kernel/20221107062514.2851047-1-anshuman.khandual@arm.com/
- Changed BRBCR_EL1.VIRTUAL from 0b1 to 0b01
- Changed BRBFCR_EL1.EnL into BRBFCR_EL1.EnI
- Changed config ARM_BRBE_PMU from 'tristate' to 'bool'
Changes in V4:
https://lore.kernel.org/all/20221017055713.451092-1-anshuman.khandual@arm.com/
- Changed ../tools/sysreg declarations as suggested
- Set PERF_SAMPLE_BRANCH_STACK in data.sample_flags
- Dropped perfmon_capable() check in armpmu_event_init()
- s/pr_warn_once/pr_info in armpmu_event_init()
- Added brbe_format element into struct pmu_hw_events
- Changed v1p1 as brbe_v1p1 in struct pmu_hw_events
- Dropped pr_info() from arm64_pmu_brbe_probe(), solved LOCKDEP warning
Changes in V3:
https://lore.kernel.org/all/20220929075857.158358-1-anshuman.khandual@arm.com/
- Moved brbe_stack from the stack and now dynamically allocated
- Return PERF_BR_PRIV_UNKNOWN instead of -1 in brbe_fetch_perf_priv()
- Moved BRBIDR0, BRBCR, BRBFCR registers and fields into tools/sysreg
- Created dummy BRBINF_EL1 field definitions in tools/sysreg
- Dropped ARMPMU_EVT_PRIV framework which cached perfmon_capable()
- Both exception and exception return branche records are now captured
only if the event has PERF_SAMPLE_BRANCH_KERNEL which would already
been checked in generic perf via perf_allow_kernel()
Changes in V2:
https://lore.kernel.org/all/20220908051046.465307-1-anshuman.khandual@arm.com/
- Dropped branch sample filter helpers consolidation patch from this series
- Added new hw_perf_event.flags element ARMPMU_EVT_PRIV to cache perfmon_capable()
- Use cached perfmon_capable() while configuring BRBE branch record filters
Changes in V1:
https://lore.kernel.org/linux-arm-kernel/20220613100119.684673-1-anshuman.khandual@arm.com/
- Added CONFIG_PERF_EVENTS wrapper for all branch sample filter helpers
- Process new perf branch types via PERF_BR_EXTEND_ABI
Changes in RFC V2:
https://lore.kernel.org/linux-arm-kernel/20220412115455.293119-1-anshuman.khandual@arm.com/
- Added branch_sample_priv() while consolidating other branch sample filter helpers
- Changed all SYS_BRBXXXN_EL1 register definition encodings per Marc
- Changed the BRBE driver as per proposed BRBE related perf ABI changes (V5)
- Added documentation for struct arm_pmu changes, updated commit message
- Updated commit message for BRBE detection infrastructure patch
- PERF_SAMPLE_BRANCH_KERNEL gets checked during arm event init (outside the driver)
- Branch privilege state capture mechanism has now moved inside the driver
Changes in RFC V1:
https://lore.kernel.org/all/1642998653-21377-1-git-send-email-anshuman.khandual@arm.com/
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: James Clark <james.clark@arm.com>
Cc: Rob Herring <robh@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-perf-users@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Anshuman Khandual (10):
drivers: perf: arm_pmu: Add new sched_task() callback
arm64/perf: Add BRBE registers and fields
arm64/perf: Add branch stack support in struct arm_pmu
arm64/perf: Add branch stack support in struct pmu_hw_events
arm64/perf: Add branch stack support in ARMV8 PMU
arm64/perf: Enable branch stack events via FEAT_BRBE
arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
arm64/perf: Add struct brbe_regset helper functions
arm64/perf: Implement branch records save on task sched out
arm64/perf: Implement branch records save on PMU IRQ
arch/arm64/include/asm/perf_event.h | 46 ++
arch/arm64/include/asm/sysreg.h | 103 ++++
arch/arm64/tools/sysreg | 158 ++++++
drivers/perf/Kconfig | 11 +
drivers/perf/Makefile | 1 +
drivers/perf/arm_brbe.c | 722 ++++++++++++++++++++++++++++
drivers/perf/arm_brbe.h | 270 +++++++++++
drivers/perf/arm_pmu.c | 12 +-
drivers/perf/arm_pmuv3.c | 110 ++++-
include/linux/perf/arm_pmu.h | 19 +-
10 files changed, 1425 insertions(+), 27 deletions(-)
create mode 100644 drivers/perf/arm_brbe.c
create mode 100644 drivers/perf/arm_brbe.h
--
2.25.1
^ permalink raw reply [flat|nested] 25+ messages in thread
* [PATCH V12 01/10] drivers: perf: arm_pmu: Add new sched_task() callback
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 02/10] arm64/perf: Add BRBE registers and fields Anshuman Khandual
` (9 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This adds armpmu_sched_task(), as generic pmu's sched_task() override which
in turn can utilize a new arm_pmu.sched_task() callback when available from
the arm_pmu instance. This new callback will be used while enabling BRBE in
ARMV8 PMU.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
drivers/perf/arm_pmu.c | 9 +++++++++
include/linux/perf/arm_pmu.h | 1 +
2 files changed, 10 insertions(+)
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index 15bd1e34a88e..aada47e3b126 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -517,6 +517,14 @@ static int armpmu_event_init(struct perf_event *event)
return __hw_perf_event_init(event);
}
+static void armpmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+ struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+
+ if (armpmu->sched_task)
+ armpmu->sched_task(pmu_ctx, sched_in);
+}
+
static void armpmu_enable(struct pmu *pmu)
{
struct arm_pmu *armpmu = to_arm_pmu(pmu);
@@ -858,6 +866,7 @@ struct arm_pmu *armpmu_alloc(void)
}
pmu->pmu = (struct pmu) {
+ .sched_task = armpmu_sched_task,
.pmu_enable = armpmu_enable,
.pmu_disable = armpmu_disable,
.event_init = armpmu_event_init,
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index 525b5d64e394..f7fbd162ca4c 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -100,6 +100,7 @@ struct arm_pmu {
void (*stop)(struct arm_pmu *);
void (*reset)(void *);
int (*map_event)(struct perf_event *event);
+ void (*sched_task)(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
int num_events;
bool secure_access; /* 32-bit ARM only */
#define ARMV8_PMUV3_MAX_COMMON_EVENTS 0x40
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 02/10] arm64/perf: Add BRBE registers and fields
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 01/10] drivers: perf: arm_pmu: Add new sched_task() callback Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 03/10] arm64/perf: Add branch stack support in struct arm_pmu Anshuman Khandual
` (8 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This adds BRBE related register definitions and various other related field
macros there in. These will be used subsequently in a BRBE driver which is
being added later on.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm64/include/asm/sysreg.h | 103 +++++++++++++++++++++
arch/arm64/tools/sysreg | 158 ++++++++++++++++++++++++++++++++
2 files changed, 261 insertions(+)
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index eefd712f2430..18828435e1df 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -171,6 +171,109 @@
#define SYS_DBGDTRTX_EL0 sys_reg(2, 3, 0, 5, 0)
#define SYS_DBGVCR32_EL2 sys_reg(2, 4, 0, 7, 0)
+#define __SYS_BRBINFO(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 0))
+#define __SYS_BRBSRC(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 1))
+#define __SYS_BRBTGT(n) sys_reg(2, 1, 8, ((n) & 0xf), ((((n) & 0x10)) >> 2 + 2))
+
+#define SYS_BRBINF0_EL1 __SYS_BRBINFO(0)
+#define SYS_BRBINF1_EL1 __SYS_BRBINFO(1)
+#define SYS_BRBINF2_EL1 __SYS_BRBINFO(2)
+#define SYS_BRBINF3_EL1 __SYS_BRBINFO(3)
+#define SYS_BRBINF4_EL1 __SYS_BRBINFO(4)
+#define SYS_BRBINF5_EL1 __SYS_BRBINFO(5)
+#define SYS_BRBINF6_EL1 __SYS_BRBINFO(6)
+#define SYS_BRBINF7_EL1 __SYS_BRBINFO(7)
+#define SYS_BRBINF8_EL1 __SYS_BRBINFO(8)
+#define SYS_BRBINF9_EL1 __SYS_BRBINFO(9)
+#define SYS_BRBINF10_EL1 __SYS_BRBINFO(10)
+#define SYS_BRBINF11_EL1 __SYS_BRBINFO(11)
+#define SYS_BRBINF12_EL1 __SYS_BRBINFO(12)
+#define SYS_BRBINF13_EL1 __SYS_BRBINFO(13)
+#define SYS_BRBINF14_EL1 __SYS_BRBINFO(14)
+#define SYS_BRBINF15_EL1 __SYS_BRBINFO(15)
+#define SYS_BRBINF16_EL1 __SYS_BRBINFO(16)
+#define SYS_BRBINF17_EL1 __SYS_BRBINFO(17)
+#define SYS_BRBINF18_EL1 __SYS_BRBINFO(18)
+#define SYS_BRBINF19_EL1 __SYS_BRBINFO(19)
+#define SYS_BRBINF20_EL1 __SYS_BRBINFO(20)
+#define SYS_BRBINF21_EL1 __SYS_BRBINFO(21)
+#define SYS_BRBINF22_EL1 __SYS_BRBINFO(22)
+#define SYS_BRBINF23_EL1 __SYS_BRBINFO(23)
+#define SYS_BRBINF24_EL1 __SYS_BRBINFO(24)
+#define SYS_BRBINF25_EL1 __SYS_BRBINFO(25)
+#define SYS_BRBINF26_EL1 __SYS_BRBINFO(26)
+#define SYS_BRBINF27_EL1 __SYS_BRBINFO(27)
+#define SYS_BRBINF28_EL1 __SYS_BRBINFO(28)
+#define SYS_BRBINF29_EL1 __SYS_BRBINFO(29)
+#define SYS_BRBINF30_EL1 __SYS_BRBINFO(30)
+#define SYS_BRBINF31_EL1 __SYS_BRBINFO(31)
+
+#define SYS_BRBSRC0_EL1 __SYS_BRBSRC(0)
+#define SYS_BRBSRC1_EL1 __SYS_BRBSRC(1)
+#define SYS_BRBSRC2_EL1 __SYS_BRBSRC(2)
+#define SYS_BRBSRC3_EL1 __SYS_BRBSRC(3)
+#define SYS_BRBSRC4_EL1 __SYS_BRBSRC(4)
+#define SYS_BRBSRC5_EL1 __SYS_BRBSRC(5)
+#define SYS_BRBSRC6_EL1 __SYS_BRBSRC(6)
+#define SYS_BRBSRC7_EL1 __SYS_BRBSRC(7)
+#define SYS_BRBSRC8_EL1 __SYS_BRBSRC(8)
+#define SYS_BRBSRC9_EL1 __SYS_BRBSRC(9)
+#define SYS_BRBSRC10_EL1 __SYS_BRBSRC(10)
+#define SYS_BRBSRC11_EL1 __SYS_BRBSRC(11)
+#define SYS_BRBSRC12_EL1 __SYS_BRBSRC(12)
+#define SYS_BRBSRC13_EL1 __SYS_BRBSRC(13)
+#define SYS_BRBSRC14_EL1 __SYS_BRBSRC(14)
+#define SYS_BRBSRC15_EL1 __SYS_BRBSRC(15)
+#define SYS_BRBSRC16_EL1 __SYS_BRBSRC(16)
+#define SYS_BRBSRC17_EL1 __SYS_BRBSRC(17)
+#define SYS_BRBSRC18_EL1 __SYS_BRBSRC(18)
+#define SYS_BRBSRC19_EL1 __SYS_BRBSRC(19)
+#define SYS_BRBSRC20_EL1 __SYS_BRBSRC(20)
+#define SYS_BRBSRC21_EL1 __SYS_BRBSRC(21)
+#define SYS_BRBSRC22_EL1 __SYS_BRBSRC(22)
+#define SYS_BRBSRC23_EL1 __SYS_BRBSRC(23)
+#define SYS_BRBSRC24_EL1 __SYS_BRBSRC(24)
+#define SYS_BRBSRC25_EL1 __SYS_BRBSRC(25)
+#define SYS_BRBSRC26_EL1 __SYS_BRBSRC(26)
+#define SYS_BRBSRC27_EL1 __SYS_BRBSRC(27)
+#define SYS_BRBSRC28_EL1 __SYS_BRBSRC(28)
+#define SYS_BRBSRC29_EL1 __SYS_BRBSRC(29)
+#define SYS_BRBSRC30_EL1 __SYS_BRBSRC(30)
+#define SYS_BRBSRC31_EL1 __SYS_BRBSRC(31)
+
+#define SYS_BRBTGT0_EL1 __SYS_BRBTGT(0)
+#define SYS_BRBTGT1_EL1 __SYS_BRBTGT(1)
+#define SYS_BRBTGT2_EL1 __SYS_BRBTGT(2)
+#define SYS_BRBTGT3_EL1 __SYS_BRBTGT(3)
+#define SYS_BRBTGT4_EL1 __SYS_BRBTGT(4)
+#define SYS_BRBTGT5_EL1 __SYS_BRBTGT(5)
+#define SYS_BRBTGT6_EL1 __SYS_BRBTGT(6)
+#define SYS_BRBTGT7_EL1 __SYS_BRBTGT(7)
+#define SYS_BRBTGT8_EL1 __SYS_BRBTGT(8)
+#define SYS_BRBTGT9_EL1 __SYS_BRBTGT(9)
+#define SYS_BRBTGT10_EL1 __SYS_BRBTGT(10)
+#define SYS_BRBTGT11_EL1 __SYS_BRBTGT(11)
+#define SYS_BRBTGT12_EL1 __SYS_BRBTGT(12)
+#define SYS_BRBTGT13_EL1 __SYS_BRBTGT(13)
+#define SYS_BRBTGT14_EL1 __SYS_BRBTGT(14)
+#define SYS_BRBTGT15_EL1 __SYS_BRBTGT(15)
+#define SYS_BRBTGT16_EL1 __SYS_BRBTGT(16)
+#define SYS_BRBTGT17_EL1 __SYS_BRBTGT(17)
+#define SYS_BRBTGT18_EL1 __SYS_BRBTGT(18)
+#define SYS_BRBTGT19_EL1 __SYS_BRBTGT(19)
+#define SYS_BRBTGT20_EL1 __SYS_BRBTGT(20)
+#define SYS_BRBTGT21_EL1 __SYS_BRBTGT(21)
+#define SYS_BRBTGT22_EL1 __SYS_BRBTGT(22)
+#define SYS_BRBTGT23_EL1 __SYS_BRBTGT(23)
+#define SYS_BRBTGT24_EL1 __SYS_BRBTGT(24)
+#define SYS_BRBTGT25_EL1 __SYS_BRBTGT(25)
+#define SYS_BRBTGT26_EL1 __SYS_BRBTGT(26)
+#define SYS_BRBTGT27_EL1 __SYS_BRBTGT(27)
+#define SYS_BRBTGT28_EL1 __SYS_BRBTGT(28)
+#define SYS_BRBTGT29_EL1 __SYS_BRBTGT(29)
+#define SYS_BRBTGT30_EL1 __SYS_BRBTGT(30)
+#define SYS_BRBTGT31_EL1 __SYS_BRBTGT(31)
+
#define SYS_MIDR_EL1 sys_reg(3, 0, 0, 0, 0)
#define SYS_MPIDR_EL1 sys_reg(3, 0, 0, 0, 5)
#define SYS_REVIDR_EL1 sys_reg(3, 0, 0, 0, 6)
diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
index c9a0d1fa3209..9a6313086e93 100644
--- a/arch/arm64/tools/sysreg
+++ b/arch/arm64/tools/sysreg
@@ -947,6 +947,164 @@ UnsignedEnum 3:0 BT
EndEnum
EndSysreg
+
+SysregFields BRBINFx_EL1
+Res0 63:47
+Field 46 CCU
+Field 45:32 CC
+Res0 31:18
+Field 17 LASTFAILED
+Field 16 T
+Res0 15:14
+Enum 13:8 TYPE
+ 0b000000 UNCOND_DIRECT
+ 0b000001 INDIRECT
+ 0b000010 DIRECT_LINK
+ 0b000011 INDIRECT_LINK
+ 0b000101 RET
+ 0b000111 ERET
+ 0b001000 COND_DIRECT
+ 0b100001 DEBUG_HALT
+ 0b100010 CALL
+ 0b100011 TRAP
+ 0b100100 SERROR
+ 0b100110 INSN_DEBUG
+ 0b100111 DATA_DEBUG
+ 0b101010 ALIGN_FAULT
+ 0b101011 INSN_FAULT
+ 0b101100 DATA_FAULT
+ 0b101110 IRQ
+ 0b101111 FIQ
+ 0b111001 DEBUG_EXIT
+EndEnum
+Enum 7:6 EL
+ 0b00 EL0
+ 0b01 EL1
+ 0b10 EL2
+ 0b11 EL3
+EndEnum
+Field 5 MPRED
+Res0 4:2
+Enum 1:0 VALID
+ 0b00 NONE
+ 0b01 TARGET
+ 0b10 SOURCE
+ 0b11 FULL
+EndEnum
+EndSysregFields
+
+Sysreg BRBCR_EL1 2 1 9 0 0
+Res0 63:24
+Field 23 EXCEPTION
+Field 22 ERTN
+Res0 21:9
+Field 8 FZP
+Res0 7
+Enum 6:5 TS
+ 0b01 VIRTUAL
+ 0b10 GUEST_PHYSICAL
+ 0b11 PHYSICAL
+EndEnum
+Field 4 MPRED
+Field 3 CC
+Res0 2
+Field 1 E1BRE
+Field 0 E0BRE
+EndSysreg
+
+Sysreg BRBFCR_EL1 2 1 9 0 1
+Res0 63:30
+Enum 29:28 BANK
+ 0b0 FIRST
+ 0b1 SECOND
+EndEnum
+Res0 27:23
+Field 22 CONDDIR
+Field 21 DIRCALL
+Field 20 INDCALL
+Field 19 RTN
+Field 18 INDIRECT
+Field 17 DIRECT
+Field 16 EnI
+Res0 15:8
+Field 7 PAUSED
+Field 6 LASTFAILED
+Res0 5:0
+EndSysreg
+
+Sysreg BRBTS_EL1 2 1 9 0 2
+Field 63:0 TS
+EndSysreg
+
+Sysreg BRBINFINJ_EL1 2 1 9 1 0
+Res0 63:47
+Field 46 CCU
+Field 45:32 CC
+Res0 31:18
+Field 17 LASTFAILED
+Field 16 T
+Res0 15:14
+Enum 13:8 TYPE
+ 0b000000 UNCOND_DIRECT
+ 0b000001 INDIRECT
+ 0b000010 DIRECT_LINK
+ 0b000011 INDIRECT_LINK
+ 0b000101 RET
+ 0b000111 ERET
+ 0b001000 COND_DIRECT
+ 0b100001 DEBUG_HALT
+ 0b100010 CALL
+ 0b100011 TRAP
+ 0b100100 SERROR
+ 0b100110 INSN_DEBUG
+ 0b100111 DATA_DEBUG
+ 0b101010 ALIGN_FAULT
+ 0b101011 INSN_FAULT
+ 0b101100 DATA_FAULT
+ 0b101110 IRQ
+ 0b101111 FIQ
+ 0b111001 DEBUG_EXIT
+EndEnum
+Enum 7:6 EL
+ 0b00 EL0
+ 0b01 EL1
+ 0b10 EL2
+ 0b11 EL3
+EndEnum
+Field 5 MPRED
+Res0 4:2
+Enum 1:0 VALID
+ 0b00 NONE
+ 0b01 TARGET
+ 0b10 SOURCE
+ 0b11 FULL
+EndEnum
+EndSysreg
+
+Sysreg BRBSRCINJ_EL1 2 1 9 1 1
+Field 63:0 ADDRESS
+EndSysreg
+
+Sysreg BRBTGTINJ_EL1 2 1 9 1 2
+Field 63:0 ADDRESS
+EndSysreg
+
+Sysreg BRBIDR0_EL1 2 1 9 2 0
+Res0 63:16
+Enum 15:12 CC
+ 0b101 20_BIT
+EndEnum
+Enum 11:8 FORMAT
+ 0b0 0
+EndEnum
+Enum 7:0 NUMREC
+ 0b0001000 8
+ 0b0010000 16
+ 0b0100000 32
+ 0b1000000 64
+EndEnum
+EndSysreg
+
Sysreg ID_AA64ZFR0_EL1 3 0 0 4 4
Res0 63:60
UnsignedEnum 59:56 F64MM
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 03/10] arm64/perf: Add branch stack support in struct arm_pmu
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 01/10] drivers: perf: arm_pmu: Add new sched_task() callback Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 02/10] arm64/perf: Add BRBE registers and fields Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 04/10] arm64/perf: Add branch stack support in struct pmu_hw_events Anshuman Khandual
` (7 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This updates 'struct arm_pmu' for branch stack sampling support being added
later. This adds an element 'reg_trbidr' to capture BRBE attribute details.
These updates here will help in tracking any branch stack sampling support.
This also enables perf branch stack sampling event on all 'struct arm pmu',
supporting the feature but after removing the current gate that blocks such
events unconditionally in armpmu_event_init(). Instead a quick probe can be
initiated via arm_pmu->has_branch_stack to ascertain the support.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
drivers/perf/arm_pmu.c | 3 +--
include/linux/perf/arm_pmu.h | 9 ++++++++-
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
index aada47e3b126..d9ffe9e56e74 100644
--- a/drivers/perf/arm_pmu.c
+++ b/drivers/perf/arm_pmu.c
@@ -510,8 +510,7 @@ static int armpmu_event_init(struct perf_event *event)
!cpumask_test_cpu(event->cpu, &armpmu->supported_cpus))
return -ENOENT;
- /* does not support taken branch sampling */
- if (has_branch_stack(event))
+ if (has_branch_stack(event) && !armpmu->has_branch_stack)
return -EOPNOTSUPP;
return __hw_perf_event_init(event);
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index f7fbd162ca4c..ba4204bdcebf 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -102,7 +102,9 @@ struct arm_pmu {
int (*map_event)(struct perf_event *event);
void (*sched_task)(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
int num_events;
- bool secure_access; /* 32-bit ARM only */
+ unsigned int secure_access : 1, /* 32-bit ARM only */
+ has_branch_stack: 1, /* 64-bit ARM only */
+ reserved : 30;
#define ARMV8_PMUV3_MAX_COMMON_EVENTS 0x40
DECLARE_BITMAP(pmceid_bitmap, ARMV8_PMUV3_MAX_COMMON_EVENTS);
#define ARMV8_PMUV3_EXT_COMMON_EVENT_BASE 0x4000
@@ -118,6 +120,11 @@ struct arm_pmu {
/* Only to be used by ACPI probing code */
unsigned long acpi_cpuid;
+
+ /* Implementation specific attributes */
+#ifdef CONFIG_ARM64_BRBE
+ u64 reg_brbidr;
+#endif
};
#define to_arm_pmu(p) (container_of(p, struct arm_pmu, pmu))
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 04/10] arm64/perf: Add branch stack support in struct pmu_hw_events
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (2 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 03/10] arm64/perf: Add branch stack support in struct arm_pmu Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU Anshuman Khandual
` (6 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This adds branch records buffer pointer in 'struct pmu_hw_events' which can
be used to capture branch records during PMU interrupt. This percpu pointer
here needs to be allocated first before usage.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/perf/arm_pmu.h | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/include/linux/perf/arm_pmu.h b/include/linux/perf/arm_pmu.h
index ba4204bdcebf..a9c6d395aad7 100644
--- a/include/linux/perf/arm_pmu.h
+++ b/include/linux/perf/arm_pmu.h
@@ -44,6 +44,13 @@ static_assert((PERF_EVENT_FLAG_ARCH & ARMPMU_EVT_47BIT) == ARMPMU_EVT_47BIT);
}, \
}
+#define MAX_BRANCH_RECORDS 64
+
+struct branch_records {
+ struct perf_branch_stack branch_stack;
+ struct perf_branch_entry branch_entries[MAX_BRANCH_RECORDS];
+};
+
/* The events for a given PMU register set. */
struct pmu_hw_events {
/*
@@ -70,6 +77,8 @@ struct pmu_hw_events {
struct arm_pmu *percpu_pmu;
int irq;
+
+ struct branch_records *branches;
};
enum armpmu_attr_groups {
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (3 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 04/10] arm64/perf: Add branch stack support in struct pmu_hw_events Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-15 23:42 ` kernel test robot
2023-06-16 3:41 ` kernel test robot
2023-06-15 13:32 ` [PATCH V12 06/10] arm64/perf: Enable branch stack events via FEAT_BRBE Anshuman Khandual
` (5 subsequent siblings)
10 siblings, 2 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This enables support for branch stack sampling event in ARMV8 PMU, checking
has_branch_stack() on the event inside 'struct arm_pmu' callbacks. Although
these branch stack helpers armv8pmu_branch_XXXXX() are just dummy functions
for now. While here, this also defines arm_pmu's sched_task() callback with
armv8pmu_sched_task(), which resets the branch record buffer on a sched_in.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm64/include/asm/perf_event.h | 31 +++++++++++
drivers/perf/arm_pmuv3.c | 86 +++++++++++++++++++++--------
2 files changed, 93 insertions(+), 24 deletions(-)
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index eb7071c9eb34..ebc392ba3559 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -24,4 +24,35 @@ extern unsigned long perf_misc_flags(struct pt_regs *regs);
(regs)->pstate = PSR_MODE_EL1h; \
}
+struct pmu_hw_events;
+struct arm_pmu;
+struct perf_event;
+
+#ifdef CONFIG_PERF_EVENTS
+static inline bool has_branch_stack(struct perf_event *event);
+
+static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+ return false;
+}
+
+static inline void armv8pmu_branch_enable(struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline void armv8pmu_branch_disable(struct perf_event *event)
+{
+ WARN_ON_ONCE(!has_branch_stack(event));
+}
+
+static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
+static inline void armv8pmu_branch_reset(void) { }
+#endif
#endif
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index c98e4039386d..54c80f393eb6 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -705,38 +705,21 @@ static void armv8pmu_enable_event(struct perf_event *event)
* Enable counter and interrupt, and set the counter to count
* the event that we're interested in.
*/
-
- /*
- * Disable counter
- */
armv8pmu_disable_event_counter(event);
-
- /*
- * Set event.
- */
armv8pmu_write_event_type(event);
-
- /*
- * Enable interrupt for this counter
- */
armv8pmu_enable_event_irq(event);
-
- /*
- * Enable counter
- */
armv8pmu_enable_event_counter(event);
+
+ if (has_branch_stack(event))
+ armv8pmu_branch_enable(event);
}
static void armv8pmu_disable_event(struct perf_event *event)
{
- /*
- * Disable counter
- */
- armv8pmu_disable_event_counter(event);
+ if (has_branch_stack(event))
+ armv8pmu_branch_disable(event);
- /*
- * Disable interrupt for this counter
- */
+ armv8pmu_disable_event_counter(event);
armv8pmu_disable_event_irq(event);
}
@@ -814,6 +797,11 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
if (!armpmu_event_set_period(event))
continue;
+ if (has_branch_stack(event) && !WARN_ON(!cpuc->branches)) {
+ armv8pmu_branch_read(cpuc, event);
+ perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack);
+ }
+
/*
* Perf event overflow will queue the processing of the event as
* an irq_work which will be taken care of in the handling of
@@ -912,6 +900,14 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
return event->hw.idx;
}
+static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
+{
+ struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+
+ if (sched_in && armpmu->has_branch_stack)
+ armv8pmu_branch_reset();
+}
+
/*
* Add an event filter to a given event.
*/
@@ -982,6 +978,9 @@ static void armv8pmu_reset(void *info)
pmcr |= ARMV8_PMU_PMCR_LP;
armv8pmu_pmcr_write(pmcr);
+
+ if (cpu_pmu->has_branch_stack)
+ armv8pmu_branch_reset();
}
static int __armv8_pmuv3_map_event_id(struct arm_pmu *armpmu,
@@ -1019,6 +1018,9 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
hw_event_id = __armv8_pmuv3_map_event_id(armpmu, event);
+ if (has_branch_stack(event) && !armv8pmu_branch_attr_valid(event))
+ return -EOPNOTSUPP;
+
/*
* CHAIN events only work when paired with an adjacent counter, and it
* never makes sense for a user to open one in isolation, as they'll be
@@ -1135,6 +1137,33 @@ static void __armv8pmu_probe_pmu(void *info)
cpu_pmu->reg_pmmir = read_pmmir();
else
cpu_pmu->reg_pmmir = 0;
+ armv8pmu_branch_probe(cpu_pmu);
+}
+
+static int branch_records_alloc(struct arm_pmu *armpmu)
+{
+ struct branch_records __percpu *records;
+ int cpu;
+
+ records = alloc_percpu_gfp(struct branch_records, GFP_KERNEL);
+ if (!records)
+ return -ENOMEM;
+
+ /*
+ * FIXME: Memory allocated via records gets completely
+ * consumed here, never required to be freed up later. Hence
+ * losing access to on stack 'records' is acceptable.
+ * Otherwise this alloc handle has to be saved some where.
+ */
+ for_each_possible_cpu(cpu) {
+ struct pmu_hw_events *events_cpu;
+ struct branch_records *records_cpu;
+
+ events_cpu = per_cpu_ptr(armpmu->hw_events, cpu);
+ records_cpu = per_cpu_ptr(records, cpu);
+ events_cpu->branches = records_cpu;
+ }
+ return 0;
}
static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
@@ -1151,7 +1180,15 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
if (ret)
return ret;
- return probe.present ? 0 : -ENODEV;
+ if (!probe.present)
+ return -ENODEV;
+
+ if (cpu_pmu->has_branch_stack) {
+ ret = branch_records_alloc(cpu_pmu);
+ if (ret)
+ return ret;
+ }
+ return 0;
}
static void armv8pmu_disable_user_access_ipi(void *unused)
@@ -1214,6 +1251,7 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
cpu_pmu->set_event_filter = armv8pmu_set_event_filter;
cpu_pmu->pmu.event_idx = armv8pmu_user_event_idx;
+ cpu_pmu->sched_task = armv8pmu_sched_task;
cpu_pmu->name = name;
cpu_pmu->map_event = map_event;
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 06/10] arm64/perf: Enable branch stack events via FEAT_BRBE
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (4 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack() Anshuman Khandual
` (4 subsequent siblings)
10 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This enables branch stack sampling events in ARMV8 PMU, via an architecture
feature FEAT_BRBE aka branch record buffer extension. This defines required
branch helper functions pmuv8pmu_branch_XXXXX() and the implementation here
is wrapped with a new config option CONFIG_ARM64_BRBE.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm64/include/asm/perf_event.h | 9 +
drivers/perf/Kconfig | 11 +
drivers/perf/Makefile | 1 +
drivers/perf/arm_brbe.c | 549 ++++++++++++++++++++++++++++
drivers/perf/arm_brbe.h | 257 +++++++++++++
drivers/perf/arm_pmuv3.c | 4 +
6 files changed, 831 insertions(+)
create mode 100644 drivers/perf/arm_brbe.c
create mode 100644 drivers/perf/arm_brbe.h
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index ebc392ba3559..49a973571415 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -31,6 +31,14 @@ struct perf_event;
#ifdef CONFIG_PERF_EVENTS
static inline bool has_branch_stack(struct perf_event *event);
+#ifdef CONFIG_ARM64_BRBE
+void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event);
+bool armv8pmu_branch_attr_valid(struct perf_event *event);
+void armv8pmu_branch_enable(struct perf_event *event);
+void armv8pmu_branch_disable(struct perf_event *event);
+void armv8pmu_branch_probe(struct arm_pmu *arm_pmu);
+void armv8pmu_branch_reset(void);
+#else
static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
WARN_ON_ONCE(!has_branch_stack(event));
@@ -56,3 +64,4 @@ static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
static inline void armv8pmu_branch_reset(void) { }
#endif
#endif
+#endif
diff --git a/drivers/perf/Kconfig b/drivers/perf/Kconfig
index 711f82400086..7d07aa79e5b0 100644
--- a/drivers/perf/Kconfig
+++ b/drivers/perf/Kconfig
@@ -172,6 +172,17 @@ config ARM_SPE_PMU
Extension, which provides periodic sampling of operations in
the CPU pipeline and reports this via the perf AUX interface.
+config ARM64_BRBE
+ bool "Enable support for Branch Record Buffer Extension (BRBE)"
+ depends on PERF_EVENTS && ARM64 && ARM_PMU
+ default y
+ help
+ Enable perf support for Branch Record Buffer Extension (BRBE) which
+ records all branches taken in an execution path. This supports some
+ branch types and privilege based filtering. It captured additional
+ relevant information such as cycle count, misprediction and branch
+ type, branch privilege level etc.
+
config ARM_DMC620_PMU
tristate "Enable PMU support for the ARM DMC-620 memory controller"
depends on (ARM64 && ACPI) || COMPILE_TEST
diff --git a/drivers/perf/Makefile b/drivers/perf/Makefile
index dabc859540ce..29d256f2deaa 100644
--- a/drivers/perf/Makefile
+++ b/drivers/perf/Makefile
@@ -17,6 +17,7 @@ obj-$(CONFIG_RISCV_PMU_SBI) += riscv_pmu_sbi.o
obj-$(CONFIG_THUNDERX2_PMU) += thunderx2_pmu.o
obj-$(CONFIG_XGENE_PMU) += xgene_pmu.o
obj-$(CONFIG_ARM_SPE_PMU) += arm_spe_pmu.o
+obj-$(CONFIG_ARM64_BRBE) += arm_brbe.o
obj-$(CONFIG_ARM_DMC620_PMU) += arm_dmc620_pmu.o
obj-$(CONFIG_MARVELL_CN10K_TAD_PMU) += marvell_cn10k_tad_pmu.o
obj-$(CONFIG_MARVELL_CN10K_DDR_PMU) += marvell_cn10k_ddr_pmu.o
diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
new file mode 100644
index 000000000000..90bc9131223d
--- /dev/null
+++ b/drivers/perf/arm_brbe.c
@@ -0,0 +1,549 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Branch Record Buffer Extension Driver.
+ *
+ * Copyright (C) 2022 ARM Limited
+ *
+ * Author: Anshuman Khandual <anshuman.khandual@arm.com>
+ */
+#include "arm_brbe.h"
+
+static bool valid_brbe_nr(int brbe_nr)
+{
+ return brbe_nr == BRBIDR0_EL1_NUMREC_8 ||
+ brbe_nr == BRBIDR0_EL1_NUMREC_16 ||
+ brbe_nr == BRBIDR0_EL1_NUMREC_32 ||
+ brbe_nr == BRBIDR0_EL1_NUMREC_64;
+}
+
+static bool valid_brbe_cc(int brbe_cc)
+{
+ return brbe_cc == BRBIDR0_EL1_CC_20_BIT;
+}
+
+static bool valid_brbe_format(int brbe_format)
+{
+ return brbe_format == BRBIDR0_EL1_FORMAT_0;
+}
+
+static bool valid_brbe_version(int brbe_version)
+{
+ return brbe_version == ID_AA64DFR0_EL1_BRBE_IMP ||
+ brbe_version == ID_AA64DFR0_EL1_BRBE_BRBE_V1P1;
+}
+
+static void select_brbe_bank(int bank)
+{
+ u64 brbfcr;
+
+ WARN_ON(bank > BRBE_BANK_IDX_1);
+ brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ brbfcr &= ~BRBFCR_EL1_BANK_MASK;
+ brbfcr |= SYS_FIELD_PREP(BRBFCR_EL1, BANK, bank);
+ write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+ isb();
+}
+
+/*
+ * Generic perf branch filters supported on BRBE
+ *
+ * New branch filters need to be evaluated whether they could be supported on
+ * BRBE. This ensures that such branch filters would not just be accepted, to
+ * fail silently. PERF_SAMPLE_BRANCH_HV is a special case that is selectively
+ * supported only on platforms where kernel is in hyp mode.
+ */
+#define BRBE_EXCLUDE_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_ABORT_TX | \
+ PERF_SAMPLE_BRANCH_IN_TX | \
+ PERF_SAMPLE_BRANCH_NO_TX | \
+ PERF_SAMPLE_BRANCH_CALL_STACK)
+
+#define BRBE_ALLOWED_BRANCH_FILTERS (PERF_SAMPLE_BRANCH_USER | \
+ PERF_SAMPLE_BRANCH_KERNEL | \
+ PERF_SAMPLE_BRANCH_HV | \
+ PERF_SAMPLE_BRANCH_ANY | \
+ PERF_SAMPLE_BRANCH_ANY_CALL | \
+ PERF_SAMPLE_BRANCH_ANY_RETURN | \
+ PERF_SAMPLE_BRANCH_IND_CALL | \
+ PERF_SAMPLE_BRANCH_COND | \
+ PERF_SAMPLE_BRANCH_IND_JUMP | \
+ PERF_SAMPLE_BRANCH_CALL | \
+ PERF_SAMPLE_BRANCH_NO_FLAGS | \
+ PERF_SAMPLE_BRANCH_NO_CYCLES | \
+ PERF_SAMPLE_BRANCH_TYPE_SAVE | \
+ PERF_SAMPLE_BRANCH_HW_INDEX | \
+ PERF_SAMPLE_BRANCH_PRIV_SAVE)
+
+#define BRBE_PERF_BRANCH_FILTERS (BRBE_ALLOWED_BRANCH_FILTERS | \
+ BRBE_EXCLUDE_BRANCH_FILTERS)
+
+bool armv8pmu_branch_attr_valid(struct perf_event *event)
+{
+ u64 branch_type = event->attr.branch_sample_type;
+
+ /*
+ * Ensure both perf branch filter allowed and exclude
+ * masks are always in sync with the generic perf ABI.
+ */
+ BUILD_BUG_ON(BRBE_PERF_BRANCH_FILTERS != (PERF_SAMPLE_BRANCH_MAX - 1));
+
+ if (branch_type & ~BRBE_ALLOWED_BRANCH_FILTERS) {
+ pr_debug_once("requested branch filter not supported 0x%llx\n", branch_type);
+ return false;
+ }
+
+ /*
+ * If the event does not have at least one of the privilege
+ * branch filters as in PERF_SAMPLE_BRANCH_PLM_ALL, the core
+ * perf will adjust its value based on perf event's existing
+ * privilege level via attr.exclude_[user|kernel|hv].
+ *
+ * As event->attr.branch_sample_type might have been changed
+ * when the event reaches here, it is not possible to figure
+ * out whether the event originally had HV privilege request
+ * or got added via the core perf. Just report this situation
+ * once and continue ignoring if there are other instances.
+ */
+ if ((branch_type & PERF_SAMPLE_BRANCH_HV) && !is_kernel_in_hyp_mode())
+ pr_debug_once("hypervisor privilege filter not supported 0x%llx\n", branch_type);
+
+ return true;
+}
+
+static int brbe_attributes_probe(struct arm_pmu *armpmu, u32 brbe)
+{
+ u64 brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);
+ int brbe_version, brbe_format, brbe_cc, brbe_nr;
+
+ brbe_version = brbe;
+ brbe_format = brbe_get_format(brbidr);
+ brbe_cc = brbe_get_cc_bits(brbidr);
+ brbe_nr = brbe_get_numrec(brbidr);
+ armpmu->reg_brbidr = brbidr;
+
+ if (!valid_brbe_version(brbe_version) ||
+ !valid_brbe_format(brbe_format) ||
+ !valid_brbe_cc(brbe_cc) ||
+ !valid_brbe_nr(brbe_nr))
+ return -EOPNOTSUPP;
+ return 0;
+}
+
+void armv8pmu_branch_probe(struct arm_pmu *armpmu)
+{
+ u64 aa64dfr0 = read_sysreg_s(SYS_ID_AA64DFR0_EL1);
+ u32 brbe;
+
+ brbe = cpuid_feature_extract_unsigned_field(aa64dfr0, ID_AA64DFR0_EL1_BRBE_SHIFT);
+ if (!brbe)
+ return;
+
+ if (brbe_attributes_probe(armpmu, brbe))
+ return;
+
+ armpmu->has_branch_stack = 1;
+}
+
+static u64 branch_type_to_brbfcr(int branch_type)
+{
+ u64 brbfcr = 0;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY) {
+ brbfcr |= BRBFCR_EL1_BRANCH_FILTERS;
+ return brbfcr;
+ }
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL) {
+ brbfcr |= BRBFCR_EL1_INDCALL;
+ brbfcr |= BRBFCR_EL1_DIRCALL;
+ }
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN)
+ brbfcr |= BRBFCR_EL1_RTN;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_IND_CALL)
+ brbfcr |= BRBFCR_EL1_INDCALL;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_COND)
+ brbfcr |= BRBFCR_EL1_CONDDIR;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_IND_JUMP)
+ brbfcr |= BRBFCR_EL1_INDIRECT;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_CALL)
+ brbfcr |= BRBFCR_EL1_DIRCALL;
+
+ return brbfcr;
+}
+
+static u64 branch_type_to_brbcr(int branch_type)
+{
+ u64 brbcr = BRBCR_EL1_DEFAULT_TS;
+
+ /*
+ * BRBE should be paused on PMU interrupt while tracing kernel
+ * space to stop capturing further branch records. Otherwise
+ * interrupt handler branch records might get into the samples
+ * which is not desired.
+ *
+ * BRBE need not be paused on PMU interrupt while tracing only
+ * the user space, because it will automatically be inside the
+ * prohibited region. But even after PMU overflow occurs, the
+ * interrupt could still take much more cycles, before it can
+ * be taken and by that time BRBE will have been overwritten.
+ * Hence enable pause on PMU interrupt mechanism even for user
+ * only traces as well.
+ */
+ brbcr |= BRBCR_EL1_FZP;
+
+ /*
+ * When running in the hyp mode, writing into BRBCR_EL1
+ * actually writes into BRBCR_EL2 instead. Field E2BRE
+ * is also at the same position as E1BRE.
+ */
+ if (branch_type & PERF_SAMPLE_BRANCH_USER)
+ brbcr |= BRBCR_EL1_E0BRE;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_KERNEL)
+ brbcr |= BRBCR_EL1_E1BRE;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_HV) {
+ if (is_kernel_in_hyp_mode())
+ brbcr |= BRBCR_EL1_E1BRE;
+ }
+
+ if (!(branch_type & PERF_SAMPLE_BRANCH_NO_CYCLES))
+ brbcr |= BRBCR_EL1_CC;
+
+ if (!(branch_type & PERF_SAMPLE_BRANCH_NO_FLAGS))
+ brbcr |= BRBCR_EL1_MPRED;
+
+ /*
+ * The exception and exception return branches could be
+ * captured, irrespective of the perf event's privilege.
+ * If the perf event does not have enough privilege for
+ * a given exception level, then addresses which falls
+ * under that exception level will be reported as zero
+ * for the captured branch record, creating source only
+ * or target only records.
+ */
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY) {
+ brbcr |= BRBCR_EL1_EXCEPTION;
+ brbcr |= BRBCR_EL1_ERTN;
+ }
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_CALL)
+ brbcr |= BRBCR_EL1_EXCEPTION;
+
+ if (branch_type & PERF_SAMPLE_BRANCH_ANY_RETURN)
+ brbcr |= BRBCR_EL1_ERTN;
+
+ return brbcr & BRBCR_EL1_DEFAULT_CONFIG;
+}
+
+void armv8pmu_branch_enable(struct perf_event *event)
+{
+ u64 branch_type = event->attr.branch_sample_type;
+ u64 brbfcr, brbcr;
+
+ brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ brbfcr &= ~BRBFCR_EL1_DEFAULT_CONFIG;
+ brbfcr |= branch_type_to_brbfcr(branch_type);
+ write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+ isb();
+
+ brbcr = read_sysreg_s(SYS_BRBCR_EL1);
+ brbcr &= ~BRBCR_EL1_DEFAULT_CONFIG;
+ brbcr |= branch_type_to_brbcr(branch_type);
+ write_sysreg_s(brbcr, SYS_BRBCR_EL1);
+ isb();
+ armv8pmu_branch_reset();
+}
+
+void armv8pmu_branch_disable(struct perf_event *event)
+{
+ u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ u64 brbcr = read_sysreg_s(SYS_BRBCR_EL1);
+
+ brbcr &= ~(BRBCR_EL1_E0BRE | BRBCR_EL1_E1BRE);
+ brbfcr |= BRBFCR_EL1_PAUSED;
+ write_sysreg_s(brbcr, SYS_BRBCR_EL1);
+ write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+ isb();
+}
+
+static void brbe_set_perf_entry_type(struct perf_branch_entry *entry, u64 brbinf)
+{
+ int brbe_type = brbe_get_type(brbinf);
+
+ switch (brbe_type) {
+ case BRBINFx_EL1_TYPE_UNCOND_DIRECT:
+ entry->type = PERF_BR_UNCOND;
+ break;
+ case BRBINFx_EL1_TYPE_INDIRECT:
+ entry->type = PERF_BR_IND;
+ break;
+ case BRBINFx_EL1_TYPE_DIRECT_LINK:
+ entry->type = PERF_BR_CALL;
+ break;
+ case BRBINFx_EL1_TYPE_INDIRECT_LINK:
+ entry->type = PERF_BR_IND_CALL;
+ break;
+ case BRBINFx_EL1_TYPE_RET:
+ entry->type = PERF_BR_RET;
+ break;
+ case BRBINFx_EL1_TYPE_COND_DIRECT:
+ entry->type = PERF_BR_COND;
+ break;
+ case BRBINFx_EL1_TYPE_CALL:
+ entry->type = PERF_BR_CALL;
+ break;
+ case BRBINFx_EL1_TYPE_TRAP:
+ entry->type = PERF_BR_SYSCALL;
+ break;
+ case BRBINFx_EL1_TYPE_ERET:
+ entry->type = PERF_BR_ERET;
+ break;
+ case BRBINFx_EL1_TYPE_IRQ:
+ entry->type = PERF_BR_IRQ;
+ break;
+ case BRBINFx_EL1_TYPE_DEBUG_HALT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_HALT;
+ break;
+ case BRBINFx_EL1_TYPE_SERROR:
+ entry->type = PERF_BR_SERROR;
+ break;
+ case BRBINFx_EL1_TYPE_INSN_DEBUG:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_INST;
+ break;
+ case BRBINFx_EL1_TYPE_DATA_DEBUG:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_DATA;
+ break;
+ case BRBINFx_EL1_TYPE_ALIGN_FAULT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_NEW_FAULT_ALGN;
+ break;
+ case BRBINFx_EL1_TYPE_INSN_FAULT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_NEW_FAULT_INST;
+ break;
+ case BRBINFx_EL1_TYPE_DATA_FAULT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_NEW_FAULT_DATA;
+ break;
+ case BRBINFx_EL1_TYPE_FIQ:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_FIQ;
+ break;
+ case BRBINFx_EL1_TYPE_DEBUG_EXIT:
+ entry->type = PERF_BR_EXTEND_ABI;
+ entry->new_type = PERF_BR_ARM64_DEBUG_EXIT;
+ break;
+ default:
+ pr_warn_once("%d - unknown branch type captured\n", brbe_type);
+ entry->type = PERF_BR_UNKNOWN;
+ break;
+ }
+}
+
+static int brbe_get_perf_priv(u64 brbinf)
+{
+ int brbe_el = brbe_get_el(brbinf);
+
+ switch (brbe_el) {
+ case BRBINFx_EL1_EL_EL0:
+ return PERF_BR_PRIV_USER;
+ case BRBINFx_EL1_EL_EL1:
+ return PERF_BR_PRIV_KERNEL;
+ case BRBINFx_EL1_EL_EL2:
+ if (is_kernel_in_hyp_mode())
+ return PERF_BR_PRIV_KERNEL;
+ return PERF_BR_PRIV_HV;
+ default:
+ pr_warn_once("%d - unknown branch privilege captured\n", brbe_el);
+ return PERF_BR_PRIV_UNKNOWN;
+ }
+}
+
+static void capture_brbe_flags(struct perf_branch_entry *entry, struct perf_event *event,
+ u64 brbinf)
+{
+ if (branch_sample_type(event))
+ brbe_set_perf_entry_type(entry, brbinf);
+
+ if (!branch_sample_no_cycles(event))
+ entry->cycles = brbe_get_cycles(brbinf);
+
+ if (!branch_sample_no_flags(event)) {
+ /*
+ * BRBINFx_EL1.LASTFAILED indicates that a TME transaction failed (or
+ * was cancelled) prior to this record, and some number of records
+ * prior to this one, may have been generated during an attempt to
+ * execute the transaction.
+ *
+ * We will remove such entries later in process_branch_aborts().
+ */
+ entry->abort = brbe_get_lastfailed(brbinf);
+
+ /*
+ * All these information (i.e transaction state and mispredicts)
+ * are available for source only and complete branch records.
+ */
+ if (brbe_record_is_complete(brbinf) ||
+ brbe_record_is_source_only(brbinf)) {
+ entry->mispred = brbe_get_mispredict(brbinf);
+ entry->predicted = !entry->mispred;
+ entry->in_tx = brbe_get_in_tx(brbinf);
+ }
+ }
+
+ if (branch_sample_priv(event)) {
+ /*
+ * All these information (i.e branch privilege level) are
+ * available for target only and complete branch records.
+ */
+ if (brbe_record_is_complete(brbinf) ||
+ brbe_record_is_target_only(brbinf))
+ entry->priv = brbe_get_perf_priv(brbinf);
+ }
+}
+
+/*
+ * A branch record with BRBINFx_EL1.LASTFAILED set, implies that all
+ * preceding consecutive branch records, that were in a transaction
+ * (i.e their BRBINFx_EL1.TX set) have been aborted.
+ *
+ * Similarly BRBFCR_EL1.LASTFAILED set, indicate that all preceding
+ * consecutive branch records up to the last record, which were in a
+ * transaction (i.e their BRBINFx_EL1.TX set) have been aborted.
+ *
+ * --------------------------------- -------------------
+ * | 00 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success]
+ * --------------------------------- -------------------
+ * | 01 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX success]
+ * --------------------------------- -------------------
+ * | 02 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 |
+ * --------------------------------- -------------------
+ * | 03 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 04 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 05 | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 1 |
+ * --------------------------------- -------------------
+ * | .. | BRBSRC | BRBTGT | BRBINF | | TX = 0 | LF = 0 |
+ * --------------------------------- -------------------
+ * | 61 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 62 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ * | 63 | BRBSRC | BRBTGT | BRBINF | | TX = 1 | LF = 0 | [TX failed]
+ * --------------------------------- -------------------
+ *
+ * BRBFCR_EL1.LASTFAILED == 1
+ *
+ * BRBFCR_EL1.LASTFAILED fails all those consecutive, in transaction
+ * branches records near the end of the BRBE buffer.
+ *
+ * Architecture does not guarantee a non transaction (TX = 0) branch
+ * record between two different transactions. So it is possible that
+ * a subsequent lastfailed record (TX = 0, LF = 1) might erroneously
+ * mark more than required transactions as aborted.
+ */
+static void process_branch_aborts(struct pmu_hw_events *cpuc)
+{
+ u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ bool lastfailed = !!(brbfcr & BRBFCR_EL1_LASTFAILED);
+ int idx = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr) - 1;
+ struct perf_branch_entry *entry;
+
+ do {
+ entry = &cpuc->branches->branch_entries[idx];
+ if (entry->in_tx) {
+ entry->abort = lastfailed;
+ } else {
+ lastfailed = entry->abort;
+ entry->abort = false;
+ }
+ } while (idx--, idx >= 0);
+}
+
+void armv8pmu_branch_reset(void)
+{
+ asm volatile(BRB_IALL);
+ isb();
+}
+
+static bool capture_branch_entry(struct pmu_hw_events *cpuc,
+ struct perf_event *event, int idx)
+{
+ struct perf_branch_entry *entry = &cpuc->branches->branch_entries[idx];
+ u64 brbinf = get_brbinf_reg(idx);
+
+ /*
+ * There are no valid entries anymore on the buffer.
+ * Abort the branch record processing to save some
+ * cycles and also reduce the capture/process load
+ * for the user space as well.
+ */
+ if (brbe_invalid(brbinf))
+ return false;
+
+ perf_clear_branch_entry_bitfields(entry);
+ if (brbe_record_is_complete(brbinf)) {
+ entry->from = get_brbsrc_reg(idx);
+ entry->to = get_brbtgt_reg(idx);
+ } else if (brbe_record_is_source_only(brbinf)) {
+ entry->from = get_brbsrc_reg(idx);
+ entry->to = 0;
+ } else if (brbe_record_is_target_only(brbinf)) {
+ entry->from = 0;
+ entry->to = get_brbtgt_reg(idx);
+ }
+ capture_brbe_flags(entry, event, brbinf);
+ return true;
+}
+
+void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
+{
+ int nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr);
+ u64 brbfcr, brbcr;
+ int idx = 0;
+
+ brbcr = read_sysreg_s(SYS_BRBCR_EL1);
+ brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+
+ /* Ensure pause on PMU interrupt is enabled */
+ WARN_ON_ONCE(!(brbcr & BRBCR_EL1_FZP));
+
+ /* Pause the buffer */
+ write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+
+ /* Loop through bank 0 */
+ select_brbe_bank(BRBE_BANK_IDX_0);
+ while (idx < nr_hw_entries && idx < BRBE_BANK0_IDX_MAX) {
+ if (!capture_branch_entry(cpuc, event, idx))
+ goto skip_bank_1;
+ idx++;
+ }
+
+ /* Loop through bank 1 */
+ select_brbe_bank(BRBE_BANK_IDX_1);
+ while (idx < nr_hw_entries && idx < BRBE_BANK1_IDX_MAX) {
+ if (!capture_branch_entry(cpuc, event, idx))
+ break;
+ idx++;
+ }
+
+skip_bank_1:
+ cpuc->branches->branch_stack.nr = idx;
+ cpuc->branches->branch_stack.hw_idx = -1ULL;
+ process_branch_aborts(cpuc);
+
+ /* Unpause the buffer */
+ write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+ armv8pmu_branch_reset();
+}
diff --git a/drivers/perf/arm_brbe.h b/drivers/perf/arm_brbe.h
new file mode 100644
index 000000000000..a47480eec070
--- /dev/null
+++ b/drivers/perf/arm_brbe.h
@@ -0,0 +1,257 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Branch Record Buffer Extension Helpers.
+ *
+ * Copyright (C) 2022 ARM Limited
+ *
+ * Author: Anshuman Khandual <anshuman.khandual@arm.com>
+ */
+#define pr_fmt(fmt) "brbe: " fmt
+
+#include <linux/perf/arm_pmu.h>
+
+#define BRBFCR_EL1_BRANCH_FILTERS (BRBFCR_EL1_DIRECT | \
+ BRBFCR_EL1_INDIRECT | \
+ BRBFCR_EL1_RTN | \
+ BRBFCR_EL1_INDCALL | \
+ BRBFCR_EL1_DIRCALL | \
+ BRBFCR_EL1_CONDDIR)
+
+#define BRBFCR_EL1_DEFAULT_CONFIG (BRBFCR_EL1_BANK_MASK | \
+ BRBFCR_EL1_PAUSED | \
+ BRBFCR_EL1_EnI | \
+ BRBFCR_EL1_BRANCH_FILTERS)
+
+/*
+ * BRBTS_EL1 is currently not used for branch stack implementation
+ * purpose but BRBCR_EL1.TS needs to have a valid value from all
+ * available options. BRBCR_EL1_TS_VIRTUAL is selected for this.
+ */
+#define BRBCR_EL1_DEFAULT_TS FIELD_PREP(BRBCR_EL1_TS_MASK, BRBCR_EL1_TS_VIRTUAL)
+
+#define BRBCR_EL1_DEFAULT_CONFIG (BRBCR_EL1_EXCEPTION | \
+ BRBCR_EL1_ERTN | \
+ BRBCR_EL1_CC | \
+ BRBCR_EL1_MPRED | \
+ BRBCR_EL1_E1BRE | \
+ BRBCR_EL1_E0BRE | \
+ BRBCR_EL1_FZP | \
+ BRBCR_EL1_DEFAULT_TS)
+/*
+ * BRBE Instructions
+ *
+ * BRB_IALL : Invalidate the entire buffer
+ * BRB_INJ : Inject latest branch record derived from [BRBSRCINJ, BRBTGTINJ, BRBINFINJ]
+ */
+#define BRB_IALL __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 4) | (0x1f))
+#define BRB_INJ __emit_inst(0xD5000000 | sys_insn(1, 1, 7, 2, 5) | (0x1f))
+
+/*
+ * BRBE Buffer Organization
+ *
+ * BRBE buffer is arranged as multiple banks of 32 branch record
+ * entries each. An individual branch record in a given bank could
+ * be accessed, after selecting the bank in BRBFCR_EL1.BANK and
+ * accessing the registers i.e [BRBSRC, BRBTGT, BRBINF] set with
+ * indices [0..31].
+ *
+ * Bank 0
+ *
+ * --------------------------------- ------
+ * | 00 | BRBSRC | BRBTGT | BRBINF | | 00 |
+ * --------------------------------- ------
+ * | 01 | BRBSRC | BRBTGT | BRBINF | | 01 |
+ * --------------------------------- ------
+ * | .. | BRBSRC | BRBTGT | BRBINF | | .. |
+ * --------------------------------- ------
+ * | 31 | BRBSRC | BRBTGT | BRBINF | | 31 |
+ * --------------------------------- ------
+ *
+ * Bank 1
+ *
+ * --------------------------------- ------
+ * | 32 | BRBSRC | BRBTGT | BRBINF | | 00 |
+ * --------------------------------- ------
+ * | 33 | BRBSRC | BRBTGT | BRBINF | | 01 |
+ * --------------------------------- ------
+ * | .. | BRBSRC | BRBTGT | BRBINF | | .. |
+ * --------------------------------- ------
+ * | 63 | BRBSRC | BRBTGT | BRBINF | | 31 |
+ * --------------------------------- ------
+ */
+#define BRBE_BANK_MAX_ENTRIES 32
+
+#define BRBE_BANK0_IDX_MIN 0
+#define BRBE_BANK0_IDX_MAX 31
+#define BRBE_BANK1_IDX_MIN 32
+#define BRBE_BANK1_IDX_MAX 63
+
+struct brbe_hw_attr {
+ int brbe_version;
+ int brbe_cc;
+ int brbe_nr;
+ int brbe_format;
+};
+
+enum brbe_bank_idx {
+ BRBE_BANK_IDX_INVALID = -1,
+ BRBE_BANK_IDX_0,
+ BRBE_BANK_IDX_1,
+ BRBE_BANK_IDX_MAX
+};
+
+#define RETURN_READ_BRBSRCN(n) \
+ read_sysreg_s(SYS_BRBSRC##n##_EL1)
+
+#define RETURN_READ_BRBTGTN(n) \
+ read_sysreg_s(SYS_BRBTGT##n##_EL1)
+
+#define RETURN_READ_BRBINFN(n) \
+ read_sysreg_s(SYS_BRBINF##n##_EL1)
+
+#define BRBE_REGN_CASE(n, case_macro) \
+ case n: return case_macro(n); break
+
+#define BRBE_REGN_SWITCH(x, case_macro) \
+ do { \
+ switch (x) { \
+ BRBE_REGN_CASE(0, case_macro); \
+ BRBE_REGN_CASE(1, case_macro); \
+ BRBE_REGN_CASE(2, case_macro); \
+ BRBE_REGN_CASE(3, case_macro); \
+ BRBE_REGN_CASE(4, case_macro); \
+ BRBE_REGN_CASE(5, case_macro); \
+ BRBE_REGN_CASE(6, case_macro); \
+ BRBE_REGN_CASE(7, case_macro); \
+ BRBE_REGN_CASE(8, case_macro); \
+ BRBE_REGN_CASE(9, case_macro); \
+ BRBE_REGN_CASE(10, case_macro); \
+ BRBE_REGN_CASE(11, case_macro); \
+ BRBE_REGN_CASE(12, case_macro); \
+ BRBE_REGN_CASE(13, case_macro); \
+ BRBE_REGN_CASE(14, case_macro); \
+ BRBE_REGN_CASE(15, case_macro); \
+ BRBE_REGN_CASE(16, case_macro); \
+ BRBE_REGN_CASE(17, case_macro); \
+ BRBE_REGN_CASE(18, case_macro); \
+ BRBE_REGN_CASE(19, case_macro); \
+ BRBE_REGN_CASE(20, case_macro); \
+ BRBE_REGN_CASE(21, case_macro); \
+ BRBE_REGN_CASE(22, case_macro); \
+ BRBE_REGN_CASE(23, case_macro); \
+ BRBE_REGN_CASE(24, case_macro); \
+ BRBE_REGN_CASE(25, case_macro); \
+ BRBE_REGN_CASE(26, case_macro); \
+ BRBE_REGN_CASE(27, case_macro); \
+ BRBE_REGN_CASE(28, case_macro); \
+ BRBE_REGN_CASE(29, case_macro); \
+ BRBE_REGN_CASE(30, case_macro); \
+ BRBE_REGN_CASE(31, case_macro); \
+ default: \
+ pr_warn("unknown register index\n"); \
+ return -1; \
+ } \
+ } while (0)
+
+static inline int buffer_to_brbe_idx(int buffer_idx)
+{
+ return buffer_idx % BRBE_BANK_MAX_ENTRIES;
+}
+
+static inline u64 get_brbsrc_reg(int buffer_idx)
+{
+ int brbe_idx = buffer_to_brbe_idx(buffer_idx);
+
+ BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBSRCN);
+}
+
+static inline u64 get_brbtgt_reg(int buffer_idx)
+{
+ int brbe_idx = buffer_to_brbe_idx(buffer_idx);
+
+ BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBTGTN);
+}
+
+static inline u64 get_brbinf_reg(int buffer_idx)
+{
+ int brbe_idx = buffer_to_brbe_idx(buffer_idx);
+
+ BRBE_REGN_SWITCH(brbe_idx, RETURN_READ_BRBINFN);
+}
+
+static inline u64 brbe_record_valid(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_VALID_MASK, brbinf);
+}
+
+static inline bool brbe_invalid(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_NONE;
+}
+
+static inline bool brbe_record_is_complete(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_FULL;
+}
+
+static inline bool brbe_record_is_source_only(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_SOURCE;
+}
+
+static inline bool brbe_record_is_target_only(u64 brbinf)
+{
+ return brbe_record_valid(brbinf) == BRBINFx_EL1_VALID_TARGET;
+}
+
+static inline int brbe_get_in_tx(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_T_MASK, brbinf);
+}
+
+static inline int brbe_get_mispredict(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_MPRED_MASK, brbinf);
+}
+
+static inline int brbe_get_lastfailed(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_LASTFAILED_MASK, brbinf);
+}
+
+static inline int brbe_get_cycles(u64 brbinf)
+{
+ /*
+ * Captured cycle count is unknown and hence
+ * should not be passed on to the user space.
+ */
+ if (brbinf & BRBINFx_EL1_CCU)
+ return 0;
+
+ return FIELD_GET(BRBINFx_EL1_CC_MASK, brbinf);
+}
+
+static inline int brbe_get_type(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_TYPE_MASK, brbinf);
+}
+
+static inline int brbe_get_el(u64 brbinf)
+{
+ return FIELD_GET(BRBINFx_EL1_EL_MASK, brbinf);
+}
+
+static inline int brbe_get_numrec(u64 brbidr)
+{
+ return FIELD_GET(BRBIDR0_EL1_NUMREC_MASK, brbidr);
+}
+
+static inline int brbe_get_format(u64 brbidr)
+{
+ return FIELD_GET(BRBIDR0_EL1_FORMAT_MASK, brbidr);
+}
+
+static inline int brbe_get_cc_bits(u64 brbidr)
+{
+ return FIELD_GET(BRBIDR0_EL1_CC_MASK, brbidr);
+}
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 54c80f393eb6..02907371523a 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -797,6 +797,10 @@ static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
if (!armpmu_event_set_period(event))
continue;
+ /*
+ * PMU IRQ should remain asserted until all branch records
+ * are captured and processed into struct perf_sample_data.
+ */
if (has_branch_stack(event) && !WARN_ON(!cpuc->branches)) {
armv8pmu_branch_read(cpuc, event);
perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack);
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (5 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 06/10] arm64/perf: Enable branch stack events via FEAT_BRBE Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-16 2:38 ` kernel test robot
2023-06-15 13:32 ` [PATCH V12 08/10] arm64/perf: Add struct brbe_regset helper functions Anshuman Khandual
` (3 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
Short running processes i.e those getting very small cpu run time each time
when they get scheduled on, might not accumulate much branch records before
a PMU IRQ really happens. This increases possibility, for such processes to
loose much of its branch records, while being scheduled in-out of various
cpus on the system.
There is a need to save all occurred branch records during the cpu run time
while the process gets scheduled out. It requires an event context specific
buffer for such storage.
This adds PERF_ATTACH_TASK_DATA flag unconditionally, for all branch stack
sampling events, which would allocate task_ctx_data during its event init.
This also creates a platform specific task_ctx_data kmem cache which will
serve such allocation requests.
This adds a new structure 'arm64_perf_task_context' which encapsulates brbe
register set for maximum possible BRBE entries on the HW along with a valid
records tracking element.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm64/include/asm/perf_event.h | 4 ++++
drivers/perf/arm_brbe.c | 21 +++++++++++++++++++++
drivers/perf/arm_brbe.h | 13 +++++++++++++
drivers/perf/arm_pmuv3.c | 16 +++++++++++++---
4 files changed, 51 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index 49a973571415..b0c12a5882df 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -38,6 +38,8 @@ void armv8pmu_branch_enable(struct perf_event *event);
void armv8pmu_branch_disable(struct perf_event *event);
void armv8pmu_branch_probe(struct arm_pmu *arm_pmu);
void armv8pmu_branch_reset(void);
+int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu);
+void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu);
#else
static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
@@ -62,6 +64,8 @@ static inline void armv8pmu_branch_disable(struct perf_event *event)
static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
static inline void armv8pmu_branch_reset(void) { }
+static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; }
+static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { }
#endif
#endif
#endif
diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
index 90bc9131223d..4729cb49282b 100644
--- a/drivers/perf/arm_brbe.c
+++ b/drivers/perf/arm_brbe.c
@@ -109,6 +109,27 @@ bool armv8pmu_branch_attr_valid(struct perf_event *event)
return true;
}
+static inline struct kmem_cache *
+arm64_create_brbe_task_ctx_kmem_cache(size_t size)
+{
+ return kmem_cache_create("arm64_brbe_task_ctx", size, 0, 0, NULL);
+}
+
+int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu)
+{
+ size_t size = sizeof(struct arm64_perf_task_context);
+
+ arm_pmu->pmu.task_ctx_cache = arm64_create_brbe_task_ctx_kmem_cache(size);
+ if (!arm_pmu->pmu.task_ctx_cache)
+ return -ENOMEM;
+ return 0;
+}
+
+void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu)
+{
+ kmem_cache_destroy(arm_pmu->pmu.task_ctx_cache);
+}
+
static int brbe_attributes_probe(struct arm_pmu *armpmu, u32 brbe)
{
u64 brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);
diff --git a/drivers/perf/arm_brbe.h b/drivers/perf/arm_brbe.h
index a47480eec070..4a72c2ba7140 100644
--- a/drivers/perf/arm_brbe.h
+++ b/drivers/perf/arm_brbe.h
@@ -80,12 +80,25 @@
* --------------------------------- ------
*/
#define BRBE_BANK_MAX_ENTRIES 32
+#define BRBE_MAX_BANK 2
+#define BRBE_MAX_ENTRIES (BRBE_BANK_MAX_ENTRIES * BRBE_MAX_BANK)
#define BRBE_BANK0_IDX_MIN 0
#define BRBE_BANK0_IDX_MAX 31
#define BRBE_BANK1_IDX_MIN 32
#define BRBE_BANK1_IDX_MAX 63
+struct brbe_regset {
+ unsigned long brbsrc;
+ unsigned long brbtgt;
+ unsigned long brbinf;
+};
+
+struct arm64_perf_task_context {
+ struct brbe_regset store[BRBE_MAX_ENTRIES];
+ int nr_brbe_records;
+};
+
struct brbe_hw_attr {
int brbe_version;
int brbe_cc;
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 02907371523a..3c079051a63a 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -1022,8 +1022,12 @@ static int __armv8_pmuv3_map_event(struct perf_event *event,
hw_event_id = __armv8_pmuv3_map_event_id(armpmu, event);
- if (has_branch_stack(event) && !armv8pmu_branch_attr_valid(event))
- return -EOPNOTSUPP;
+ if (has_branch_stack(event)) {
+ if (!armv8pmu_branch_attr_valid(event))
+ return -EOPNOTSUPP;
+
+ event->attach_state |= PERF_ATTACH_TASK_DATA;
+ }
/*
* CHAIN events only work when paired with an adjacent counter, and it
@@ -1188,9 +1192,15 @@ static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
return -ENODEV;
if (cpu_pmu->has_branch_stack) {
- ret = branch_records_alloc(cpu_pmu);
+ ret = armv8pmu_task_ctx_cache_alloc(cpu_pmu);
if (ret)
return ret;
+
+ ret = branch_records_alloc(cpu_pmu);
+ if (ret) {
+ armv8pmu_task_ctx_cache_free(cpu_pmu);
+ return ret;
+ }
}
return 0;
}
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 08/10] arm64/perf: Add struct brbe_regset helper functions
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (6 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack() Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-21 13:15 ` Mark Rutland
2023-06-15 13:32 ` [PATCH V12 09/10] arm64/perf: Implement branch records save on task sched out Anshuman Khandual
` (2 subsequent siblings)
10 siblings, 1 reply; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
The primary abstraction level for fetching branch records from BRBE HW has
been changed as 'struct brbe_regset', which contains storage for all three
BRBE registers i.e BRBSRC, BRBTGT, BRBINF. Whether branch record processing
happens in the task sched out path, or in the PMU IRQ handling path, these
registers need to be extracted from the HW. Afterwards both live and stored
sets need to be stitched together to create final branch records set. This
adds required helper functions for such operations.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
drivers/perf/arm_brbe.c | 127 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 127 insertions(+)
diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
index 4729cb49282b..f6693699fade 100644
--- a/drivers/perf/arm_brbe.c
+++ b/drivers/perf/arm_brbe.c
@@ -44,6 +44,133 @@ static void select_brbe_bank(int bank)
isb();
}
+static bool __read_brbe_regset(struct brbe_regset *entry, int idx)
+{
+ entry->brbinf = get_brbinf_reg(idx);
+
+ /*
+ * There are no valid entries anymore on the buffer.
+ * Abort the branch record processing to save some
+ * cycles and also reduce the capture/process load
+ * for the user space as well.
+ */
+ if (brbe_invalid(entry->brbinf))
+ return false;
+
+ entry->brbsrc = get_brbsrc_reg(idx);
+ entry->brbtgt = get_brbtgt_reg(idx);
+ return true;
+}
+
+/*
+ * This scans over BRBE register banks and captures individual branch records
+ * [BRBSRC, BRBTGT, BRBINF] into a pre-allocated 'struct brbe_regset' buffer,
+ * until an invalid one gets encountered. The caller for this function needs
+ * to ensure BRBE is an appropriate state before the records can be captured.
+ */
+static int capture_brbe_regset(int nr_hw_entries, struct brbe_regset *buf)
+{
+ int idx = 0;
+
+ select_brbe_bank(BRBE_BANK_IDX_0);
+ while (idx < nr_hw_entries && idx < BRBE_BANK0_IDX_MAX) {
+ if (!__read_brbe_regset(&buf[idx], idx))
+ return idx;
+ idx++;
+ }
+
+ select_brbe_bank(BRBE_BANK_IDX_1);
+ while (idx < nr_hw_entries && idx < BRBE_BANK1_IDX_MAX) {
+ if (!__read_brbe_regset(&buf[idx], idx))
+ return idx;
+ idx++;
+ }
+ return idx;
+}
+
+/*
+ * This function concatenates branch records from stored and live buffer
+ * up to maximum nr_max records and the stored buffer holds the resultant
+ * buffer. The concatenated buffer contains all the branch records from
+ * the live buffer but might contain some from stored buffer considering
+ * the maximum combined length does not exceed 'nr_max'.
+ *
+ * Stored records Live records
+ * ------------------------------------------------^
+ * | S0 | L0 | Newest |
+ * --------------------------------- |
+ * | S1 | L1 | |
+ * --------------------------------- |
+ * | S2 | L2 | |
+ * --------------------------------- |
+ * | S3 | L3 | |
+ * --------------------------------- |
+ * | S4 | L4 | nr_max
+ * --------------------------------- |
+ * | | L5 | |
+ * --------------------------------- |
+ * | | L6 | |
+ * --------------------------------- |
+ * | | L7 | |
+ * --------------------------------- |
+ * | | | |
+ * --------------------------------- |
+ * | | | Oldest |
+ * ------------------------------------------------V
+ *
+ *
+ * S0 is the newest in the stored records, where as L7 is the oldest in
+ * the live records. Unless the live buffer is detected as being full
+ * thus potentially dropping off some older records, L7 and S0 records
+ * are contiguous in time for a user task context. The stitched buffer
+ * here represents maximum possible branch records, contiguous in time.
+ *
+ * Stored records Live records
+ * ------------------------------------------------^
+ * | L0 | L0 | Newest |
+ * --------------------------------- |
+ * | L0 | L1 | |
+ * --------------------------------- |
+ * | L2 | L2 | |
+ * --------------------------------- |
+ * | L3 | L3 | |
+ * --------------------------------- |
+ * | L4 | L4 | nr_max
+ * --------------------------------- |
+ * | L5 | L5 | |
+ * --------------------------------- |
+ * | L6 | L6 | |
+ * --------------------------------- |
+ * | L7 | L7 | |
+ * --------------------------------- |
+ * | S0 | | |
+ * --------------------------------- |
+ * | S1 | | Oldest |
+ * ------------------------------------------------V
+ * | S2 | <----|
+ * ----------------- |
+ * | S3 | <----| Dropped off after nr_max
+ * ----------------- |
+ * | S4 | <----|
+ * -----------------
+ */
+static int stitch_stored_live_entries(struct brbe_regset *stored,
+ struct brbe_regset *live,
+ int nr_stored, int nr_live,
+ int nr_max)
+{
+ int nr_move = min(nr_stored, nr_max - nr_live);
+
+ /* Move the tail of the buffer to make room for the new entries */
+ memmove(&stored[nr_live], &stored[0], nr_move * sizeof(*stored));
+
+ /* Copy the new entries into the head of the buffer */
+ memcpy(&stored[0], &live[0], nr_live * sizeof(*stored));
+
+ /* Return the number of entries in the stitched buffer */
+ return min(nr_live + nr_stored, nr_max);
+}
+
/*
* Generic perf branch filters supported on BRBE
*
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 09/10] arm64/perf: Implement branch records save on task sched out
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (7 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 08/10] arm64/perf: Add struct brbe_regset helper functions Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-21 13:16 ` Mark Rutland
2023-06-15 13:32 ` [PATCH V12 10/10] arm64/perf: Implement branch records save on PMU IRQ Anshuman Khandual
2023-06-21 13:23 ` [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Mark Rutland
10 siblings, 1 reply; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This modifies current armv8pmu_sched_task(), to implement a branch records
save mechanism via armv8pmu_branch_save() when a task scheds out of a cpu.
BRBE is paused and disabled for all exception levels before branch records
get captured, which then get concatenated with all existing stored records
present in the task context maintaining the contiguity. Although the final
length of the concatenated buffer does not exceed implemented BRBE length.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
arch/arm64/include/asm/perf_event.h | 2 ++
drivers/perf/arm_brbe.c | 30 +++++++++++++++++++++++++++++
drivers/perf/arm_pmuv3.c | 14 ++++++++++++--
3 files changed, 44 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
index b0c12a5882df..36e7dfb466a6 100644
--- a/arch/arm64/include/asm/perf_event.h
+++ b/arch/arm64/include/asm/perf_event.h
@@ -40,6 +40,7 @@ void armv8pmu_branch_probe(struct arm_pmu *arm_pmu);
void armv8pmu_branch_reset(void);
int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu);
void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu);
+void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx);
#else
static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
@@ -66,6 +67,7 @@ static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
static inline void armv8pmu_branch_reset(void) { }
static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; }
static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { }
+static inline void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx) { }
#endif
#endif
#endif
diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
index f6693699fade..3bb17ced2b1d 100644
--- a/drivers/perf/arm_brbe.c
+++ b/drivers/perf/arm_brbe.c
@@ -171,6 +171,36 @@ static int stitch_stored_live_entries(struct brbe_regset *stored,
return min(nr_live + nr_stored, nr_max);
}
+static int brbe_branch_save(int nr_hw_entries, struct brbe_regset *live)
+{
+ u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ int nr_live;
+
+ write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+
+ nr_live = capture_brbe_regset(nr_hw_entries, live);
+
+ write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+
+ return nr_live;
+}
+
+void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx)
+{
+ struct arm64_perf_task_context *task_ctx = ctx;
+ struct brbe_regset live[BRBE_MAX_ENTRIES];
+ int nr_live, nr_store, nr_hw_entries;
+
+ nr_hw_entries = brbe_get_numrec(arm_pmu->reg_brbidr);
+ nr_live = brbe_branch_save(nr_hw_entries, live);
+ nr_store = task_ctx->nr_brbe_records;
+ nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
+ nr_live, nr_hw_entries);
+ task_ctx->nr_brbe_records = nr_store;
+}
+
/*
* Generic perf branch filters supported on BRBE
*
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 3c079051a63a..53f404618891 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -907,9 +907,19 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
{
struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
+ void *task_ctx = pmu_ctx ? pmu_ctx->task_ctx_data : NULL;
- if (sched_in && armpmu->has_branch_stack)
- armv8pmu_branch_reset();
+ if (armpmu->has_branch_stack) {
+ /* Save branch records in task_ctx on sched out */
+ if (task_ctx && !sched_in) {
+ armv8pmu_branch_save(armpmu, task_ctx);
+ return;
+ }
+
+ /* Reset branch records on sched in */
+ if (sched_in)
+ armv8pmu_branch_reset();
+ }
}
/*
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* [PATCH V12 10/10] arm64/perf: Implement branch records save on PMU IRQ
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (8 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 09/10] arm64/perf: Implement branch records save on task sched out Anshuman Khandual
@ 2023-06-15 13:32 ` Anshuman Khandual
2023-06-21 13:20 ` Mark Rutland
2023-06-21 13:23 ` [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Mark Rutland
10 siblings, 1 reply; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-15 13:32 UTC (permalink / raw)
To: linux-arm-kernel, linux-kernel, will, catalin.marinas,
mark.rutland
Cc: Anshuman Khandual, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
This modifies armv8pmu_branch_read() to concatenate live entries along with
task context stored entries and then process the resultant buffer to create
perf branch entry array for perf_sample_data. It follows the same principle
like task sched out.
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Tested-by: James Clark <james.clark@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
drivers/perf/arm_brbe.c | 69 +++++++++++++++++++----------------------
1 file changed, 32 insertions(+), 37 deletions(-)
diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
index 3bb17ced2b1d..d28067c896e2 100644
--- a/drivers/perf/arm_brbe.c
+++ b/drivers/perf/arm_brbe.c
@@ -653,41 +653,44 @@ void armv8pmu_branch_reset(void)
isb();
}
-static bool capture_branch_entry(struct pmu_hw_events *cpuc,
- struct perf_event *event, int idx)
+static void brbe_regset_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event,
+ struct brbe_regset *regset, int idx)
{
struct perf_branch_entry *entry = &cpuc->branches->branch_entries[idx];
- u64 brbinf = get_brbinf_reg(idx);
-
- /*
- * There are no valid entries anymore on the buffer.
- * Abort the branch record processing to save some
- * cycles and also reduce the capture/process load
- * for the user space as well.
- */
- if (brbe_invalid(brbinf))
- return false;
+ u64 brbinf = regset[idx].brbinf;
perf_clear_branch_entry_bitfields(entry);
if (brbe_record_is_complete(brbinf)) {
- entry->from = get_brbsrc_reg(idx);
- entry->to = get_brbtgt_reg(idx);
+ entry->from = regset[idx].brbsrc;
+ entry->to = regset[idx].brbtgt;
} else if (brbe_record_is_source_only(brbinf)) {
- entry->from = get_brbsrc_reg(idx);
+ entry->from = regset[idx].brbsrc;
entry->to = 0;
} else if (brbe_record_is_target_only(brbinf)) {
entry->from = 0;
- entry->to = get_brbtgt_reg(idx);
+ entry->to = regset[idx].brbtgt;
}
capture_brbe_flags(entry, event, brbinf);
- return true;
+}
+
+static void process_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event,
+ struct brbe_regset *regset, int nr_regset)
+{
+ int idx;
+
+ for (idx = 0; idx < nr_regset; idx++)
+ brbe_regset_branch_entries(cpuc, event, regset, idx);
+
+ cpuc->branches->branch_stack.nr = nr_regset;
+ cpuc->branches->branch_stack.hw_idx = -1ULL;
}
void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
- int nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr);
+ struct arm64_perf_task_context *task_ctx = event->pmu_ctx->task_ctx_data;
+ struct brbe_regset live[BRBE_MAX_ENTRIES];
+ int nr_live, nr_store, nr_hw_entries;
u64 brbfcr, brbcr;
- int idx = 0;
brbcr = read_sysreg_s(SYS_BRBCR_EL1);
brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
@@ -699,25 +702,17 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
isb();
- /* Loop through bank 0 */
- select_brbe_bank(BRBE_BANK_IDX_0);
- while (idx < nr_hw_entries && idx < BRBE_BANK0_IDX_MAX) {
- if (!capture_branch_entry(cpuc, event, idx))
- goto skip_bank_1;
- idx++;
- }
-
- /* Loop through bank 1 */
- select_brbe_bank(BRBE_BANK_IDX_1);
- while (idx < nr_hw_entries && idx < BRBE_BANK1_IDX_MAX) {
- if (!capture_branch_entry(cpuc, event, idx))
- break;
- idx++;
+ nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr);
+ nr_live = capture_brbe_regset(nr_hw_entries, live);
+ if (event->ctx->task) {
+ nr_store = task_ctx->nr_brbe_records;
+ nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
+ nr_live, nr_hw_entries);
+ process_branch_entries(cpuc, event, task_ctx->store, nr_store);
+ task_ctx->nr_brbe_records = 0;
+ } else {
+ process_branch_entries(cpuc, event, live, nr_live);
}
-
-skip_bank_1:
- cpuc->branches->branch_stack.nr = idx;
- cpuc->branches->branch_stack.hw_idx = -1ULL;
process_branch_aborts(cpuc);
/* Unpause the buffer */
--
2.25.1
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-15 13:32 ` [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU Anshuman Khandual
@ 2023-06-15 23:42 ` kernel test robot
2023-06-16 1:27 ` Anshuman Khandual
2023-06-16 3:41 ` kernel test robot
1 sibling, 1 reply; 25+ messages in thread
From: kernel test robot @ 2023-06-15 23:42 UTC (permalink / raw)
To: Anshuman Khandual, linux-arm-kernel, linux-kernel, will,
catalin.marinas, mark.rutland
Cc: llvm, oe-kbuild-all, Anshuman Khandual, Mark Brown, James Clark,
Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra,
Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users
Hi Anshuman,
kernel test robot noticed the following build errors:
[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link: https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual%40arm.com
patch subject: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306160706.Uei5XDoi-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce (this is a W=1 build):
mkdir -p ~/bin
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm cross compiling tool for clang build
# apt-get install binutils-arm-linux-gnueabi
git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
git fetch arm64 for-next/core
git checkout arm64/for-next/core
b4 shazam https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual@arm.com
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202306160706.Uei5XDoi-lkp@intel.com/
All errors (new ones prefixed by >>):
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:147:44: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
147 | [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:133:44: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD'
133 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:148:45: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
148 | [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:134:44: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR'
134 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:149:42: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
149 | [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:131:50: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD'
131 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:150:43: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
150 | [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:132:50: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR'
132 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:152:44: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
152 | [C(NODE)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:148:46: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD'
148 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:153:45: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
153 | [C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:149:46: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR'
149 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
>> drivers/perf/arm_pmuv3.c:714:3: error: call to undeclared function 'armv8pmu_branch_enable'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
714 | armv8pmu_branch_enable(event);
| ^
>> drivers/perf/arm_pmuv3.c:720:3: error: call to undeclared function 'armv8pmu_branch_disable'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
720 | armv8pmu_branch_disable(event);
| ^
>> drivers/perf/arm_pmuv3.c:801:4: error: call to undeclared function 'armv8pmu_branch_read'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
801 | armv8pmu_branch_read(cpuc, event);
| ^
drivers/perf/arm_pmuv3.c:801:4: note: did you mean 'armv8pmu_pmcr_read'?
drivers/perf/arm_pmuv3.c:430:19: note: 'armv8pmu_pmcr_read' declared here
430 | static inline u32 armv8pmu_pmcr_read(void)
| ^
>> drivers/perf/arm_pmuv3.c:908:3: error: call to undeclared function 'armv8pmu_branch_reset'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
908 | armv8pmu_branch_reset();
| ^
drivers/perf/arm_pmuv3.c:983:3: error: call to undeclared function 'armv8pmu_branch_reset'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
983 | armv8pmu_branch_reset();
| ^
>> drivers/perf/arm_pmuv3.c:1021:34: error: call to undeclared function 'armv8pmu_branch_attr_valid'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1021 | if (has_branch_stack(event) && !armv8pmu_branch_attr_valid(event))
| ^
>> drivers/perf/arm_pmuv3.c:1140:2: error: call to undeclared function 'armv8pmu_branch_probe'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1140 | armv8pmu_branch_probe(cpu_pmu);
| ^
55 warnings and 7 errors generated.
vim +/armv8pmu_branch_enable +714 drivers/perf/arm_pmuv3.c
701
702 static void armv8pmu_enable_event(struct perf_event *event)
703 {
704 /*
705 * Enable counter and interrupt, and set the counter to count
706 * the event that we're interested in.
707 */
708 armv8pmu_disable_event_counter(event);
709 armv8pmu_write_event_type(event);
710 armv8pmu_enable_event_irq(event);
711 armv8pmu_enable_event_counter(event);
712
713 if (has_branch_stack(event))
> 714 armv8pmu_branch_enable(event);
715 }
716
717 static void armv8pmu_disable_event(struct perf_event *event)
718 {
719 if (has_branch_stack(event))
> 720 armv8pmu_branch_disable(event);
721
722 armv8pmu_disable_event_counter(event);
723 armv8pmu_disable_event_irq(event);
724 }
725
726 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
727 {
728 struct perf_event_context *ctx;
729 int nr_user = 0;
730
731 ctx = perf_cpu_task_ctx();
732 if (ctx)
733 nr_user = ctx->nr_user;
734
735 if (sysctl_perf_user_access && nr_user)
736 armv8pmu_enable_user_access(cpu_pmu);
737 else
738 armv8pmu_disable_user_access();
739
740 /* Enable all counters */
741 armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
742 }
743
744 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
745 {
746 /* Disable all counters */
747 armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
748 }
749
750 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
751 {
752 u32 pmovsr;
753 struct perf_sample_data data;
754 struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
755 struct pt_regs *regs;
756 int idx;
757
758 /*
759 * Get and reset the IRQ flags
760 */
761 pmovsr = armv8pmu_getreset_flags();
762
763 /*
764 * Did an overflow occur?
765 */
766 if (!armv8pmu_has_overflowed(pmovsr))
767 return IRQ_NONE;
768
769 /*
770 * Handle the counter(s) overflow(s)
771 */
772 regs = get_irq_regs();
773
774 /*
775 * Stop the PMU while processing the counter overflows
776 * to prevent skews in group events.
777 */
778 armv8pmu_stop(cpu_pmu);
779 for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
780 struct perf_event *event = cpuc->events[idx];
781 struct hw_perf_event *hwc;
782
783 /* Ignore if we don't have an event. */
784 if (!event)
785 continue;
786
787 /*
788 * We have a single interrupt for all counters. Check that
789 * each counter has overflowed before we process it.
790 */
791 if (!armv8pmu_counter_has_overflowed(pmovsr, idx))
792 continue;
793
794 hwc = &event->hw;
795 armpmu_event_update(event);
796 perf_sample_data_init(&data, 0, hwc->last_period);
797 if (!armpmu_event_set_period(event))
798 continue;
799
800 if (has_branch_stack(event) && !WARN_ON(!cpuc->branches)) {
> 801 armv8pmu_branch_read(cpuc, event);
802 perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack);
803 }
804
805 /*
806 * Perf event overflow will queue the processing of the event as
807 * an irq_work which will be taken care of in the handling of
808 * IPI_IRQ_WORK.
809 */
810 if (perf_event_overflow(event, &data, regs))
811 cpu_pmu->disable(event);
812 }
813 armv8pmu_start(cpu_pmu);
814
815 return IRQ_HANDLED;
816 }
817
818 static int armv8pmu_get_single_idx(struct pmu_hw_events *cpuc,
819 struct arm_pmu *cpu_pmu)
820 {
821 int idx;
822
823 for (idx = ARMV8_IDX_COUNTER0; idx < cpu_pmu->num_events; idx++) {
824 if (!test_and_set_bit(idx, cpuc->used_mask))
825 return idx;
826 }
827 return -EAGAIN;
828 }
829
830 static int armv8pmu_get_chain_idx(struct pmu_hw_events *cpuc,
831 struct arm_pmu *cpu_pmu)
832 {
833 int idx;
834
835 /*
836 * Chaining requires two consecutive event counters, where
837 * the lower idx must be even.
838 */
839 for (idx = ARMV8_IDX_COUNTER0 + 1; idx < cpu_pmu->num_events; idx += 2) {
840 if (!test_and_set_bit(idx, cpuc->used_mask)) {
841 /* Check if the preceding even counter is available */
842 if (!test_and_set_bit(idx - 1, cpuc->used_mask))
843 return idx;
844 /* Release the Odd counter */
845 clear_bit(idx, cpuc->used_mask);
846 }
847 }
848 return -EAGAIN;
849 }
850
851 static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
852 struct perf_event *event)
853 {
854 struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
855 struct hw_perf_event *hwc = &event->hw;
856 unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
857
858 /* Always prefer to place a cycle counter into the cycle counter. */
859 if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
860 if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
861 return ARMV8_IDX_CYCLE_COUNTER;
862 else if (armv8pmu_event_is_64bit(event) &&
863 armv8pmu_event_want_user_access(event) &&
864 !armv8pmu_has_long_event(cpu_pmu))
865 return -EAGAIN;
866 }
867
868 /*
869 * Otherwise use events counters
870 */
871 if (armv8pmu_event_is_chained(event))
872 return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
873 else
874 return armv8pmu_get_single_idx(cpuc, cpu_pmu);
875 }
876
877 static void armv8pmu_clear_event_idx(struct pmu_hw_events *cpuc,
878 struct perf_event *event)
879 {
880 int idx = event->hw.idx;
881
882 clear_bit(idx, cpuc->used_mask);
883 if (armv8pmu_event_is_chained(event))
884 clear_bit(idx - 1, cpuc->used_mask);
885 }
886
887 static int armv8pmu_user_event_idx(struct perf_event *event)
888 {
889 if (!sysctl_perf_user_access || !armv8pmu_event_has_user_read(event))
890 return 0;
891
892 /*
893 * We remap the cycle counter index to 32 to
894 * match the offset applied to the rest of
895 * the counter indices.
896 */
897 if (event->hw.idx == ARMV8_IDX_CYCLE_COUNTER)
898 return ARMV8_IDX_CYCLE_COUNTER_USER;
899
900 return event->hw.idx;
901 }
902
903 static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
904 {
905 struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
906
907 if (sched_in && armpmu->has_branch_stack)
> 908 armv8pmu_branch_reset();
909 }
910
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-15 23:42 ` kernel test robot
@ 2023-06-16 1:27 ` Anshuman Khandual
2023-06-16 9:21 ` Catalin Marinas
0 siblings, 1 reply; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-16 1:27 UTC (permalink / raw)
To: kernel test robot, linux-arm-kernel, linux-kernel, will,
catalin.marinas, mark.rutland
Cc: llvm, oe-kbuild-all, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
On 6/16/23 05:12, kernel test robot wrote:
> Hi Anshuman,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on arm64/for-next/core]
> [also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
> base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> patch link: https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual%40arm.com
> patch subject: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
> config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306160706.Uei5XDoi-lkp@intel.com/config)
> compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
> reproduce (this is a W=1 build):
> mkdir -p ~/bin
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # install arm cross compiling tool for clang build
> # apt-get install binutils-arm-linux-gnueabi
> git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
> git fetch arm64 for-next/core
> git checkout arm64/for-next/core
> b4 shazam https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual@arm.com
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
I am unable to reproduce this on mainline 6.4-rc6 via default cross compiler
on a W=1 build. Looking at all other problems reported on the file, it seems
something is not right here. Reported build problems around these callbacks,
i.e armv8pmu_branch_XXXX() do not make sense as they are available via config
CONFIG_PERF_EVENTS which is also enabled along with CONFIG_ARM_PMUV3 in this
test config.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
2023-06-15 13:32 ` [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack() Anshuman Khandual
@ 2023-06-16 2:38 ` kernel test robot
2023-06-19 6:28 ` Anshuman Khandual
0 siblings, 1 reply; 25+ messages in thread
From: kernel test robot @ 2023-06-16 2:38 UTC (permalink / raw)
To: Anshuman Khandual, linux-arm-kernel, linux-kernel, will,
catalin.marinas, mark.rutland
Cc: llvm, oe-kbuild-all, Anshuman Khandual, Mark Brown, James Clark,
Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra,
Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users
Hi Anshuman,
kernel test robot noticed the following build errors:
[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link: https://lore.kernel.org/r/20230615133239.442736-8-anshuman.khandual%40arm.com
patch subject: [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306161016.jJeqG6mc-lkp@intel.com/config)
compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
reproduce (this is a W=1 build):
mkdir -p ~/bin
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# install arm cross compiling tool for clang build
# apt-get install binutils-arm-linux-gnueabi
git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
git fetch arm64 for-next/core
git checkout arm64/for-next/core
b4 shazam https://lore.kernel.org/r/20230615133239.442736-8-anshuman.khandual@arm.com
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202306161016.jJeqG6mc-lkp@intel.com/
All errors (new ones prefixed by >>):
drivers/perf/arm_pmuv3.c:148:45: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
148 | [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:134:44: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR'
134 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:149:42: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
149 | [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:131:50: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD'
131 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:150:43: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
150 | [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:132:50: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR'
132 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:152:44: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
152 | [C(NODE)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:148:46: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD'
148 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:153:45: warning: initializer overrides prior initialization of this subobject [-Winitializer-overrides]
153 | [C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:149:46: note: expanded from macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR'
149 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
| ^~~~~~
drivers/perf/arm_pmuv3.c:140:2: note: previous initialization is here
140 | PERF_CACHE_MAP_ALL_UNSUPPORTED,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:43:31: note: expanded from macro 'PERF_CACHE_MAP_ALL_UNSUPPORTED'
43 | [0 ... C(RESULT_MAX) - 1] = CACHE_OP_UNSUPPORTED, \
| ^~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmu.h:35:31: note: expanded from macro 'CACHE_OP_UNSUPPORTED'
35 | #define CACHE_OP_UNSUPPORTED 0xFFFF
| ^~~~~~
drivers/perf/arm_pmuv3.c:714:3: error: call to undeclared function 'armv8pmu_branch_enable'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
714 | armv8pmu_branch_enable(event);
| ^
drivers/perf/arm_pmuv3.c:720:3: error: call to undeclared function 'armv8pmu_branch_disable'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
720 | armv8pmu_branch_disable(event);
| ^
drivers/perf/arm_pmuv3.c:805:4: error: call to undeclared function 'armv8pmu_branch_read'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
805 | armv8pmu_branch_read(cpuc, event);
| ^
drivers/perf/arm_pmuv3.c:805:4: note: did you mean 'armv8pmu_pmcr_read'?
drivers/perf/arm_pmuv3.c:430:19: note: 'armv8pmu_pmcr_read' declared here
430 | static inline u32 armv8pmu_pmcr_read(void)
| ^
drivers/perf/arm_pmuv3.c:912:3: error: call to undeclared function 'armv8pmu_branch_reset'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
912 | armv8pmu_branch_reset();
| ^
drivers/perf/arm_pmuv3.c:987:3: error: call to undeclared function 'armv8pmu_branch_reset'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
987 | armv8pmu_branch_reset();
| ^
drivers/perf/arm_pmuv3.c:1026:8: error: call to undeclared function 'armv8pmu_branch_attr_valid'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1026 | if (!armv8pmu_branch_attr_valid(event))
| ^
drivers/perf/arm_pmuv3.c:1148:2: error: call to undeclared function 'armv8pmu_branch_probe'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1148 | armv8pmu_branch_probe(cpu_pmu);
| ^
>> drivers/perf/arm_pmuv3.c:1195:9: error: call to undeclared function 'armv8pmu_task_ctx_cache_alloc'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1195 | ret = armv8pmu_task_ctx_cache_alloc(cpu_pmu);
| ^
>> drivers/perf/arm_pmuv3.c:1201:4: error: call to undeclared function 'armv8pmu_task_ctx_cache_free'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
1201 | armv8pmu_task_ctx_cache_free(cpu_pmu);
| ^
55 warnings and 9 errors generated.
vim +/armv8pmu_task_ctx_cache_alloc +1195 drivers/perf/arm_pmuv3.c
1176
1177 static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
1178 {
1179 struct armv8pmu_probe_info probe = {
1180 .pmu = cpu_pmu,
1181 .present = false,
1182 };
1183 int ret;
1184
1185 ret = smp_call_function_any(&cpu_pmu->supported_cpus,
1186 __armv8pmu_probe_pmu,
1187 &probe, 1);
1188 if (ret)
1189 return ret;
1190
1191 if (!probe.present)
1192 return -ENODEV;
1193
1194 if (cpu_pmu->has_branch_stack) {
> 1195 ret = armv8pmu_task_ctx_cache_alloc(cpu_pmu);
1196 if (ret)
1197 return ret;
1198
1199 ret = branch_records_alloc(cpu_pmu);
1200 if (ret) {
> 1201 armv8pmu_task_ctx_cache_free(cpu_pmu);
1202 return ret;
1203 }
1204 }
1205 return 0;
1206 }
1207
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-15 13:32 ` [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU Anshuman Khandual
2023-06-15 23:42 ` kernel test robot
@ 2023-06-16 3:41 ` kernel test robot
1 sibling, 0 replies; 25+ messages in thread
From: kernel test robot @ 2023-06-16 3:41 UTC (permalink / raw)
To: Anshuman Khandual, linux-arm-kernel, linux-kernel, will,
catalin.marinas, mark.rutland
Cc: oe-kbuild-all, Anshuman Khandual, Mark Brown, James Clark,
Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra,
Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users
Hi Anshuman,
kernel test robot noticed the following build errors:
[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
patch link: https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual%40arm.com
patch subject: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
config: arm-allmodconfig (https://download.01.org/0day-ci/archive/20230616/202306161154.PwcAiVfV-lkp@intel.com/config)
compiler: arm-linux-gnueabi-gcc (GCC) 12.3.0
reproduce (this is a W=1 build):
mkdir -p ~/bin
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
git fetch arm64 for-next/core
git checkout arm64/for-next/core
b4 shazam https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual@arm.com
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.3.0 ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202306161154.PwcAiVfV-lkp@intel.com/
All errors (new ones prefixed by >>):
drivers/perf/arm_pmuv3.c:143:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD'
143 | [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:122:65: warning: initialized field overwritten [-Woverride-init]
122 | #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041
| ^~~~~~
drivers/perf/arm_pmuv3.c:144:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR'
144 | [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:122:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[0][1][0]')
122 | #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x0041
| ^~~~~~
drivers/perf/arm_pmuv3.c:144:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR'
144 | [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:124:65: warning: initialized field overwritten [-Woverride-init]
124 | #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043
| ^~~~~~
drivers/perf/arm_pmuv3.c:145:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR'
145 | [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:124:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[0][1][1]')
124 | #define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x0043
| ^~~~~~
drivers/perf/arm_pmuv3.c:145:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR'
145 | [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:133:65: warning: initialized field overwritten [-Woverride-init]
133 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
| ^~~~~~
drivers/perf/arm_pmuv3.c:147:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD'
147 | [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:133:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[3][0][0]')
133 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x004E
| ^~~~~~
drivers/perf/arm_pmuv3.c:147:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD'
147 | [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:134:65: warning: initialized field overwritten [-Woverride-init]
134 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
| ^~~~~~
drivers/perf/arm_pmuv3.c:148:52: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR'
148 | [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:134:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[3][1][0]')
134 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x004F
| ^~~~~~
drivers/perf/arm_pmuv3.c:148:52: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR'
148 | [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:131:65: warning: initialized field overwritten [-Woverride-init]
131 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
| ^~~~~~
drivers/perf/arm_pmuv3.c:149:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD'
149 | [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:131:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[3][0][1]')
131 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x004C
| ^~~~~~
drivers/perf/arm_pmuv3.c:149:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD'
149 | [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:132:65: warning: initialized field overwritten [-Woverride-init]
132 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
| ^~~~~~
drivers/perf/arm_pmuv3.c:150:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR'
150 | [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:132:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[3][1][1]')
132 | #define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x004D
| ^~~~~~
drivers/perf/arm_pmuv3.c:150:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR'
150 | [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:148:65: warning: initialized field overwritten [-Woverride-init]
148 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
| ^~~~~~
drivers/perf/arm_pmuv3.c:152:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD'
152 | [C(NODE)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:148:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[6][0][0]')
148 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x0060
| ^~~~~~
drivers/perf/arm_pmuv3.c:152:51: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD'
152 | [C(NODE)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:149:65: warning: initialized field overwritten [-Woverride-init]
149 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
| ^~~~~~
drivers/perf/arm_pmuv3.c:153:52: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR'
153 | [C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/perf/arm_pmuv3.h:149:65: note: (near initialization for 'armv8_vulcan_perf_cache_map[6][1][0]')
149 | #define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x0061
| ^~~~~~
drivers/perf/arm_pmuv3.c:153:52: note: in expansion of macro 'ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR'
153 | [C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/perf/arm_pmuv3.c: In function 'armv8pmu_enable_event':
>> drivers/perf/arm_pmuv3.c:714:17: error: implicit declaration of function 'armv8pmu_branch_enable'; did you mean 'static_branch_enable'? [-Werror=implicit-function-declaration]
714 | armv8pmu_branch_enable(event);
| ^~~~~~~~~~~~~~~~~~~~~~
| static_branch_enable
drivers/perf/arm_pmuv3.c: In function 'armv8pmu_disable_event':
>> drivers/perf/arm_pmuv3.c:720:17: error: implicit declaration of function 'armv8pmu_branch_disable'; did you mean 'static_branch_disable'? [-Werror=implicit-function-declaration]
720 | armv8pmu_branch_disable(event);
| ^~~~~~~~~~~~~~~~~~~~~~~
| static_branch_disable
drivers/perf/arm_pmuv3.c: In function 'armv8pmu_handle_irq':
>> drivers/perf/arm_pmuv3.c:801:25: error: implicit declaration of function 'armv8pmu_branch_read'; did you mean 'armv8pmu_pmcr_read'? [-Werror=implicit-function-declaration]
801 | armv8pmu_branch_read(cpuc, event);
| ^~~~~~~~~~~~~~~~~~~~
| armv8pmu_pmcr_read
drivers/perf/arm_pmuv3.c: In function 'armv8pmu_sched_task':
>> drivers/perf/arm_pmuv3.c:908:17: error: implicit declaration of function 'armv8pmu_branch_reset' [-Werror=implicit-function-declaration]
908 | armv8pmu_branch_reset();
| ^~~~~~~~~~~~~~~~~~~~~
drivers/perf/arm_pmuv3.c: In function '__armv8_pmuv3_map_event':
>> drivers/perf/arm_pmuv3.c:1021:41: error: implicit declaration of function 'armv8pmu_branch_attr_valid' [-Werror=implicit-function-declaration]
1021 | if (has_branch_stack(event) && !armv8pmu_branch_attr_valid(event))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
drivers/perf/arm_pmuv3.c: In function '__armv8pmu_probe_pmu':
>> drivers/perf/arm_pmuv3.c:1140:9: error: implicit declaration of function 'armv8pmu_branch_probe'; did you mean 'arm_pmu_acpi_probe'? [-Werror=implicit-function-declaration]
1140 | armv8pmu_branch_probe(cpu_pmu);
| ^~~~~~~~~~~~~~~~~~~~~
| arm_pmu_acpi_probe
cc1: some warnings being treated as errors
vim +714 drivers/perf/arm_pmuv3.c
701
702 static void armv8pmu_enable_event(struct perf_event *event)
703 {
704 /*
705 * Enable counter and interrupt, and set the counter to count
706 * the event that we're interested in.
707 */
708 armv8pmu_disable_event_counter(event);
709 armv8pmu_write_event_type(event);
710 armv8pmu_enable_event_irq(event);
711 armv8pmu_enable_event_counter(event);
712
713 if (has_branch_stack(event))
> 714 armv8pmu_branch_enable(event);
715 }
716
717 static void armv8pmu_disable_event(struct perf_event *event)
718 {
719 if (has_branch_stack(event))
> 720 armv8pmu_branch_disable(event);
721
722 armv8pmu_disable_event_counter(event);
723 armv8pmu_disable_event_irq(event);
724 }
725
726 static void armv8pmu_start(struct arm_pmu *cpu_pmu)
727 {
728 struct perf_event_context *ctx;
729 int nr_user = 0;
730
731 ctx = perf_cpu_task_ctx();
732 if (ctx)
733 nr_user = ctx->nr_user;
734
735 if (sysctl_perf_user_access && nr_user)
736 armv8pmu_enable_user_access(cpu_pmu);
737 else
738 armv8pmu_disable_user_access();
739
740 /* Enable all counters */
741 armv8pmu_pmcr_write(armv8pmu_pmcr_read() | ARMV8_PMU_PMCR_E);
742 }
743
744 static void armv8pmu_stop(struct arm_pmu *cpu_pmu)
745 {
746 /* Disable all counters */
747 armv8pmu_pmcr_write(armv8pmu_pmcr_read() & ~ARMV8_PMU_PMCR_E);
748 }
749
750 static irqreturn_t armv8pmu_handle_irq(struct arm_pmu *cpu_pmu)
751 {
752 u32 pmovsr;
753 struct perf_sample_data data;
754 struct pmu_hw_events *cpuc = this_cpu_ptr(cpu_pmu->hw_events);
755 struct pt_regs *regs;
756 int idx;
757
758 /*
759 * Get and reset the IRQ flags
760 */
761 pmovsr = armv8pmu_getreset_flags();
762
763 /*
764 * Did an overflow occur?
765 */
766 if (!armv8pmu_has_overflowed(pmovsr))
767 return IRQ_NONE;
768
769 /*
770 * Handle the counter(s) overflow(s)
771 */
772 regs = get_irq_regs();
773
774 /*
775 * Stop the PMU while processing the counter overflows
776 * to prevent skews in group events.
777 */
778 armv8pmu_stop(cpu_pmu);
779 for (idx = 0; idx < cpu_pmu->num_events; ++idx) {
780 struct perf_event *event = cpuc->events[idx];
781 struct hw_perf_event *hwc;
782
783 /* Ignore if we don't have an event. */
784 if (!event)
785 continue;
786
787 /*
788 * We have a single interrupt for all counters. Check that
789 * each counter has overflowed before we process it.
790 */
791 if (!armv8pmu_counter_has_overflowed(pmovsr, idx))
792 continue;
793
794 hwc = &event->hw;
795 armpmu_event_update(event);
796 perf_sample_data_init(&data, 0, hwc->last_period);
797 if (!armpmu_event_set_period(event))
798 continue;
799
800 if (has_branch_stack(event) && !WARN_ON(!cpuc->branches)) {
> 801 armv8pmu_branch_read(cpuc, event);
802 perf_sample_save_brstack(&data, event, &cpuc->branches->branch_stack);
803 }
804
805 /*
806 * Perf event overflow will queue the processing of the event as
807 * an irq_work which will be taken care of in the handling of
808 * IPI_IRQ_WORK.
809 */
810 if (perf_event_overflow(event, &data, regs))
811 cpu_pmu->disable(event);
812 }
813 armv8pmu_start(cpu_pmu);
814
815 return IRQ_HANDLED;
816 }
817
818 static int armv8pmu_get_single_idx(struct pmu_hw_events *cpuc,
819 struct arm_pmu *cpu_pmu)
820 {
821 int idx;
822
823 for (idx = ARMV8_IDX_COUNTER0; idx < cpu_pmu->num_events; idx++) {
824 if (!test_and_set_bit(idx, cpuc->used_mask))
825 return idx;
826 }
827 return -EAGAIN;
828 }
829
830 static int armv8pmu_get_chain_idx(struct pmu_hw_events *cpuc,
831 struct arm_pmu *cpu_pmu)
832 {
833 int idx;
834
835 /*
836 * Chaining requires two consecutive event counters, where
837 * the lower idx must be even.
838 */
839 for (idx = ARMV8_IDX_COUNTER0 + 1; idx < cpu_pmu->num_events; idx += 2) {
840 if (!test_and_set_bit(idx, cpuc->used_mask)) {
841 /* Check if the preceding even counter is available */
842 if (!test_and_set_bit(idx - 1, cpuc->used_mask))
843 return idx;
844 /* Release the Odd counter */
845 clear_bit(idx, cpuc->used_mask);
846 }
847 }
848 return -EAGAIN;
849 }
850
851 static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
852 struct perf_event *event)
853 {
854 struct arm_pmu *cpu_pmu = to_arm_pmu(event->pmu);
855 struct hw_perf_event *hwc = &event->hw;
856 unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
857
858 /* Always prefer to place a cycle counter into the cycle counter. */
859 if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
860 if (!test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
861 return ARMV8_IDX_CYCLE_COUNTER;
862 else if (armv8pmu_event_is_64bit(event) &&
863 armv8pmu_event_want_user_access(event) &&
864 !armv8pmu_has_long_event(cpu_pmu))
865 return -EAGAIN;
866 }
867
868 /*
869 * Otherwise use events counters
870 */
871 if (armv8pmu_event_is_chained(event))
872 return armv8pmu_get_chain_idx(cpuc, cpu_pmu);
873 else
874 return armv8pmu_get_single_idx(cpuc, cpu_pmu);
875 }
876
877 static void armv8pmu_clear_event_idx(struct pmu_hw_events *cpuc,
878 struct perf_event *event)
879 {
880 int idx = event->hw.idx;
881
882 clear_bit(idx, cpuc->used_mask);
883 if (armv8pmu_event_is_chained(event))
884 clear_bit(idx - 1, cpuc->used_mask);
885 }
886
887 static int armv8pmu_user_event_idx(struct perf_event *event)
888 {
889 if (!sysctl_perf_user_access || !armv8pmu_event_has_user_read(event))
890 return 0;
891
892 /*
893 * We remap the cycle counter index to 32 to
894 * match the offset applied to the rest of
895 * the counter indices.
896 */
897 if (event->hw.idx == ARMV8_IDX_CYCLE_COUNTER)
898 return ARMV8_IDX_CYCLE_COUNTER_USER;
899
900 return event->hw.idx;
901 }
902
903 static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
904 {
905 struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
906
907 if (sched_in && armpmu->has_branch_stack)
> 908 armv8pmu_branch_reset();
909 }
910
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-16 1:27 ` Anshuman Khandual
@ 2023-06-16 9:21 ` Catalin Marinas
2023-06-19 5:45 ` Anshuman Khandual
0 siblings, 1 reply; 25+ messages in thread
From: Catalin Marinas @ 2023-06-16 9:21 UTC (permalink / raw)
To: Anshuman Khandual
Cc: kernel test robot, linux-arm-kernel, linux-kernel, will,
mark.rutland, llvm, oe-kbuild-all, Mark Brown, James Clark,
Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra,
Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users
On Fri, Jun 16, 2023 at 06:57:52AM +0530, Anshuman Khandual wrote:
> On 6/16/23 05:12, kernel test robot wrote:
> > kernel test robot noticed the following build errors:
> >
> > [auto build test ERROR on arm64/for-next/core]
> > [also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
> > [If your patch is applied to the wrong git tree, kindly drop us a note.
> > And when submitting patch, we suggest to use '--base' as documented in
> > https://git-scm.com/docs/git-format-patch#_base_tree_information]
> >
> > url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
> > base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> > patch link: https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual%40arm.com
> > patch subject: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
> > config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306160706.Uei5XDoi-lkp@intel.com/config)
> > compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
> > reproduce (this is a W=1 build):
> > mkdir -p ~/bin
> > wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> > chmod +x ~/bin/make.cross
> > # install arm cross compiling tool for clang build
> > # apt-get install binutils-arm-linux-gnueabi
> > git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
> > git fetch arm64 for-next/core
> > git checkout arm64/for-next/core
> > b4 shazam https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual@arm.com
> > # save the config file
> > mkdir build_dir && cp config build_dir/.config
> > COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
> > COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
>
> I am unable to reproduce this on mainline 6.4-rc6 via default cross compiler
> on a W=1 build. Looking at all other problems reported on the file, it seems
> something is not right here. Reported build problems around these callbacks,
> i.e armv8pmu_branch_XXXX() do not make sense as they are available via config
> CONFIG_PERF_EVENTS which is also enabled along with CONFIG_ARM_PMUV3 in this
> test config.
Have you tried applying this series on top of the arm64 for-next/core
branch? That's what the robot it testing (in the absence of a --base
option when generating the patches).
--
Catalin
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-16 9:21 ` Catalin Marinas
@ 2023-06-19 5:45 ` Anshuman Khandual
2023-06-19 9:08 ` Marc Zyngier
0 siblings, 1 reply; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-19 5:45 UTC (permalink / raw)
To: Catalin Marinas
Cc: kernel test robot, linux-arm-kernel, linux-kernel, will,
mark.rutland, llvm, oe-kbuild-all, Mark Brown, James Clark,
Rob Herring, Marc Zyngier, Suzuki Poulose, Peter Zijlstra,
Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users
On 6/16/23 14:51, Catalin Marinas wrote:
> On Fri, Jun 16, 2023 at 06:57:52AM +0530, Anshuman Khandual wrote:
>> On 6/16/23 05:12, kernel test robot wrote:
>>> kernel test robot noticed the following build errors:
>>>
>>> [auto build test ERROR on arm64/for-next/core]
>>> [also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
>>> [If your patch is applied to the wrong git tree, kindly drop us a note.
>>> And when submitting patch, we suggest to use '--base' as documented in
>>> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>>>
>>> url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
>>> base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
>>> patch link: https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual%40arm.com
>>> patch subject: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
>>> config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306160706.Uei5XDoi-lkp@intel.com/config)
>>> compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
>>> reproduce (this is a W=1 build):
>>> mkdir -p ~/bin
>>> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>>> chmod +x ~/bin/make.cross
>>> # install arm cross compiling tool for clang build
>>> # apt-get install binutils-arm-linux-gnueabi
>>> git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
>>> git fetch arm64 for-next/core
>>> git checkout arm64/for-next/core
>>> b4 shazam https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual@arm.com
>>> # save the config file
>>> mkdir build_dir && cp config build_dir/.config
>>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
>>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
>>
>> I am unable to reproduce this on mainline 6.4-rc6 via default cross compiler
>> on a W=1 build. Looking at all other problems reported on the file, it seems
>> something is not right here. Reported build problems around these callbacks,
>> i.e armv8pmu_branch_XXXX() do not make sense as they are available via config
>> CONFIG_PERF_EVENTS which is also enabled along with CONFIG_ARM_PMUV3 in this
>> test config.
>
> Have you tried applying this series on top of the arm64 for-next/core
> branch? That's what the robot it testing (in the absence of a --base
> option when generating the patches).
Right, it turned out to be a build problem on arm (32 bit) platform instead.
After arm_pmuv3.c moved into common ./drivers/perf from ./arch/arm64/kernel/,
it can no longer access arch/arm64/include/asm/perf_event.h defined functions
without breaking arm (32) bit. The following code block needs to be moved out
from arch/arm64/include/asm/perf_event.h into include/linux/perf/arm_pmuv3.h
(which is preferred as all call sites are inside drivers/perf/arm_pmuv3.c) or
may be arm_pmu.h (which is one step higher in the abstraction).
struct pmu_hw_events;
struct arm_pmu;
struct perf_event;
#ifdef CONFIG_PERF_EVENTS
static inline bool has_branch_stack(struct perf_event *event);
#ifdef CONFIG_ARM64_BRBE
void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event);
bool armv8pmu_branch_attr_valid(struct perf_event *event);
void armv8pmu_branch_enable(struct perf_event *event);
void armv8pmu_branch_disable(struct perf_event *event);
void armv8pmu_branch_probe(struct arm_pmu *arm_pmu);
void armv8pmu_branch_reset(void);
int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu);
void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu);
void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx);
#else
static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
{
WARN_ON_ONCE(!has_branch_stack(event));
}
static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
{
WARN_ON_ONCE(!has_branch_stack(event));
return false;
}
static inline void armv8pmu_branch_enable(struct perf_event *event)
{
WARN_ON_ONCE(!has_branch_stack(event));
}
static inline void armv8pmu_branch_disable(struct perf_event *event)
{
WARN_ON_ONCE(!has_branch_stack(event));
}
static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
static inline void armv8pmu_branch_reset(void) { }
static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; }
static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { }
static inline void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx) { }
#endif
#endif
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
2023-06-16 2:38 ` kernel test robot
@ 2023-06-19 6:28 ` Anshuman Khandual
0 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-19 6:28 UTC (permalink / raw)
To: kernel test robot, linux-arm-kernel, linux-kernel, will,
catalin.marinas, mark.rutland
Cc: llvm, oe-kbuild-all, Mark Brown, James Clark, Rob Herring,
Marc Zyngier, Suzuki Poulose, Peter Zijlstra, Ingo Molnar,
Arnaldo Carvalho de Melo, linux-perf-users
On 6/16/23 08:08, kernel test robot wrote:
> Hi Anshuman,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on arm64/for-next/core]
> [also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
> base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> patch link: https://lore.kernel.org/r/20230615133239.442736-8-anshuman.khandual%40arm.com
> patch subject: [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack()
> config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306161016.jJeqG6mc-lkp@intel.com/config)
> compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
> reproduce (this is a W=1 build):
> mkdir -p ~/bin
> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> chmod +x ~/bin/make.cross
> # install arm cross compiling tool for clang build
> # apt-get install binutils-arm-linux-gnueabi
> git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
> git fetch arm64 for-next/core
> git checkout arm64/for-next/core
> b4 shazam https://lore.kernel.org/r/20230615133239.442736-8-anshuman.khandual@arm.com
> # save the config file
> mkdir build_dir && cp config build_dir/.config
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
This build failure too gets solved via the header code block movement as mentioned earlier.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-19 5:45 ` Anshuman Khandual
@ 2023-06-19 9:08 ` Marc Zyngier
2023-06-22 1:52 ` Anshuman Khandual
0 siblings, 1 reply; 25+ messages in thread
From: Marc Zyngier @ 2023-06-19 9:08 UTC (permalink / raw)
To: Anshuman Khandual
Cc: Catalin Marinas, kernel test robot, linux-arm-kernel,
linux-kernel, will, mark.rutland, llvm, oe-kbuild-all, Mark Brown,
James Clark, Rob Herring, Suzuki Poulose, Peter Zijlstra,
Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users
On Mon, 19 Jun 2023 06:45:07 +0100,
Anshuman Khandual <anshuman.khandual@arm.com> wrote:
>
>
>
> On 6/16/23 14:51, Catalin Marinas wrote:
> > On Fri, Jun 16, 2023 at 06:57:52AM +0530, Anshuman Khandual wrote:
> >> On 6/16/23 05:12, kernel test robot wrote:
> >>> kernel test robot noticed the following build errors:
> >>>
> >>> [auto build test ERROR on arm64/for-next/core]
> >>> [also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
> >>> [If your patch is applied to the wrong git tree, kindly drop us a note.
> >>> And when submitting patch, we suggest to use '--base' as documented in
> >>> https://git-scm.com/docs/git-format-patch#_base_tree_information]
> >>>
> >>> url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
> >>> base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
> >>> patch link: https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual%40arm.com
> >>> patch subject: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
> >>> config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306160706.Uei5XDoi-lkp@intel.com/config)
> >>> compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
> >>> reproduce (this is a W=1 build):
> >>> mkdir -p ~/bin
> >>> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
> >>> chmod +x ~/bin/make.cross
> >>> # install arm cross compiling tool for clang build
> >>> # apt-get install binutils-arm-linux-gnueabi
> >>> git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
> >>> git fetch arm64 for-next/core
> >>> git checkout arm64/for-next/core
> >>> b4 shazam https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual@arm.com
> >>> # save the config file
> >>> mkdir build_dir && cp config build_dir/.config
> >>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
> >>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
> >>
> >> I am unable to reproduce this on mainline 6.4-rc6 via default cross compiler
> >> on a W=1 build. Looking at all other problems reported on the file, it seems
> >> something is not right here. Reported build problems around these callbacks,
> >> i.e armv8pmu_branch_XXXX() do not make sense as they are available via config
> >> CONFIG_PERF_EVENTS which is also enabled along with CONFIG_ARM_PMUV3 in this
> >> test config.
> >
> > Have you tried applying this series on top of the arm64 for-next/core
> > branch? That's what the robot it testing (in the absence of a --base
> > option when generating the patches).
>
> Right, it turned out to be a build problem on arm (32 bit) platform instead.
> After arm_pmuv3.c moved into common ./drivers/perf from ./arch/arm64/kernel/,
> it can no longer access arch/arm64/include/asm/perf_event.h defined functions
> without breaking arm (32) bit. The following code block needs to be moved out
> from arch/arm64/include/asm/perf_event.h into include/linux/perf/arm_pmuv3.h
> (which is preferred as all call sites are inside drivers/perf/arm_pmuv3.c) or
> may be arm_pmu.h (which is one step higher in the abstraction).
No, that's the wrong approach. The 32bit backend must have its own
stubs for the stuff it implements or not.
Just add something like the patch below, and please *test* that a
32bit VM using PMUv3 doesn't have any regression.
Thanks,
M.
From 017362ca518e6d6ac3262514d1f7f27e73232799 Mon Sep 17 00:00:00 2001
From: Marc Zyngier <maz@kernel.org>
Date: Mon, 19 Jun 2023 10:05:52 +0100
Subject: [PATCH] 32bit hack
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
arch/arm/include/asm/arm_pmuv3.h | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
index f4db3e75d75f..c4bcb7a18267 100644
--- a/arch/arm/include/asm/arm_pmuv3.h
+++ b/arch/arm/include/asm/arm_pmuv3.h
@@ -244,4 +244,22 @@ static inline bool is_pmuv3p5(int pmuver)
return pmuver >= ARMV8_PMU_DFR_VER_V3P5;
}
+/* BRBE stubs */
+static inline void armv8pmu_branch_enable(struct perf_event *event) { }
+static inline void armv8pmu_branch_disable(struct perf_event *event) { }
+static inline void armv8pmu_branch_read(struct pmu_hw_events * cpuc,
+ struct perf_event *event) { }
+static inline void armv8pmu_branch_save(struct arm_pmu *armpmu, void *ctx) {}
+static inline void armv8pmu_branch_reset(void) {}
+static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
+{
+ return false;
+}
+static inline void armv8pmu_branch_probe(struct arm_pmu *armpmu) {}
+static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *armpmu)
+{
+ return 0;
+}
+static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *armpmu) {}
+
#endif
--
2.39.2
--
Without deviation from the norm, progress is not possible.
^ permalink raw reply related [flat|nested] 25+ messages in thread
* Re: [PATCH V12 08/10] arm64/perf: Add struct brbe_regset helper functions
2023-06-15 13:32 ` [PATCH V12 08/10] arm64/perf: Add struct brbe_regset helper functions Anshuman Khandual
@ 2023-06-21 13:15 ` Mark Rutland
2023-06-22 2:07 ` Anshuman Khandual
0 siblings, 1 reply; 25+ messages in thread
From: Mark Rutland @ 2023-06-21 13:15 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-arm-kernel, linux-kernel, will, catalin.marinas, Mark Brown,
James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose,
Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
linux-perf-users
Hi Anshuman,
Thanks, this is looking much better; I just a have a couple of minor comments.
With those fixed up:
Acked-by: Mark Rutland <mark.rutland@arm.com>
Mark.
On Thu, Jun 15, 2023 at 07:02:37PM +0530, Anshuman Khandual wrote:
> The primary abstraction level for fetching branch records from BRBE HW has
> been changed as 'struct brbe_regset', which contains storage for all three
> BRBE registers i.e BRBSRC, BRBTGT, BRBINF. Whether branch record processing
> happens in the task sched out path, or in the PMU IRQ handling path, these
> registers need to be extracted from the HW. Afterwards both live and stored
> sets need to be stitched together to create final branch records set. This
> adds required helper functions for such operations.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> Tested-by: James Clark <james.clark@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> drivers/perf/arm_brbe.c | 127 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 127 insertions(+)
>
> diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
> index 4729cb49282b..f6693699fade 100644
> --- a/drivers/perf/arm_brbe.c
> +++ b/drivers/perf/arm_brbe.c
> @@ -44,6 +44,133 @@ static void select_brbe_bank(int bank)
> isb();
> }
>
> +static bool __read_brbe_regset(struct brbe_regset *entry, int idx)
> +{
> + entry->brbinf = get_brbinf_reg(idx);
> +
> + /*
> + * There are no valid entries anymore on the buffer.
> + * Abort the branch record processing to save some
> + * cycles and also reduce the capture/process load
> + * for the user space as well.
> + */
This comment refers to the process of handling multiple entries, though it's
only handling one entry, and I don't think we need to mention saving cycles here.
Could we please delete this comment entirely? The comment above
capture_brbe_regset() already explains that we read until the first invalid
entry.
> + if (brbe_invalid(entry->brbinf))
> + return false;
> +
> + entry->brbsrc = get_brbsrc_reg(idx);
> + entry->brbtgt = get_brbtgt_reg(idx);
> + return true;
> +}
> +
> +/*
> + * This scans over BRBE register banks and captures individual branch records
> + * [BRBSRC, BRBTGT, BRBINF] into a pre-allocated 'struct brbe_regset' buffer,
> + * until an invalid one gets encountered. The caller for this function needs
> + * to ensure BRBE is an appropriate state before the records can be captured.
> + */
Could we simplify this to:
/*
* Read all BRBE entries in HW until the first invalid entry.
*
* The caller must ensure that the BRBE is not concurrently modifying these
* entries.
*/
> +static int capture_brbe_regset(int nr_hw_entries, struct brbe_regset *buf)
> +{
> + int idx = 0;
> +
> + select_brbe_bank(BRBE_BANK_IDX_0);
> + while (idx < nr_hw_entries && idx < BRBE_BANK0_IDX_MAX) {
> + if (!__read_brbe_regset(&buf[idx], idx))
> + return idx;
> + idx++;
> + }
> +
> + select_brbe_bank(BRBE_BANK_IDX_1);
> + while (idx < nr_hw_entries && idx < BRBE_BANK1_IDX_MAX) {
> + if (!__read_brbe_regset(&buf[idx], idx))
> + return idx;
> + idx++;
> + }
> + return idx;
> +}
> +
> +/*
> + * This function concatenates branch records from stored and live buffer
> + * up to maximum nr_max records and the stored buffer holds the resultant
> + * buffer. The concatenated buffer contains all the branch records from
> + * the live buffer but might contain some from stored buffer considering
> + * the maximum combined length does not exceed 'nr_max'.
> + *
> + * Stored records Live records
> + * ------------------------------------------------^
> + * | S0 | L0 | Newest |
> + * --------------------------------- |
> + * | S1 | L1 | |
> + * --------------------------------- |
> + * | S2 | L2 | |
> + * --------------------------------- |
> + * | S3 | L3 | |
> + * --------------------------------- |
> + * | S4 | L4 | nr_max
> + * --------------------------------- |
> + * | | L5 | |
> + * --------------------------------- |
> + * | | L6 | |
> + * --------------------------------- |
> + * | | L7 | |
> + * --------------------------------- |
> + * | | | |
> + * --------------------------------- |
> + * | | | Oldest |
> + * ------------------------------------------------V
> + *
> + *
> + * S0 is the newest in the stored records, where as L7 is the oldest in
> + * the live records. Unless the live buffer is detected as being full
> + * thus potentially dropping off some older records, L7 and S0 records
> + * are contiguous in time for a user task context. The stitched buffer
> + * here represents maximum possible branch records, contiguous in time.
> + *
> + * Stored records Live records
> + * ------------------------------------------------^
> + * | L0 | L0 | Newest |
> + * --------------------------------- |
> + * | L0 | L1 | |
> + * --------------------------------- |
> + * | L2 | L2 | |
> + * --------------------------------- |
> + * | L3 | L3 | |
> + * --------------------------------- |
> + * | L4 | L4 | nr_max
> + * --------------------------------- |
> + * | L5 | L5 | |
> + * --------------------------------- |
> + * | L6 | L6 | |
> + * --------------------------------- |
> + * | L7 | L7 | |
> + * --------------------------------- |
> + * | S0 | | |
> + * --------------------------------- |
> + * | S1 | | Oldest |
> + * ------------------------------------------------V
> + * | S2 | <----|
> + * ----------------- |
> + * | S3 | <----| Dropped off after nr_max
> + * ----------------- |
> + * | S4 | <----|
> + * -----------------
> + */
> +static int stitch_stored_live_entries(struct brbe_regset *stored,
> + struct brbe_regset *live,
> + int nr_stored, int nr_live,
> + int nr_max)
> +{
> + int nr_move = min(nr_stored, nr_max - nr_live);
> +
> + /* Move the tail of the buffer to make room for the new entries */
> + memmove(&stored[nr_live], &stored[0], nr_move * sizeof(*stored));
> +
> + /* Copy the new entries into the head of the buffer */
> + memcpy(&stored[0], &live[0], nr_live * sizeof(*stored));
> +
> + /* Return the number of entries in the stitched buffer */
> + return min(nr_live + nr_stored, nr_max);
> +}
> +
> /*
> * Generic perf branch filters supported on BRBE
> *
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 09/10] arm64/perf: Implement branch records save on task sched out
2023-06-15 13:32 ` [PATCH V12 09/10] arm64/perf: Implement branch records save on task sched out Anshuman Khandual
@ 2023-06-21 13:16 ` Mark Rutland
0 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2023-06-21 13:16 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-arm-kernel, linux-kernel, will, catalin.marinas, Mark Brown,
James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose,
Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
linux-perf-users
On Thu, Jun 15, 2023 at 07:02:38PM +0530, Anshuman Khandual wrote:
> This modifies current armv8pmu_sched_task(), to implement a branch records
> save mechanism via armv8pmu_branch_save() when a task scheds out of a cpu.
> BRBE is paused and disabled for all exception levels before branch records
> get captured, which then get concatenated with all existing stored records
> present in the task context maintaining the contiguity. Although the final
> length of the concatenated buffer does not exceed implemented BRBE length.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> Tested-by: James Clark <james.clark@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Mark.
> ---
> arch/arm64/include/asm/perf_event.h | 2 ++
> drivers/perf/arm_brbe.c | 30 +++++++++++++++++++++++++++++
> drivers/perf/arm_pmuv3.c | 14 ++++++++++++--
> 3 files changed, 44 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/include/asm/perf_event.h b/arch/arm64/include/asm/perf_event.h
> index b0c12a5882df..36e7dfb466a6 100644
> --- a/arch/arm64/include/asm/perf_event.h
> +++ b/arch/arm64/include/asm/perf_event.h
> @@ -40,6 +40,7 @@ void armv8pmu_branch_probe(struct arm_pmu *arm_pmu);
> void armv8pmu_branch_reset(void);
> int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu);
> void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu);
> +void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx);
> #else
> static inline void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
> {
> @@ -66,6 +67,7 @@ static inline void armv8pmu_branch_probe(struct arm_pmu *arm_pmu) { }
> static inline void armv8pmu_branch_reset(void) { }
> static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *arm_pmu) { return 0; }
> static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *arm_pmu) { }
> +static inline void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx) { }
> #endif
> #endif
> #endif
> diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
> index f6693699fade..3bb17ced2b1d 100644
> --- a/drivers/perf/arm_brbe.c
> +++ b/drivers/perf/arm_brbe.c
> @@ -171,6 +171,36 @@ static int stitch_stored_live_entries(struct brbe_regset *stored,
> return min(nr_live + nr_stored, nr_max);
> }
>
> +static int brbe_branch_save(int nr_hw_entries, struct brbe_regset *live)
> +{
> + u64 brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
> + int nr_live;
> +
> + write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
> + isb();
> +
> + nr_live = capture_brbe_regset(nr_hw_entries, live);
> +
> + write_sysreg_s(brbfcr & ~BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
> + isb();
> +
> + return nr_live;
> +}
> +
> +void armv8pmu_branch_save(struct arm_pmu *arm_pmu, void *ctx)
> +{
> + struct arm64_perf_task_context *task_ctx = ctx;
> + struct brbe_regset live[BRBE_MAX_ENTRIES];
> + int nr_live, nr_store, nr_hw_entries;
> +
> + nr_hw_entries = brbe_get_numrec(arm_pmu->reg_brbidr);
> + nr_live = brbe_branch_save(nr_hw_entries, live);
> + nr_store = task_ctx->nr_brbe_records;
> + nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
> + nr_live, nr_hw_entries);
> + task_ctx->nr_brbe_records = nr_store;
> +}
> +
> /*
> * Generic perf branch filters supported on BRBE
> *
> diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
> index 3c079051a63a..53f404618891 100644
> --- a/drivers/perf/arm_pmuv3.c
> +++ b/drivers/perf/arm_pmuv3.c
> @@ -907,9 +907,19 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
> static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in)
> {
> struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
> + void *task_ctx = pmu_ctx ? pmu_ctx->task_ctx_data : NULL;
>
> - if (sched_in && armpmu->has_branch_stack)
> - armv8pmu_branch_reset();
> + if (armpmu->has_branch_stack) {
> + /* Save branch records in task_ctx on sched out */
> + if (task_ctx && !sched_in) {
> + armv8pmu_branch_save(armpmu, task_ctx);
> + return;
> + }
> +
> + /* Reset branch records on sched in */
> + if (sched_in)
> + armv8pmu_branch_reset();
> + }
> }
>
> /*
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 10/10] arm64/perf: Implement branch records save on PMU IRQ
2023-06-15 13:32 ` [PATCH V12 10/10] arm64/perf: Implement branch records save on PMU IRQ Anshuman Khandual
@ 2023-06-21 13:20 ` Mark Rutland
0 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2023-06-21 13:20 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-arm-kernel, linux-kernel, will, catalin.marinas, Mark Brown,
James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose,
Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
linux-perf-users
On Thu, Jun 15, 2023 at 07:02:39PM +0530, Anshuman Khandual wrote:
> This modifies armv8pmu_branch_read() to concatenate live entries along with
> task context stored entries and then process the resultant buffer to create
> perf branch entry array for perf_sample_data. It follows the same principle
> like task sched out.
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> Tested-by: James Clark <james.clark@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
The resulting logic looks fine here, but it would be nicer if we had the
resulting structure from the outset rather than having to rewrite it (i.e. if
when we introduced this we captured all the recores then processed them), as
that would keep the diff minimal and make it much clearer as to what wwas
happening here.
Either way:
Acked-by: Mark Rutland <mark.rutland@arm.com>
Mark.
> ---
> drivers/perf/arm_brbe.c | 69 +++++++++++++++++++----------------------
> 1 file changed, 32 insertions(+), 37 deletions(-)
>
> diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
> index 3bb17ced2b1d..d28067c896e2 100644
> --- a/drivers/perf/arm_brbe.c
> +++ b/drivers/perf/arm_brbe.c
> @@ -653,41 +653,44 @@ void armv8pmu_branch_reset(void)
> isb();
> }
>
> -static bool capture_branch_entry(struct pmu_hw_events *cpuc,
> - struct perf_event *event, int idx)
> +static void brbe_regset_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event,
> + struct brbe_regset *regset, int idx)
> {
> struct perf_branch_entry *entry = &cpuc->branches->branch_entries[idx];
> - u64 brbinf = get_brbinf_reg(idx);
> -
> - /*
> - * There are no valid entries anymore on the buffer.
> - * Abort the branch record processing to save some
> - * cycles and also reduce the capture/process load
> - * for the user space as well.
> - */
> - if (brbe_invalid(brbinf))
> - return false;
> + u64 brbinf = regset[idx].brbinf;
>
> perf_clear_branch_entry_bitfields(entry);
> if (brbe_record_is_complete(brbinf)) {
> - entry->from = get_brbsrc_reg(idx);
> - entry->to = get_brbtgt_reg(idx);
> + entry->from = regset[idx].brbsrc;
> + entry->to = regset[idx].brbtgt;
> } else if (brbe_record_is_source_only(brbinf)) {
> - entry->from = get_brbsrc_reg(idx);
> + entry->from = regset[idx].brbsrc;
> entry->to = 0;
> } else if (brbe_record_is_target_only(brbinf)) {
> entry->from = 0;
> - entry->to = get_brbtgt_reg(idx);
> + entry->to = regset[idx].brbtgt;
> }
> capture_brbe_flags(entry, event, brbinf);
> - return true;
> +}
> +
> +static void process_branch_entries(struct pmu_hw_events *cpuc, struct perf_event *event,
> + struct brbe_regset *regset, int nr_regset)
> +{
> + int idx;
> +
> + for (idx = 0; idx < nr_regset; idx++)
> + brbe_regset_branch_entries(cpuc, event, regset, idx);
> +
> + cpuc->branches->branch_stack.nr = nr_regset;
> + cpuc->branches->branch_stack.hw_idx = -1ULL;
> }
>
> void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
> {
> - int nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr);
> + struct arm64_perf_task_context *task_ctx = event->pmu_ctx->task_ctx_data;
> + struct brbe_regset live[BRBE_MAX_ENTRIES];
> + int nr_live, nr_store, nr_hw_entries;
> u64 brbfcr, brbcr;
> - int idx = 0;
>
> brbcr = read_sysreg_s(SYS_BRBCR_EL1);
> brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
> @@ -699,25 +702,17 @@ void armv8pmu_branch_read(struct pmu_hw_events *cpuc, struct perf_event *event)
> write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
> isb();
>
> - /* Loop through bank 0 */
> - select_brbe_bank(BRBE_BANK_IDX_0);
> - while (idx < nr_hw_entries && idx < BRBE_BANK0_IDX_MAX) {
> - if (!capture_branch_entry(cpuc, event, idx))
> - goto skip_bank_1;
> - idx++;
> - }
> -
> - /* Loop through bank 1 */
> - select_brbe_bank(BRBE_BANK_IDX_1);
> - while (idx < nr_hw_entries && idx < BRBE_BANK1_IDX_MAX) {
> - if (!capture_branch_entry(cpuc, event, idx))
> - break;
> - idx++;
> + nr_hw_entries = brbe_get_numrec(cpuc->percpu_pmu->reg_brbidr);
> + nr_live = capture_brbe_regset(nr_hw_entries, live);
> + if (event->ctx->task) {
> + nr_store = task_ctx->nr_brbe_records;
> + nr_store = stitch_stored_live_entries(task_ctx->store, live, nr_store,
> + nr_live, nr_hw_entries);
> + process_branch_entries(cpuc, event, task_ctx->store, nr_store);
> + task_ctx->nr_brbe_records = 0;
> + } else {
> + process_branch_entries(cpuc, event, live, nr_live);
> }
> -
> -skip_bank_1:
> - cpuc->branches->branch_stack.nr = idx;
> - cpuc->branches->branch_stack.hw_idx = -1ULL;
> process_branch_aborts(cpuc);
>
> /* Unpause the buffer */
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 00/10] arm64/perf: Enable branch stack sampling
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
` (9 preceding siblings ...)
2023-06-15 13:32 ` [PATCH V12 10/10] arm64/perf: Implement branch records save on PMU IRQ Anshuman Khandual
@ 2023-06-21 13:23 ` Mark Rutland
10 siblings, 0 replies; 25+ messages in thread
From: Mark Rutland @ 2023-06-21 13:23 UTC (permalink / raw)
To: Anshuman Khandual
Cc: linux-arm-kernel, linux-kernel, will, catalin.marinas, Mark Brown,
James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose,
Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
linux-perf-users
On Thu, Jun 15, 2023 at 07:02:29PM +0530, Anshuman Khandual wrote:
> This series enables perf branch stack sampling support on arm64 platform
> via a new arch feature called Branch Record Buffer Extension (BRBE). All
> relevant register definitions could be accessed here.
>
> https://developer.arm.com/documentation/ddi0601/2021-12/AArch64-Registers
>
> This series applies on 6.4-rc6.
This largely looks good, but obviously this will need a respin to address the
fallout on 32-bit.
Thanks,
Mark.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
2023-06-19 9:08 ` Marc Zyngier
@ 2023-06-22 1:52 ` Anshuman Khandual
0 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-22 1:52 UTC (permalink / raw)
To: Marc Zyngier
Cc: Catalin Marinas, kernel test robot, linux-arm-kernel,
linux-kernel, will, mark.rutland, llvm, oe-kbuild-all, Mark Brown,
James Clark, Rob Herring, Suzuki Poulose, Peter Zijlstra,
Ingo Molnar, Arnaldo Carvalho de Melo, linux-perf-users
On 6/19/23 14:38, Marc Zyngier wrote:
> On Mon, 19 Jun 2023 06:45:07 +0100,
> Anshuman Khandual <anshuman.khandual@arm.com> wrote:
>>
>>
>>
>> On 6/16/23 14:51, Catalin Marinas wrote:
>>> On Fri, Jun 16, 2023 at 06:57:52AM +0530, Anshuman Khandual wrote:
>>>> On 6/16/23 05:12, kernel test robot wrote:
>>>>> kernel test robot noticed the following build errors:
>>>>>
>>>>> [auto build test ERROR on arm64/for-next/core]
>>>>> [also build test ERROR on tip/perf/core acme/perf/core linus/master v6.4-rc6 next-20230615]
>>>>> [If your patch is applied to the wrong git tree, kindly drop us a note.
>>>>> And when submitting patch, we suggest to use '--base' as documented in
>>>>> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>>>>>
>>>>> url: https://github.com/intel-lab-lkp/linux/commits/Anshuman-Khandual/drivers-perf-arm_pmu-Add-new-sched_task-callback/20230615-223352
>>>>> base: https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/core
>>>>> patch link: https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual%40arm.com
>>>>> patch subject: [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU
>>>>> config: arm-randconfig-r004-20230615 (https://download.01.org/0day-ci/archive/20230616/202306160706.Uei5XDoi-lkp@intel.com/config)
>>>>> compiler: clang version 17.0.0 (https://github.com/llvm/llvm-project.git 4a5ac14ee968ff0ad5d2cc1ffa0299048db4c88a)
>>>>> reproduce (this is a W=1 build):
>>>>> mkdir -p ~/bin
>>>>> wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>>>>> chmod +x ~/bin/make.cross
>>>>> # install arm cross compiling tool for clang build
>>>>> # apt-get install binutils-arm-linux-gnueabi
>>>>> git remote add arm64 https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git
>>>>> git fetch arm64 for-next/core
>>>>> git checkout arm64/for-next/core
>>>>> b4 shazam https://lore.kernel.org/r/20230615133239.442736-6-anshuman.khandual@arm.com
>>>>> # save the config file
>>>>> mkdir build_dir && cp config build_dir/.config
>>>>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm olddefconfig
>>>>> COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang ~/bin/make.cross W=1 O=build_dir ARCH=arm SHELL=/bin/bash drivers/perf/
>>>>
>>>> I am unable to reproduce this on mainline 6.4-rc6 via default cross compiler
>>>> on a W=1 build. Looking at all other problems reported on the file, it seems
>>>> something is not right here. Reported build problems around these callbacks,
>>>> i.e armv8pmu_branch_XXXX() do not make sense as they are available via config
>>>> CONFIG_PERF_EVENTS which is also enabled along with CONFIG_ARM_PMUV3 in this
>>>> test config.
>>>
>>> Have you tried applying this series on top of the arm64 for-next/core
>>> branch? That's what the robot it testing (in the absence of a --base
>>> option when generating the patches).
>>
>> Right, it turned out to be a build problem on arm (32 bit) platform instead.
>> After arm_pmuv3.c moved into common ./drivers/perf from ./arch/arm64/kernel/,
>> it can no longer access arch/arm64/include/asm/perf_event.h defined functions
>> without breaking arm (32) bit. The following code block needs to be moved out
>> from arch/arm64/include/asm/perf_event.h into include/linux/perf/arm_pmuv3.h
>> (which is preferred as all call sites are inside drivers/perf/arm_pmuv3.c) or
>> may be arm_pmu.h (which is one step higher in the abstraction).
>
> No, that's the wrong approach. The 32bit backend must have its own
> stubs for the stuff it implements or not.
Okay.
>
> Just add something like the patch below, and please *test* that a
> 32bit VM using PMUv3 doesn't have any regression.
Sure.
>
> Thanks,
>
> M.
>
>>From 017362ca518e6d6ac3262514d1f7f27e73232799 Mon Sep 17 00:00:00 2001
> From: Marc Zyngier <maz@kernel.org>
> Date: Mon, 19 Jun 2023 10:05:52 +0100
> Subject: [PATCH] 32bit hack
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
> arch/arm/include/asm/arm_pmuv3.h | 18 ++++++++++++++++++
> 1 file changed, 18 insertions(+)
>
> diff --git a/arch/arm/include/asm/arm_pmuv3.h b/arch/arm/include/asm/arm_pmuv3.h
> index f4db3e75d75f..c4bcb7a18267 100644
> --- a/arch/arm/include/asm/arm_pmuv3.h
> +++ b/arch/arm/include/asm/arm_pmuv3.h
> @@ -244,4 +244,22 @@ static inline bool is_pmuv3p5(int pmuver)
> return pmuver >= ARMV8_PMU_DFR_VER_V3P5;
> }
>
> +/* BRBE stubs */
These stubs also need to be wrapped around with #ifdef CONFIG_PERF_EVENTS
> +static inline void armv8pmu_branch_enable(struct perf_event *event) { }
> +static inline void armv8pmu_branch_disable(struct perf_event *event) { }
> +static inline void armv8pmu_branch_read(struct pmu_hw_events * cpuc,
> + struct perf_event *event) { }
> +static inline void armv8pmu_branch_save(struct arm_pmu *armpmu, void *ctx) {}
> +static inline void armv8pmu_branch_reset(void) {}
> +static inline bool armv8pmu_branch_attr_valid(struct perf_event *event)
> +{
> + return false;
> +}
> +static inline void armv8pmu_branch_probe(struct arm_pmu *armpmu) {}
> +static inline int armv8pmu_task_ctx_cache_alloc(struct arm_pmu *armpmu)
> +{
> + return 0;
> +}
> +static inline void armv8pmu_task_ctx_cache_free(struct arm_pmu *armpmu) {}
> +
> #endif
Sure, will make all the necessary changes.
^ permalink raw reply [flat|nested] 25+ messages in thread
* Re: [PATCH V12 08/10] arm64/perf: Add struct brbe_regset helper functions
2023-06-21 13:15 ` Mark Rutland
@ 2023-06-22 2:07 ` Anshuman Khandual
0 siblings, 0 replies; 25+ messages in thread
From: Anshuman Khandual @ 2023-06-22 2:07 UTC (permalink / raw)
To: Mark Rutland
Cc: linux-arm-kernel, linux-kernel, will, catalin.marinas, Mark Brown,
James Clark, Rob Herring, Marc Zyngier, Suzuki Poulose,
Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo,
linux-perf-users
On 6/21/23 18:45, Mark Rutland wrote:
> Hi Anshuman,
>
> Thanks, this is looking much better; I just a have a couple of minor comments.
>
> With those fixed up:
>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
>
> Mark.
>
> On Thu, Jun 15, 2023 at 07:02:37PM +0530, Anshuman Khandual wrote:
>> The primary abstraction level for fetching branch records from BRBE HW has
>> been changed as 'struct brbe_regset', which contains storage for all three
>> BRBE registers i.e BRBSRC, BRBTGT, BRBINF. Whether branch record processing
>> happens in the task sched out path, or in the PMU IRQ handling path, these
>> registers need to be extracted from the HW. Afterwards both live and stored
>> sets need to be stitched together to create final branch records set. This
>> adds required helper functions for such operations.
>>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: linux-arm-kernel@lists.infradead.org
>> Cc: linux-kernel@vger.kernel.org
>> Tested-by: James Clark <james.clark@arm.com>
>> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
>> ---
>> drivers/perf/arm_brbe.c | 127 ++++++++++++++++++++++++++++++++++++++++
>> 1 file changed, 127 insertions(+)
>>
>> diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
>> index 4729cb49282b..f6693699fade 100644
>> --- a/drivers/perf/arm_brbe.c
>> +++ b/drivers/perf/arm_brbe.c
>> @@ -44,6 +44,133 @@ static void select_brbe_bank(int bank)
>> isb();
>> }
>>
>> +static bool __read_brbe_regset(struct brbe_regset *entry, int idx)
>> +{
>> + entry->brbinf = get_brbinf_reg(idx);
>> +
>> + /*
>> + * There are no valid entries anymore on the buffer.
>> + * Abort the branch record processing to save some
>> + * cycles and also reduce the capture/process load
>> + * for the user space as well.
>> + */
>
> This comment refers to the process of handling multiple entries, though it's
> only handling one entry, and I don't think we need to mention saving cycles here.
>
> Could we please delete this comment entirely? The comment above
> capture_brbe_regset() already explains that we read until the first invalid
> entry.
Sure, will drop the comment.
>
>> + if (brbe_invalid(entry->brbinf))
>> + return false;
>> +
>> + entry->brbsrc = get_brbsrc_reg(idx);
>> + entry->brbtgt = get_brbtgt_reg(idx);
>> + return true;
>> +}
>> +
>> +/*
>> + * This scans over BRBE register banks and captures individual branch records
>> + * [BRBSRC, BRBTGT, BRBINF] into a pre-allocated 'struct brbe_regset' buffer,
>> + * until an invalid one gets encountered. The caller for this function needs
>> + * to ensure BRBE is an appropriate state before the records can be captured.
>> + */
>
> Could we simplify this to:
>
> /*
> * Read all BRBE entries in HW until the first invalid entry.
> *
> * The caller must ensure that the BRBE is not concurrently modifying these
> * entries.
> */
Okay, will change the comment as suggested.
^ permalink raw reply [flat|nested] 25+ messages in thread
end of thread, other threads:[~2023-06-22 2:07 UTC | newest]
Thread overview: 25+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2023-06-15 13:32 [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 01/10] drivers: perf: arm_pmu: Add new sched_task() callback Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 02/10] arm64/perf: Add BRBE registers and fields Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 03/10] arm64/perf: Add branch stack support in struct arm_pmu Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 04/10] arm64/perf: Add branch stack support in struct pmu_hw_events Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 05/10] arm64/perf: Add branch stack support in ARMV8 PMU Anshuman Khandual
2023-06-15 23:42 ` kernel test robot
2023-06-16 1:27 ` Anshuman Khandual
2023-06-16 9:21 ` Catalin Marinas
2023-06-19 5:45 ` Anshuman Khandual
2023-06-19 9:08 ` Marc Zyngier
2023-06-22 1:52 ` Anshuman Khandual
2023-06-16 3:41 ` kernel test robot
2023-06-15 13:32 ` [PATCH V12 06/10] arm64/perf: Enable branch stack events via FEAT_BRBE Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 07/10] arm64/perf: Add PERF_ATTACH_TASK_DATA to events with has_branch_stack() Anshuman Khandual
2023-06-16 2:38 ` kernel test robot
2023-06-19 6:28 ` Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 08/10] arm64/perf: Add struct brbe_regset helper functions Anshuman Khandual
2023-06-21 13:15 ` Mark Rutland
2023-06-22 2:07 ` Anshuman Khandual
2023-06-15 13:32 ` [PATCH V12 09/10] arm64/perf: Implement branch records save on task sched out Anshuman Khandual
2023-06-21 13:16 ` Mark Rutland
2023-06-15 13:32 ` [PATCH V12 10/10] arm64/perf: Implement branch records save on PMU IRQ Anshuman Khandual
2023-06-21 13:20 ` Mark Rutland
2023-06-21 13:23 ` [PATCH V12 00/10] arm64/perf: Enable branch stack sampling Mark Rutland
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).