* [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot()
@ 2026-03-18 17:16 Puranjay Mohan
2026-03-18 17:16 ` [PATCH v2 1/4] perf/arm_pmuv3: Fix NULL pointer dereference in armv8pmu_sched_task() Puranjay Mohan
` (4 more replies)
0 siblings, 5 replies; 7+ messages in thread
From: Puranjay Mohan @ 2026-03-18 17:16 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Will Deacon,
Mark Rutland, Catalin Marinas, Leo Yan, Rob Herring, Breno Leitao,
linux-arm-kernel, linux-perf-users, kernel-team
v1: https://lore.kernel.org/all/20260313180352.3800358-1-puranjay@kernel.org/
Changes in v2:
- Rebased on arm64/for-next/core
- Add per-CPU brbe_active flag to guard against UNDEFINED sysreg access
on non-BRBE CPUs in heterogeneous big.LITTLE systems.
- Fix pre-existing bug in perf_clear_branch_entry_bitfields() that missed
zeroing new_type and priv bitfields, added as a separate patch with
Fixes tags (new patch 2).
- Use architecture-specific selftest threshold (#if defined(__aarch64__))
instead of raising the global threshold, to preserve x86 regression
detection.
RFC: https://lore.kernel.org/all/20260102214043.1410242-1-puranjay@kernel.org/
Changes from RFC:
- Fix pre-existing NULL pointer dereference in armv8pmu_sched_task()
found by Leo Yan during testing (patch 1)
- Pause BRBE before local_daif_save() to avoid branch pollution from
trace_hardirqs_off()
- Use local_daif_save() to prevent pNMI race from counter overflow
(Mark Rutland)
- Reuse perf_entry_from_brbe_regset() instead of duplicating register
read logic, by making it accept NULL event (Mark Rutland)
- Invalidate BRBE after reading to maintain record contiguity for
other consumers (Mark Rutland)
- Adjust selftest wasted_entries threshold for ARM64 (patch 3)
- Tested on ARM FVP with BRBE enabled
This series enables the bpf_get_branch_snapshot() BPF helper on ARM64
by implementing the perf_snapshot_branch_stack static call for ARM's
Branch Record Buffer Extension (BRBE).
bpf_get_branch_snapshot() [1] allows BPF programs to capture hardware
branch records on-demand from any BPF tracing context. The helper has
been x86-only (Intel LBR) since its introduction in v5.16. With BRBE
available on ARMv9, this series closes the gap for ARM64.
Usage model
-----------
The helper works in conjunction with perf events. The userspace
component of the BPF application opens a perf event with
PERF_SAMPLE_BRANCH_STACK on each CPU, which configures the hardware
to continuously record branches into BRBE (on ARM64) or LBR (on x86).
A BPF program attached to a tracepoint, kprobe, or fentry hook can
then call bpf_get_branch_snapshot() to snapshot the branch buffer at
any point. Without an active perf event, BRBE is not recording and
the buffer is empty.
On-demand branch snapshots from BPF are useful for diagnosing which
specific code path was taken inside a function. Stack traces only show
function boundaries, but branch records reveal the exact sequence of
jumps, calls, and returns within a function -- making it possible to
identify which specific error check triggered a failure, or which
callback implementation was invoked through a function pointer.
For example, retsnoop [2] is a BPF-based tool for non-intrusive
mass-tracing of kernel internals. Its LBR mode (--lbr) creates per-CPU
perf events with PERF_SAMPLE_BRANCH_STACK and then uses
bpf_get_branch_snapshot() in its fentry/fexit BPF programs to capture
branch records whenever a traced function returns an error.
Consider debugging a bpf() syscall that returns -EINVAL when creating
a BPF map with invalid parameters. Running retsnoop on an ARM64 FVP
with BRBE to trace the bpf() syscall and array_map_alloc_check():
$ retsnoop -e '*sys_bpf' -a 'array_map_alloc_check' --lbr=any \
-F -k vmlinux --debug full-lbr
$ simfail bpf-bad-map-max-entries-array # in another terminal
Output of retsnoop:
--- fentry BPF program (entries #63-#17) ---
[#63-#59] __htab_map_lookup_elem: hash table walk with memcmp (hashtab.c)
[#58] __htab_map_lookup_elem+0x98 -> dump_bpf_prog+0xc850 (hashtab.c:750)
[#57-#55] ... dump_bpf_prog internal branches ...
[#54] dump_bpf_prog+0xcab8 -> bpf_get_current_pid_tgid+0x0 (helpers.c:225)
[#53] bpf_get_current_pid_tgid+0x1c -> dump_bpf_prog+0xcabc (helpers.c:225)
[#52-#51] ... dump_bpf_prog -> __htab_map_lookup_elem ...
[#50-#47] __htab_map_lookup_elem: htab_map_hash (jhash2), select_bucket
[#46-#42] lookup_nulls_elem_raw: hash chain walk with memcmp (hashtab.c:717)
[#41] __htab_map_lookup_elem+0x98 -> dump_bpf_prog+0xcaf8 (hashtab.c:750)
[#40-#37] ... dump_bpf_prog -> bpf_ktime_get_ns ...
[#36] bpf_ktime_get_ns+0x10 -> ktime_get_mono_fast_ns+0x0 (helpers.c:178)
[#35-#32] ktime_get_mono_fast_ns: tk_clock_read -> arch_counter_get_cntpct
[#31] ktime_get_mono_fast_ns+0x9c -> bpf_ktime_get_ns+0x14 (timekeeping.c:493)
[#30] bpf_ktime_get_ns+0x18 -> dump_bpf_prog+0xcd50 (helpers.c:178)
[#29-#25] ... dump_bpf_prog internal branches ...
[#24] dump_bpf_prog+0x11b28 -> __bpf_prog_exit_recur+0x0 (trampoline.c:1190)
[#23-#17] __bpf_prog_exit_recur: rcu_read_unlock, migrate_enable (trampoline.c:1195)
--- array_map_alloc_check (entries #16-#12) ---
[#16] dump_bpf_prog+0x11b38 -> array_map_alloc_check+0x8 (arraymap.c:55)
[#15] array_map_alloc_check+0x18 -> array_map_alloc_check+0xb8 (arraymap.c:56)
. bpf_map_attr_numa_node . bpf_map_attr_numa_node
[#14] array_map_alloc_check+0xbc -> array_map_alloc_check+0x20 (arraymap.c:59)
. bpf_map_attr_numa_node
[#13] array_map_alloc_check+0x24 -> array_map_alloc_check+0x94 (arraymap.c:64)
[#12] array_map_alloc_check+0x98 -> dump_bpf_prog+0x11b3c (arraymap.c:82)
--- fexit trampoline overhead (entries #11-#00) ---
[#11] dump_bpf_prog+0x11b5c -> __bpf_prog_enter_recur+0x0 (trampoline.c:1145)
[#10-#03] __bpf_prog_enter_recur: rcu_read_lock, migrate_disable (trampoline.c:1146)
[#02] __bpf_prog_enter_recur+0x114 -> dump_bpf_prog+0x11b60 (trampoline.c:1157)
[#01] dump_bpf_prog+0x11b6c -> dump_bpf_prog+0xd230
[#00] dump_bpf_prog+0xd340 -> arm_brbe_snapshot_branch_stack+0x0 (arm_brbe.c:814)
el0t_64_sync+0x168
el0t_64_sync_handler+0x98
el0_svc+0x28
do_el0_svc+0x4c
invoke_syscall.constprop.0+0x54
373us [-EINVAL] __arm64_sys_bpf+0x8
__sys_bpf+0x87c
map_create+0x120
95us [-EINVAL] array_map_alloc_check+0x8
The FVP's BRBE buffer has 64 entries (BRBE supports 8, 16, 32, or
64). Of these, entries #63-#17 (47) are consumed by the fentry BPF
trampoline that ran before the function, and entries #11-#00 (12)
are consumed by the fexit trampoline that runs after. Entry #00
shows the very last branch recorded before BRBE is paused: the call
into arm_brbe_snapshot_branch_stack().
The 5 useful entries (#16-#12) show the exact path taken inside
array_map_alloc_check(). Record #14 shows a jump from line 56
(bpf_map_attr_numa_node) to line 59 (the if-condition), and #13
shows an immediate jump from line 59 (attr->max_entries == 0) to
line 64 (return -EINVAL), skipping lines 60-63. This pinpoints
max_entries==0 as the cause -- a diagnosis impossible with stack
traces alone.
[1] 856c02dbce4f ("bpf: Introduce helper bpf_get_branch_snapshot")
[2] https://github.com/anakryiko/retsnoop
Puranjay Mohan (4):
perf/arm_pmuv3: Fix NULL pointer dereference in armv8pmu_sched_task()
perf: Fix uninitialized bitfields in
perf_clear_branch_entry_bitfields()
perf/arm64: Add BRBE support for bpf_get_branch_snapshot()
selftests/bpf: Adjust wasted entries threshold for ARM64 BRBE
drivers/perf/arm_brbe.c | 79 ++++++++++++++++++-
drivers/perf/arm_brbe.h | 9 +++
drivers/perf/arm_pmuv3.c | 16 +++-
include/linux/perf_event.h | 2 +
.../bpf/prog_tests/get_branch_snapshot.c | 13 ++-
5 files changed, 110 insertions(+), 9 deletions(-)
base-commit: d118f32246fdabfb4f6a3fd2e511dc5e622bc553
--
2.52.0
* [PATCH v2 1/4] perf/arm_pmuv3: Fix NULL pointer dereference in armv8pmu_sched_task()
2026-03-18 17:16 [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
@ 2026-03-18 17:16 ` Puranjay Mohan
2026-03-18 17:16 ` [PATCH v2 2/4] perf: Fix uninitialized bitfields in perf_clear_branch_entry_bitfields() Puranjay Mohan
` (3 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Puranjay Mohan @ 2026-03-18 17:16 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Will Deacon,
Mark Rutland, Catalin Marinas, Leo Yan, Rob Herring, Breno Leitao,
linux-arm-kernel, linux-perf-users, kernel-team
The NULL pointer dereference in armv8pmu_sched_task() is easily
triggered with:
perf record -b -e cycles -a -- ls
which crashes on the first context switch with:
Unable to handle kernel NULL pointer dereference at virtual address 00[.]
PC is at armv8pmu_sched_task+0x14/0x50
LR is at perf_pmu_sched_task+0xac/0x108
Call trace:
armv8pmu_sched_task+0x14/0x50 (P)
perf_pmu_sched_task+0xac/0x108
__perf_event_task_sched_out+0x6c/0xe0
prepare_task_switch+0x120/0x268
__schedule+0x1e8/0x828
...
perf_pmu_sched_task() invokes the PMU sched callback with cpc->task_epc,
which is NULL when no per-task events exist for this PMU. With CPU-wide
branch-stack events, armv8pmu_sched_task() is still registered and
dereferences pmu_ctx->pmu unconditionally, causing the crash.
The bug was introduced by commit fa9d27773873 ("perf: arm_pmu: Kill last
use of per-CPU cpu_armpmu pointer") which changed the function from
using the per-CPU cpu_armpmu pointer (always valid) to dereferencing
pmu_ctx->pmu without adding a NULL check.
Add a NULL check for pmu_ctx to avoid the crash.
Fixes: fa9d27773873 ("perf: arm_pmu: Kill last use of per-CPU cpu_armpmu pointer")
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
drivers/perf/arm_pmuv3.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 8014ff766cff..2d097fad9c10 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -1074,8 +1074,15 @@ static int armv8pmu_user_event_idx(struct perf_event *event)
static void armv8pmu_sched_task(struct perf_event_pmu_context *pmu_ctx,
struct task_struct *task, bool sched_in)
{
- struct arm_pmu *armpmu = to_arm_pmu(pmu_ctx->pmu);
- struct pmu_hw_events *hw_events = this_cpu_ptr(armpmu->hw_events);
+ struct arm_pmu *armpmu;
+ struct pmu_hw_events *hw_events;
+
+ /* cpc->task_epc is NULL when no per-task events exist for this PMU */
+ if (!pmu_ctx)
+ return;
+
+ armpmu = to_arm_pmu(pmu_ctx->pmu);
+ hw_events = this_cpu_ptr(armpmu->hw_events);
if (!hw_events->branch_users)
return;
--
2.52.0
* [PATCH v2 2/4] perf: Fix uninitialized bitfields in perf_clear_branch_entry_bitfields()
2026-03-18 17:16 [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
2026-03-18 17:16 ` [PATCH v2 1/4] perf/arm_pmuv3: Fix NULL pointer dereference in armv8pmu_sched_task() Puranjay Mohan
@ 2026-03-18 17:16 ` Puranjay Mohan
2026-03-18 17:16 ` [PATCH v2 3/4] perf/arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
` (2 subsequent siblings)
4 siblings, 0 replies; 7+ messages in thread
From: Puranjay Mohan @ 2026-03-18 17:16 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Will Deacon,
Mark Rutland, Catalin Marinas, Leo Yan, Rob Herring, Breno Leitao,
linux-arm-kernel, linux-perf-users, kernel-team
perf_clear_branch_entry_bitfields() zeroes individual bitfields of struct
perf_branch_entry but misses the new_type (4 bits) and priv (3 bits)
fields. This means any code path that relies on this function to produce
a clean entry may expose stale or uninitialised data in these fields to
userspace.
The function was introduced by commit bfe4daf850f4 ("perf/core: Add
perf_clear_branch_entry_bitfields() helper") specifically to "centralize
the initialization to avoid missing a field in case more are added."
Unfortunately, the commits that later added new_type and priv to struct
perf_branch_entry only updated the UAPI header and did not update this
clearing function.
Zero new_type and priv alongside the other bitfields.
Fixes: b190bc4ac9e6 ("perf: Extend branch type classification")
Fixes: 5402d25aa571 ("perf: Capture branch privilege information")
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
include/linux/perf_event.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 48d851fbd8ea..d7f39b7e9cda 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -1481,6 +1481,8 @@ static inline void perf_clear_branch_entry_bitfields(struct perf_branch_entry *b
br->cycles = 0;
br->type = 0;
br->spec = PERF_BR_SPEC_NA;
+ br->new_type = 0;
+ br->priv = 0;
br->reserved = 0;
}
--
2.52.0
* [PATCH v2 3/4] perf/arm64: Add BRBE support for bpf_get_branch_snapshot()
2026-03-18 17:16 [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
2026-03-18 17:16 ` [PATCH v2 1/4] perf/arm_pmuv3: Fix NULL pointer dereference in armv8pmu_sched_task() Puranjay Mohan
2026-03-18 17:16 ` [PATCH v2 2/4] perf: Fix uninitialized bitfields in perf_clear_branch_entry_bitfields() Puranjay Mohan
@ 2026-03-18 17:16 ` Puranjay Mohan
2026-03-18 17:16 ` [PATCH v2 4/4] selftests/bpf: Adjust wasted entries threshold for ARM64 BRBE Puranjay Mohan
2026-03-26 8:57 ` [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
4 siblings, 0 replies; 7+ messages in thread
From: Puranjay Mohan @ 2026-03-18 17:16 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Will Deacon,
Mark Rutland, Catalin Marinas, Leo Yan, Rob Herring, Breno Leitao,
linux-arm-kernel, linux-perf-users, kernel-team
Enable the bpf_get_branch_snapshot() BPF helper on ARM64 by implementing
the perf_snapshot_branch_stack static call for ARM's Branch Record Buffer
Extension (BRBE).
The BPF helper bpf_get_branch_snapshot() allows BPF programs to capture
hardware branch records on-demand. Until now it has only been available
on x86 (Intel LBR), even though BRBE has been available since ARMv9.
BRBE is paused before disabling interrupts because local_irq_save() can
trigger trace_hardirqs_off() which performs stack walking and pollutes
the branch buffer. The sysreg read/write and ISB used to pause BRBE are
branchless, so pausing first avoids this pollution.
All exceptions are masked after pausing BRBE using local_daif_save() to
prevent pseudo-NMI from PMU counter overflow from interfering with the
snapshot read. A PMU overflow arriving between the pause and
local_daif_save() can re-enable BRBE via the interrupt handler; the
snapshot detects this by re-checking BRBFCR_EL1.PAUSED and bailing out.
Branch records are read using the existing perf_entry_from_brbe_regset()
helper with a NULL event pointer, which bypasses event-specific filtering
and captures all recorded branches. The BPF program is responsible for
filtering entries based on its own criteria. The BRBE buffer is
invalidated after reading to maintain contiguity for other consumers.
On heterogeneous big.LITTLE systems, only some CPUs may implement
FEAT_BRBE. The perf_snapshot_branch_stack static call is system-wide, so
a per-CPU brbe_active flag is used to prevent BRBE sysreg access on CPUs
that do not implement FEAT_BRBE, where such access would be UNDEFINED.
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
drivers/perf/arm_brbe.c | 79 +++++++++++++++++++++++++++++++++++++++-
drivers/perf/arm_brbe.h | 9 +++++
drivers/perf/arm_pmuv3.c | 5 ++-
3 files changed, 90 insertions(+), 3 deletions(-)
diff --git a/drivers/perf/arm_brbe.c b/drivers/perf/arm_brbe.c
index ba554e0c846c..527c2d5ebba6 100644
--- a/drivers/perf/arm_brbe.c
+++ b/drivers/perf/arm_brbe.c
@@ -8,9 +8,13 @@
*/
#include <linux/types.h>
#include <linux/bitmap.h>
+#include <linux/percpu.h>
#include <linux/perf/arm_pmu.h>
+#include <asm/daifflags.h>
#include "arm_brbe.h"
+static DEFINE_PER_CPU(bool, brbe_active);
+
#define BRBFCR_EL1_BRANCH_FILTERS (BRBFCR_EL1_DIRECT | \
BRBFCR_EL1_INDIRECT | \
BRBFCR_EL1_RTN | \
@@ -533,6 +537,8 @@ void brbe_enable(const struct arm_pmu *arm_pmu)
/* Finally write SYS_BRBFCR_EL to unpause BRBE */
write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
/* Synchronization in PMCR write ensures ordering WRT PMU enabling */
+
+ this_cpu_write(brbe_active, true);
}
void brbe_disable(void)
@@ -544,6 +550,7 @@ void brbe_disable(void)
*/
write_sysreg_s(BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
write_sysreg_s(0, SYS_BRBCR_EL1);
+ this_cpu_write(brbe_active, false);
}
static const int brbe_type_to_perf_type_map[BRBINFx_EL1_TYPE_DEBUG_EXIT + 1][2] = {
@@ -618,10 +625,10 @@ static bool perf_entry_from_brbe_regset(int index, struct perf_branch_entry *ent
brbe_set_perf_entry_type(entry, brbinf);
- if (!branch_sample_no_cycles(event))
+ if (!event || !branch_sample_no_cycles(event))
entry->cycles = brbinf_get_cycles(brbinf);
- if (!branch_sample_no_flags(event)) {
+ if (!event || !branch_sample_no_flags(event)) {
/* Mispredict info is available for source only and complete branch records. */
if (!brbe_record_is_target_only(brbinf)) {
entry->mispred = brbinf_get_mispredict(brbinf);
@@ -803,3 +810,71 @@ void brbe_read_filtered_entries(struct perf_branch_stack *branch_stack,
done:
branch_stack->nr = nr_filtered;
}
+
+/*
+ * Best-effort BRBE snapshot for BPF tracing. Pause BRBE to avoid
+ * self-recording and return 0 if the snapshot state appears disturbed.
+ */
+int arm_brbe_snapshot_branch_stack(struct perf_branch_entry *entries, unsigned int cnt)
+{
+ unsigned long flags;
+ int nr_hw, nr_banks, nr_copied = 0;
+ u64 brbidr, brbfcr, brbcr;
+
+ if (!cnt || !__this_cpu_read(brbe_active))
+ return 0;
+
+ /* Pause BRBE first to avoid recording our own branches. */
+ brbfcr = read_sysreg_s(SYS_BRBFCR_EL1);
+ brbcr = read_sysreg_s(SYS_BRBCR_EL1);
+ write_sysreg_s(brbfcr | BRBFCR_EL1_PAUSED, SYS_BRBFCR_EL1);
+ isb();
+
+ /* Block local exception delivery while reading the buffer. */
+ flags = local_daif_save();
+
+ /*
+ * A PMU overflow before local_daif_save() could have re-enabled
+ * BRBE, clearing the PAUSED bit. The overflow handler already
+ * restored BRBE to its correct state, so just bail out.
+ */
+ if (!(read_sysreg_s(SYS_BRBFCR_EL1) & BRBFCR_EL1_PAUSED)) {
+ local_daif_restore(flags);
+ return 0;
+ }
+
+ brbidr = read_sysreg_s(SYS_BRBIDR0_EL1);
+ if (!valid_brbidr(brbidr))
+ goto out;
+
+ nr_hw = FIELD_GET(BRBIDR0_EL1_NUMREC_MASK, brbidr);
+ nr_banks = DIV_ROUND_UP(nr_hw, BRBE_BANK_MAX_ENTRIES);
+
+ for (int bank = 0; bank < nr_banks; bank++) {
+ int nr_remaining = nr_hw - (bank * BRBE_BANK_MAX_ENTRIES);
+ int nr_this_bank = min(nr_remaining, BRBE_BANK_MAX_ENTRIES);
+
+ select_brbe_bank(bank);
+
+ for (int i = 0; i < nr_this_bank; i++) {
+ if (nr_copied >= cnt)
+ goto done;
+
+ if (!perf_entry_from_brbe_regset(i, &entries[nr_copied], NULL))
+ goto done;
+
+ nr_copied++;
+ }
+ }
+
+done:
+ brbe_invalidate();
+out:
+ /* Restore BRBCR before unpausing via BRBFCR, matching brbe_enable(). */
+ write_sysreg_s(brbcr, SYS_BRBCR_EL1);
+ isb();
+ write_sysreg_s(brbfcr, SYS_BRBFCR_EL1);
+ local_daif_restore(flags);
+
+ return nr_copied;
+}
diff --git a/drivers/perf/arm_brbe.h b/drivers/perf/arm_brbe.h
index b7c7d8796c86..c2a1824437fb 100644
--- a/drivers/perf/arm_brbe.h
+++ b/drivers/perf/arm_brbe.h
@@ -10,6 +10,7 @@
struct arm_pmu;
struct perf_branch_stack;
struct perf_event;
+struct perf_branch_entry;
#ifdef CONFIG_ARM64_BRBE
void brbe_probe(struct arm_pmu *arm_pmu);
@@ -22,6 +23,8 @@ void brbe_disable(void);
bool brbe_branch_attr_valid(struct perf_event *event);
void brbe_read_filtered_entries(struct perf_branch_stack *branch_stack,
const struct perf_event *event);
+int arm_brbe_snapshot_branch_stack(struct perf_branch_entry *entries,
+ unsigned int cnt);
#else
static inline void brbe_probe(struct arm_pmu *arm_pmu) { }
static inline unsigned int brbe_num_branch_records(const struct arm_pmu *armpmu)
@@ -44,4 +47,10 @@ static void brbe_read_filtered_entries(struct perf_branch_stack *branch_stack,
const struct perf_event *event)
{
}
+
+static inline int arm_brbe_snapshot_branch_stack(struct perf_branch_entry *entries,
+ unsigned int cnt)
+{
+ return 0;
+}
#endif
diff --git a/drivers/perf/arm_pmuv3.c b/drivers/perf/arm_pmuv3.c
index 2d097fad9c10..e00c7c47a98d 100644
--- a/drivers/perf/arm_pmuv3.c
+++ b/drivers/perf/arm_pmuv3.c
@@ -1456,8 +1456,11 @@ static int armv8_pmu_init(struct arm_pmu *cpu_pmu, char *name,
cpu_pmu->set_event_filter = armv8pmu_set_event_filter;
cpu_pmu->pmu.event_idx = armv8pmu_user_event_idx;
- if (brbe_num_branch_records(cpu_pmu))
+ if (brbe_num_branch_records(cpu_pmu)) {
cpu_pmu->pmu.sched_task = armv8pmu_sched_task;
+ static_call_update(perf_snapshot_branch_stack,
+ arm_brbe_snapshot_branch_stack);
+ }
cpu_pmu->name = name;
cpu_pmu->map_event = map_event;
--
2.52.0
* [PATCH v2 4/4] selftests/bpf: Adjust wasted entries threshold for ARM64 BRBE
2026-03-18 17:16 [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
` (2 preceding siblings ...)
2026-03-18 17:16 ` [PATCH v2 3/4] perf/arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
@ 2026-03-18 17:16 ` Puranjay Mohan
2026-03-26 8:57 ` [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
4 siblings, 0 replies; 7+ messages in thread
From: Puranjay Mohan @ 2026-03-18 17:16 UTC (permalink / raw)
To: bpf
Cc: Puranjay Mohan, Puranjay Mohan, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Will Deacon,
Mark Rutland, Catalin Marinas, Leo Yan, Rob Herring, Breno Leitao,
linux-arm-kernel, linux-perf-users, kernel-team
The get_branch_snapshot test checks that bpf_get_branch_snapshot()
doesn't waste too many branch entries on infrastructure overhead. The
threshold of < 10 was calibrated for x86 where about 7 entries are
wasted.
On ARM64, the BPF trampoline generates more branches than x86,
resulting in about 13 wasted entries. The overhead comes from the BPF
trampoline calling __bpf_prog_enter_recur which on ARM64 makes
out-of-line calls to __rcu_read_lock and generates more conditional
branches than x86:
[#12] bpf_testmod_loop_test+0x40 -> bpf_trampoline_...+0x48
[#11] bpf_trampoline_...+0x68 -> __bpf_prog_enter_recur+0x0
[#10] __bpf_prog_enter_recur+0x20 -> __bpf_prog_enter_recur+0x118
[#09] __bpf_prog_enter_recur+0x154 -> __bpf_prog_enter_recur+0x160
[#08] __bpf_prog_enter_recur+0x164 -> __bpf_prog_enter_recur+0x2c
[#07] __bpf_prog_enter_recur+0x2c -> __rcu_read_lock+0x0
[#06] __rcu_read_lock+0x18 -> __bpf_prog_enter_recur+0x30
[#05] __bpf_prog_enter_recur+0x9c -> __bpf_prog_enter_recur+0xf0
[#04] __bpf_prog_enter_recur+0xf4 -> __bpf_prog_enter_recur+0xa8
[#03] __bpf_prog_enter_recur+0xb8 -> __bpf_prog_enter_recur+0x100
[#02] __bpf_prog_enter_recur+0x114 -> bpf_trampoline_...+0x6c
[#01] bpf_trampoline_...+0x78 -> bpf_prog_...test1+0x0
[#00] bpf_prog_...test1+0x58 -> arm_brbe_snapshot_branch_stack+0x0
Use an architecture-specific threshold of < 14 for ARM64 to accommodate
this overhead while still detecting regressions.
Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
.../selftests/bpf/prog_tests/get_branch_snapshot.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
index 0394a1156d99..8d1a3480767f 100644
--- a/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
+++ b/tools/testing/selftests/bpf/prog_tests/get_branch_snapshot.c
@@ -116,13 +116,18 @@ void serial_test_get_branch_snapshot(void)
ASSERT_GT(skel->bss->test1_hits, 6, "find_looptest_in_lbr");
- /* Given we stop LBR in software, we will waste a few entries.
+ /* Given we stop LBR/BRBE in software, we will waste a few entries.
* But we should try to waste as few as possible entries. We are at
- * about 7 on x86_64 systems.
- * Add a check for < 10 so that we get heads-up when something
- * changes and wastes too many entries.
+ * about 7 on x86_64 and about 13 on arm64 systems (the arm64 BPF
+ * trampoline generates more branches than x86_64).
+ * Add a check so that we get heads-up when something changes and
+ * wastes too many entries.
*/
+#if defined(__aarch64__)
+ ASSERT_LT(skel->bss->wasted_entries, 14, "check_wasted_entries");
+#else
ASSERT_LT(skel->bss->wasted_entries, 10, "check_wasted_entries");
+#endif
cleanup:
get_branch_snapshot__destroy(skel);
--
2.52.0
* Re: [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot()
2026-03-18 17:16 [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
` (3 preceding siblings ...)
2026-03-18 17:16 ` [PATCH v2 4/4] selftests/bpf: Adjust wasted entries threshold for ARM64 BRBE Puranjay Mohan
@ 2026-03-26 8:57 ` Puranjay Mohan
2026-03-26 11:01 ` Will Deacon
4 siblings, 1 reply; 7+ messages in thread
From: Puranjay Mohan @ 2026-03-26 8:57 UTC (permalink / raw)
To: bpf, Mark Rutland, Catalin Marinas, Will Deacon
Cc: Alexei Starovoitov, Andrii Nakryiko, Daniel Borkmann,
Martin KaFai Lau, Eduard Zingerman, Kumar Kartikeya Dwivedi,
Leo Yan, Rob Herring, Breno Leitao, linux-arm-kernel,
linux-perf-users, kernel-team
Hi Catalin, Mark, and Will,
Would you mind taking a look at this patchset when you have a chance?
Thanks,
Puranjay
On Wed, Mar 18, 2026 at 5:17 PM Puranjay Mohan <puranjay@kernel.org> wrote:
>
> [...]
>
* Re: [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot()
2026-03-26 8:57 ` [PATCH v2 0/4] arm64: Add BRBE support for bpf_get_branch_snapshot() Puranjay Mohan
@ 2026-03-26 11:01 ` Will Deacon
0 siblings, 0 replies; 7+ messages in thread
From: Will Deacon @ 2026-03-26 11:01 UTC (permalink / raw)
To: Puranjay Mohan
Cc: bpf, Mark Rutland, Catalin Marinas, Alexei Starovoitov,
Andrii Nakryiko, Daniel Borkmann, Martin KaFai Lau,
Eduard Zingerman, Kumar Kartikeya Dwivedi, Leo Yan, Rob Herring,
Breno Leitao, linux-arm-kernel, linux-perf-users, kernel-team,
anshuman.khandual
On Thu, Mar 26, 2026 at 08:57:14AM +0000, Puranjay Mohan wrote:
> Hi Catalin, Mark, and Will,
>
> Would you mind taking a look at this patchset when you have a chance?
Adding Rob and Anshuman, as they wrote the perf driver for BRBE and are
the best people to review this stuff.
Will